
Unity for the Visual Effects Artist 101 Part 1

Hi everyone, I’m Richard, a junior games VFX artist. Almost a year ago now I made the decision to switch from environment art to real-time visual effects. This blog is a chronicle of my journey; it began back in September as a log of my attempts at making better effects. This year I’m aiming to get more courageous and more extravagant. So, from now until I inevitably forget this blog exists and stop updating it, let’s get weird.

For this season I’m going to be living in the Unity Engine. Unity is often described as the programmer’s engine. It’s easy to rapidly prototype mechanics and concepts, and programmers aren’t shackled by building Blueprint actors; instead, they can write a script and drop it on an object. But where does that leave the artists? C# is great, but it’s a lot harder to visualise what you need to write than it is with UE4 Blueprints. A lot of VFX tutorials out there focus on the shader graph and the particle system, which I will be covering, but not a lot tell you how to set up an animation, or how to link an animation and a particle effect to a button press. For the first two blogs of the new year, I’ll be demonstrating some of the lesser-explained features and techniques for a VFX artist getting started in the Unity Engine.

In the first part we will be covering:

· My demonstration effect

· Unity’s rendering pipelines and Package Manager

· Learning to C#

· Setting up animations

The Effect Being Made

This is the effect I’ll be using to demonstrate Unity's features:

Effect built in the Unity Engine

It’s the Poundland equivalent of the legendary hideout opening effect in Sea of Thieves.

It’s made up of 10 emitters, 8 materials, 4 shaders, 2 scripts and 1 animated mesh.

Unity’s Rendering Pipelines and Package Manager

Unity is a very flexible engine; it can do a lot if targeted correctly. Something that helps Unity achieve this is the wide array of addons that can be added to the engine to assist with various goals. These can be found on the Unity Asset Store, which houses addons made by the community, or, for in-house addons built by Unity, in the Package Manager window that’s built into the engine.

This time I’m just going to use built-in packages. The Package Manager is located under the Window drop-down menu. For the best stuff, preview packages need to be made discoverable; these can be activated from the Advanced drop-down box.

How to get the good packages

In my file I’ve downloaded the Core Render Pipeline library. This is needed to use the new Lightweight Render Pipeline and the HDRP (High Definition Render Pipeline), which in turn enables us to use the Unity Shader Graph addon. I’m also downloading the post processing stack addon. The Shader Graph and the post processing stack are going to be explained further on in the blog, but what are the render pipelines used for?

Back in 2018 Unity released two new render pipelines for the engine: the High Definition Render Pipeline and the Lightweight Render Pipeline. I’ve linked the two blogs for anyone who’s interested in doing some light reading, but the gist is that the LWRP is built with performance in mind and the HDRP is built with high-end visuals in mind.

For this effect I’m using the HDRP, but for the next blog I’m going to be using the LWRP.

Learning to C#

Unity is very programmer-heavy, which is why a lot of art projects veer towards UE4: it’s friendlier to artists who can’t code very well. However, the amount of knowledge needed to perform basic tasks in Unity really isn’t a lot. A common misconception when it comes to programming-based tasks is that full expertise of a language is needed to do them: that someone needs to be fluent and know everything there is to know in C# if they want a door to open on command. That is not accurate at all. Hell, even the programmers, people who spend every waking moment coding in C#, don’t know half of it. What they do know is how to break down a task into smaller tasks and find the answer from there.

Programming isn’t about writing a planet into existence; it’s about finding the correct question to ask a genie. Instead of asking how to make a door open with a key press, break it down into smaller questions: how do I trigger an animation? How do I detect a key press? How do I assign the key press to a variable so I can change which key activates it?

Variables get explained further down. This image is just here to break up the text.

I’m going to break down my two most used scripts, but first, anyone looking to work in Unity needs to build up at the very least a surface-level understanding of C#: mainly just the basics like if statements, functions, and the difference between coroutines and methods. For that I recommend watching the Brackeys C# YouTube series. He does a tremendous job of explaining all the individual components simply. It can be found here
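
To give a flavour of how little those basics amount to, here’s a minimal hypothetical sketch in the shape of a Unity script: a variable, a method, and an if statement. The class and names here are my own illustration, not anything from my effect.

```csharp
using UnityEngine;

// Hypothetical example of the basics: a variable, a method and an if statement.
public class Door_Example : MonoBehaviour
{
    bool isLocked = false;   // a Boolean variable

    void TryOpen()           // a method
    {
        if (!isLocked)       // an if statement: only runs when isLocked is false
        {
            Debug.Log("The door swings open.");
        }
    }
}
```

If you can read that and roughly follow it, you already know enough to follow the rest of this blog.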

When making VFX, the two biggest challenges I need solutions for are:

· How to make an animation trigger

· How to make a particle sequence play on command

Before looking at the scripts I use, something I can’t stress enough is that it’s ok not to know something. Never be afraid to ask for help.

As far as triggering animations goes, the script I use looks like this:

Here's where those variables are getting explained.

The first part:

using System.Collections;

using System.Collections.Generic;

using UnityEngine;

Think of this as adding another bucket of Lego bricks to your collection. In each bucket is good code written by competent people that we can use in our current builds.

The next part is declaring what the class is.

public class Animation_Trigger : MonoBehaviour



Saying something is public means that it can be accessed outside the script; in this case I’m declaring that the class itself is public. Classes always need to have the same name as the script, otherwise the compiler will throw an error. The “MonoBehaviour” part is something for people who are much smarter than me to touch. It gets added by default whenever you make a new C# script, so I assume it’s important.

Next I’m declaring my variables.

Animator animator;

bool hasOpened = false;

public float delay = 1f;

public KeyCode key = KeyCode.Space;

I’ve declared an Animator (more on that soon), a Boolean, a float, and a KeyCode. A Boolean is just a true/false value that gets used for if statements and other logic. A float is a decimal number, and a KeyCode is a key on a keyboard or a button on a mouse.

Two of my variables are public, which means they can be edited outside the script when they get added to an object as a component.

The fun part of the script is the methods. The two methods I use are the Start method and the Update method. Start() and Update() are part of the “MonoBehaviour” mentioned earlier, that part I didn’t really understand but assumed was important. Normally, methods get made in the scripts themselves. Think of them as the events from UE4’s Blueprinting: there are some intrinsic events, but for anything custom, a new one needs to be created.

void Start()

{

    animator = GetComponent<Animator>();

}

In my Start method I’m assigning the Animator component from the asset to the animator variable we made earlier.

Animator component from an animated mesh

This allows the script to access features in the animation controller for the stairs asset that this script is being added to. We’ll see why that’s needed in a second.

Next is the Update method.

void Update()

{

    if (Input.GetKeyDown(key) && !hasOpened)

    {

        StartCoroutine(Shoot());

    }

}

This method is using if logic to start a coroutine: if a button is pressed and the Boolean “hasOpened” is false, then the coroutine Shoot starts. && just means both these things need to be true, and “!hasOpened” is a shorthand way of writing “hasOpened == false”.

Finally, the IEnumerator.

IEnumerator Shoot()

{

    hasOpened = true;

    yield return new WaitForSeconds(delay);

    animator.SetTrigger("Open");

}

IEnumerators are one of those things that a lot of beginners have trouble with. In general, I don’t fully understand what they are or how they work. What I do know, however, is that they are the only thing that allows me to use something that resembles a delay node from UE4. When Shoot() becomes active, it sets the “hasOpened” Boolean to true so the script doesn’t activate multiple times, it delays the script by the number of seconds dictated by our public float variable delay, and finally it activates the Open trigger in our animator controller.
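
Reassembled, those pieces make up the whole script. Here’s a sketch of the full file as described above (the “Open” string matches the trigger parameter in the animator controller):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Animation_Trigger : MonoBehaviour
{
    Animator animator;                   // filled in from the object's Animator component
    bool hasOpened = false;              // stops the effect firing more than once

    public float delay = 1f;             // editable in the Inspector
    public KeyCode key = KeyCode.Space;  // editable in the Inspector

    void Start()
    {
        // Grab the Animator component sitting on the same object as this script
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Only fire once, and only when the chosen key goes down
        if (Input.GetKeyDown(key) && !hasOpened)
        {
            StartCoroutine(Shoot());
        }
    }

    IEnumerator Shoot()
    {
        hasOpened = true;
        yield return new WaitForSeconds(delay);  // the UE4-style "delay node"
        animator.SetTrigger("Open");             // fire the Open trigger in the controller
    }
}
```

Drop this on any object that has an Animator component and a matching Open trigger, and the key press takes care of the rest.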

That is all that’s needed to activate animations on environmental pieces that only ever have one animation. Whether it’s a door that needs to be opened, stairs that go down like in my effect, or Houdini simulations that need to be activated, the framework above will allow a developer to trigger an animation through a script.

How about triggering particle effects?

That is even simpler. The script I use to achieve this is here:

The particle script is long, but that’s because of how many repeated elements there are.

It looks like a lot, but viewed closely it can be broken down into three sections.

  • The variables, where I declare my particle systems and give them names.

  • The Start function, where I tell the particle systems to Stop().

  • The Update function, where I use the same if logic found in the animation trigger to Play() the particle systems.

It’s super easy and can be customised with little to no effort. This script is specialised around the particle systems in the stairs opening, but if I wanted to be more modular, instead of giving each particle system a descriptive name I could call them System_1, System_2 and so on. To apply this to a particle system, I make an empty game object, add this script to it, and finally make all the particle systems children of that game object.
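
For reference, here’s a minimal sketch of what that script boils down to, using the modular System_1/System_2 naming from above. The exact fields in my real script differ, so treat the names here as illustration:

```csharp
using UnityEngine;

// Sketch of the particle trigger: declare systems, Stop() them in Start,
// Play() them in Update with the same if logic as the animation trigger.
public class Particle_Trigger : MonoBehaviour
{
    public ParticleSystem system_1;   // assigned in the Inspector
    public ParticleSystem system_2;   // add more fields for more systems

    public KeyCode key = KeyCode.Space;
    bool hasPlayed = false;

    void Start()
    {
        // Stop the systems so nothing plays before the key press
        system_1.Stop();
        system_2.Stop();
    }

    void Update()
    {
        // Same if logic as the animation trigger
        if (Input.GetKeyDown(key) && !hasPlayed)
        {
            hasPlayed = true;
            system_1.Play();
            system_2.Play();
        }
    }
}
```

Because the fields are public, each system slot shows up in the Inspector and can be dragged in from the hierarchy.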

How the script looks as a component.

The takeaway that I hope everyone gets from all this is that coding isn’t difficult: a VFX artist doesn’t need to know every bit of code to find success, and even when they use something, they don’t have to fully understand it. The key skill is learning the right questions to ask the magic 8-ball.

Setting up Animations

Animated meshes have three major components that make them work in games: the animation itself, the animator controller, and a script to trigger the animations. In 3DS Max I built the stairs for the effect, added a bone for each step, and added a root as the parent of everything. My final hierarchy looked like this.

Hierarchy in 3DS Max.

When that gets imported into Unity it comes in as this asset:

Imported asset in Unity.

The only thing we’ll ever need to touch in this is the animation clip, which here is named Take 001. Once I’ve dragged it into the scene, I need to assign an animator controller to the Controller box in the Animator component.

Animator Component

An animator controller gets made by right-clicking in the content browser, hovering over Create, and clicking Animator Controller near the bottom of the drop-down. Double-clicking the controller brings up the Animator window. Here is where the flow charts that dictate an asset’s animations get implemented. My asset only has the one animation, so the flow chart looks nice and simple.

Unity Animator Graph

It has two states added into it: a delay state that doesn’t have an animation, which lets the chart rest there, and an animation state that has the animation active. In the panel to the side I have one parameter, the trigger Open.

Trigger called Open.

This is the trigger I mentioned in the C# section where I broke down the animation script. It can be made by clicking on the small + icon. Make sure the parameter being made is a trigger and not a Boolean. I have made that mistake a few times in the past.

Delay Animation State

Almost everything in the delay state was left at default; the only thing I did was add a transition to the animation state that relies on the Open trigger.

Movement Animation State.

For the animation state I added the stairs animation to Motion. I also reduced the speed to 0.25, making it four times slower than my original animation; my timing isn’t great, so that’s an easy way to smooth out my motion without going back to 3DS Max and editing keyframes. I’ve also got a transition to Exit. At some point I knew the purpose of it, but currently that knowledge has been lost. I also turn off the Has Exit Time checkbox. The checkbox is a special transition condition that doesn’t rely on a parameter; instead it takes a normalised time of the state and makes the transition specified in the Settings drop-down. I always turn it off because I like to have more direct control of my transitions; however, for things like doors, exit time can be used to make them close after a certain period. Also, in the past, leaving that checkbox ticked has broken my animations, so it’s a reactionary habit to just turn it off.

The final thing that needs to be done to make this playable is to add the control script I made earlier.

Controller Component.

In the component I set the public variables to what I want; for example, I want the animation to take place after seven and a half seconds, and I want to activate it via the space key.

As with all process-driven blogs that I do, I must stress that this isn’t the only way to do a task. There are 101 ways to build an effect; this is just a collection of techniques that I’ve built up over my time using Unity, and that have helped me on a few projects before.

Hopefully, whether you are a brand new RT VFX artist or you’ve been doing it for several years, there was something here that helped you out. If you do find any creative uses for the things I’ve talked about here, please email me a gif at – or tweet me @stokes_richard. I’d love to see what you can create.

Please check back soon for part two, where I’ll be demonstrating the Shader Graph, the particle system, and the post processing stack, as well as giving an overview of the similarities between them and their Unreal Engine counterparts for any artists making the transition over.
