3D skills

Let’s assume you want to create an asset for your 3D project. It might be an object with hard-surface parts, or something organic that you need to animate. There are many ways you could create your objects. In fact, you would probably use different techniques to create the different parts of a single object.

The 3D pipeline usually involves several skillsets that you need to learn if you want to create assets for an animated movie or a game yourself. In many studios there are people who specialize in only one or two of these skillsets, but as a solo developer or a small studio, you need at least a basic understanding of every step in the production pipeline.


This overview is meant for absolute beginners. We won’t go into details, but if you need more information, there are tons of excellent tutorials on YouTube and other platforms. 😉

Modeling

In modeling, you work directly with the things that constitute a mesh: vertices, edges and faces. Typically, you start either with a single vertex and build your model by adding more vertices, or with a so-called “primitive”, i.e. a simple 3D object like a sphere or a box (the default cube). You can extrude, resize, cut, create edge loops etc. to get the shape you need. You can also combine several objects with boolean operations. These techniques are called polygonal modeling, box modeling and boolean modeling.
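
As a small illustration, here is what box and boolean modeling could look like when scripted with Blender’s Python API (bpy). The object names and dimensions are just example values; normally you would do all of this interactively in the viewport:

```python
import bpy

# Box modeling starting point: the default cube, plus a sphere that overlaps it.
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0, 0, 0))
box = bpy.context.active_object
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.2, location=(1, 0, 1))
sphere = bpy.context.active_object

# Boolean modeling: subtract the sphere from the box.
boolean = box.modifiers.new(name="CutSphere", type='BOOLEAN')
boolean.operation = 'DIFFERENCE'
boolean.object = sphere

# Apply the modifier to make the cut permanent, then hide the cutter.
bpy.context.view_layer.objects.active = box
bpy.ops.object.modifier_apply(modifier=boolean.name)
sphere.hide_set(True)
```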

Another way to approach the creation of an object is by using NURBS or curves. Similar to the vertex-based technique, you create a smooth curve and then give it some volume.
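
In Blender, for example, giving a curve volume can be as simple as setting a bevel depth. A minimal sketch:

```python
import bpy

# Add a Bézier curve and give it volume via a round bevel.
bpy.ops.curve.primitive_bezier_curve_add()
curve_obj = bpy.context.active_object
curve_obj.data.bevel_depth = 0.1      # thickness of the resulting "tube"
curve_obj.data.bevel_resolution = 4   # roundness of the cross-section
```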

Procedural modeling is an approach that is becoming more popular today. It is based on mathematical operations that create geometry, instead of manual manipulation of mesh components. An example is the Geometry Nodes system in Blender.
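
As a sketch of the entry point in Blender, this is how you would attach a Geometry Nodes modifier via Python. The node tree starts out empty, and how you wire up its nodes differs between Blender versions, so this only shows where procedural modeling begins:

```python
import bpy

obj = bpy.context.active_object

# Procedural modeling: attach a Geometry Nodes modifier to the object.
mod = obj.modifiers.new(name="GeometryNodes", type='NODES')

# Create an empty node tree; the actual nodes would be added and wired up here.
tree = bpy.data.node_groups.new(name="MyProceduralSetup", type='GeometryNodeTree')
mod.node_group = tree
```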

While you can achieve the same visual result either way, some techniques or workflows are more efficient for a specific task. So it’s good to think about which approach you want to use before you start.


Sculpting

In sculpting you don’t care about polygons. Instead, you work on a 3D object like it’s a piece of clay. Especially for organic objects it feels more natural to just freely form your mesh as you like. You can add layers of clay, pull it, expand or deform it, cut through it – all the fun stuff. Usually, you start with a low resolution mesh, create a rough sketch of your model and get the proportions right. Then you can increase the resolution to work on the finer details.
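
In Blender, this low-to-high resolution workflow is typically handled by the Multiresolution modifier. A minimal sketch, assuming a mesh object is active:

```python
import bpy

obj = bpy.context.active_object

# Multiresolution lets you sculpt rough shapes at a low subdivision level
# and switch to higher levels for the finer details.
multires = obj.modifiers.new(name="Multires", type='MULTIRES')
for _ in range(3):  # add three subdivision levels
    bpy.ops.object.multires_subdivide(modifier=multires.name)
```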

While sculpting is mostly used for organic models, you can also create hard surface objects by using the sculpt tools.

The “downside” of sculpting is that you usually end up with a dense mesh with millions of polygons. This is neither good for animation nor really usable in a game, since the high number of polygons requires a lot of processing power. So the next step in the pipeline is unavoidable: retopology.


Retopology

Retopology means creating a low-resolution mesh on the basis of your high-poly sculpt. You can either use software to do it automatically for you, or manually create a mesh with clean edge flow, which is especially necessary for assets that you want to animate.

Automatic retopology can usually be done right inside your software of choice, such as Blender, ZBrush or 3D Coat. The retopo algorithms create a low-poly mesh with just a few clicks. For static meshes this is usually the way to go. For characters, however, there is no good automated solution yet, so you will need to create your low-poly mesh manually using retopology tools. There are nice add-ons for Blender that help you do this more efficiently and easily.
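
In Blender, for instance, a quick automatic retopology of a static mesh can be done with the voxel remesher, optionally followed by a Decimate modifier to bring the polycount down further. A minimal sketch with example values:

```python
import bpy

obj = bpy.context.active_object

# Voxel remesh: rebuilds the mesh at a uniform resolution.
obj.data.remesh_voxel_size = 0.05  # smaller = more detail, more polygons
bpy.ops.object.voxel_remesh()

# Optionally reduce the polycount further with a Decimate modifier.
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.1  # keep roughly 10% of the faces
bpy.ops.object.modifier_apply(modifier=decimate.name)
```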


UV unwrapping

Once your object is ready, you need to prepare it for texturing. In some cases you can skip this step and just apply a shader to the model, since many types of materials can be created procedurally and don’t require UV maps. In most cases, though, you will need UVs.

UV unwrapping means creating a 2D representation of your 3D model. As with retopology, you can let the software do it for you, or do it manually if needed. Depending on your model, you might want to hide some cuts (UV seams), since these can be visible after you apply textures to your model. You select specific edges, mark them as seams and then unwrap the model.
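
Scripted in Blender, marking seams and unwrapping might look like this. It is only a sketch: normally you would pick the seam edges by hand in Edit Mode before marking them:

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')

# Mark the currently selected edges as UV seams ...
bpy.ops.mesh.mark_seam(clear=False)

# ... then unwrap the whole model along those seams.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```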


Texturing / Shading

Once you have your UV maps, you can go ahead and bring some color into your scene. Most renderers use PBR (physically based rendering) materials, which contain information about the color of the surface, whether it is metallic or not, how rough or shiny it is, and other properties such as bump or normal maps, opacity or emission.

Some software uses layers to create a material (e.g. Quixel Mixer or Substance Painter). Others use a node-based approach, where you connect nodes that carry data or perform mathematical operations (e.g. Blender or Unreal Engine).
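
In Blender’s node-based system, a basic PBR material can be set up in Python by adjusting inputs on the Principled BSDF node. A minimal sketch; the material name and values are just examples, and input names are as in recent Blender versions:

```python
import bpy

# Create a new material that uses the node system.
mat = bpy.data.materials.new(name="BrushedMetal")  # example name
mat.use_nodes = True

# The default node tree already contains a Principled BSDF (PBR) shader.
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.6, 0.2, 1.0)  # RGBA
bsdf.inputs["Metallic"].default_value = 1.0    # fully metallic
bsdf.inputs["Roughness"].default_value = 0.35  # somewhat glossy

# Assign the material to the active object.
bpy.context.active_object.data.materials.append(mat)
```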

The sum of such properties is called a shader. It tells the renderer how light interacts with an object. You can either create the look of your material procedurally or paint it manually. You can apply textures (images) directly or use images as stencils to paint onto parts of the object. In the end, all this information is mapped to your model’s UVs and can be baked or exported as images.

A complete asset typically consists of the object itself, i.e. the mesh, and the corresponding textures (base color map, roughness map, normal map, etc.).


Rigging

So, you have your asset all ready and want to make it move. Whether it’s a robot, a human character or a creature – if you want something to move, you need a skeleton, which is also called a rig.

Rigging is the process of creating a meaningful set of bones that control the mesh. It is a complex subject involving all sorts of mechanisms, such as FK (forward kinematics) and IK (inverse kinematics), vertex weights and drivers, bone constraints, or control rigs where you don’t manipulate the bones themselves, but rather a small set of controls which then influence one or several bones simultaneously.

In most cases, however, a simple skeleton is absolutely sufficient. After creating your bones, you can parent your mesh to the skeleton so that when you move a bone, the mesh deforms with it. This is where topology comes into play: bad topology will lead to bad mesh deformations.
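
In Blender, this parenting step with automatic weights is a one-liner once both objects are selected. A minimal sketch; the object names “Character” and “Armature” are just placeholders:

```python
import bpy

mesh = bpy.data.objects["Character"]   # example object names
armature = bpy.data.objects["Armature"]

# Select the mesh, then the armature (the armature must be the active object).
bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature

# Parent with automatic weights: Blender estimates the vertex weights
# so that each bone deforms the nearby part of the mesh.
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```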

Some software, such as Blender with its Rigify add-on, offers a set of predefined rigs for humanoid or four-legged creatures. You just need to adjust the bones to your specific character, click on “Generate Rig” and you are done.


Animation

Now you want your objects to move. This means you need to define an object’s location and rotation at certain points in time. This is done via keyframes. For character animation you can either set the keyframes manually or use motion capture data. Mocap data also consists of keyframes; however, these are set automatically by interpreting a live performance, either from a mocap suit or by analyzing a video file.
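
Setting keyframes manually is straightforward to script. Here is a minimal Blender sketch that moves the active object between frame 1 and frame 60; the positions are just example values:

```python
import bpy

obj = bpy.context.active_object

# Keyframe the starting position at frame 1 ...
obj.location = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="location", frame=1)

# ... and the end position at frame 60. Blender interpolates in between.
obj.location = (4.0, 0.0, 2.0)
obj.keyframe_insert(data_path="location", frame=60)
```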

You can also use physics simulation or texture displacement instead of manually animating everything. Especially for things like clothes, hair or grass movement there are different tools and techniques to help you do that.

For certain effects like fire, smoke, rain or water you would use particle or fluid simulation. In Blender you would use Mantaflow for such effects. Specialized software like EmberGen or Houdini is, however, more efficient at this type of simulation.


Lighting

Lighting your object or scene may seem trivial. However, it can really make or break your final render. Though it isn’t technically as challenging as other skills, it is good to learn about the different types of light sources and how to use them in order to achieve the look you are going for.

An image might look like it is lit only by sunlight, but lighting a 3D scene involves some fakery as well. A good lighting setup often consists of 10 or more light sources of different types, intensities and colors.
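
As an illustration, here is a classic three-point setup (key, fill and rim light) scripted in Blender. The positions, intensities and colors are just example values:

```python
import bpy

def add_area_light(name, location, energy, color):
    """Add an area light with the given strength and color."""
    bpy.ops.object.light_add(type='AREA', location=location)
    light = bpy.context.active_object
    light.name = name
    light.data.energy = energy  # watts
    light.data.color = color    # RGB
    return light

# Classic three-point lighting: a strong key, a soft fill, a rim from behind.
add_area_light("Key",  (4, -4, 5),  800, (1.0, 0.95, 0.9))
add_area_light("Fill", (-4, -3, 3), 200, (0.9, 0.95, 1.0))
add_area_light("Rim",  (0, 5, 4),   400, (1.0, 1.0, 1.0))
```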


Rendering

When all is said and done, the final step is to render out your image. There are two main rendering approaches. The first is ray tracing or path tracing, which is physically accurate but takes a long time to render. A single image can take hours, depending on the number of polygons in the scene, the shader complexity and the lighting setup.

The other approach is real-time rendering, which is used by e.g. Eevee in Blender and of course by game engine renderers. Real-time rendering fakes the lighting of a scene by approximating real behavior. It might not be as realistic as path tracing, but it is extremely fast. Especially for games you want 60 frames per second instead of 60 seconds per frame. 😉
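
Switching between the two approaches in Blender is a matter of setting the render engine. A minimal sketch; note that the Eevee identifier changed to 'BLENDER_EEVEE_NEXT' in Blender 4.2, so adjust for your version, and the output path is just an example:

```python
import bpy

scene = bpy.context.scene

# Path tracing: physically based, but slow.
scene.render.engine = 'CYCLES'
scene.cycles.samples = 256  # more samples = less noise, longer render times

# Real-time rendering instead: fast, approximated lighting.
# scene.render.engine = 'BLENDER_EEVEE'  # 'BLENDER_EEVEE_NEXT' in 4.2+

# Render the current frame to disk.
scene.render.filepath = "/tmp/render.png"  # example output path
bpy.ops.render.render(write_still=True)
```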


Additional thoughts

This list of 3D skills is not exhaustive; there is so much more to learn. At every step of the way you will encounter difficulties and problems where something doesn’t work as you expected. There is a good amount of frustration and disappointment involved, but the reward of finally getting it right is great.

Also, a lot of people have been where you are. For almost every problem you might encounter, someone has already asked about it on the internet. If you don’t find anything, engage with the community. Especially if you are learning Blender, you will see that the Blender community is amazing and extremely helpful.