Japanese animation is seeking success on the world market. In the past, formal beauty was achieved through well-honed, hand-drawn expression, and that craft grew even more refined over time. With the release of Pixar’s “Toy Story” in 1995, full CG animation took the world by storm; Pixar’s subsequent success is a well-known story. When staging and arranging shots, it’s important to treat 3DCG animation under the same conditions as an actual live-action shoot, and it is difficult to express the dynamism of deformation or trompe l’oeil the way 2D animation can. The know-how 2D animation has accumulated, and its ability to deceive the eye, may be one reason why full 3DCG animation has been slow to develop in Japan. In its place, a new, ground-breaking form of expression called “2.5D” has emerged, which produces 2D-style images by rendering 3DCG animation in a cel-shaded look. Camera work that is difficult in hand-drawn animation becomes remarkably easy in a 3D space. It is possible to convey a sense of dynamism, and because the characters are modeled as data, their proportions and expressions stay consistent. Depending on one’s creativity and direction, it is possible to shorten a project’s schedule, and entire scenes can be completed this way. Nonetheless, you will often see a difference in quality between the work of a seasoned animator and that of an inexperienced 3DCG animator.
“The Snack World” Challenge
In the spring of 2017, a challenging new animation series went on the air: a full 3DCG work called “The Snack World”, broadcast on TV Tokyo on Thursdays at 19:25, with half-hour episodes running for a year. Its challenge is that, while it is full animation (*1), its dynamic expression borrows the tricks of limited animation (*1). In March 2018, it won the VFX-JAPAN AWARD 2018 in the television animation CG category. The silky movement of individual hairs sits alongside speech, gestures, and actions whose movement looks limited, as though frames had been cut. Over the course of the year-long broadcast, the animators smoothed out the coarseness of the substantially limited movement seen in the pilot episode and brought that limited style into the motion of full animation, building a production framework that balances the two.
I was given the opportunity to participate in the production of several episodes (*2) and gained a great deal of experience. Animation has a sense of timing and image-making that is distinct from live-action shooting, and I think that sense can breathe new life into live-action work.
*1- In full animation at 24fps, every frame is in motion. Limited animation also plays back at 24fps, but it looks as though only 12-18 of those frames are distinct; that is, it appears as though frames have been cut. What makes limited animation so convenient is that you can reduce the number of hand-drawn frames compared to full animation, and the displacement between frames opens up possibilities for dynamic expression.
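The relationship between drawings and playback frames described in this footnote can be sketched in a few lines of code. This is only an illustration of the counting, not any studio's actual tooling; the terms "on twos" and "on threes" are the standard names for holding each drawing for 2 or 3 frames.

```python
# Sketch: how limited animation reuses drawings within a fixed 24fps timeline.
# "Shooting on twos" holds each drawing for 2 frames (12 drawings/sec);
# "on threes" holds each drawing for 3 frames (8 drawings/sec).

def expand_to_frames(drawings, hold):
    """Repeat each drawing `hold` times to fill out the 24fps timeline."""
    return [d for d in drawings for _ in range(hold)]

one_second_on_twos = expand_to_frames(range(12), hold=2)
one_second_on_threes = expand_to_frames(range(8), hold=3)

assert len(one_second_on_twos) == 24    # still plays back at 24fps
assert len(one_second_on_threes) == 24  # but with only 8 distinct drawings
```

Either way the projector runs at 24fps; what changes is how many distinct drawings the animator must produce per second.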
*2- (story 2, story 5, story 8, story 11, story 14, story 17, story 20, story 26, story 31, story 41)
Key Frame and High Frame Rate
In animation, in addition to image expression, there are a few other important elements: the “key frame” and the “frame rate” of video playback.
Key frames are the poses (silhouettes) best suited to communicating a character’s emotional and physical state within a scene, marking the extremes between which the in-between poses fall. For the in-betweens, you pick up the lines of the key frames and create the frames that go between the key poses.
This method is similar to how MPEG video works: you create the key frames and then prepare the pictures that go between them. If the interval between key frames is too wide, it becomes hard to work out what happens in between. The same is true in animation: when there are too few key frames, the in-betweens become confusing, the lines of the key frames can’t be picked up cleanly, and the resulting drawings turn out vague.
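The principle of filling frames between key poses can be sketched as simple interpolation. Real in-betweens are drawn by hand, not interpolated numerically, so this is only an analogy; the joint names and angle values below are invented for illustration.

```python
# Sketch of in-betweening: given two key poses, generate intermediate
# poses by linear interpolation between them. Hand-drawn in-betweens do
# far more than lerp, but the idea of filling frames between keys is the same.

def inbetween(key_a, key_b, steps):
    """Return `steps` poses from key_a toward key_b (exclusive of key_b)."""
    return [
        {joint: a + (key_b[joint] - a) * i / steps for joint, a in key_a.items()}
        for i in range(steps)
    ]

# Two hypothetical key poses: an arm raising from 0 to 90 degrees.
key_a = {"shoulder": 0.0}
key_b = {"shoulder": 90.0}
frames = inbetween(key_a, key_b, steps=3) + [key_b]
print([f["shoulder"] for f in frames])  # [0.0, 30.0, 60.0, 90.0]
```

If the two keys were much further apart (say, a full windmill of the arm), the interpolation could no longer tell which way the arm travels; that is the "interval too wide" problem described above.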
In Japan in particular, the number of frames depends on the animator, and in order to maintain consistency in character and quality of the animation, the key frame is extremely important.
A lot of people work on key frames and supervise the quality of key frames, but the Animation Director is responsible for the overall quality of the finished product. It takes someone with considerable experience and technical ability to hold that position.
Key frames are artwork, but you want to be able to read a character’s situation instantly from the silhouette alone. Not a type but an archetype: the kind of clarity that says not “kind of like this” but “this.” For that reason, the poses are often deformed (exaggerated).
But here you have to bring out your creativity as an artist, otherwise you risk creating a stereotype, and you run into the danger of having a situation where someone says, “Haven’t I seen this before somewhere?”
Expressions are now recognized as a type of formal aesthetics. For example: “A person raises his chin, and you can’t see his eyes through his glasses because of the reflection coming off the lens. The light reflecting off the lens implies a strong sense of determination.” We should use these expressions while continuing to seek out new possibilities.
This is also connected to live-action performance.
When the performance is not sufficient, many times you won’t be able to express that key frame’s significant moment in the work very well.
However, unlike with art, if the expression becomes too representative it can be accused of being “cheesy” – what is ideal is if you can find a way of expressing something that is so subtle that you can barely discern the movement – whether it’s a split second or an eyeline movement. Someone who is really talented will be able to express nearly everything in one second without really moving.
“Tame (HFR)” and “Tsume (non-HFR)”
This is the magic by which dynamism appears: increasing or decreasing the number of frames per second. When you have a scene of throwing a shot put, for example, you might use 6 frames per second from the start of the motion up to the throw, and then drop to 3 frames per second for the shots of the release.
A high frame rate uses many frames to provide more detail; when you want the movement to appear faster, you use a lower frame rate. Limited animation can showcase this contrast more effectively than full animation, and it is an important element for creating a clear distinction.
The current animation process
Even those engaged in the image production business rarely have opportunities to familiarize themselves with the animation production process unless they are specialists. Let’s take a look at the current animation creation process. Depending on the production style, things won’t always work exactly this way, but we will provide a broad outline.
When the script is ready, the director and the storyboard artist use it as the foundation for the storyboard. The backgrounds are illustrated to match the storyboard. In the case of 2D, camera angles and character positions are determined in minute detail at this stage, and the layout is then confirmed. This is to make sure everything matches the background scenery; otherwise, time is wasted on later changes.
In the case of 3D, we build the set and its large props as assets, and develop the small items and artwork as props. Alongside that, we begin preparing the character animation.
Starting from the character designs, the lines that express the characters must be organized so that the Animation Director and the animators can work together smoothly. Even if the original looks good as a still image, its complicated lines and data get in the way of creating a sense of motion, so we make adjustments and omissions to get it to work.
The 3D model is created (modeling), and then the rig (the skeleton, plus the collection of scripts that drive the surrounding polygons) is built in (rigging / setup). By operating the controller incorporated through those scripts, you can manipulate the characters with greater ease.
If you take the time to go through these processes carefully before all of the animators start work, you can run more trials to improve efficiency and quality during the animation stage. It is the same as in live-action: the preparation is important.
During the early stages, we shoot the storyboard onto a timeline, just as the animation itself will have a timeline, and in some cases we prescore the voice actors’ performances ahead of time. This is especially common in 3DCG, so that lip sync can be captured in the 3DCG software.
For 3DCG, once the models, props, and set assets have been assembled, the layout can begin. You lay things out following the vision presented in the storyboard. At this stage, one important element is the focal length of the lens (mm).
In CG, distortion of the subject is more noticeable than it is in live-action. In 2D, even in a very wide shot, you can draw the distances between the nose, eyes, and eyebrows as though they were proportional, as if shot with a telephoto lens. But since you are shooting a 3D character model, a wide-angle lens broadens the distance between the eyes and rounds out the cheeks, and the character’s look on screen changes noticeably.
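The distortion described here falls out of the pinhole camera model: to frame a face at the same on-screen size, a wide lens must move much closer, which exaggerates the relative depth of the nose. The dimensions below are invented for illustration; only the geometry is standard.

```python
# Sketch of why wide lenses distort a face, using a simple pinhole projection.
# The face is framed to the same on-screen width with every lens, so a short
# focal length forces a short camera distance. (Dimensions are illustrative, mm.)

FACE_WIDTH = 160.0    # width of the face plane
NOSE_DEPTH = 25.0     # the nose tip sits this much closer to the camera
ON_SCREEN_SIZE = 30.0 # desired projected width of the face

def nose_magnification(focal_len):
    """How much larger the nose tip projects relative to the face plane."""
    # Camera distance that frames the face at ON_SCREEN_SIZE:
    dist = focal_len * FACE_WIDTH / ON_SCREEN_SIZE
    # Pinhole projection scales as 1/distance, so the closer nose tip
    # is magnified by this ratio relative to the face plane:
    return dist / (dist - NOSE_DEPTH)

for f in (24, 50, 135):
    print(f"{f}mm lens: nose projects {nose_magnification(f):.2f}x larger")
```

With the numbers above, the 24mm lens magnifies the nose by roughly 24% relative to the rest of the face, while the 135mm lens flattens that to about 4%, which is the proportional, "telephoto" look 2D drawings imitate by default.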
The animators continue to add more and more animation.
Rendering occurs after the layout check, and the result is passed to the animator, who checks it. When a performance doesn’t turn out as expected, the animator provides detailed instructions and the shot is re-checked. Animators must be actors; they also give direction and solidify the performances. That is the job of staging.
We add the completed shots to the storyboard (animatics) timeline, cut the sections that don’t work, and trim the length to fit the broadcast slot. In the 2D process this isn’t done at this stage; it happens before the additional dialogue recording (ADR). It’s something like an offline edit.
Finishing (2D color specification and completion)
We then add lighting to the finished animation and layout, and proceed with rendering. In live-action terms, this is the shooting stage.
This operation is extremely time-consuming. With so many polygons, a single frame can take anywhere from 8 hours to over a day. So for a single computer to complete 1 second of footage would take over 24 days? …But it’s okay. We have lots of computers lined up in the render farm, and with 24 machines working, we can finish 1 second’s worth in a single day.
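The render-farm arithmetic above can be written out as a quick back-of-the-envelope check. It assumes 24fps playback and, pessimistically, a full day of render time per frame, as in the text.

```python
# Back-of-the-envelope render-farm math from the text:
# 24 frames per second of footage, roughly one day of rendering per frame.

FPS = 24
DAYS_PER_FRAME = 1.0  # "8 hours to over a day"; take the pessimistic case

def render_days(seconds_of_footage, machines):
    """Wall-clock days to render, assuming frames split evenly across machines."""
    frames = seconds_of_footage * FPS
    return frames * DAYS_PER_FRAME / machines

assert render_days(1, machines=1) == 24.0   # one machine: 24 days per second
assert render_days(1, machines=24) == 1.0   # a 24-machine farm: 1 day
```

The same arithmetic explains why late changes are so painful: re-rendering even a short scene reclaims the whole farm for days.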
So this is why we try not to make any significant changes past this point. (But sometimes we have to!)
Effects (Special effects / Composition)
What we can’t express with 3DCG, we handle with optical effects or a touch of layered processing. This is where 2.5D comes in, and where live-action know-how takes the stage.
For 3DCG, all that is left is to add the sound work and final mix (MA); in the case of 2D, the ADR follows.
We took a quick look at the animation process, and I hope it gave you a sense of how animation takes the best aspects of live-action as well as 2D methodology. I think that we can uncover new potential by looking at the expression and methodology of live-action and animation.
Though it is an older example now, the movie “The Matrix” famously embodies this at a high level. They relied not on flashy CG but on staging, and I think their approach to frame rate and key frames is worth a look. It’s good to try both of these approaches.
©LEVEL-5 / The Snack World Project & TV Tokyo