Shibuya Crossing
Stats
Software Used:
Project Length: 3 Weeks
Render: Mantra
Size: 720 x 480
Render Time: 4.5 minutes per frame
Number of Agents: 1500
The Concept
source: Tom Page
Located in Tokyo, Japan, Shibuya Crossing is the busiest pedestrian crossing in the world. At its peak, up to 3,000 pedestrians cross during the two-minute crossing cycle.
Research
Shibuya Live Stream
The best way to understand crowd behavior is to study it. For this project, I was fortunate enough to have an ongoing live stream to watch as I worked. I also used it to focus in on general crowd behaviors.
The images below are screenshots from the Shibuya Live Stream. The stream may not play on my site; click here to go to the YouTube page.
Motion Capture
The actor suited for motion capture was David Pressler. Also behind the scenes were Erin O'neal and Ali Nikkhouy.
The three main cycles I focused on were a walk, a jog, and a rest; these are the bare minimum cycles needed to recreate the crossing. We captured data using Vicon Blade with a 12-camera stage, and we recorded the same action several times in case of tracking issues. I had our actor, David, walk through the volume instead of walking on a treadmill.
Character Creation
Meet Dale! I created him using Adobe Fuse and created his rig through Mixamo. He will be our main character for this project.
Fuse is a simple yet powerful 3D character creator. It connects with mixamo.com, where the character goes through an auto-rigger that can even rig the fingers. Mixamo also has motion capture animations that can be baked onto the Fuse character, but since we had to create our own animations for this project, I did not use any of their data.
Motion Builder and Baking Animation
I took the best data that was captured and exported FBX files of the skeleton from Blade. I then used MotionBuilder for cleanup and for baking the motion capture animation onto Dale's rig.
Character Setup
Environment Setup
Terrain
For creating the terrain, I used Google Maps to take a top-down screenshot of Shibuya Crossing. To make the image easier to read, I traced simple shapes outlining the street and buildings in Photoshop. From there, I brought the maps into Houdini and extruded curves to form various parts of the city.
Obstacles
Obstacles are a hidden force used to help guide the crowd. I created them from the buildings (in blue) and also created a wall in the middle of the street (in orange).
It is very important that the obstacles sink below the ground. I found that with obstacles sitting on top of the grid, agents may not recognize them and will walk straight through them.
Crossing Light
The stop light is the core of controlling the crowd: it signals the agents to either stop or go. A Switch node changes the light after a set number of frames, toggling between 0 and 1, with 0 being the green material and 1 being the red material. The expression behind it is: floor(1-sin($FF*0.8)/2). Currently, the light switches just under every 4 seconds; lowering the 0.8 increases the length of each light.
Using this information, I created several attributes that reference the toggle.
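To see how the toggle behaves over time, here is a minimal Python sketch of that expression (the `light_state` name and `speed` parameter are mine; note that HScript's sin() works in degrees, which the sketch assumes):

```python
import math

def light_state(frame, speed=0.8):
    """Return 0 (green material) or 1 (red material) for a frame.

    Mirrors the HScript expression floor(1 - sin($FF * 0.8) / 2).
    HScript's sin() takes degrees, so we convert before evaluating.
    """
    return math.floor(1 - math.sin(math.radians(frame * speed)) / 2)
```

Because sin() swings between -1 and 1, the inner value stays between 0.5 and 1.5, so the floor can only ever be 0 or 1, and a smaller speed value stretches each half of the cycle out over more frames.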
So let's define some boundaries!
Behind the scenes, I created the stop light's bounds. Anything inside them is influenced by the light's color; think of this space as a way for the agents to see whether the light wants them to stop or go. There are three to keep up with: Sidewalk Bounds, Street Bounds, and Street Inner Bounds.
Street Bounds
Street Bounds are for agents on the sidewalk to read whether the light changes.
When the light is green (attribute value 0), the agents see that they can safely start to cross the street. When the light is red (attribute value 1), the agents see that it is not safe to cross and stop at the edge of the sidewalk.
Street Inner Bounds
Inner Bounds are for agents in the crosswalk to read whether the light changes.
When the light is green (attribute value 0), the agents calmly continue to walk across the street. When the light is red (attribute value 1), the agents realize they need to finish crossing and start jogging.
Sidewalk Bounds
The Sidewalk Bounds are for agents that started jogging across the street and have made it back onto the sidewalk.
When the light is red (attribute value 1), these agents realize they are back on the sidewalk, stop jogging, and start walking.
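Taken together, the three bound regions act like a small decision table: a zone plus the light value determines what an agent should do. A hypothetical Python sketch of that logic (the zone names and `agent_action` function are my illustration, not the actual Houdini trigger network):

```python
def agent_action(zone, light):
    """Decide an agent's behavior from its bound region and the light.

    light: 0 = green, 1 = red, matching the stop light's toggle attribute.
    """
    if zone == "street":         # Street Bounds: waiting at the sidewalk edge
        return "walk" if light == 0 else "stop"
    if zone == "street_inner":   # Street Inner Bounds: already in the crosswalk
        return "walk" if light == 0 else "jog"
    if zone == "sidewalk":       # Sidewalk Bounds: made it back across
        return "walk"
    return "walk"                # outside all bounds: keep walking
```

In the actual scene, the same decisions are made by Crowd Trigger nodes reading the bounds attributes rather than a single function, but the mapping is the same.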
Houdini Crowd Mind
Crowd Goal
I used a POP Steer Seek node to control where the agents were going. By setting the 'Attraction Type' to 'Points', the 'Match Method' to 'Point per Particle', and checking on 'Particle ID' and 'Goal ID', each agent randomly chooses a point to walk toward. I created the points using the same network I used to create the spawn, adding a Scatter node and setting the number of points close to the number of agents. This gave each agent its own unique goal. Originally, I used the POP Steer Seek node with the 'Position' attraction type, but I abandoned that method because it sent all the agents toward one defined point. What I did learn from it was that some agents hover and stay at the point, while others reach the point and continue past it.
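The point-per-particle matching can be pictured as a simple random assignment. A sketch, assuming the `assign_goals` name and dictionary representation (both mine, not Houdini's API):

```python
import random

def assign_goals(num_agents, num_goal_points, seed=0):
    """Give each agent id a random goal point id.

    Roughly mirrors 'Point per Particle' matching with Particle ID
    and Goal ID enabled: every agent steers toward its own target
    instead of all agents converging on a single position.
    """
    rng = random.Random(seed)
    return {agent_id: rng.randrange(num_goal_points)
            for agent_id in range(num_agents)}
```

With the scattered point count close to the agent count, most agents end up with a goal few others share, which is what spreads the crowd out across the crossing.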
The Brain
Crowd Transitions control how the crowd will move. By carefully configuring the Crowd Trigger nodes, the agents will mostly adhere to the commands.
For grasping the concepts of a stop-and-go crowd, I referenced the Houdini Crowd Street Example, which helped me understand the attributes involved. As helpful as this reference was, it was created in Houdini 15.5, and there were several changes to the crowds tool in Houdini 16 (the version I used).
Constant Factors
The 'Current State' trigger checks whether the agent has been in its current state for at least 0.1 seconds.
There are triggers for each of the various boundaries. Each one continuously checks that the agent is behaving correctly with respect to its boundary.
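The time-in-state check is simple but important. A minimal sketch of the idea, assuming my own `current_state_trigger` name:

```python
def current_state_trigger(time_in_state, min_time=0.1):
    """Fire only once the agent has held its current state long enough.

    The small minimum time keeps agents from flip-flopping between
    states on consecutive frames when they sit right on a boundary.
    """
    return time_in_state >= min_time
```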
Material Stylesheets
As you may or may not notice when watching my crowd, the agents start off very colorful and then, about halfway through, they all turn blue. This is because after I called "Final" on the project, I went back into the file and tested material stylesheets, and I only had time to cache and re-render about half of the frames.
The first step to defining any material is to make shaders. I created 7 Principled Shaders, each with a different color. There was no color scheme to them; I just wanted a clear variety of colors.
Open the Data Tree tab and pick Material Style Sheets as the viewer. Drop down the obj level, right-click on crowdsource, and select 'Add Style Sheet Parameter.' Then right-click on the Style Sheet Parameter and select 'Add Style.' Right-click on 'Style' and add a target and an override. Finally, right-click on 'Target' and add a condition. I repeated the steps from 'Add Style' until I had created styles for all 7 shaders.
|
The final setup should look similar to the image on the left, but there are a few more steps. For the 'Condition,' I changed the Type to 'Primitive Group' and set the Value to @color== followed by an integer. In the 'Override,' I changed the Type to 'Set Material' and the Override Type to 'Material,' and I pathed the material in the Override Value.
Quick Tip: Shift + left-click on the plus boxes. This will open all the minimized tabs within a folder.
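The effect of those stylesheet conditions can be sketched in a few lines of Python. The shader paths and function below are hypothetical stand-ins, not the project's actual material paths:

```python
import random

# Hypothetical material paths standing in for the 7 colored shaders.
SHADERS = ["/mat/agent_color_%d" % i for i in range(7)]

def pick_materials(num_agents, seed=7):
    """Give each agent a random integer 'color' attribute and resolve it.

    Each stylesheet condition @color == i then overrides that agent's
    material with SHADERS[i], producing the colorful crowd.
    """
    rng = random.Random(seed)
    return [SHADERS[rng.randrange(len(SHADERS))] for _ in range(num_agents)]
```

Because the color attribute is assigned per agent at the source, the variety survives through the simulation without any per-frame material work.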
And that is how this crowd was made...
The Problems...
Problem 1: Exploding Character
In early stages, I had multiple characters in the same crowd source node, but about 70% of the agents would explode. What I think happened was that the rigs for each character were being applied to the wrong agents, and the characters' weights were breaking the agents. The easiest way around this was to scale back to a single character. Another possible workaround would have been multiple crowd sources, but for that I would need collisions working properly so that the two crowd sources do not phase through one another.
Problem 2: Float or Sink
This was a fun little problem. The agents would start off walking on the ground, and after a light change they learned how to either fly or sink.
I believe this was caused by the crowd only having a 'groundplane' to define the floor. After I added the modeled street as a terrain object, the agents walked properly.
Further Exploration
More Animation Cycles
I had several more motion capture actions I intended to use, from standing in place while texting to walking while texting. After presenting the final version above to the class, I learned how to successfully implement a variety of states: within the 'crowd_sim' DOP Network, using a Crowd State node, I needed to add more clips in the 'Clip Selection' section. The drawback of this method is the inability to adjust the Gait Speed of each clip separately.
|
With More Time...
Longer Stop Light Time
I want the first wave of agents to finish crossing the street before the next wave starts. I also want to polish the stopping signal so that the crowd still on the sidewalk gets a bit of a warning to stop. Due to time constraints and the number of frames, I focused on the crowd behavior versus a final, more realistic polish.
Remember to keep OBJ Level Clean
Original PDF
Here is the original breakdown PDF. Be warned that it contains several spelling mistakes and other grammatical errors; please refer to this webpage for corrected information.