Walls are a core mechanic of OhShape's VR gameplay. We want them to feel organic, accurate, and easy to read. In today's devlog we show you our workflow, from a choreography performed by professionals (thanks to the amazing dancers from ON Dance Studios Sevilla) to 3D models integrated in Unity, the game engine we are using.
1. First of all, we take a screenshot of each pose from the live footage recorded with our choreographers. A script then overlays a shadow on the dancer; this shadow is what we use to set the coordinates of the headset and the controllers.
2. Once the shadow is adjusted to the pose, we simplify the silhouette to make it perfectly readable as a basic human form. It may not match the dancer's silhouette exactly, but we need a clean figure that can be read as quickly as an icon.
3. We merge the screenshot of the dancer and the silhouette into a single picture. Then we model the silhouettes in transparency mode so we can see through the design, modelling the halves separately so they can be mixed into different combinations.
4. Next we create the UV maps for texturing and divide the silhouette into identifiers (IDs) so that each body part can be tracked precisely. It is also important to check the pivots, flip the normals and create the material channels so the models are perfectly organized before exporting them to Unity.
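To illustrate the idea behind the per-body-part IDs, here is a minimal Python sketch of mapping each part of the silhouette to an ID and a material channel name. The part names, ID values and naming scheme are our own illustrative assumptions, not OhShape's actual data.

```python
# Illustrative sketch: give each body part of the silhouette an ID so the
# game can tell which part the player matched. IDs and names are assumptions.
from enum import IntEnum

class BodyPartID(IntEnum):
    HEAD = 1
    TORSO = 2
    LEFT_ARM = 3
    RIGHT_ARM = 4
    LEFT_LEG = 5
    RIGHT_LEG = 6

def material_channel(part: BodyPartID) -> str:
    """Derive a material channel name from a body-part ID (hypothetical scheme)."""
    return f"wall_part_{part.value:02d}_{part.name.lower()}"

print(material_channel(BodyPartID.LEFT_ARM))  # wall_part_03_left_arm
```

A consistent ID-to-channel mapping like this keeps the exported models organized and lets the game look up body parts by number at runtime.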
5. Finally, we export the model and materials to Unity. We check that the graphics look right, and then we place the sensors in the wall that will determine whether the player succeeds. Placing the sensors takes some testing to get accurate detection. Once we are happy with them, the model is exported with all this information and is ready to be used in level design.
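As a rough sketch of what those sensors could check: when the wall reaches the player, each tracked point (headset and controllers) should lie within a tolerance of its matching hole in the silhouette. This Python sketch is a simplified 2D assumption of that logic; the actual game is implemented in Unity, and all positions, radii and the sensor layout here are made up for illustration.

```python
# Hypothetical sensor check: every tracked point must fall inside the
# tolerance radius of its matching sensor for the wall to count as passed.
from dataclasses import dataclass
import math

@dataclass
class Sensor:
    x: float
    y: float
    radius: float  # tolerance around the sensor centre, in metres

def inside(sensor: Sensor, px: float, py: float) -> bool:
    """True if the tracked point (px, py) is within the sensor's radius."""
    return math.hypot(px - sensor.x, py - sensor.y) <= sensor.radius

def wall_passed(sensors, tracked_points) -> bool:
    """Success only if every tracked point hits its corresponding sensor."""
    return all(inside(s, px, py) for s, (px, py) in zip(sensors, tracked_points))

# Example layout: head hole plus two hand holes (illustrative values).
sensors = [Sensor(0.0, 1.6, 0.15),   # head
           Sensor(-0.5, 1.2, 0.12),  # left hand
           Sensor(0.5, 1.2, 0.12)]   # right hand
points = [(0.02, 1.58), (-0.48, 1.25), (0.55, 1.18)]
print(wall_passed(sensors, points))  # True
```

Tuning the tolerance radii is where the testing mentioned above comes in: too tight and correct poses get rejected, too loose and sloppy poses pass.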