What is “stitching” in Virtual Reality?
One of the key aspects of achieving full immersion in virtual reality, whether in a game, an eLearning module, or even a simple 360 image, is giving the user an environment where they can turn around and see different things to interact with. This might sound simple, but there is a lot involved in making it happen.
Here we are going to focus on the process of generating the images that cover every point of view in the virtual environment, and more specifically on the process of stitching them.
Let’s begin by understanding how the images are captured with a 360 camera, like the GoPro Max we used to capture the footage for the virtual reality eLearning module we are creating for ACCES Employment:
(Image source: https://www.gearbooker.com/en/rent-gopro-action-cameras-livestreaming-cameras-camera-gopro-max-360-in-paris-7487-l)
As we can see in the picture above, the camera uses two fisheye lenses that together cover approximately 95% of its surroundings, leaving out only the camera’s own body and whatever is directly above or below it. However, having two lenses working doesn’t mean we get the 360 image right away; what we actually get is something more like this:
(Image source: https://digital-photography-school.com/introduction-taking-360-degree-photos/)
As we can see, the initial result is a composition of two hemispherical images that can now be processed to generate what is called an “equirectangular” image.
And here is where the process of stitching comes in. Without getting too technical: through a series of mathematical calculations and image-processing techniques, the computer deforms the two hemispherical images, “spreads” their visible portions out flat, and stitches them together along the edges. Think of it as working with Play-Doh: if you have two spheres of Play-Doh and want to cover a flat surface, you spread them out and blend them into each other. Another way to picture the process is a world globe: the end goal is to have all the meridians and parallels laid flat on a single surface. The result is one single image that covers the entire environment, although with a characteristic deformation:
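To make the “spreading” a bit more concrete, here is a minimal sketch (in Python with NumPy) of the core geometric step: for every pixel of the target equirectangular image, we work out which fisheye lens sees that direction and where to sample inside that lens’s image. The equidistant lens model, the 190° field of view, and the function name are assumptions for illustration only, not GoPro’s actual algorithm; real stitchers also blend the seam and correct for per-camera lens calibration.

```python
import numpy as np

def equirect_to_fisheye_maps(out_w, out_h, fisheye_size, fov_deg=190.0):
    """For each pixel of the target equirectangular image, compute which
    fisheye lens (front or back) covers it and the (x, y) position to
    sample inside that fisheye image. Assumes an equidistant fisheye
    model (r = f * theta) -- an illustrative simplification."""
    # Longitude/latitude of every output pixel
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi   # top to bottom
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vector for each pixel (x forward, y left, z up)
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)

    front = x >= 0                    # which lens sees this direction
    ax = np.where(front, x, -x)       # flip the back hemisphere forward

    # Angle from the lens axis, then equidistant projection r = f * theta
    theta = np.arccos(np.clip(ax, -1.0, 1.0))
    f = (fisheye_size / 2) / np.radians(fov_deg / 2)
    r = f * theta
    phi = np.arctan2(z, np.where(front, y, -y))

    cx = cy = fisheye_size / 2        # fisheye image center
    map_x = cx + r * np.cos(phi)
    map_y = cy - r * np.sin(phi)
    return front, map_x, map_y
```

The resulting lookup maps could then be fed to something like OpenCV’s `cv2.remap`, once per lens, to resample the two fisheye halves into a single equirectangular frame.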
(Image source: https://www.flickr.com/groups/equirectangular/)
Once we have the result, we can proceed to use programming tools or authoring software to generate the actual VR interaction, which will basically “map” the equirectangular image onto a sphere around the viewer.
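That “mapping onto a sphere” is essentially the reverse lookup: for whatever direction the user is looking, find the matching spot in the equirectangular image. A minimal sketch of that lookup follows; the coordinate convention (x forward, z up) and the function name are assumptions for illustration.

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Convert a unit view direction (x forward, y left, z up) into
    (u, v) texture coordinates in [0, 1] on an equirectangular image."""
    lon = math.atan2(y, x)                       # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, z)))      # latitude in [-pi/2, pi/2]
    u = (lon + math.pi) / (2 * math.pi)          # wrap longitude across the width
    v = (math.pi / 2 - lat) / math.pi            # v = 0 at the top of the image
    return u, v
```

Game engines and VR authoring tools do this per pixel on the GPU; looking straight ahead at the horizon, for example, samples the very center of the image, (u, v) = (0.5, 0.5).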