Experienced Points
Just How Does the Oculus Rift Work?

Shamus Young | 23 Sep 2014 15:00

And it gets worse, because the lenses don't bend all wavelengths of light equally. If you've ever used a magnifying glass to focus sunlight, you'll notice that the edge of the spot of light (assuming you're not focusing the sunlight down to a pinpoint) has a kind of rainbow fringe, because different wavelengths of light end up bent at different angles by the lens. This is called chromatic aberration. So that shader we were using to "un-pinch" the image must un-pinch different colors in different shapes, or the viewer will see goofy color distortions. Again, this takes more computing power, which makes it even harder to hit our frame rate targets, etc etc.
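
To make that concrete, here's a rough sketch of what "un-pinch each color differently" means. It's written in plain Python for readability rather than as actual shader code, and the k1/k2 coefficients are made up for illustration; the real values come from measuring the actual lenses. The idea is just that every pixel gets pushed away from the lens center, and each color channel gets pushed by a slightly different amount.

```python
# Sketch of the "un-pinch" with a different amount of correction per color.
# Not real shader code; the coefficients are invented for illustration.

def distort(u, v, k1, k2):
    """Push one texture coordinate (u, v) outward from the lens center at (0, 0)."""
    r2 = u * u + v * v                      # squared distance from the lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # push harder the farther out you are
    return u * scale, v * scale

def chromatic_lookup(u, v):
    """Figure out where to sample red, green and blue on the rendered image.
    Each channel uses slightly different coefficients, so the lens's color
    fringing cancels out when the image is viewed through the lens."""
    red_uv   = distort(u, v, k1=0.22, k2=0.24)
    green_uv = distort(u, v, k1=0.20, k2=0.21)
    blue_uv  = distort(u, v, k1=0.18, k2=0.19)
    return red_uv, green_uv, blue_uv

print(chromatic_lookup(0.3, 0.4))  # three nearby, but not identical, sample points
```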

So now the user has a convincing 3D image. We've successfully fooled the eyes. Now we have to deal with the vestibular system.

If you're wearing screens over your eyes and your brain is convinced by what you're seeing, then when you turn your head your brain will expect your view to turn as well. If it doesn't, the result is very uncomfortable. At best, the illusion is ruined. More likely, you'll begin to feel some degree of VR sickness.

So we add some gyroscopes to keep track of how the device (and thus your head) is oriented. But gyroscopes don't really tell you which way you're pointed. All they tell you is how fast you're turning. So we take these gyro readings and add them up over time to make a solid guess about which way the head is pointed. That estimate will "drift" over time, slowly building up errors.
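
To see drift in action, here's a toy Python sketch. The sensor noise and bias numbers are invented, but the mechanism is the real one: add up thousands of slightly-wrong turn measurements and the estimate slowly wanders away from the truth.

```python
import random

# Toy demonstration of gyro-only drift. The gyro reports how fast the head
# is turning, many times per second; we add those reports up to estimate
# where the head is pointed. Each reading carries a tiny (made-up) error,
# and those errors never cancel out perfectly.

true_heading = 0.0       # degrees, where the head is actually pointed
estimated_heading = 0.0  # degrees, where we *think* it's pointed

dt = 0.001  # 1000 gyro samples per second
for step in range(60_000):  # one minute of tracking
    true_rate = 30.0 if (step // 2000) % 2 == 0 else -30.0       # head swings back and forth
    measured_rate = true_rate + random.gauss(0.0, 0.5) + 0.05    # noise plus a tiny bias

    true_heading += true_rate * dt
    estimated_heading += measured_rate * dt   # integrate: add up all the little turns

print(f"true heading:      {true_heading:7.2f} degrees")
print(f"estimated heading: {estimated_heading:7.2f} degrees")
print(f"drift after one minute: {estimated_heading - true_heading:.2f} degrees")
```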

To picture how this works: Imagine that I tell you to close your eyes and turn ninety degrees left. Then I tell you to turn around 180 degrees. Then right 90 degrees. Those turns are easy and you'll probably have a pretty good picture in your head of which way you're facing. But the longer you do this, the more the little errors in your movements will add up and pretty soon you'll have no idea which way your body is really facing. It works the same way for the Rift. It senses all the little turns, but the data is imperfect and over time it loses its place.

We can add accelerometers to correct for your side-to-side head tilt, and a magnetometer to correct your heading, which (mostly) solves the drift problem and gives the Oculus a stable way to keep track of which way you're looking. It's not perfect, but it's good enough to fool your eyes. So now when you look to the left, the simulation can adjust the camera to show things off to the left. Suddenly the simulation is that much more convincing to the brain.
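
Here's one common way that kind of correction can work, sketched in Python. This is a generic "complementary filter," not Oculus's actual sensor-fusion algorithm, and the numbers are made up, but it shows how the slow-but-steady accelerometer can keep the fast-but-drifty gyro honest. The same trick, with the magnetometer standing in for gravity, fixes heading.

```python
# Simplified, generic sensor fusion (not Oculus's actual algorithm).
# Trust the gyro for fast, short-term changes, but constantly nudge the
# estimate toward the tilt implied by the accelerometer (which always
# knows which way gravity points), so gyro drift can't build up forever.

def fuse_tilt(previous_tilt, gyro_rate, accel_tilt, dt, blend=0.98):
    """One update of a complementary filter for a single tilt axis (degrees)."""
    gyro_estimate = previous_tilt + gyro_rate * dt   # fast but drifts
    # Keep 98% of the gyro's answer, pull 2% toward the accelerometer's answer;
    # over many updates per second, the drift gets washed out.
    return blend * gyro_estimate + (1.0 - blend) * accel_tilt

# Example: head held still, but the gyro has a small bias of 0.05 degrees/sec.
tilt = 0.0
for _ in range(75 * 10):  # ten seconds at 75 updates per second
    tilt = fuse_tilt(tilt, gyro_rate=0.05, accel_tilt=0.0, dt=1.0 / 75.0)
print(f"tilt estimate after 10 seconds: {tilt:.4f} degrees")  # stays near zero instead of drifting
```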

However, we have a new challenge: We need speed. In the real world your eyes and head are always moving together. In VR, the gyroscope has to take a measurement, then send its data up through the operating system and driver layers to the application. The application has to crunch the numbers and figure out what the new head orientation is, and then you'll see your head-turn reflected the next time a frame is drawn. Smartphones have had gyros and accelerometers for years, but they were never built for speed. If you're turning your phone over in your hand, it's totally fine if it takes the device half a second (or even a couple of seconds) to figure out what's going on. So these sensors and their drivers were never designed with low latency in mind. Now we need to go over all of these devices and drivers and cut out all the half-assed bottlenecks that were there because speed didn't matter until now.
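
To get a feel for why every millisecond matters, here's some back-of-the-envelope arithmetic. Every number below is an illustrative assumption, not a measured figure, but it shows how little room there is for sloppy drivers when the whole trip from head-turn to pixels has to fit in a few dozen milliseconds.

```python
# Rough motion-to-photon latency budget. All stage times are illustrative
# assumptions; the point is that at 75 frames per second a single frame is
# only ~13 ms, so every extra delay in the pipeline is painfully visible.

stages_ms = {
    "sensor sampling & USB transfer":  2.0,
    "OS / driver hand-off":            2.0,
    "game reads pose, renders frame": 13.3,   # one frame at 75 Hz
    "display scan-out":                5.0,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:32s} {ms:5.1f} ms")
print(f"{'total motion-to-photon':32s} {total:5.1f} ms")
```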
