VR (Virtual Reality) is a computer technology that constructs a virtual world. Wearing VR devices, participants experience sensations close to those of reality.
When you put on a special helmet and gloves, you may find yourself standing in a history museum. As you walk forward or turn your head, the scene changes accordingly. You can cross the hall and open the front door, and when you spot an exquisite exhibit, you can examine it closely from every angle, inside and out.
VR technology uses computer simulation to generate a three-dimensional virtual world. It supplies the user with simulated visual, auditory, tactile, and other sensory input, letting the user explore the three-dimensional space freely and in real time, as if immersed in the environment.
VR (Virtual Reality)
When the user changes position, the computer immediately performs complex calculations and returns accurate images of the 3D world, producing a sense of presence. Although the scenes and characters are artificial, the effect draws the user's consciousness into a virtual world.
When people look at the world around them, the two eyes receive slightly different images because of their different positions. This difference lets us perceive depth and see things in three dimensions. VR technology exploits the same visual disparity, presenting a different picture to each eye so that the scene appears three-dimensional. Unlike a 3D movie, VR emphasizes 360-degree panoramic interaction: it offers not only strong immersion and stereoscopy but, more importantly, lets the user interact with the virtual world.
In an artificial environment, every object has a position relative to a coordinate system. The position and orientation of the head determine the scene the user sees, and a virtual reality headset with head tracking can detect how the head moves.
In this way, when we move, our viewpoint in the virtual world moves in the same way. When we look to the left, the headset recognizes the motion and the hardware renders the scene on the left in time, so looking left shows the scene on the left, and looking right shows the scene on the right.
In 1956, filmmaker Morton Heilig invented the Sensorama, a 3D interactive terminal with integrated somatosensory devices: a 3D display, stereo speakers, scent generators, and a vibrating seat. A user sitting in it could experience six short films. It was, however, enormous, resembling a piece of medical equipment, and it never became a mainstream entertainment facility.
In 1961, Philco developed Headsight, the world's first head-mounted display. It combined a CCTV surveillance system with head tracking, but its main purpose was viewing classified footage remotely; it was not entertainment equipment.
The GAF Viewmaster was one of the earliest 3D head-mounted devices. It used two built-in lenses to view slides, producing a modest 3D effect, though it was more a children's toy than professional audio-visual equipment. A later version added audio, giving it simple multimedia capability.
In 1968, computer scientist Ivan Sutherland built the Sword of Damocles, a head-mounted display. It was so complex and its components so heavy that it had to be suspended from a mechanical arm above the user's head.
The EyeTap looks much like Microsoft's HoloLens. Strictly speaking, it is an augmented reality device: connected to a computer and camera, it superimposes data in front of the wearer's eyes. Today the definitions of virtual and augmented reality are clearer, but the EyeTap still holds significance for the development of virtual reality technology.
RB2 was the first commercial VR device, and its design closely resembles today's mainstream products, down to the somatosensory tracking gloves used for input. However, it cost as much as US$50,000, undoubtedly an astronomical price in 1984.
NASA head-mounted display
In 1985, NASA (National Aeronautics and Space Administration) developed a true LCD optical head-mounted display. Its basic structure is widely adopted by today's virtual reality headsets, though the LCD has since been replaced by OLED panels with lower power consumption and better image quality. It also had a head and hand tracking system, allowing a more immersive experience.
The well-known game manufacturer Sega planned to release a virtual reality head-mounted display for its MD game console in 1993. The device looked very avant-garde, but regrettably Sega cancelled the project, citing concerns that the experience was too realistic and might harm players.
In 1995, Nintendo released the 32-bit Virtual Boy, a highly unconventional game console whose main unit was a head-mounted display that could show only red and black. Owing to the technical limits of the time, the games were essentially 2D, and the low resolution and refresh rate easily made users dizzy and uncomfortable. Nintendo's virtual reality gaming plan failed within a year.
In 1992, researchers at the University of Illinois developed the CAVE virtual reality system. By building a multi-wall projection space paired with stereoscopic LCD shutter glasses, it achieved an immersive experience and greatly advanced modern virtual reality technology.
There is no doubt that the Oculus Rift revived VR technology and brought it back before the public. In 2012, founder Palmer Luckey launched a crowdfunding campaign on Kickstarter that quickly drew nearly ten thousand backers and wide attention. Third-party funding then continued to pour in, letting the Oculus Rift develop rapidly.
In 2014, the social-media giant Facebook acquired Oculus for US$2 billion. After several DK (development kit) versions, the consumer Oculus Rift opened pre-orders in January 2016 and shipped to more than 20 countries and regions in March of that year. At that point, virtual reality truly entered the consumer electronics market.
From science fiction to reality, from the 1950s to the present, many have described the development of VR equipment as a process of "turning dreams into reality". Shortcomings and difficulties remain, but we still have reason to expect a bright future for VR. After all, every technology has stumbled along in just this way before gradually maturing.
By comparison, generating graphic images with computer models is not especially difficult. Given a sufficiently accurate model and enough time, we can render accurate images of objects under different lighting conditions; the key constraint is real time. In a flight simulation system, for example, the image must refresh quickly and with high quality, and when the virtual environment is highly complex, the problem becomes quite hard.
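The real-time constraint above can be made concrete with a quick calculation. This is a minimal sketch; the refresh rates are common headset values used purely for illustration.

```python
# Hypothetical illustration of the real-time budget a VR renderer must meet:
# the entire scene must be drawn twice (once per eye) within one frame period.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

# A 90 Hz headset leaves roughly 11.1 ms per frame; 60 Hz leaves about 16.7 ms.
for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

Missing this budget causes dropped frames, which is one direct cause of the dizziness discussed later in the article.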
When a person looks at the surrounding world, the two eyes receive slightly different images because of their different positions. The brain fuses these images into an overall view of the surroundings that includes distance information. Distance can, of course, also be judged by other cues, such as the eye's focal accommodation and the relative sizes of objects.
Binocular stereo vision plays a large role in a VR system. The different images seen by the user's two eyes are generated separately and shown on separate displays. Some systems use a single display: the user wears special glasses so that one eye sees only the odd-numbered frames and the other only the even-numbered frames, and the parallax between the two frame streams creates the three-dimensional effect.
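The parallax described above can be sketched numerically: the same 3D point is projected through two pinhole cameras separated by the distance between the eyes. This is an illustrative model only, not any particular headset's rendering pipeline; the interpupillary distance and focal length are assumed values.

```python
IPD = 0.064     # metres between the eyes (a typical adult average, assumed)
FOCAL = 800.0   # focal length in pixels, an assumed camera parameter

def project_x(point, eye_offset_x):
    """Horizontal image coordinate of a 3D point seen from an eye at x = eye_offset_x."""
    x, y, z = point
    return FOCAL * (x - eye_offset_x) / z

def parallax(point):
    """Horizontal disparity between the left-eye and right-eye images, in pixels."""
    left = project_x(point, -IPD / 2)
    right = project_x(point, +IPD / 2)
    return left - right

# Nearer objects produce larger disparity, which the brain reads as depth.
print(parallax((0.0, 0.0, 0.5)))   # near point: large disparity
print(parallax((0.0, 0.0, 5.0)))   # far point: small disparity
```

The disparity works out to FOCAL × IPD / depth, which is why depth perception from stereo alone weakens for distant objects.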
In traditional computer graphics, the field of view is changed with the mouse or keyboard, so the user's visual system and motion-perception system are decoupled. With head tracking driving the view angle instead, the visual and motion-perception systems are linked together, and the result feels far more realistic.
Users perceive the environment not only through binocular stereo vision but also by moving the head. Therefore, when we move in the real world, our viewpoint moves in the virtual world as well.
When we look to the left, head tracking recognizes the movement and the hardware renders the left-hand scene in real time, so we see the scene on the left; likewise, looking right reveals the scene on the right.
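The head-tracking step can be sketched as a simple yaw rotation applied to the view direction. The coordinate convention (y up, looking along −z) is an assumption for illustration; a real headset tracks full 3- or 6-degree-of-freedom pose, not just yaw.

```python
import math

def rotate_yaw(vec, yaw_rad):
    """Rotate a 3D vector (x, y, z) about the vertical y-axis by yaw_rad radians."""
    x, y, z = vec
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x + s * z, y, -s * x + c * z)

forward = (0.0, 0.0, -1.0)               # looking straight ahead (-z convention)
left_90 = rotate_yaw(forward, math.radians(90))
print(left_90)   # the view vector now points along -x, i.e. toward the left
```

Each frame, the renderer replaces the old view direction with the rotated one, which is what makes the left-hand scene appear when the head turns left.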
In fact, even mobile phone VR boxes have the key hardware technologies above (the phone approximates head tracking with its gyroscope), but eye tracking remains relatively rare in the VR field.
Eye tracking works by tracking our pupils. Its algorithms can adjust the depth of field according to where in the scene we are looking, giving a more convincing sense of immersion.
For example, hold a finger up in front of your eyes. When you look at the finger, the scene behind it blurs; when you look at the background, the finger blurs instead. This is the effect of changing depth of field. Because eye tracking knows where we are looking, it can simulate these depth-of-field changes, making the experience even better.
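The finger example above can be sketched with a toy blur model: objects whose depth differs from the gazed-at depth receive a blur radius that grows with the mismatch. The formula and constants are assumptions for illustration, not how any real eye-tracked renderer computes defocus.

```python
def blur_radius(object_depth_m, gaze_depth_m, strength=2.0):
    """Blur (in pixels) applied to an object, given the depth the eye is focused on."""
    # In focus at the gaze depth; blur grows with the mismatch in inverse depth,
    # so near-range mismatches blur more strongly than far-range ones.
    return strength * abs(1.0 / object_depth_m - 1.0 / gaze_depth_m)

# Looking at a finger 0.3 m away: the background at 5 m blurs noticeably,
# while the finger itself stays perfectly sharp.
print(blur_radius(5.0, 0.3))   # background: large blur
print(blur_radius(0.3, 0.3))   # finger: blur 0, in focus
```

Switching the gaze depth from 0.3 m to 5 m simply swaps which object gets the blur, reproducing the finger/background effect described above.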
Among the most important technologies in the VR field, eye tracking certainly deserves practitioners' close attention. Oculus founder Palmer Luckey once called it the "heart of VR": by detecting where the eyes are pointing, it can deliver the best 3D effect for the current viewing angle, making the headset's image look more natural.
At the same time, because eye tracking reveals the eye's true gaze point, the depth of field at that point on the virtual object can be derived. Many VR practitioners therefore regard eye tracking as an important technological breakthrough for solving the vertigo problem of virtual reality headsets.
People are good at judging the direction of a sound source. In the horizontal plane, we rely on the phase and intensity differences between the ears, because a sound reaches the two ears at slightly different times and strengths.
Ordinary stereo achieves its sense of direction by playing sounds recorded at different positions into the left and right ears. In real life, turning the head changes the perceived direction of a sound, but in many current VR systems the direction of the sound is unrelated to the user's head movement.
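One way to make sound direction follow the head is to recompute the interaural time difference (ITD) from the source angle relative to the current head yaw. The sketch below uses the classic Woodworth spherical-head formula; the head radius and far-field assumption are illustrative, not a description of any specific VR system.

```python
import math

HEAD_RADIUS = 0.0875    # metres, a typical value (assumed)
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(source_azimuth_deg, head_yaw_deg):
    """Woodworth ITD for a distant source; larger values mean a stronger
    left/right cue. The angle is taken relative to where the head points."""
    rel = math.radians(source_azimuth_deg - head_yaw_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(rel) + rel)

# A source 90 degrees to the right produces a large ITD; once the head turns
# to face it (yaw 90), the ITD collapses to zero, just as in real life.
print(itd_seconds(90, 0))
print(itd_seconds(90, 90))
```

Re-evaluating this per audio frame as the tracker reports new yaw values is the essence of head-coupled spatial audio; full systems add interaural level differences and HRTF filtering on top.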
In a VR system, the user may see a virtual cup and try to grab it, but the hand does not actually touch anything; it can even pass through the cup's "surface", which is impossible in real life. A common remedy is to mount small vibrating contacts inside a glove to simulate the sense of touch.
Voice input and output are also very important in VR systems, which requires the virtual environment to understand human language and interact with people in real time. Recognizing human speech is quite difficult for a computer, because speech and natural language are inherently ambiguous and complex.
Gesture tracking can be used for interaction in two ways: optical tracking, or a sensor-equipped data glove worn on the hand.
Optical tracking has a low barrier to use and suits flexible scenarios, since users need not put anything on or take anything off their hands. In the future, it is quite feasible to integrate optical hand tracking directly into standalone mobile VR headsets as the interaction method for mobile scenarios. Its disadvantages are a narrow field of view and the fact that prolonged gesturing is tiring and unintuitive, which good interaction design must compensate for.
Data gloves generally integrate inertial sensors to track the movement of the user's fingers and even the entire arm. Their advantage is that there is no field-of-view limitation, and feedback mechanisms (such as vibration, buttons, and touch surfaces) can be built into the device.
Their drawback is the higher barrier to use: the device must be put on and taken off, and as a peripheral its usage scenarios remain limited, much as a mouse is impractical in many mobile settings.
These problems, however, pose no absolute technical barrier. It is entirely conceivable that the future VR industry will see highly integrated, simplified data gloves as compact as a ring, which users can carry with them and use at any time.
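To illustrate how the inertial sensors mentioned above can track orientation, here is a minimal sketch of a complementary filter, a standard technique that blends fast gyroscope integration with the slower but drift-free tilt reading from an accelerometer. The sensor values, sample rate, and blend factor are all illustrative assumptions, not taken from any real glove.

```python
def complementary_filter(angle_deg, gyro_dps, accel_angle_deg, dt, alpha=0.98):
    """One filter step: trust the gyroscope short-term, the accelerometer long-term."""
    gyro_estimate = angle_deg + gyro_dps * dt         # integrate angular rate
    return alpha * gyro_estimate + (1 - alpha) * accel_angle_deg

# Simulated scenario: the hand is held still at 30 degrees. The gyroscope
# shows a small drift (0.5 deg/s of bias), while the accelerometer reads the
# true tilt. Starting from a bad initial estimate of 60 degrees, the filter
# settles near the true angle despite the drift.
angle = 60.0
for _ in range(500):                 # 5 seconds of samples at 100 Hz
    angle = complementary_filter(angle, gyro_dps=0.5,
                                 accel_angle_deg=30.0, dt=0.01)
print(round(angle, 1))               # settles near 30 degrees
```

The same idea, extended to three axes and fused with magnetometer or optical data, underlies the head tracking in headsets and the finger tracking in data gloves alike.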