Most people feel confident that they know a fair amount about their own body, in terms of general health and what they look like from the outside. However, most of us haven’t taken a look inside—literally speaking. Ed Barton and his UK-based startup Curiscope are hoping to change that with a unique blend of virtual reality (VR) and augmented reality (AR). Using an anatomy VR app and the company’s Virtuali-Tee t-shirt, they are letting people see inside their own chest cavities.
Barton explained to Wired: “We use a mix of VR and AR to see inside the anatomy…With positionally tracked AR, you can position VR experiences physically within your environment.”
Barton and Curiscope co-founder Ben Kidd have so far raised almost $1 million in seed funding from LocalGlobe, and they’ve already sold almost 3,000 of the Virtuali-Tees.
High Tech T-Shirt
Barton told Wired that, using positional tracking, “we have a blurring of physical and digital items, and an experience more tightly connected to reality.” He continued, “With the Virtuali-Tee, AR is your interface and VR is used to transport you somewhere else. The technologies should be merging.”
This technology works using a highly stylized QR code printed on the front of the t-shirt. When you scan the code with the corresponding app, you can explore the chest cavity, including the heart and lungs.
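The underlying principle is marker-based AR: the app recognizes the printed pattern, works out where it sits relative to the camera, and anchors the 3-D anatomy to it. As a rough illustration (the numbers, and the simplification to distance-only, are assumptions for the sketch, not Curiscope’s actual pipeline), a marker of known physical size lets the app recover how far away it is from how large it appears on screen:

```python
def estimate_marker_distance(marker_width_m, focal_px, apparent_width_px):
    """Pinhole-camera relation behind marker-based AR: a marker of known
    physical width W, seen by a camera with focal length f (in pixels),
    appears w pixels wide when it is f * W / w meters away. Real AR
    trackers recover a full 6-degree-of-freedom pose, not just distance."""
    return focal_px * marker_width_m / apparent_width_px

# A 0.20 m-wide marker that appears 400 px wide to an 800 px-focal camera:
distance = estimate_marker_distance(0.20, 800.0, 400.0)  # 0.4 m away
```

Once the marker’s pose is known, the app can draw the virtual organs so they stay “pinned” to the shirt as the camera moves.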
This t-shirt, specifically, could be a fantastic tool for the curious. It can be used for educational purposes, allowing anatomy and biology to be a fun experience that students can really wrap their minds around. Even outside of a formal educational setting, this device could allow us to better connect with our own biology. Virtuali-Tee could help people to better understand their own inner workings, and how the things we do every day—from what we eat to how we exercise—might affect our health.
At the Oculus Connect 4 keynote, happening now in San Jose, Facebook CEO and founder Mark Zuckerberg just unveiled a device that could make virtual reality (VR) technologies more accessible. Introducing the Oculus Go: a standalone VR headset that won’t cost you an arm and a leg.
At only $199, Zuckerberg called it “the most accessible VR headset yet.” Compared to the Oculus Rift, which costs about $600, the Oculus Go is undeniably inexpensive. Of course, at that price, it won’t offer you exactly the same VR experience. The Oculus Go will allow you to spin around while in virtual reality but not move around freely.
The Oculus Go is scheduled to ship early next year, but dev kits for VR developers are expected to be available by November — and Facebook is encouraging developers to give it a shot. Hugo Barra, Facebook’s VP of VR, said that it’s “hands-down the easiest way for developers to get involved with VR.” The Oculus Go comes with a “fast-switch” WQHD LCD screen, wide-field lenses, and spatial audio. It is lightweight and “feels incredibly soft to wear,” Barra said.
Further details about the device’s specs are still to come, but one thing’s for sure — the Oculus Go could open up VR technology to a wider market, as well as free users from relying on Samsung phones or other external hardware to power their VR gear.
The International Space Station (ISS) is humanity’s home away from home, continuously occupied by international crews since 2000. For those who want to experience life off-world aboard the ISS, years of astronaut training and the luck of the draw are significant obstacles. However, technology is now making the ISS feel more accessible in the form of a 360-degree video of a spacewalk.
The film is the first 360-degree video shot in outer space ever to be published, allowing the viewer to join Russian cosmonauts on a spacewalk thanks to virtual reality (VR) technology. Released by Russia Today (RT), the video lets viewers see what it’s like to maintain the ISS and release satellites into low-Earth orbit. It provides unique insight into the international science laboratory that NASA and other space agencies are using to learn more about working and living in space — knowledge we can apply to future space-exploration missions.
VR Taking Us Everywhere
RT collaborated with Roscosmos, Russia’s space agency, and spacecraft manufacturer RC Energia to produce the 360-degree video. Just like other 360-degree videos, it is best to watch it using a VR headset; if you don’t have access to one, use a mobile device so you don’t need to click around the screen to see everything.
Palmer Luckey, the co-founder of virtual reality (VR) company Oculus, is exploring the use of nerve-stimulating implants to facilitate more immersive VR experiences.
Speaking on adult VR content at a recent event, Luckey stated, “This is one of the things I’m experimenting with…virtual reality implants that are able to do stimulation into the nervous system to provide a sense of touch and to allow you to move around in virtual reality without actually moving.”
Earlier this year, Luckey alluded to his work in this area on Twitter, venting his frustrations with finding medical professionals willing to implant experimental devices, noting that doing so was “really, really hard.”
Luckey’s implants would completely change how VR systems interact with our bodies, delivering an entirely new level of immersion, but plans to bring brain-computer interfaces (BCIs) to fruition are even more ambitious. Elon Musk’s Neuralink and Bryan Johnson’s Kernel are two high-profile ventures developing brain-computer interfaces that could give humans a kind of superintelligence previously seen only in the realm of science fiction.
The research on these devices is in its infancy, so their full potential is still unknown. However, several immediate applications have already emerged, including the ability to help disabled persons gain greater mobility or augment the strength of laborers, allowing individuals to do more work in less time.
From healthcare to space exploration, implants could radically transform how humans navigate nearly every industry — a better experience in virtual reality is just the tip of the iceberg.
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.
A team of researchers has developed a method of using virtual reality headsets to view 3D models of genetic data. The simulations bring together data from genome sequencing, information about DNA interactions, and microscopy data.
“By combining data on the genome sequence with data on gene interactions we can create a 3-D model that shows where regulatory elements and the genes they control sit relative to each other,” said Prof. Jim Hughes, Associate Professor of Genome Biology at Oxford University, in a press release. “It makes it easier to understand the processes going on within a living cell.”
Each of the roughly 37 trillion cells in an adult human body holds two meters of DNA within its nucleus. We’ve been able to sequence DNA for a long time, but the way that two-meter strand is folded up may directly influence gene expression. If we could visualize the specific arrangement, we might be more effective at finding important insights into human genetic disease, because humans are very good at visual pattern recognition.
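Models of this kind are typically built from chromosome-contact data (such as Hi-C experiments), in which regions of DNA that frequently touch are inferred to sit close together in the nucleus. A common heuristic, sketched below with illustrative parameters rather than the Oxford group’s actual pipeline, converts contact frequency into spatial distance via a power law:

```python
def contact_to_distance(contact_freq, scale=1.0, exponent=1.0 / 3.0):
    """Heuristic used in many chromosome-folding reconstructions: inferred
    3-D distance falls off as a power of contact frequency, so pairs of
    DNA regions that touch often in the data end up close together in the
    model. The scale and exponent here are illustrative assumptions."""
    return scale / (contact_freq ** exponent)

close = contact_to_distance(64.0)  # frequent contacts -> short distance
far = contact_to_distance(1.0)     # rare contacts -> long distance
```

Feeding such inferred distances into a 3-D layout algorithm is what produces the explorable models the researchers view in VR.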
The researchers are currently using this visualization technique to study diabetes, cancer, and multiple sclerosis. The long-term goal is for the project to help with efforts to establish a method of correcting faulty genes and introducing them to the body.
Facebook could be working on a virtual assistant, along the lines of Siri and Alexa, for the Oculus Rift virtual reality headset. Over the weekend, an anonymous Reddit user posted details of the project, having supposedly been contracted to work as a tester.
The leaker claims that the software can be used to issue voice commands relating to the Rift’s capabilities, as well as Facebook activities like replying to messages or confirming attendance at an event. There’s also support for common virtual assistant functionality, like checking weather reports and sports results.
However, this information comes from an anonymous source on Reddit, so it should be taken with a grain of salt — especially since it references games that have not received official Rift support, like Alien: Isolation. The user account, the original post, and a gallery of screenshots submitted as proof have all since been deleted.
We now have more proof that yesterday’s science fiction is today’s science fact.
In the world of Star Trek, the “holodeck” is a space capable of simulating any virtual world. It serves as a staging environment for various virtual reality simulations — the Danger Room where the X-Men train is very similar. According to University of Freiburg biology professor Andrew Straw, such an environment could be particularly useful in helping us understand how the brain works.
“Until now, we have envied an invention from the world of science fiction: a holodeck like they have in Star Trek,” Straw said in a press release. “Something like the holodeck from Star Trek would enable key experiments in which we could artificially decouple an animal’s movement from its perception.”
Researchers needn’t envy the USS Enterprise any longer as Straw and his international team of researchers have developed a working model of this holodeck — but it won’t be used by space explorers or even people. The team built their flexible system to be suitable for mice, fish, and flies, the three animals most commonly used in behavior research and neurobiology.
“We created an immersive, 3-D virtual reality in which the animals could move freely,” Straw explained, “because we wanted our visual scenery to tie in naturally with the animal’s own action-perception cycle.” Through their research, which has been published in Nature Methods, the team has been able to study how the brain’s spatial recognition works.
A Different Kind of Interaction
Typically, animals and humans form a mental map of the world around them, using every available sensation to make it as accurate as possible. To understand how the brain processes these different bits of information, the researchers had to separate the processes involved in movement from those involved in sensation.
Their holodeck allowed them to do this.
Using several high-speed cameras, the team tracked and recorded the precise 3-D position of the animals as they navigated virtual environments featuring everything from pillars and plants to a swarm of video game space invaders. The idea was to test and even control how the animals would interact with the virtual space.
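Multi-camera tracking of this kind rests on triangulation: the same animal appears at slightly different positions in each camera’s image, and that offset encodes its 3-D position. A minimal two-camera (rectified stereo) sketch, with illustrative numbers rather than the team’s actual rig parameters:

```python
def triangulate(x_left_px, x_right_px, y_px, focal_px, baseline_m):
    """Rectified-stereo triangulation: two cameras a known baseline apart
    see the same point at horizontal image positions x_left and x_right.
    The difference between them (the disparity) gives depth, Z = f * B / d,
    and depth plus image position gives the full 3-D location."""
    disparity = x_left_px - x_right_px
    z = focal_px * baseline_m / disparity
    x = z * x_left_px / focal_px
    y = z * y_px / focal_px
    return (x, y, z)

# A fly seen at 100 px (left camera) and 60 px (right camera), cameras
# 0.10 m apart with an 800 px focal length:
point = triangulate(100.0, 60.0, 80.0, 800.0, 0.10)  # about (0.25, 0.2, 2.0) m
```

With more than two cameras, the same geometry is solved as a least-squares problem, which is how such rigs keep a fast-moving fly in track.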
They looked at the flight direction of flies, noted whether mice had a fear of virtual heights, and examined how fish would move between two different virtual worlds. In one simulation, they even tested how a fish would swim in a swarm of space invaders. The computer-generated and controlled swarm was programmed to interact with the fish as one of their own, which in turn affected how the fish behaved.
By creating this interactive virtual space, the team has found a way to directly manipulate interactions between multiple individuals, solving a fundamental problem in collective behavior research, and they’ve only just begun to tap into the potential of their small-scale holodeck inspired by sci-fi.
As technologies like artificial intelligence (AI), augmented and virtual reality (AR/VR), big data, 5G, and the internet of things (IoT) advance over the next generation, they will reinforce and spur one another. One plausible scenario is a physical world so enhanced by personalized, AI-curated digital content (experienced with what we today call augmented reality) that the very notion of reality is called into question.
Immersion can change how we interact with content in fundamental ways. For example, a fully immersive AR environment of the future, achieved with a wide-field-of-view headset and full of live content integrated with the built environment, would be intended by design to create in the user an illusion that everything being sensed was “real.” The evolution toward this kind of environment raises a host of ethical questions, specifically with attention to the AI that would underlie such an intelligent and compelling illusion.
When watching a movie, the viewer is physically separated from the illusion. The screen is framed, explicitly distinct from the viewer. The frame is a part of traditional art forms; from the book to the painting to the skyscraper, each is explicitly separated from the audience. It is bounded and physically defined.
But with digital eyewear, things change. Digital eyewear moves the distance of digital mediation from the screen (approximately 20 feet) to the human face, which is at zero distance, and almost eliminates the frame. It starts raising inevitable questions about what constitutes “reality” when much of one’s sensory input is superimposed on the physical world by AI. At that stage of the technology’s evolution, one could still simply opt out by removing the eyewear. Although almost indistinguishable from the physical world, that near-future world would still be clinging precariously to the human face.
The next step would be moving the source of the digital illusion into the human body – a distance of less than zero – through contact lenses, implants, and ultimately direct communication. At that point, the frame is long gone. The digital source commandeers the senses, and it becomes very hard to argue that the digital content isn’t as “real” as a building on the corner – which, frankly, could be an illusion itself in such an environment. Enthusiasts will probably argue that our perception is already an electrochemical illusion, and implants merely enhance our natural selves. In any case, opting out would become impractical at best. This is the stage of the technology that will raise practical questions we have never had to address before.
What Happens to Individual Choice?
At that point, what is real? How much agency are we humans deprived of when we are making decisions based on AI-generated content and guidance that may or may not be working at cross-purposes to our needs? How would we even know? In the longer term, what happens to our desire to control our own lives when we get better outcomes by letting those decisions be made by AI? What if societal behavior became deliberately manipulated for the greater good, as interpreted by one entity? If efficiency and order were to supersede all other criteria as ideal social values, how could an AI-driven AR capability be dissuaded from manipulating individual behavior to those ends? What happens to individual choice? Is a person capable of being good without the option to be bad?
Perhaps the discussion surrounding the next generation of AI-informed AR could consider the possibility that the ethical questions change as the source of digital content gets closer to the human body and ultimately becomes a part of it. It’s not simply a matter of higher-fidelity visuals. First, the frame disappears, which raises new questions of illusion and identity. Then, the content seems to come from within the body, which diminishes the possibility of opting out and raises further questions about agency and free will.
This combination of next-generation technologies might well find its ultimate expression after we have collectively engaged questions of philosophy and brought them right into the worlds of software development and corporate strategy.
Movies, advertising, and broadcasting have always been influential, but they never blurred content and self the way the next generation of technology likely will. Having these conversations about ethics and thinking through the implications of new technologies early in their development (i.e., right now) could help guide this remarkable convergence in a way that benefits humanity by modeling a world that reflects our best impulses.
Part of the appeal of virtual reality (VR) is the ability to control the digital world using only your hands and simple movements. Startup company Neurable, in collaboration with the Madrid-based company Estudiofuture, is eliminating controllers and hand movements altogether with their first game: Awakening, which aims to show what it’s like to have telekinetic abilities.
Neurable Vice President Michael Thompson announced the game last week ahead of its appearance at the computer graphics conference SIGGRAPH. The game, set to be released in VR arcades in 2018, has a story similar to that of the Netflix series Stranger Things: “You are a child held prisoner in a government science laboratory. You discover that experiments have endowed you with telekinetic powers. You must use those powers to escape your cell, defeat the robotic prison guards, and free yourself from the lab.”
Speaking with IEEE Spectrum, Neurable CEO Ramses Alcaide explained that his company’s headset strap, attached to a modified HTC Vive headset, uses several electrodes positioned in specific areas that detect brain signals known as “event-related potentials.” These small electric changes in the brain are tied to movements, sensory experiences, or thoughts as they happen.
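Event-related potentials are tiny voltage deflections buried in much larger background EEG noise, and the classic way to recover them is to average many time-locked trials: random noise cancels while the stimulus-locked signal survives. A toy sketch of that idea (the signal shape, amplitudes, and noise level are made up for illustration, not Neurable’s actual processing):

```python
import math
import random

def simulate_trial(rng, n_samples=100, erp_peak=60):
    """One simulated EEG epoch: Gaussian background noise plus a small
    positive deflection (the 'event-related potential') centered on
    sample index erp_peak."""
    return [
        rng.gauss(0.0, 5.0)
        + 2.0 * math.exp(-((t - erp_peak) ** 2) / (2.0 * 8.0 ** 2))
        for t in range(n_samples)
    ]

def average_epochs(trials):
    """Averaging time-locked epochs shrinks random noise like 1/sqrt(n)
    while leaving the stimulus-locked ERP intact."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

rng = random.Random(42)
erp = average_epochs([simulate_trial(rng) for _ in range(1000)])
peak = max(range(len(erp)), key=lambda i: erp[i])  # lands near sample 60
```

A real-time system like Neurable’s cannot average a thousand trials per command, which is why decoding single-trial ERPs reliably is the hard part.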
More Than Just Gaming
Though Neurable’s technology might be exciting for gamers, such brain-computer interfaces (BCIs) are being researched for much more widespread applications: from neuroscience research to mind-controlled web development, to brainwave-based marketing and tracking brain activity the way many track their steps. The technology is also being developed to help those with locked-in syndrome — unable to move or talk — communicate with the outside world.
Some researchers have expressed skepticism that this technology can ever be commercially viable; Jack Gallant, head of UC Berkeley’s Neuroscience Lab, told the Guardian it was “conceptually trivial but just about impossible to do” due to the difficulty of decoding brain signals through the thick human skull. But Alcaide seems to think the ease with which people have used Awakening bodes well for the tech’s future.
“A lot of people come in highly skeptical, because BCI has been a disappointment so many times before,” Alcaide told IEEE. “But as soon as they grab an object, there’s a smile that comes over their faces. You can see the satisfaction that it really works.”
Advancements in deep learning, virtual reality (VR), and artificial intelligence (AI) may signal an end to issues ingrained in the practice of clinical psychology — such as subjectivity and the difficulty of conducting large-scale studies — perhaps leading us into a new era of diagnosing and treating mental disorders.
This new branch of study is known as computational psychiatry. It operates on the tenet that researchers can better understand and treat mental illnesses using the aforementioned technologies. Applications vary, but some researchers in the field apply mathematical theories of cognition to data mined from long-standing observations to effectively diagnose and predict cognition, while others use virtual experiments to enable the pure study of human behavior.
Sarah Fineberg of Yale University in New Haven recently published a study that used computational psychiatry to explore borderline personality disorder (BPD), a condition that the National Institute of Mental Health (NIMH) reports includes symptoms such as “ongoing instability in moods, behaviors, self-image, and functioning,” as well as “impulsive actions and unstable relationships.”
For her study, Fineberg observed the responses of people with BPD to events in virtual environments. She used a game called Cyberball, in which avatars pass a ball to one another, with the patient controlling one avatar. Though patients believe the remaining avatars are controlled by other people, their actions are actually determined by the computer.
The game allowed Fineberg to monitor the patients’ emotional responses to the frequency with which they were passed the ball. She found that BPD sufferers experienced greater feelings of rejection than non-sufferers when they did not receive the ball, and they also experienced more negative feelings than non-sufferers even when they received the ball more often than the other avatars.
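What makes Cyberball useful experimentally is that social inclusion becomes a single controllable parameter: the probability that any given throw goes to the participant. A minimal sketch of the paradigm (the probabilities and throw counts are illustrative, not the study’s actual protocol):

```python
import random

def run_cyberball(rng, n_throws=120, p_to_participant=1.0 / 3.0):
    """Minimal sketch of the Cyberball paradigm: computer-controlled
    avatars toss a ball, and the experimenter fixes the probability that
    any throw goes to the participant. Inclusion (a fair share of throws)
    vs. exclusion (rarely receiving the ball) can thus be manipulated
    precisely while still feeling like a live social game."""
    return sum(1 for _ in range(n_throws) if rng.random() < p_to_participant)

rng = random.Random(0)
included = run_cyberball(rng)                          # fair condition
excluded = run_cyberball(rng, p_to_participant=0.05)   # exclusion condition
```

Because the “other players” are deterministic scripts, every participant can face exactly the same social situation, something impossible with real confederates.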
Not only can computational psychiatry be used to study the emotions of BPD patients, it can also help researchers understand their language use, which some have posited differs from that of non-sufferers but which previously generated too much data to analyze. “We and others have identified language features that mark psychological states and traits,” Fineberg told MIT Technology Review. “Computational models based on word-use patterns can predict which writers have psychosis or will progress to psychosis.”
An App for Depression
The two strands of computational psychiatry explored by Fineberg — using virtual environments as clinical spaces and using AI to find patterns in large swathes of data — are being used by other researchers to study other disorders.
The use of AI to diagnose disorders and recommend treatments has gained traction in the world of apps, which are acting as “virtual psychotherapists” to treat a variety of mental disorders.
A prime example is Woebot, a chatbot that uses cognitive behavioral therapy principles to help combat depression. The results from a small test of the app were promising, with the majority of users reporting a significant reduction in depression symptoms. Alison Darcy, a lecturer at Stanford who pioneered the app, told Business Insider, “The data blew us away. We were like, this is it.”
The app does have the potential to help people, but there are also some inherent problems with it. Due to the novelty of such systems, no one has yet studied whether or not psychiatric interactions with a computer over an extended period of time are beneficial for patients. Darcy’s study only had 70 total participants and lasted just two weeks, which is likely too short a time period to produce any certainty about the app’s impact.
Virtual environments seem to have fewer pitfalls when used for psychiatric studies. Psychology is largely the study of how a person’s perception shapes their experience of the world, so if the senses are sufficiently fooled into believing a virtual scenario is “real,” the results of a VR-supported study are just as valid as those of one conducted in the real world.
In fact, these environments give researchers the ability to learn more than they could from a traditional setting, as the VR world can be modified in virtually limitless ways. This enables the study of events that may not be possible in the real world, giving researchers a more robust data pool and potentially more clarity on their patients’ cognition.
Indeed, virtual realities and digital environments have already demonstrated their ability to help researchers study and even treat mental disorders. VR can be used to help ex-soldiers overcome symptoms of PTSD, and it has also been shown to help people overcome depression by increasing their self-compassion.
Computational psychiatry could potentially help millions of people, but as with anything that involves the brain, we must be careful in how we apply this technology. Without knowing precisely what effect these treatments have on the mind, we risk doing more harm than good.
With the release of the Oculus Rift in March 2016, the age of virtual reality (VR) truly began. VR tech had been generating buzz since the 1990s, but the Rift was the first high-end VR system to reach the consumer market, and early reviews confirmed that it delivered the kind of experience users had been hoping for.
Virtual reality was finally real.
Research into VR exploded in this new era, and experts soon started to find innovative ways to make virtual experiences more immersive…more real. To date, VR technologies have moved beyond just sight and sound. We’ve developed technologies that let users touch virtual objects, feel changes in wind and temperature, and even taste food in VR.
However, despite all this progress, no one would mistake a virtual environment for the real world. The technology simply isn’t advanced enough, and as long as we rely solely on traditional headsets and other wearables, it never will be.
Before we can create a world that is truly indistinguishable from the real one, we will need to leave the age of virtual reality behind and enter a new era — the era of neuroreality.
Neuroreality refers to a reality that is driven by technologies that interface directly with the human brain. While traditional VR depends on a user physically reacting to external stimuli (for example, swinging a controller to wield a virtual sword on a screen) a neuroreality system interfaces directly with the user’s biology through a brain-computer interface (BCI).
Notably, this technology isn’t some far-flung sci-fi vision. It’s very real.
To rehash the basics: BCIs are a means of connecting our brains to machines, and they can be either invasive (requiring an implant of some sort) or non-invasive (relying on electrodes or other external tech to detect and direct brain signals). Experts have predicted that advances in BCIs will lead to a new era in human evolution, as these devices have the potential to revolutionize how we treat diseases, learn, communicate…in short, they are set to utterly transform how we see and interact with the world around us.
In fact, some companies are already innovating in the newly emerging field of neuroreality.
Founded by physicist Dan Cook in 2013, EyeMynd aims to create a VR system that allows the user to navigate a virtual world simply by thought—no immersion-breaking controller required.
“When you’re in the virtual world—whether you’re playing a game or something else—you don’t want to have to keep thinking about what you’re doing with your hands,” Cook told Digital Trends in November. “It’s much better to have pure brainwave control. It will be a much more satisfying experience and will allow for a much greater level of immersion. You can forget about your live human body, and just focus on what’s going on in front of you.”
Cook likens the experience to dreaming. “In a dream, you can run around without moving your physical legs. That dreaming and imagining creates brain signals that we can read,” he told The Guardian. “With what we want to do, you won’t need eyeballs to see, or ears to hear, or hands and feet. We can bypass all of that.”
EyeMynd’s system is non-invasive, meaning it wouldn’t require the user to undergo any sort of device implantation. Instead, they would wear a headset that includes EEG sensors to track their brainwaves.
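A common way such headsets turn raw brainwaves into control signals is by measuring power in standard EEG frequency bands (for instance, alpha, around 8–12 Hz, which strengthens in relaxed states). The sketch below runs a naive discrete Fourier transform on synthetic signals; real systems use FFTs and far more robust processing, and none of this is EyeMynd’s actual algorithm:

```python
import math

def band_power(signal, fs, lo_hz, hi_hz):
    """Power in a frequency band, computed with a naive DFT — the kind of
    feature a non-invasive EEG system might feed to a classifier."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128                             # samples per second
t = [i / fs for i in range(2 * fs)]  # two seconds of synthetic "EEG"
relaxed = [math.sin(2 * math.pi * 10 * x) for x in t]      # strong 10 Hz alpha
focused = [0.2 * math.sin(2 * math.pi * 10 * x)            # suppressed alpha
           + math.sin(2 * math.pi * 20 * x) for x in t]    # plus 20 Hz beta

alpha_relaxed = band_power(relaxed, fs, 8, 12)
alpha_focused = band_power(focused, fs, 8, 12)             # much smaller
```

Mapping shifts in these band powers to in-game actions is one plausible route to the “pure brainwave control” Cook describes.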
EyeMynd isn’t the only company exploring the use of brainwave-detecting external tech to make the VR experience feel more seamless. Boston-based startup Neurable, bioinformatics company EMOTIV, and social networking giant Facebook are all working on non-invasive devices that would allow users to navigate the virtual world through thought alone.
However, as Joy Lyons, chief technology officer of audio tech startup OSSIC, told Vice at the 2016 VRLA Summer Expo, the ideal hardware for creating a new reality isn’t an external headset, no matter how advanced. It’s “a chip in the brain.”
A World in Your Mind
Earlier this year, serial entrepreneur Elon Musk founded Neuralink, a company with the goal of developing cutting-edge technology that connects a person’s brain to the digital world through an array of implanted electrodes. Shortly before Musk’s announcement, Braintree founder Bryan Johnson unveiled a similar venture, committing $100 million to unlock the power of the human brain and make our neural code programmable. Johnson’s company, Kernel, is working to create the world’s first neuroprosthesis.
Musk himself has predicted that we’ll eventually be able to create computer simulations that are indistinguishable from reality, and if these brain interfaces come to fruition, they could act as the platform through which we experience those simulations, allowing us to not only see a realistic world but touch it and truly feel it.
In a detailed report announcing the launch of Neuralink, Tim Urban described the potential impact of this proposed tech on our understanding of reality. Instead of relying on external hardware like goggles, gloves, and headphones to trick our senses into believing that what we encounter in the virtual world is real, we could program realities that trigger the same parts of our brains that would be engaged if the experiences actually were real.
“There would be no more need for screens of course — because you could just make a virtual screen appear in your visual cortex. Or jump into a VR movie with all your senses,” asserted Urban. “You’ll be able to actually experience almost anything for free.”
The same part of your brain that is stimulated when you taste pizza could be triggered when you bite into a slice in this new reality, and the same part that lets you smell the ocean air could be stimulated as you stand on the shore of a virtual Atlantic Ocean.
The difference between the real world and the virtual one would be undetectable. For all intents and purposes, a difference would not exist.
Figuring out the tech to actually make this happen won’t be easy (it would require, among other things, a comprehensive map of the human brain and its neurons), and overcoming the non-tech obstacles will present an additional challenge. Elective brain surgery is an extremely controversial subject, and past experiments haven’t yielded promising results. Neuralink and like-minded companies will need to engage in years of research before their devices are ready for human implantation, and even then, they’ll have regulatory hurdles to overcome.
Still, BCI research is progressing rapidly, so while a system of electrodes that can effectively project an entirely new world directly into our brains might seem like a sci-fi pipe dream, it really shouldn’t. After all, just two decades ago, the virtual reality experience delivered today by the Rift felt woefully out of reach, and now, anyone with $600 can bring it home with them (and the price is dropping at a remarkable rate).
As Cook told The Guardian, we aren’t as far as we may think from the day when navigating virtual worlds using just our thoughts is the norm: “Ten years from now, this will seem obvious.”
Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.
If you’ve been in the market for a more affordable alternative to Facebook’s Oculus Rift virtual reality headset, you might be in luck: a report from Bloomberg on Thursday claims that the social media giant is planning to unveil a cheaper, standalone version of the tech by next year.
The current market for virtual reality tech runs the gamut from thousand-dollar setups to lower-cost versions that require tethering to a smartphone or computer. Facebook’s concept for its new VR product will be something in between — both in functionality and cost. According to Bloomberg’s report, the model will retail for $200. A representative from Facebook told Variety that, although they don’t have a product to share at the moment, the company “can confirm that we’re making several significant technology investments in the standalone VR category.”
The new model, which Facebook hopes to have on the market by 2018, will likely be similar in functionality to existing VR headsets: ideal for immersive video game experiences and integration with social networking. The specs haven’t been finalized, but as a more affordable option, the headset should put the technology within reach of many more people. Facebook is reportedly planning to reach out to app and game designers this fall ahead of the 2018 product launch.
Outside of Facebook’s $2 billion acquisition of Oculus back in 2014, companies like Apple, Sony, and Google have all thrown their hats into the VR ring. Back in May, Google partnered with HTC and Lenovo to create the new Vive standalone virtual reality headset, and in January, Lenovo announced their newest VR headset, which was priced between $300 and $400.
For the study, 105 participants were put in a virtual reality (VR) scenario during which they drove around suburbia on a foggy day. They then encountered unavoidable dilemmas that forced them to choose between hitting people, animals, and inanimate objects with their virtual car.
The previous assumption was that these types of moral decisions were highly contextual and therefore beyond computational modeling. “But we found quite the opposite,” Leon Sütfeld, first author of the study, told Science Daily. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
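The “rather simple value-of-life-based model” Sütfeld describes can be illustrated with a minimal sketch. Note that the object categories and numeric values below are hypothetical stand-ins for illustration, not the study’s fitted parameters:

```python
# Hypothetical value-of-life model: each potential collision target carries a
# scalar value, and the car steers toward whichever option destroys less value.
LIFE_VALUES = {
    "adult": 10.0,
    "child": 12.0,
    "dog": 4.0,
    "deer": 3.0,
    "trash_can": 0.1,
}

def choose_target(option_a, option_b):
    """Return the collision option with the lower total value-of-life cost."""
    cost_a = sum(LIFE_VALUES[obj] for obj in option_a)
    cost_b = sum(LIFE_VALUES[obj] for obj in option_b)
    return option_a if cost_a <= cost_b else option_b

# Unavoidable dilemma: hit a dog, or hit a trash can?
print(choose_target(["dog"], ["trash_can"]))  # prints ['trash_can']
```

The study’s point is that a single value per entity, summed over each outcome, predicted participants’ choices surprisingly well, despite the assumption that such decisions were too contextual to model.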
Better Than Human
A lot of virtual ink has been spilt online concerning the benefits of driverless cars. Elon Musk is in the vanguard, stating emphatically that those who do not support the technology are “killing people.” His view is that the technology can be smarter, more impartial, and better at driving than humans, and thus able to save lives.
Currently, however, the cars are large pieces of hardware supported by rudimentary driverless technology. The question of how many lives they could save is contingent upon how we choose to program them, and that’s where the results of this study come into play. If we expect driverless cars to be better than humans, why would we program them like human drivers?
As Professor Gordon Pipa, a senior author on the study, explained, “We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?”
Just how safe driverless vehicles will be in the future is dependent on how we choose to program them, and while that task won’t be easy, knowing how we would react in various situations should help us along the way.
Computing giant Intel has announced a multi-year deal with the International Olympic Committee (IOC), stretching through the 2024 Olympic Games, to enhance upcoming events with the company’s new technologies. According to Advertising Age, Intel hopes to bring “virtual reality, 360-degree video, artificial intelligence, and drones, to enhance the Olympic Games.”
The partnership aims to attract the interest of a younger demographic to help stem the loss of overall viewership. The president of the IOC, Thomas Bach, said, “There are many young people that are living a digital life. So we have to go where they are in the digital world, in their virtual reality.”
At eMerge 2017, Magic Leap founder Rony Abovitz gave a number of updates on his company’s first product, which is currently in production. He revealed in his speech that the technology is “up and running and live” — it is hands-free, does not require looking through a video display, and introduces an entirely new class of technology, which he termed “spatial computing.”
Another exciting piece of news is that it is being priced for “affordability” — Abovitz stated “if you’re willing to pay for a premium mass consumer device, you’ll be happy with us.” He also said the “launch is not that far away,” and will focus on the “U.S. first, but definitely not U.S. only.”
Abovitz also said that potential consumers are not the only group enthusiastic about Magic Leap. He has seen an outpouring of people who want to become developers, and he stressed that he has an extremely loose definition of the word, one that extends from artists to filmmakers to programmers to “kids in garages.” To foster this developer community when the release comes, Abovitz said that he and his team “want to make sure we’re learning to serve developers and creators properly first.”
The Potential of Magic Leap
Magic Leap is neither augmented reality nor virtual reality but, as Abovitz explained at eMerge, a “spatial ambiance, using digital light fields to create a personal computer that is ambient, always around you […] and is always contextually aware.”
Andre Iguodala gave some vague information about his demo experience, including that the technology is controlled by eye movements and modulates lights in a user’s environment, that it has a voice assistant like Siri, and that it takes the form of a belt pack with connected glasses. The company has neither confirmed nor denied his claims, nor provided much more information.
What we do know, though, is that it has the potential to change almost any industry in the world. David Ewalt of Forbes got a rare interview with the founder and concluded that:
This technology could affect every business that uses screens or computers and many that don’t. It could kill the $120 billion market for flat-panel displays and shake the $1 trillion global consumer-electronics business to its core.
While the eMerge announcement gives us just a taste of the technology to come, we hope all of our questions will be answered very soon when we get to try the product for ourselves.
The technology for virtual reality (VR) has been growing in leaps and bounds over the past few years. From teaching us chemistry to helping us design cars, the virtual world is becoming more and more dominant in our everyday lives.
However, VR is still hampered by the necessity for users to wear clunky headsets and possibly other gear. While these are becoming more user friendly, we wanted to know when we will be able to bypass all that equipment to simply connect VR to our minds directly. We asked Futurism readers what they thought and got a range of predictions.
The decade with the most votes was the 2030s, taking 36 percent of readers’ votes. One such vote came from Kevin Kealey, who noted our progress in mapping the human brain and predicted this knowledge would soon allow VR developers to place electrode inputs and outputs in the right places.
“[T]he whole brain will be mapped and fully understood very soon,” Kealey commented. “We know where emotions come from. We know how to control them. We know most invasive techniques to place things within the body.” While we still have a long way to go before we truly comprehend the complexity of the human brain, we are making progress in mapping it — even using VR to help get the job done.
What The Experts Have to Say
These guesses aren’t too different from some that are coming from experts in the field. For example, Dan Cook, founder of EyeMynd BrainwaveVR, has been working on tech that will allow users to interact in VR using their brainwaves — no headset or controller needed.
“Ten years from now, this will seem obvious,” Cook said in an interview with The Guardian. “Computers are becoming fast enough that we can detect and interpret all the signals of the brain in real time.” Cook bases his technology on the principles we observe when people dream. The mind can “see” and “hear” without using eyes or ears, and we should be able to harness that neurological ability, Cook argues.
Others in the field are more skeptical about the state of brain-computer interface (BCI) technology. An international group of researchers determined that methods for interacting with virtual environments through our thoughts “remain in their infancy,” as they wrote in a study published in Computer. “Major research challenges must be tackled for BCIs to mature into an established means of communication for VR applications,” the researchers concluded in the paper.
While we may have to wait a number of years before we can enjoy a virtual world sans headset, companies are continuing to invest in BCI and VR technology. Who knows where the science will be by the time they finally release Magic Leap.
See all of the Futurism predictions and make your own predictions here.
Virtual reality games often try to test how you’d fare in a job. Flairtender is a new game for the HTC Vive and Oculus Rift that puts you in the shoes of a bartender. You take orders, mix drinks, and earn cash to improve your mixology.
Virtual reality software is changing the design industry for the better. Seymourpowell has developed a VR app that offers a room-scale virtual car design studio that multiple remote collaborators can use simultaneously.
Nothing can snap you out of an immersive virtual reality (VR) experience faster than tripping over a cord, which is why VR companies have been scrambling to develop wireless systems. Google is taking that freedom a step further, working to develop a headset that’s not only wireless, but also works anywhere you want to go.
The company has reportedly been working on the stand-alone VR system for more than a year now. The goal has been to create a headset that could handle everything needed to deliver a stunning virtual experience sans PC, phone, or gaming console — not easy when you consider the whole thing needs to also be comfortable to wear.
This week, Variety reported that the world could potentially finally get a glimpse of this standalone headset in action at Google’s I/O developer conference. Sure enough, yesterday afternoon a standalone VR headset was announced at the event, but Google’s name wasn’t the only one attached to it.
According to Backchannel, a “reference model” of the device has already been created with the help of Qualcomm, and the final commercial version should be ready in the next few months. It’s expected to cost in the mid-hundreds range, making it financially comparable to the already released Oculus Rift and HTC Vive.
Google hasn’t been the only company to pursue powerful standalone VR systems — Facebook is working on its own, as is Samsung. While a release date for Google’s system is still forthcoming, clearly we won’t have to wait much longer for the virtual world to reach that next level of freedom.
Mars 2117 is a virtual reality experience revealed by Terminal Future Media at the 2017 World Government Summit. The experience draws on plans from the scientific community for how an advanced human colony on Mars would look and operate. You may not be able to go to Mars in real life, but an immersive virtual world like this may be the next best thing.
Immersive experiences in virtual reality can better prepare students for scenarios in real life. SuperChem VR is one example of an app that preps students to work in an actual chemistry lab, testing their ability to measure, pour, and handle lab equipment.
In an interview with Vox, Austin Stewart, Second Livestock’s founder and an associate professor at Iowa State University’s College of Design, explained that this virtual free range is really a social experiment centered on chickens living in a VR world. “I feel like this is more of a design project, to get people to have a conversation about animal husbandry,” he said. “We live in boxes, just like the chickens we eat.”
In reality, the chickens in Second Livestock live close to each other. In their virtual world, however, they live life in the open. The world features real-life scenarios, too, complete with its own creepy crawlers and water sources for the chickens to feed on and drink from. “[A]nd there are no predators, obviously,” he explained in the interview. “So they’re free to just worry about chicken business.”
Virtually More Humane?
And worry about chicken business they do, except these chickens do it in style while wearing their VR headsets. Of course, this is still all an idea, and actually strapping VR headgear to chickens would require approval from various committees at Stewart’s university. But it is an idea worth exploring, as it could benefit farmers and chickens alike, making free-range farming cheaper and potentially eliminating the losses it typically entails.
“There’s research suggesting that free-range chickens show all the signs of having a stressful life,” Stewart said in the interview. “They have more broken bones, they get broken legs, etc., whereas birds raised in little boxes don’t have those indicators of stress. And who’s to say which is better?”
Second Livestock is another viable application of VR outside of gaming. Certainly, it may seem unusual, but perhaps no more so than testing electric shocks via VR or partying with friends in a virtual world. At the very least, it could help demonstrate whether animals experience VR the same way human beings do.
“Will animals actually accept a virtual world as readily as we do, or is there some level of intelligence or imagination that needs to happen where we can suspend disbelief more readily than a chicken can?” Stewart asked in the Vox interview. “I don’t know.”
Headsets and motion controllers offer a pretty immersive experience in virtual reality already. But motion simulators can provide an even more realistic experience that tests your piloting skills. One YouTuber has built a $25,000 VR racing rig for his home.
“Come and bathe in high-tech sound, feast on unearthly visuals, and let the experience take you on the ultimate trip,” Murdoc Niccals of Gorillaz said, preparing me for my visit. Walking through the front doors I could already feel the bass bumping. Not knowing what to expect, I quickly realized that sensory overload was imminent. I had arrived at the Gorillaz Spirit House, an installation in Brooklyn that brought music to life using Sonos technology, specifically their new PLAYBASE speaker.
What sounds like just a neat listening party for Gorillaz fans like myself soon became a demonstration of the power of technology in bringing art to life.
Walking into the first room of the Spirit House, a multimedia walkthrough experience of sorts, I felt as if I were stepping into the Gorillaz living room or into one of the band’s music videos. A TV masqueraded as a fish tank in front of the couch, the fridge opened up to reveal a cake that you’d swear you could smell, and one huge slice of pizza stuck to the wall, threatening to slide down. The atmosphere was, at the same time, hyper-realistic and cartoonish. And, perhaps it was how overwhelming the experience was, but I couldn’t place where the bass-heavy tunes that filled the room to the brim were coming from. The PLAYBASE, a plain yet futuristic box that anchors the 5.1 speaker system, sat just beneath the “fish tank,” but seemed to be playing from right behind me.
Walking through a curtain-lined corridor, I found myself in the next room, the centerpiece of the experience. It was, quite simply, a living room. Since, as it was explained to me, Sonos products like the PLAYBASE are designed for at-home listening, it made a lot of sense to put the “audience,” so to speak, in a living room. I sat on a couch, and in front of me I saw only the simple PLAYBASE sitting up against a plain white wall. But, as I slowly noticed the projectors overhead, I knew that this would be more than just audio. Using projection-mapping technology alongside the speaker, the entire room quickly became engulfed in gorgeous visuals.
According to Jed Lewis, the Senior Director of Global Brand Activations at Sonos, “As you progress into that projection-mapping room, you start to see where the sound and visuals start to come together.” The experience felt both like watching a music video and taking a trip to the astral plane — it was awesome.
Whether it was the ability of the PLAYBASE to really wrap a listener in sound or the stunning visuals, one thing was crystal clear by the end of the day: technological progress will allow us to explore creatively in new and inventive ways. Jed Lewis said of Sonos and this partnership, “So much science is behind just letting the art be the art.”
The blending of art and science is nothing new, not by a long shot. However, as technologies like AI (artificial intelligence), AR (augmented reality), VR (virtual reality), projection mapping, and advanced audio tech continue to advance, the ways in which we can express ourselves artistically will also continue to expand and develop.
The Microsoft HoloLens has tremendous potential in education, providing 3D models that can better explain complex subjects like physics and anatomy. Lifeliqe is one company developing a HoloLens app that offers a range of lesson plans and models. They’re currently testing the app in middle school classrooms.
Each year, innovators from across the globe trade in their lab coats and laptops for ties and gowns to honor the nominees at the Edison Awards ceremony in New York City. Over the past three decades, the awards have highlighted the most innovative products and people in science. Last year’s honorees featured Alan Stern, Principal Investigator of NASA’s New Horizons mission to Pluto.
This year’s ceremony was no different – it focused on cutting-edge developments in medicine, energy, entertainment, and a slew of other categories. While not everyone who attended brought home an award, all of the nominees showcased the amazing power that science and technology have to transform our world.
Without their dedication to pushing boundaries and finding new solutions to old and emerging problems, our world would stagnate. Below are just four of this year’s attendees, all of whom are following in the footsteps of the Edison Awards’ namesake by making the world a brighter place.
The Tractor of the Future
With automation and robotics poised to completely transform the modern workplace, experts have predicted that the first jobs to face disruption will fall under the three D’s: dull, dirty, and dangerous. According to Tazio Grivetti, Innovation Viability Manager at Caterpillar (CAT), a 2017 Edison Awards sponsor, the company’s tractor system is a perfect example of that type of tech.
Guided by a mantra of “We Make Progress Possible,” the company developed a tractor that can be operated remotely. “It’s not a simulator. It’s not a video game. It’s the real thing,” says Grivetti. Each vehicle needs an operator, but instead of working from a job site, they can work remotely. “When you get off [work], you can enjoy the city as opposed to going back to your trailer, waiting until your next shift because you’re 800 miles from civilization,” he explains.
In addition to saving operators from the hassle of working on-site, Grivetti asserts that the company’s tractor would prevent them from having to deal with the noise, discomfort, and danger inherent in what is otherwise a highly desirable, good-paying job. “It’s a safety thing. It’s a convenience thing. A skilled operator can move an amazing amount of material, and you want to keep those skilled operators working for you and being productive,” says Grivetti.
CAT isn’t content to simply transform how we build here on Earth, either — they’re also poised to use their tractors to transform other worlds, thanks to a partnership with NASA that will have them digging on Mars in the future.
Most of today’s batteries utilize lithium, but that material is in limited supply, unstable, and expensive. By finding a way to create batteries out of aluminum, which comprises 8.2 percent of the Earth’s crust, the team at ITRI has opened up the possibility of much larger, higher capacity batteries, such as those needed for vehicles and industrial operations.
The team plans to incorporate other materials into their batteries to improve the tech even further. “Graphite can make an aluminum battery even longer lasting, and also it can charge very quickly,” claims Chia-chun. “It can charge within one minute.”
Better batteries are one of the greatest hurdles remaining in our quest to end fossil fuel dependence, so the work done by Chia-chun and his team will not only benefit our tech, it’ll also benefit our planet.
Winning bronze in the Media, Visual Communications, and Entertainment subcategory of Cameras & Virtual Reality was Lucid VR for its appropriately named LucidCam.
The pocket-sized device uses two wide-angle lenses to cover a 180-degree field of vision, making it the “first and only 3D camera with peripheral vision for virtual reality,” according to the Edison Awards site.
“Our technology can actually reproduce what both eyes can see,” explains Han Jin, the company’s CEO. “The processing we do mimics how a human brain would work by taking two streams in and then creating that sphere where you feel like you’re actually there.”
The device is Wi-Fi enabled for live streaming, and Jin hopes it will bring previously unattainable experiences to people across the world. “With this technology, I really think you can put people in different places without them physically being there,” he says – an ability that he believes will increase worldwide empathy as well as inspire people to try the seemingly impossible.
Sound and the IoT
Taking home the gold in the Health & Wellness subcategory of Hearing Technology was Oticon USA’s Oticon Opn. The device is heralded as the first hearing aid to connect to the Internet of Things (IoT), bringing a tech that’s been around in some form or another for more than 100 years into the twenty-first century.
The device can connect to any other device within the IoT, such as a smartphone or speaker. “If someone was hearing impaired, they’d have to take out their hearing aids to listen to music through headphones,” says Maureen Doty Tomasula, a senior product and marketing manager at Oticon. “It may not even be loud enough. Then they’d have to turn it up really loud. But with this, it streams directly through their hearing aids.”
The Oticon Opn can also be connected to the IFTTT network to create if/then “recipes” to help the user or those around them manage their lives. “If you’re the parent of a younger child wearing hearing aids, you can set it up through IFTTT that if the batteries are running low in Suzy’s hearing aids, then you’ll get a message on your phone saying Suzy’s batteries are low,” explains Tomasula.
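The if/then “recipe” logic Tomasula describes can be sketched in a few lines. This is a simplified illustration only — the IFTTT service, not code like this, handles the real triggers, and the function name and threshold below are made up:

```python
# Simplified sketch of an IFTTT-style if/then "recipe": when a hearing aid
# reports a low battery, a notification fires. Illustrative only.
def battery_recipe(battery_percent, threshold=15):
    """IF the battery is low THEN return the notification text, ELSE None."""
    if battery_percent <= threshold:
        return "Suzy's hearing aid batteries are low"
    return None

print(battery_recipe(10))  # prints the low-battery notification
print(battery_recipe(80))  # prints None (no trigger)
```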
The device should mitigate many of the problems associated with hearing loss, and in some cases, maybe even make users’ lives easier than those without hearing impairments. “To talk on the phone is difficult. It just doesn’t work well with hearing aids and hearing loss. But now, an Oticon user doesn’t have to even think about it. She can use her phone even better than her friends,” says Tomasula. “Think about it. They have to put their phone up to their ear.”
This interview has been slightly edited for clarity and brevity.
3D design and sculpting apps normally rely on motion controllers as input devices. But Gravity Sketch is now working on supporting hand tracking in their app so you’ll be able to draw and sculpt intuitively with just your fingers, creating an experience closer to hand sculpting.
Virtual Reality is a burgeoning technology in the field of gaming. And, just as developers are becoming more and more comfortable with the medium, the technology continues to keep pace with the kind of experiences gamers want to have.
Wilson’s Heart, the recent release from game studio Twisted Pixel, showcases the unique storytelling ability of virtual reality. Take a look at UploadVR’s glowing review of the new game. Be aware that the video warns it may spoil some minor story elements.
In the game, you play as Wilson, an older gentleman trapped in a sinister hospital. Joe Durbin of UploadVR describes the experience as “wickedly enjoyable.” He lauds the excellent visuals and intricate storytelling, which is aided by the voice acting of some of the biggest names in show business, like Alfred Molina (Spider-Man 2), Rosario Dawson (Marvel’s The Defenders), and even Peter Weller of RoboCop fame.
Virtual reality is rocking the medical technology industry. It’s quite possible VR and augmented reality will be present in almost every aspect of medical training in the near future. 3D Systems has created a new virtual operating room training experience that tests students with precise controllers.
As if the movies weren’t enough to give you chills, Ridley Scott, director of the sci-fi flick Alien, now wants to bring the horrific experience into virtual reality (VR). RSA Films, a production studio Scott co-founded, recently launched a dedicated, in-house VR division called RSA VR. According to the company’s short write-up, the division won’t be limited to VR; it will also include projects that explore augmented reality (AR) and other so-called mixed media.
“I enjoy everything from assembling teams and forging partnerships to create, distribute and market VR content, to having an ongoing informative — and often inspirational dialogue — with colleagues in the space. I consider us in a unique position to undertake this with RSA’s talented roster of directors,” Dennis added.
Beyond Just Aliens
RSA VR’s first work will be set in the world of Scott’s latest addition to the Alien franchise, the upcoming Alien: Covenant. The VR division is working with Twentieth Century Fox’s FoxNext and Technicolor’s MPC to deliver this experience — and it may even involve keeping that baby alien from wrapping around your face.
As machine learning produces virtual reality that feels more real than ever, the divide separating “human” and “machine” is shrinking. We are teaching AI to beat us at our own games, and it’s proving to be a limitlessly powerful student. The world in which the phrases “seeing is believing” and “show me” mean something is rapidly receding in the rearview mirror, giving way to a reality where you can’t always trust your sensory data — which, like any other data, can be hacked or faked. Though, in this case, it could be at your own direction — and to your advantage.
The human brain evolved to keep us safe in a world of predatory animals, deep, unnavigable waters, high cliffs, and sharp edges. Optimization for survival and reproduction of our genes demanded accurate sensory input, and generations of reliance on that data led to hard-wired fears that feel as real as anything we experience. Something moving fast at the periphery of your field of vision startles; the sight of a sheer drop opening into yawning space causes the heart to pound. Danger! Avoid, survive.
Now, we have learned that we can work, learn, and play at our highest levels by tricking our minds into perceiving what isn’t there. Like a wizened mentor, turning the lessons inward to the self, we shape the lesson and provide the interpretation for our own brains, translated into unmistakable sensory data. In other words, we provide ourselves with learning opportunities that are based in sensory experiences.
Smart sensory devices are changing the depth to which we experience virtual reality. Oculus earbuds, for example, along with acoustic filtration apps like H_ _ r provide a sense of true immersion in a virtual environment — something that’s only been recently made possible. It’s also possible now to smell things without a nose, thanks to advancements in “artificial olfaction” technology produced by companies like eNose. We can taste things that aren’t there and even “send” them to each other online for tasting. If these artificial sensory experiences were to work in tandem, we might not be able to tell the difference between virtuality and the real thing beyond actually telling ourselves what is real and what isn’t.
In fact, virtual experiences may soon provide more sensory data than we can get by any conventional means; an even more “realistic” experience than reality. This would be even more powerful, as TechCrunch points out, with the help of chemical stimulation strengthening the synapses that cement our memories.
The irony of relying on our brains to remind ourselves of what’s real—precisely because we know we won’t be able to trust whatever data our brains themselves come up with—is itself amusing. One of our most fascinating cyborg moments of the coming years may be the merging of technologies with the human body in the pursuit of more realistic virtual experiences.
Opening Up New Worlds
The implications of the abilities we’ve developed to trick our brains are more than amusing. They are changing the way we learn, work, and relate to each other as well, not to mention motivating us to learn how to recapture neuroplasticity and use it to our advantage. The quest for the perfect virtual experience is opening up new worlds for anyone who’d like to experience them.
All of these things would be little more than novelties if our technologies were not so adept at tricking our senses. Thanks to better tech and improved sensory swindling, though, each of these virtual applications holds deeper meaning for us as a species.
When IMAX announced its plans to create virtual reality (VR) experience centers all over the world, the future of increasingly immersive movie-viewing experiences began to take shape. Now that IMAX has opened the first of those dedicated VR experience centers in Los Angeles, that future has become a reality.
For its first film undertaking, IMAX is partnering with Warner Bros. to give movie-goers a super-sized cinematic experience using several upcoming superhero flicks as the catalyst. “It’s fitting that with Imax and Warner Bros.’ shared history of launching Hollywood movies in Imax theaters, today we’re entering into our first studio deal to bring VR to the multiplex,” IMAX Corp CEO Richard L. Gelfond told The Drum. The deal includes the highly anticipated “Justice League” and “Aquaman” superhero franchise films.
“This type of premium content will introduce audiences to virtual reality in standalone and multiplex-based Imax VR centers as well as other platforms,” Gelfond added. While details are still sparse, we know that the IMAX VR Experience Center utilizes both the HTC Vive and Starbreeze’s 210-degree StarVR hardware.
HTC’s new virtual reality (VR) platform now allows brands to identify whether or not a user has already seen an ad via its VR headsets.
This new, strictly opt-in VR Ad Service — ads will appear only in content that developers have chosen to include them in — means advertisers will only have to pay for an ad after a user has seen it. The platform is capable of carrying ad formats like scene banners, 2D and 3D in-app placements, and app recommendation banners.
Ads that appear in immersive VR environments can not only provide more effective impressions, they can also track whether users have viewed them or turned their gaze away.
This technology aims to give advertisers the means to effectively reach and pique the interest of their audience while simultaneously enhancing brand image, and attracting more users to directly download their apps in the VR environment.
The technology was launched at the 2017 VIVE Ecosystem Conference.
Advertising in the Age of VR
In-game advertisement, even in the traditional sense, offers a lot of incentive for developers to support the development of their games. But ads are also something that viewers naturally try to avoid. With VR gaining a strong foothold in mainstream media, companies are now trying to monetize the platform by introducing VR ads — a concept that, while fascinating, is also slightly disconcerting for some.
On one hand, ads viewed within HTC’s immersive VR environment are based on precise re-targeting, which means advertisers can ensure that they are actually showing ads relevant to their viewers. But, since the payout is linked to people actually viewing the ads, the tech must verify this — which it does by tracking the viewer’s gaze. It wouldn’t be hard to imagine a future where people wear VR or augmented reality (AR) equipment on a daily basis (perhaps in the form of contact lenses), meaning they quite literally could not look away from a commercial — or any other content, for that matter. That hypothesis aside, HTC points out that its aim for VR advertising isn’t to interrupt the VR or AR experience — it’s actually designed to complement it.
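Gaze-verified impressions could work along these lines — a hypothetical sketch, not HTC’s actual implementation: an ad counts as “viewed” only if it stays near the center of the user’s gaze for some minimum cumulative dwell time.

```python
# Hypothetical gaze-based impression check: the ad counts as "viewed" only if
# the angle between the gaze direction and the ad's center stays under a
# threshold for a minimum cumulative dwell time. All parameters are invented.
def ad_viewed(gaze_angles_deg, frame_dt=1 / 60, max_angle=10.0, min_dwell=1.0):
    """gaze_angles_deg: per-frame angle (degrees) between gaze ray and ad."""
    dwell = sum(frame_dt for angle in gaze_angles_deg if angle <= max_angle)
    return dwell >= min_dwell

# 90 frames (1.5 s at 60 fps) of staring counts; a 10-frame glance does not.
print(ad_viewed([2.0] * 90))                # True
print(ad_viewed([2.0] * 10 + [45.0] * 80))  # False
```

A dwell-time rule like this is what makes pay-per-view billing possible: the advertiser is charged only when the check passes.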
Only time will tell if it will succeed from a consumer perspective. Until then, we can only hope that VR and AR companies find the right balance between creating a viable advertising revenue stream and ensuring a great AR and VR user experience. Ideally, one that doesn’t force us to consume media, commercials or otherwise.
Using a system of electrodes and sensors, researchers at the National University of Singapore can digitally transmit the basic look and taste of lemonade to a plain glass of water. The team used color and pH sensors to capture the lemony hue and acidity of fresh lemonade, and then transmitted the data to electrodes surrounding the plain water. The drinker’s taste buds were stimulated with electricity from the receiving electrodes, and LED lights mimicked the lemon color as they drank what tasted like lemonade.
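The capture-and-replay pipeline described above (sense color and acidity, transmit, then drive LEDs and rim electrodes) can be sketched roughly as follows. All names and the pH-to-current mapping here are illustrative assumptions, not the researchers' actual system or values.

```python
# Hypothetical sketch of the lemonade pipeline: a color/pH reading is
# packaged, "transmitted", and mapped to LED and electrode settings on
# the receiving tumbler. The mapping and current range are made up.


def encode_reading(rgb, ph):
    """Package a sensor reading from the source drink for transmission."""
    return {"rgb": rgb, "ph": ph}


def stimulation_current_ma(ph, max_ma=0.18):
    """Map acidity to electrode current: lower pH (more sour) -> more current.

    Clamped to a small range; the numbers are illustrative, not safe-tested.
    """
    sourness = max(0.0, min(1.0, (7.0 - ph) / 7.0))
    return round(sourness * max_ma, 3)


def replay(reading):
    """Compute settings for the receiving tumbler's LEDs and rim electrodes."""
    return {
        "led_rgb": reading["rgb"],  # mimic the source drink's color
        "electrode_ma": stimulation_current_ma(reading["ph"]),
    }


# Fresh lemonade is roughly pH 2.5 and pale yellow.
settings = replay(encode_reading((250, 240, 160), 2.5))
```

The design mirrors the article's description: taste is approximated by electrical stimulation of the tongue at the rim, while color is handled separately by the LEDs, which is why the two are computed independently.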
The idea behind the technology (inevitably conjuring up the dystopian future of The Matrix) is simply to allow people to go online and share sensory experiences when they’re apart.
The team tested the digital tumbler with 13 tasters who didn’t know whether the lemonade was real or not. The tested lemonades were either cloudy white, green, or yellow. First, the volunteers learned to rest their tongue on the tumbler’s rim as they drank — this contact allowed them to benefit from the electrical stimulation. Then, they tested the drinks and rated each one based on both how sour it appeared and how sour it actually tasted.
On average, the tasters reported that the real lemonade tasted more sour than the virtual lemonade. However — perhaps because of the brightness of the LEDs — they perceived the virtual cloudy lemonade to be more sour based on its color alone. Recognizing the inherent limitations of any attempt to imitate a taste profile without scent, the team plans to develop a way to simulate olfaction as well. As Ranasinghe says:
“We’re working on a full virtual cocktail with smell, taste and color all covered. We want to be able to create any drink.”
Social media sharing is really just the tip of the iceberg for this kind of technology. Virtual drinks could help people enjoy sugary drinks without experiencing the impact on their blood sugar or teeth, or even have an alcoholic drink without becoming intoxicated. More than that, though, this new tech is transforming how we interact with one another and share experiences. Many social media platforms are already embracing the idea: Facebook acquired Oculus with the vision of VR serving as a new platform for immersive communication. Ideally, this would allow us to share more than memories with our friends and families online: we could experience new things together — even when we’re geographically apart.
When Bethesda showed a limited demo of “Fallout 4” in virtual reality (VR) during the 2016 Electronic Entertainment Expo (E3), it impressed gaming critics and enthusiasts alike. Not only would any first-person shooter game in a VR format be noteworthy, this was an installment in the acclaimed “Fallout” series. It was enough to get any gamer’s trigger finger twitching — or, in this case, pulling a virtual trigger.
For this year’s E3, Bethesda has promised not just a limited demo of the successful open-world, first-person shooter game, but a complete and uncompromised “Fallout 4” experience with a VR treatment. This has been confirmed by both Bethesda game designer Todd Howard and marketing VP Peter Hines.
“We have an opportunity to make something really unique. We’d rather do that than make some other tiny experience,” Howard told Upload last November. “I don’t think that’s what people want from us.”
“I talked to Todd the other day and I was like, ‘Hey how’s Fallout 4 [VR] coming?’ and he said, ‘Pete, Fallout 4 VR is the most incredible thing you’ve ever seen in your life. You can’t even imagine what it’s like playing in VR and how realistic it looks with everywhere you turn your head. It’s gonna blow your mind.’ […] We will have it at E3.”
“Fallout 4 VR” is expected to come out for the HTC Vive this year. When it does, it will be the first open-world game that’s been overhauled for a full-blown VR experience at that scale.
In the early ’90s, the World Wide Web came online, and with it came most people’s introduction to the internet.
At first, it was just this thing that existed on the fringes of society, used by a select few who were mostly captivated by the novelty of being able to beam information around the world instantly. It only had a few sites, and there really wasn’t much to do. Eventually, as more and more content was created, it sucked everyone in. Now, it has become impossible for anyone to live in society without a connection.
The internet is taking its next steps through augmented reality (AR) and its more popular cousin, virtual reality (VR). Rather than having users interact with the internet through a screen, these technologies place users inside the internet and let them interact with it in three-dimensional space. Where virtual reality plunges users into new worlds, augmented reality layers the digital world over top of the real one.
Seeing is believing. If you get the chance, give it a shot; until then, watch this demonstration of Microsoft’s HoloLens to get a sense of what it will enable you to do.
There is virtually no part of life that AR will not impact. Education, art, business, sports, travel, entertainment: all will be enhanced by the introduction and application of AR, just as they were by the internet. Imagine sitting across the table from someone in a meeting and, as you look at them, an infographic pops open beside them telling you everything you need to know about them. Or traveling to the Amazon and instantly seeing everything you want to know about every plant and animal you look at. Or sparring with Muhammad Ali and playing one-on-one with Michael Jordan.
The biggest application will probably be in how we educate kids. Let’s face it, the online world is more captivating than anything going on inside a classroom. To compete for their attention, we are going to need to bring the digital world into the classroom, and AR provides an ideal tool for doing that. Rather than having kids squint at blackboards and projector screens, they will have interactive lessons rendered in front of them.
Another term you might hear is MR, mixed reality. The line between MR and AR is a little blurry, as they both lack clear definitions. The most hyped piece of MR is from a company called Magic Leap, which has managed to raise $1.4 billion in funding to keep its super-secretive project going. All we have are a few leaked videos of what it could do two years ago.
Where is This Going?
It has been just over 20 years since the internet was introduced to the masses through the World Wide Web, and look at all that it has done in that time. Nothing has ever had a greater impact on as many people in such a short amount of time. And it is rapidly evolving. The pace at which it is progressing is itself accelerating, meaning the next 20 years will see far more change than the last 20 years did.
VR, AR, and MR (someone needs to come up with a catchy name for calling all three) are the next set of tools that will push the digital world forward. The devices will get better and more content will be created, eventually allowing the internet to jump from the screen to the world. It will be all around us, digitally populating our world while also giving us whole new worlds to step into.
The bigger question may not be what we are going to do with it but what it is going to do to us.
Realistic visuals and audio are essential to shaping an immersive virtual reality (VR) experience. But these researchers from the National University of Singapore believe VR shouldn’t just cater to sight and sound. For the ultimate VR experience, other senses should come into play as well.
The add-on contains two features. One is a a wind module attached to the bottom of the headset that uses two fans to simulate wind blowing in the wearer’s face. The other is a temperature module that attaches to the back of the wearer’s neck to simulate heat. Various experiments show that gradual application of each module can mimic how the whole body would actually feel if, for example, the wearer was walking through a desert under the scorching sun or skiing down a mountain slope.
Previous attempts to recreate environmental conditions required fans and heat lamps, so being able to scale this down to something compact is a significant achievement. Next up for the team? Amping up the VR experience via smell and vibrations, as well as learning how human emotion can be augmented and applied to multi-sensory VR.
Gabe Newell, the co-founder and president of Valve, has authored projects that have garnered so much attention that he has been meme-ified into a god.
But looking past the deification by those who support his work, Newell is heavily involved in the future of virtual reality. His company has worked intimately with HTC to produce the Vive, one of the few mainstream platforms consumers can use to access virtual reality. Yet, even though there are over 1,300 apps on the Steam VR marketplace and user numbers grew 86% at the end of 2016, the revenue stream isn’t uplifting. Only 30 VR apps have made just over $250,000, which is not the most promising of figures.
While Newell is optimistic, he says he’s “pretty comfortable with the idea that it will turn out to be a complete failure.” He went on to say that, “if you don’t try things that don’t fail you probably aren’t trying to do anything very interesting.”
But that doesn’t mean he’s inclined to fail. By upgrading the VR console to have a stronger CPU and better resolution, providing an open-source platform for developers to tinker with software, hardware, and everything in between, and unleashing a litany of new games (Valve is currently developing three of its own), Newell believes the shared optimism of developers and consumers alike will drive the newborn industry into its place in the family living room for years to come.
Some time back, Magic Leap raised a staggering $1.4 billion in venture capital in order to make mixed reality our new reality. Since then, a proverbial firestorm of media coverage has swirled around them.
Part of this firestorm stems from the secretive nature of the company. Few people have tried the tech (and even fewer journalists), so no one really knows what to expect; however, if the teasers are any indication, whatever they develop is going to be pretty fantastic.
Take A Real, Hard Look
Now, it seems that we may finally be seeing what Magic Leap has really been up to. Business Insider asserts that an image of a prototype was just leaked to them. Their source said that the device is known as “PEQ0,” which is a stand-in name that was derived from an internal “prototype naming scheme” that Magic Leap uses.
Business Insider tried to reach out to the company for comment and to verify the information, but they didn’t hear anything back by the time the article was released.
As you can see in the image below, the prototype definitely looks like a prototype. And this is a bit of a problem.
The issue is that, in recent months, Magic Leap has been accused of using erroneous and misleading marketing material to make the public (and reporters) believe that the technology is a lot farther along than it truly is.
Specifically, Reed Albergotti published an article in which he alleged that Magic Leap had “oversold” its system, using different tech for the demo than what will be used in the final product. In response, in a memo to employees, CEO Rony Abovitz stated that the company shouldn’t be sidetracked: “Ignore all of this. Focus on what we are doing, and we ship a great product. That will speak loudly and reverberate for many years.”
This leak also comes at a prime time, as there is a board meeting next week that is generally seen as “a milestone in the product’s development.” In short, it is a chance to show, once and for all, that Magic Leap can truly do all that they claim—shrink augmented reality tech so that it is truly portable.
If this prototype is any indication, they have a lot of work to do before next week.
Undeterred by the recent lawsuit loss suffered by Oculus, Facebook founder and CEO Mark Zuckerberg continues working on developing hardware to improve the technology’s augmented and virtual reality (AR/VR) experience. Just after his tour of the Oculus Research lab in Redmond, Washington, Zuckerberg shared a new prototype for VR gloves.
“We’re working on new ways to bring your hands in virtual and augmented reality,” Zuckerberg wrote in a Facebook post. “Wearing these gloves, you can draw, type on a virtual keyboard, and even shoot webs like Spider Man. That’s what I’m doing here.”
While not much is known about the VR gloves just yet, we do know that Oculus acquired a hand-tracking startup called Pebbles Interfaces that was already working on technology that could help in the creation of VR gloves.
We also know that the prototype seems to use OptiTrack Prime 17W cameras in lieu of an Oculus-built sensor system to sense the position of the gloves, according to TechCrunch. The gloves also seem to have a slim form-factor.
These gloves could help Oculus encourage the adoption of VR tech within the mainstream market (their Touch controllers don’t quite do the trick, as they have very limited uses in VR). Oculus isn’t the only company working on VR gloves, though. There’s Dexta Robotics’ Dexmo gloves and Manus’ sleeker, bike-glove-style accessory that works with HTC Vive controllers. There’s also Neurodigital’s Gloveone and Noitom’s Hi5.
From entertainment to healthcare, the possible applications of 3D digital holograms are numerous. Unfortunately, current holographic displays are nowhere near as advanced as science fiction movies have led us to believe.
Right now, most virtual reality (VR) and augmented reality (AR) setups simulate a 3D viewing experience by projecting two separate 2D images that are viewed using special glasses.
It is possible to create 3D holograms that can be viewed without glasses by using wavefront modulators, which control the direction of light, and deformable mirrors (DMs). The problem, however, is that even our highest-resolution displays don’t have enough pixels to produce a 3D image. This means we are only capable of creating a hologram about a centimeter (0.4 inches) big with a viewing angle of three degrees.
KAIST’s technique produced a bigger 3D holographic image: two centimeters (0.8 inches) in length, width, and height, with a 35-degree viewing angle. That’s an almost 2,600-fold improvement in quality over any existing holographic technology. If they can continue to scale up this technique, it would spur further advancements in holography and improved AR and VR technology.
It’s the most wonderful time of the year for all of you film buffs. That’s right, it’s Oscar season! Nominations were announced yesterday and came with the usual snubs and surprises. There’s so much to celebrate in terms of diversity, and possibly even more than you think. Not only do we see greater cultural diversity being lauded, but also new technological mediums are being given their moment in the spotlight.
For the first time ever, a film made for virtual reality (VR) has been nominated for one of the most prestigious awards in Hollywood. The Google Spotlight Story, Pearl, has been nominated for an Oscar in the Best Animated Short category.
The film follows the lives of a father and daughter as seen from the passenger seat of the car they share, and the music that drives them.
Pearl was directed by a former Oscar winner, Patrick Osborne, who received the award for his work on the adorable Disney short film, Feast.
There is, however, one small caveat to this news. Given the rules of the nomination process, the 360° cut of the film is not the cut that was nominated. A traditional cut was created so the film could be shown in theaters, and it is this version that received the nomination.
Still, this is an exciting development for VR content and will bring attention to the artistic potential of the medium.
There’s a lot of good that technology brings to the world—convenience, ease, accessibility, even social good. Now, one technological advancement has come into focus recently for its potential to promote empathy: virtual reality (VR).
Slowly, virtual reality is gaining a foothold outside of gaming, being recognized for its use in fields such as education, retail, and even the medical industry. Researchers and experts are now pointing out that VR can do a lot more than simply provide an immersive dimension to traditional entertainment. VR can place users temporarily in someone else’s shoes, and as such, can be used to “reduce racial bias, promote environmentalism, encourage empathy for prisoners, humanize political enemies, or even help users make better financial choices.”
“We have spent the past fifteen years designing experiments to test the efficacy of VR to teach empathy. Across dozens of studies, a pattern emerges in that VR tends to be a more effective tool than other media and role-playing techniques,” Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Lab, said in an article by NBC News.
In one experiment conducted in Barcelona, VR was instrumental in providing an in-depth understanding of racial bias, allowing users to embody light- and dark-skinned avatars. In another, Stanford University used a time-lapse simulation that gave users an immersive experience of what it was like to be a piece of coral slowly corroding due to ocean acidification. Researchers observed that those who were part of the VR experience recognized acidification as an environmental threat more readily than others who merely saw the coral corrode on video, demonstrating the impact of VR on environmental empathy.
Sun Joo (Grace) Ahn believes the technology’s unique ability to build this level of empathy among users can be credited to its inherently immersive nature. Imagining life from someone else’s point of view is based on limited information. Anything that is confusing or unclear will leave perceptual blanks that prevent someone from fully understanding what it means to be in their shoes.
Even if you could easily fill those perceptual blanks using your imagination, they would all be based on assumptions. VR makes it possible for people to actually fill those shoes. So if you want someone to relate to the difficulty of experiencing racial bias on a daily basis, VR can let them navigate a virtual world while assuming an avatar with a different skin tone from their own in real life, so that they experience the emotions behind it. According to Ahn, the added sensory input makes it easier for the experience to make an imprint in the brain.
“Virtual simulations tend to mimic what you might experience in the physical world so there’s a lot more detail for you to create a mental schema, a memory of this experience as if it happened to you in the real world,” she says. “When you are trying to recall that experience later on, it’s a lot more salient. You have a lot more details to recall. It’s easier for you to build this mental map of what you experienced in your memory.”
Research behind this phenomenon and its significance in promoting positive behavior is still in its early stages. Whatever short-term effects we may get from VR exposure will also have to contend with a barrage of other media sources that may contradict the positive messages delivered in VR form. Right now, studies are looking into how the technology can be used to produce long-term effects, which will involve more research into repeated use as well as the development of more empathy-building VR programs. There’s also the possibility of the technology and research being “hijacked or used to promote different agendas regardless of their moral composition,” as one Stanford University graduate student points out.
One thing is for certain, however: this development points to VR being more than just a novel way to experience gaming. Technology can indeed have a more profound effect on all of us.