Sensing the world in VR: an overview of the state of VR in UX (and introducing a new term to the game)

Note: This is a guest post written by Ayelet Batist, who is giving a Master Class (which you should definitely sign up for) on avatar embodiment in VR on June 7th.
Sign up for the Master Class (it’s free & online) →

Standing in a crowd of excited Chinese visitors at the 798 Art Zone centre in Beijing, I was wondering what all the fuss was about.

At the centre of the circling mass of observers stood a young man wearing a strange headset, waving his hands about.

A monitor on the wall displayed scribbles of colorful brush strokes that seemed to appear as the man with the headset moved his hands.

Aha, I realized: he was drawing in a virtual reality environment.

I admit, the results on the monitor appeared far from impressive. The experience as a whole seemed more like a technology experiment than something that could actually prove useful.

Still, I awaited my turn to try it out.
When I was finally instructed (in Chinese and hand gestures) to wear the headset and hold the hand controllers, I curiously examined a VR user interface for the first time.

Still, no wow effect.
Yes, I’m a perfectionist when it comes to design, and this UI was not a design masterpiece.

I picked a color and gave it a go: a beautiful brush stroke appeared as I moved my hand in the 3D space. Nice!

In front of the bubbling crowd of observers, I started painting more enthusiastically, quickly adding to my 3D creation: a flower, a butterfly, a rainbow…

To my disappointment, my creative flow was interrupted by the operator, tapping on my shoulder and telling me my time was up.

Still under the spell of the experience, I walked away from the crowd.
Suddenly, I realized what the fuss was about.

Yes, the resolution wasn’t great, and the UI was clumsy and unrefined. But it felt like pure magic.

The age of VR

In their book ‘The Fourth Transformation’, authors Robert Scoble and Shel Israel predict that virtual reality (and its sister immersive technologies, augmented and mixed reality) will not only disrupt the technology market, but will dictate how the next generation consumes, communicates, and even thinks.

As this notion spreads, more designers join the frontier, explore tools and processes that can be applied to VR design, and report back to the design community.

Many terrific blogs and tutorials focus on the technical limitations of this new medium, and on the different approaches for bypassing them. This is great for paving the way for us designers to follow.

However, I get the feeling that we all too quickly jump to sketching out 3D interfaces, before we’ve really taken the time to consider the profound ways in which VR interactions differ from anything we’ve designed before.

When we started designing interfaces for mobile touch-screen devices, we began to consider the small screen sizes, the different ways in which users can hold the device, the average sizes of fingers, and how all of these impact the design.

Similarly, before we set out to design VR interfaces, we must first consider the very basic aspects of immersive experiences, not just the technical limitations.

Jumping head-first into designs

As Sam Applebee, co-founder of Kickpush studio, pointed out in his blog:
In old and familiar digital media (desktop, mobile, wearable), designers create content that the user can choose to focus on, instead of focusing on all the other sensory stimuli around them.

In immersive VR experiences, we place the user inside our designed content, taking full control of their sensory input, at least for some of their senses.

This is a huge UX challenge, and a huge responsibility.
Thus, when designing immersive experiences, the first matter to reflect on is the system of human sensory inputs, and how we can both delight and protect our users.

Eye-opening experiences

The obvious is a good place to start.

While we’ve long been designing all kinds of visual content for all kinds of digital media, in VR we are responsible for providing all of the visual content that is accessible to users.

Whether it’s fully computer-generated or real-world content, we’re providing a complete alternative visual input.

A trivial yet important outcome of this is that the degree of engagement the user has with our content is no longer fully under their control.

For example, we can design an interface that appears right in front of the user and moves with their gaze as they turn their head, leaving no option to ignore it.
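
To make this concrete, here is a minimal sketch of such a ‘head-locked’ interface, using Three.js (my choice of library for illustration; the scene and panel names are made up). Parenting the panel to the camera makes it follow the user’s gaze wherever they look:

```typescript
import * as THREE from 'three';

// A "head-locked" panel: because the panel is a child of the camera,
// it stays fixed in the user's view no matter where they turn their head.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
scene.add(camera); // the camera must be in the scene graph for its children to render

const panel = new THREE.Mesh(
  new THREE.PlaneGeometry(0.8, 0.4),
  new THREE.MeshBasicMaterial({ color: 0x222244 })
);
panel.position.set(0, 0, -2); // two meters straight ahead, in the camera's local space
camera.add(panel);            // now the panel moves with every head turn
```

Patterns like this are best used sparingly, precisely because the user cannot look away.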

This increased control comes with an increased responsibility.

The VR design frontier has already provided tips and guidelines for the ideal positioning of text and interfaces in 3D space. These guidelines usually refer to the ideal distances between the interface and the viewer’s eyes for achieving a sharp image that is easy to focus on. But many more UX issues require our consideration.
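
One piece of those guidelines is pure geometry: the visual angle an interface occupies is determined by its physical size and its distance from the eyes. A small sketch of the relationship (the 10 degree and 2 meter figures below are placeholders, not official guideline values):

```typescript
// How wide (in meters) must a panel be to subtend a given visual angle
// at a given distance? worldSize = 2 * distance * tan(angle / 2).
function worldSize(angularSizeDeg: number, distanceM: number): number {
  const rad = (angularSizeDeg * Math.PI) / 180;
  return 2 * distanceM * Math.tan(rad / 2);
}

console.log(worldSize(10, 2).toFixed(2)); // ~0.35: a 10° panel placed 2 m away is about 35 cm wide
```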

Excluding the case of certain kinds of (let’s call them) intimate content, users might feel disturbed when visual content is positioned inside their peripersonal space (the area surrounding our body that provokes higher alertness when intruded upon).

Furthermore, the intensity of this effect might be different for various content types, and for different users.

For instance, some users might want to specify just how close the zombies can get to them in a shooting game…

The existing guidelines advise on where to position content so that it can be perceived most clearly by the user’s central vision, which is highly sensitive to details.

Peripheral vision is less sensitive to detail, so objects positioned in its range are not seen as clearly. But beyond not being seen clearly, content positioned in the user’s peripheral field of vision can actually be disturbing.

Since peripheral vision is very good at detecting motion (which would have been very useful to our ancestors in the jungle for avoiding lion attacks), positioning animated content in the peripheral field of vision can grab the user’s attention, interrupt their focus on the content in their central field of vision, and induce a certain level of anxiety. The available guidelines are a great start, but they are not a complete cookbook for creating the best visual VR experience.
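
In practice, one way to act on this is to measure how far a piece of content sits from the center of gaze before allowing it to animate. Here is a sketch using Three.js; the 30 degree eccentricity threshold is my own assumption, not a published figure:

```typescript
import * as THREE from 'three';

// Returns true when the object lies outside a comfortable cone around the
// user's gaze direction, i.e. in peripheral vision, where motion distracts.
function isInPeriphery(camera: THREE.Camera, object: THREE.Object3D, thresholdDeg = 30): boolean {
  const gaze = new THREE.Vector3();
  camera.getWorldDirection(gaze);

  const toObject = object.getWorldPosition(new THREE.Vector3())
    .sub(camera.getWorldPosition(new THREE.Vector3()))
    .normalize();

  return THREE.MathUtils.radToDeg(gaze.angleTo(toObject)) > thresholdDeg;
}

// In the render loop, e.g.: if (isInPeriphery(camera, notification)) pauseAnimation(notification);
// (pauseAnimation is a hypothetical helper.)
```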

Design guidelines that address the technical limitations of the medium are another example of a set of rules that we should not assume to be complete. Latency, low resolution, and visual glitches can disrupt the immersive illusion by causing irritating cognitive conflicts between what the user’s brain expects to see next and the actual visual input it receives. These known issues are often reviewed and addressed by online resources providing basic guidelines.

But many issues are not yet addressed: those scary known unknowns of the cognitive and physiological effects of manipulating a human’s visual input for an extended period of time. We have heard of, or experienced, short-term VR-related headaches, nausea, and dizziness. But what about long-term effects? If we learn that spending a long time in a VR session can harm the user, is it our responsibility to design experiences that encourage the user to take scheduled breaks and have shorter sessions? While such issues may not be experience-specific, experience designers can, and probably should, include them in their design considerations.
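
As a sketch of what such ‘designed-in breaks’ could look like in code, here is a plain session timer that periodically asks the experience to surface a break prompt. The 20-minute interval is an arbitrary placeholder, not medical guidance:

```typescript
// Schedule a recurring break prompt; returns a function that cancels it.
const BREAK_INTERVAL_MS = 20 * 60 * 1000; // placeholder: 20 minutes

function scheduleBreakPrompt(showPrompt: () => void): () => void {
  const id = setInterval(showPrompt, BREAK_INTERVAL_MS);
  return () => clearInterval(id); // call when the session ends
}

const cancelBreaks = scheduleBreakPrompt(() => {
  // A real experience would fade in an in-world panel here, rather than
  // interrupting abruptly and breaking the immersion it is trying to protect.
  console.log('You have been in VR for a while. Time for a short break?');
});
```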

That doesn’t sound right

The second type of sensory input that is entirely controlled by the experience designers is sound.

Stereophonic sound is not a new concept.

Our brain can perform the magic of figuring out where sound sources are positioned, in relation to us, based on extremely slight differences in the sound inputs received by each of our two ears.

Even without the aid of sight, sound design can create extremely powerful spatial illusions (try this: virtual-haircut).

When sound input is combined with visual input, the immersive illusion becomes ever more convincing. But there is, of course, a catch.

When the visual and auditory inputs are properly synced, they give our brain the same information about where an object is in relation to us. When they are not well synced, the illusion is broken.

As beautifully illustrated in this talk about sound design, if we show the user a walking dinosaur in a VR game, we’d want to precisely link the various sounds that a dinosaur would make to their exact location in relation to us: the sound of the left foot comes from the left foot, the sound of the right foot comes from the right foot, and the roar could come from several locations in its huge open jaws.
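
On the web, this kind of linkage can be sketched with the Web Audio API: an HRTF panner is placed at the emitting object’s world position before each sound plays. `footstepBuffer` and `leftFootPosition` are hypothetical stand-ins for the game’s assets and skeleton data:

```typescript
// Spatialized one-shot sound: the panner's position makes the brain hear
// the footstep as coming from the foot itself.
const audioCtx = new AudioContext();
const panner = new PannerNode(audioCtx, { panningModel: 'HRTF' });
panner.connect(audioCtx.destination);

function playFootstep(footstepBuffer: AudioBuffer, leftFootPosition: { x: number; y: number; z: number }): void {
  panner.positionX.value = leftFootPosition.x;
  panner.positionY.value = leftFootPosition.y;
  panner.positionZ.value = leftFootPosition.z;

  const source = audioCtx.createBufferSource();
  source.buffer = footstepBuffer;
  source.connect(panner);
  source.start();
}

// For the illusion to hold, audioCtx.listener must likewise track the
// camera's position and orientation every frame.
```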

The addition of sound makes VR experiences feel realistic, which is great.

For example: while designing the sound of a game scene, a team of sound designers found that positioning a generic sound source at some distance above the player’s head gives the player a strange sensation of being under a heavy mass, or underwater. This effect fit the particular scene they were working on perfectly, since it happened to take place in a submarine.

But, this raises a worrying concern.

Unintentionally creating this effect in scenes that don’t involve submarines could be undesirable.

Precisely because VR experiences can be so convincing, there are clinics that treat a range of phobias by exposing patients, in VR, to the situations they fear. However, these therapeutic VR sessions are deliberate and carefully monitored.
Unintentionally giving users the feeling of being underwater or 6 feet under the ground might not be as therapeutic.

Sniffing around

Taking a break from the alarming thoughts of inducing anxiety attacks, we can turn to consider the design of less dramatic sensory inputs: smell and taste. What roles would these senses play in the design of future VR experiences? Do we even need to consider them?

Perhaps, before we consider their importance, we should consider how smell and taste inputs could be artificially supplied. Supplying a taste input would require an invasive physical interface.

Since taste plays a relatively minor supporting role in our experience of the world, it is mostly overlooked by the VR industry.

The sense of smell may be more interesting to ‘interface’ with.

Smells have the ability to trigger memories and emotions, which can intensify a designed experience.

While they may not be prominent or numerous, some companies are working on VR masks that cater to the sense of smell.

Yet, smell designers would probably not be in high demand anytime soon.

A magic touch

Now, let’s be honest. What we would really like to do in an alternative reality is not merely see things. We’d want to touch them — and not just in the case of a certain VR genre.

The sense of touch is an important and rich part of our experience of the real world, and it turns out to be gratifying when simulated in digital interactions.

Those of us who have not yet experienced the power of touch in VR can still appreciate its promise by recalling the excitement Apple created when it released 3D Touch and the Taptic Engine for iOS.

Apple’s website even went so far as to claim: “Apps and games will never feel the same again”.

In typical Apple fashion, this was of course somewhat of an exaggeration, especially in light of the possibilities of touch in VR, where we may potentially grab, pull, push, turn, and manipulate objects much as we do in the real world.

But then, we find ourselves facing a new problem. The object, or interface, that we are “touching” isn’t really there. So, how can the sense of touch be simulated while allowing the user to move her hands freely in the virtual world?

The industry has come up with all sorts of solutions for this, some more mainstream than others, all relying on some device that moves along with the user’s hand.

The common devices are hand controllers that provide both a control interface and a sensory feedback mechanism. Other solutions are various types of gloves, finger clips, and even full-body haptic suits.

From a design perspective, the variety of haptic feedback devices on the market may require designs specifically tailored to certain devices, and even designs for cases where the user has no haptic feedback device at all.
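
On the web, for example, WebXR exposes controller haptics through the Gamepad extensions, with support (and TypeScript typings) varying by browser and device, so a capability check with a graceful fallback is a natural pattern. A sketch, where `flashButton` is a hypothetical visual fallback:

```typescript
// Give haptic feedback for a button press when an actuator exists;
// otherwise reinforce the press visually instead.
function pressFeedback(inputSource: XRInputSource, flashButton: () => void): void {
  // hapticActuators comes from the Gamepad Extensions spec and is not in
  // all TypeScript DOM typings, hence the cast.
  const actuator = (inputSource.gamepad as any)?.hapticActuators?.[0];
  if (actuator?.pulse) {
    actuator.pulse(0.6, 50); // intensity 0 to 1, duration in milliseconds
  } else {
    flashButton();
  }
}
```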

Consistency between different apps and experiences might become an issue. If some interfaces offer buttons that provide haptic feedback when pressed, and others don’t, would this feel weird and confusing?

Perhaps a broader question is whether haptic feedback always improves an experience.

It is easy to imagine, for example, how haptic feedback enhances the experience of hitting a ball in a virtual baseball game. But would haptic feedback enhance all possible interactions? Would it be more gratifying or more disturbing to receive haptic feedback for every card swiped, for every slider moved, and for every button pressed?

There is still research to be done and design conventions to be created, which makes this a very interesting time to participate in the VR design scene.

Keep on moving

Perhaps the most interesting sense that VR brings to our attention is the least familiar one: proprioception.

Proprio-huh?

Proprioception. Yes, it’s actually a thing.

Proprioception is the sense of our body posture and of the position of our body in relation to the perceived space.

Imagine a 3D, dynamic mental representation of our body.

This model is obviously dynamic, since it keeps updating as we move about.

But it is also flexible.

When we learn to use a tool, e.g. a baseball bat, our brain gradually learns to include that tool as an extension of the mental representation of our body.

When this happens, we can use the tool automatically and very accurately, without the need to consciously plan and carefully execute each movement.

As we swing the bat, the mental representation of our body is constantly updated to fit the change in posture and position. Our brain knows how to create and update this mental model based on information coming from several other senses:

  • Sight
  • Touch
  • Vestibular information (balance and spatial orientation information detected by a sensory system in our ears)
  • Proprioceptive information (joint angle, muscle length, and muscle tension information from our muscles)

Normally, for average healthy humans, these different sensory inputs provide the brain with synchronized information. That is, they all give the same answer to these questions: “Where is the body?”, “What is the body posture?”, and “Is the body moving?”.

For example, when we walk, we feel our muscles moving, we touch the ground with one foot and then the other foot, and this information matches the visual image of the surroundings moving at a certain speed. It also matches the vestibular balance information from our ears.

So what happens in VR?

As we’ve seen, for the time being, a VR experience provides complete sensory input for some of our senses (sight, hearing) but not for all of them, and this is where the problems start.

Most of us have heard of the unpleasant experience of VR sickness, a close relative of motion sickness.

Motion sickness happens when there are sensory mismatches between the artificial visual information and the sensed vestibular, proprioceptive, and touch information.

When a user sees herself through the eyes of a VR character that bends over to pick an object from the ground, all the while her own body signals to her brain that she isn’t really moving, the brain receives conflicting information on what’s going on.

This doesn’t feel so great.

In fact, to our brain this feels as if we were drugged or poisoned, in which case the instinct to throw up and get the toxin out might fix the problem.

While this could appeal to some, most users will not want to use VR to feel drugged.

So how do we solve this?

There are different approaches in use, all of which share the goal of minimizing the sensory conflict.

Some approaches limit movement in the virtual environment by constructing an environment that fits the dimensions of the real room. Others lessen the conflict by creating ‘jumps’ in the viewed movement, such as teleportation and snap turning, instead of continuous motion.
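
Snap turning is a common example of the ‘jumps’ approach: instead of rotating the view smoothly, the player rig turns in discrete steps, so the eyes never report a sustained rotation that the inner ear cannot feel. A minimal Three.js sketch; the rig, the thumbstick input, and the 30 degree step are all illustrative:

```typescript
import * as THREE from 'three';

const SNAP_ANGLE = THREE.MathUtils.degToRad(30); // one discrete turn step
let stickWasCentered = true;                     // require release between turns

// Call every frame with the horizontal thumbstick axis (-1 to 1).
function applySnapTurn(playerRig: THREE.Group, stickX: number): void {
  const flicked = Math.abs(stickX) > 0.8;
  if (flicked && stickWasCentered) {
    playerRig.rotation.y -= Math.sign(stickX) * SNAP_ANGLE; // instant jump, no animation
    stickWasCentered = false;
  } else if (!flicked) {
    stickWasCentered = true;
  }
}
```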

Getting familiar with the different solutions and the advantages and disadvantages of each is probably an important first step in getting into VR experience design.

Considering the current ‘patch’ solutions for the issues caused by sensory conflicts raises the question: what might happen if and when we do manage to provide full, matching sensory input?

Haptic suits, VR treadmills, and convincing avatar embodiment experiences are being developed and tested, and interesting new findings are being revealed.

To get a glimpse of some of them, join the upcoming Hacking UI Master Class webinar: ‘Why we should care about avatar embodiment in VR’.

See you at the Master Class!

Ayelet.
