For the past two years, I have been exploring the realm of Extended Reality (XR), investigating cutting-edge immersive design methodologies that challenge the limits of human perception, experience, and comprehension.
This story aims to guide designers, or anyone curious about virtual reality, based on my experience transitioning to XR UX design. I will explain how XR design differs from flat-screen design and how our existing knowledge and point of view can be applied to this technology.
To avoid confusion between XR and VR: from here on, we will concentrate on virtual reality.
Differences Between 2D and 3D Design
- Depth and Scale
Unlike screen-based design, which is constrained to 2D layouts on a screen, VR immerses users in a fully interactive environment where they can move, explore, and interact with objects and UIs in a virtual world, and where they experience scale and depth just as they would in the real world. To make objects feel proportionate and natural, we must carefully evaluate their size and distance, and we also need to take the user's posture and viewing angles into account.
We can also use the environment itself as a storytelling medium, rather than relying only on an illustration or a captivating narrative to help the user grasp what we want to convey.
- Changes in User Interaction and Presentation
Interactions shift from flat-screen input to spatial gestures and natural movements. In screen-based UIs we interact through taps and clicks, swipes and scrolls, drag and drop, hover states, and form inputs. In VR we add hand gestures, head and body movement, controller-based input, voice commands, eye tracking, haptic feedback, and spatial audio cues.
- Height is Important
Height has a direct impact on comfort, usability, and interaction, in addition to being a visual factor. When height adaptability is taken into consideration, everyone can enjoy a more immersive and inclusive experience, regardless of their height or VR interaction style.
When I was first figuring out how to use Figma for VR design, I found the Medium story “3 key UI design concepts for VR and AR apps”, which gave me an instant understanding of how to design UI in Figma and then test it with a VR headset. The three key UI design concepts for VR are:
- Distance: How far away should the UI be presented to ensure that it is neither too large nor too small for the user?
- Size: How large is my workspace on Figma, given that we are used to desktop, tablet, and mobile frame sizes?
- Height: At what height should we position the UI that we will design?
I slightly modified the order of the concepts because there are things that we need to pay attention to regarding distance before determining the right size for designing a UI in VR.
Distances
As explained above regarding the differences between 2D and 3D, one of them is depth and scale, which is related to distance. Distance isn't just about measurement; it's about human perception and comfort. In VR, every millimeter matters, because we're designing for human biology, not just pixels on a screen.
In screen-based UI design, the interface is displayed directly on the user's device, so its viewing distance is determined by habit: users generally hold mobile screens about 35 cm from their eyes, while desktop monitors usually sit farther away, around 50 cm. That is not what happens in a 3D environment. Spatial UI is not always positioned as closely as a laptop or smartphone screen; it might be placed 1 meter ahead, or even 20 meters over the user's head. It all depends on what kind of information we want to provide.
Now, the question is how to make sure the user can read all of the information we put in the spatial UI, regardless of how far away it is. We need to ensure that the content stays perceptually consistent at all distances; the UI we place one meter ahead should read the same as the UI that is three or even twenty meters away.
As described in this Google I/O 2017 talk, the answer is the distance-independent millimeter (dmm): 1 dmm is 1 millimeter at 1 meter away, which becomes 2 millimeters at 2 meters away, and 20 millimeters at 20 meters away.
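To make the dmm idea concrete, here is a minimal TypeScript sketch (my own illustration, not code from the talk) that converts a size in dmm into physical millimeters at a given viewing distance:

```typescript
// Distance-independent millimeters (dmm): 1 dmm subtends 1 mm at 1 m,
// so the physical size scales linearly with viewing distance.
function dmmToMillimeters(sizeDmm: number, distanceMeters: number): number {
  return sizeDmm * distanceMeters;
}

// A 1 dmm element rendered at different distances:
console.log(dmmToMillimeters(1, 1));  // 1 mm at 1 m
console.log(dmmToMillimeters(1, 2));  // 2 mm at 2 m
console.log(dmmToMillimeters(1, 20)); // 20 mm at 20 m
```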
Sizes
Frame size in Figma
As we know, the first step in designing any UI is to set up a canvas or frame that matches the screen or device we are designing for. For example, if we are going to design a mobile screen for iOS, we first decide which Apple devices are most used in the target market, then use that resolution as an anchor for the other device resolutions.
We can use the same step to create UI in VR. For example, if we are creating UIs for the Meta Quest 3S, we can use its 1832 × 1920 per-eye resolution as a working space, but that is just a reference canvas size to assist us. The most important thing is to think about how much of this canvas we can actually use to make sure our user interface is visible and usable.
VR Field of View vs. Comfort Zones
From a UX perspective, the first thing we need to know is where content should ideally be placed, and for that it is essential to understand how the human field of vision works. In the publication “A Hazard Detection and Tracking System for People with Peripheral Vision Loss using Smart Glasses and Augmented Reality”, the researchers state that:
“the human field of vision consists of different areas which are used to see varying degrees of details and accuracy about the surrounding environment. Central vision is where objects are clearly and sharply seen and used to perform most of the daily activities. The second type is the peripheral vision used to detect larger contrast, colours and motion and extends up to 60 degrees nasally”
The Interaction Design Foundation's course “Spatial UI Design: Tips and Best Practices” also states that users feel more comfortable if they are not constantly turning their heads. Content should be positioned horizontally within 30 degrees of either side of center, giving a comfortable horizontal field of view of 60 degrees. Placing content more than 30 degrees from the center is not recommended, since it strains the neck. Some studies state that a healthy neck can rotate between 160 and 180 degrees, but we should make sure the content we provide does not push toward those limits, because the user might miss something when they have to turn their body.
We should also consider the vertical field of view, which is covered in the same course: we should not make users look up or down for extended periods, especially when they are moving around. Ideally, content should stay within about 40 degrees of vertical rotation, slightly above the horizon line.
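As a rough summary of these comfort guidelines, the sketch below (my own interpretation of the angle limits quoted above, not something from the course) checks whether a piece of content, described by its horizontal and vertical angle from the center of gaze, sits inside the comfortable zone:

```typescript
// Comfort limits taken from the guidelines above, in degrees from the center of gaze.
const MAX_HORIZONTAL_OFFSET_DEG = 30; // 30° to either side gives the 60° comfortable horizontal band
const MAX_VERTICAL_OFFSET_DEG = 40;   // rough vertical limit to avoid sustained looking up or down

function isInComfortZone(horizontalDeg: number, verticalDeg: number): boolean {
  return (
    Math.abs(horizontalDeg) <= MAX_HORIZONTAL_OFFSET_DEG &&
    Math.abs(verticalDeg) <= MAX_VERTICAL_OFFSET_DEG
  );
}

console.log(isInComfortZone(20, 10)); // true: well within the comfortable zone
console.log(isInComfortZone(45, 0));  // false: would require turning the head or body
```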
Workspace
Based on the information above, we can use a Figma frame to determine the right size for the user interface design, using the horizontal and vertical FOVs as a reference.
However, there is a mismatch when comparing a 20-degree horizontal span to a 20-degree vertical span: the vertical calculation gives 364.5 pixels, while the horizontal one gives 384 pixels.
Visually matching is frequently preferable to relying solely on mathematical calculations. Human perception is not linear; our eyes process horizontal and vertical fields of view differently due to factors like binocular vision, lens distortion, and brain compensation. For this reason, we will adjust the horizontal base length to 1154 pixels and use it as the foundation for the vertical length; the frame with the degree scale drawn on it should look like this:
That will help us draw the near-periphery area on the Figma frame.
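To keep that conversion reproducible, here is a short sketch. The pixels-per-degree ratios are derived from the figures above (384 px and 364.5 px per 20 degrees); the exact numbers depend on which headset FOV you assume, so treat them as illustrative rather than canonical:

```typescript
// Pixels-per-degree ratios derived from the figures above:
// 384 px per 20° horizontally and 364.5 px per 20° vertically.
const HORIZONTAL_PX_PER_DEG = 384 / 20;  // 19.2 px per degree
const VERTICAL_PX_PER_DEG = 364.5 / 20;  // 18.225 px per degree

function degreesToFramePx(degrees: number, pxPerDeg: number): number {
  return degrees * pxPerDeg;
}

// The 60° comfortable horizontal band mapped onto the Figma frame:
console.log(degreesToFramePx(60, HORIZONTAL_PX_PER_DEG)); // 1152 px, close to the 1154 px base length used above
// The same 60° measured with the vertical ratio:
console.log(degreesToFramePx(60, VERTICAL_PX_PER_DEG));   // 1093.5 px, which is why the two axes are matched visually
```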
Then we draw the canvas inside that area to ensure the user sees every piece of information we put in that workspace.
As you can see in the figure above, that gives us 820 × 544 pixels for the spatial UI workspace. However, since we are developing for virtual reality, we should also use the other UI types to provide users with information.
For non-diegetic (head-locked) UI, we can use that same spatial workspace, because both spatial and non-diegetic UI treat the human vertical and horizontal FOV as a guide for keeping the user comfortable. We don't want the user to make an extra effort, shifting their gaze in search of specific information we didn't place within the near peripheral area.
Why do we use the same workspace for non-diegetic UI? Recall dmm: one pixel is equivalent to one millimeter at a distance of one meter, and both types of UI use the same dmm. The difference is that the non-diegetic UI is locked to the user's "eye", while the spatial UI is placed one meter away.
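To see what that pixel-to-dmm equivalence means physically, here is a small sketch (my own illustration) converting the 820 × 544 px workspace into real-world dimensions at a chosen placement distance:

```typescript
// With 1 px = 1 dmm, a pixel is 1 mm at 1 m and scales linearly with distance.
function workspacePhysicalSize(
  widthPx: number,
  heightPx: number,
  distanceMeters: number
): { widthMm: number; heightMm: number } {
  return {
    widthMm: widthPx * distanceMeters,
    heightMm: heightPx * distanceMeters,
  };
}

// The spatial UI workspace placed 1 m in front of the user:
console.log(workspacePhysicalSize(820, 544, 1)); // { widthMm: 820, heightMm: 544 }, roughly a 0.82 m by 0.54 m panel
```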
Height
Height placement plays a crucial role in ensuring a comfortable and natural user experience. As stated in the publication “Human Engineering Guide to Equipment Design”, the human eyes naturally rest at a 10°–15° downward gaze, so placing UI there reduces strain and improves readability. With this arrangement, users can access information easily without tilting their heads excessively, which can become uncomfortable over time.
Additionally, keeping UI within the lower field of view helps maintain immersion, as it minimizes obstruction of the main scene while still keeping interactive elements easily accessible.
To keep the spawned UI inclusive, we must take the user's body height into account: the system can read the user's eye height and then place the UI 170 mm (17 cm) lower for a 10° downward gaze, or 260 mm (26 cm) lower for a 15° gaze.
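Those offsets come from simple trigonometry: for a panel placed at distance d, a downward gaze angle θ corresponds to a vertical drop of d · tan(θ) below eye level. Here is a minimal sketch, assuming the panel sits 1 meter away as in the earlier examples; the exact results land close to the rounded figures above:

```typescript
// Vertical drop below eye level for a given downward gaze angle and panel distance.
function gazeDropMm(gazeAngleDeg: number, distanceMeters: number): number {
  const radians = (gazeAngleDeg * Math.PI) / 180;
  return Math.tan(radians) * distanceMeters * 1000; // meters to millimeters
}

console.log(Math.round(gazeDropMm(10, 1))); // 176 mm, close to the 170 mm used above
console.log(Math.round(gazeDropMm(15, 1))); // 268 mm, close to the 260 mm used above
```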
Takeaways
Designing user interfaces for virtual reality (VR) requires a fundamental shift from 2D interfaces, since it introduces new depth, spatial-positioning, and ergonomic challenges. Designers of 3D user interfaces must carefully consider placement, visibility, and interaction at various depths to maintain usability.
Distance plays a crucial role in legibility and comfort, with recommended ranges, such as a 1-meter placement for primary UI, helping to balance accessibility and immersion. Additionally, rather than depending on flat-screen measurements alone, it is crucial to scale elements appropriately when designing in tools like Figma to account for how they will appear in virtual reality.
Finally, user height and natural gaze behavior influence UI positioning; the ideal placement is just below the eye line for ergonomic comfort. By incorporating these factors, designers can bridge the gap between computer interfaces and human perception and create VR experiences that feel immersive, natural, and intuitive.
References
- What is Extended Reality: a clear description, even in one paragraph
- 3 key UI design concepts for VR and AR apps: the guide I found in my early research
- Spatial UI: one of the UI types in 3D environments
- Designing Screen Interfaces for VR (Google I/O '17): an introduction to why dmm exists
- A Hazard Detection and Tracking System for People with Peripheral Vision Loss using Smart Glasses and Augmented Reality: how AR can help people with vision loss
- Interaction Design Foundation, Spatial UI Design: Tips and Best Practices