
Virtual Reality / Multimodal Interaction / Gaze / 3D Modeling 

GAZE ADDON

3D sketch

VR APPLICATION

ROLE

Researcher

Designer

Developer

TOOL

HTC VIVE

Eye Tracker

Unity3D

Sketch

DURATION

Feb - May 2017

SKILL

Storyboard / Information Architecture

Ergonomics / User Interface Design

User Research / Usability Testing

VR3dmodeling.png

In this project, I designed and developed a lightweight 3D Sketch VR application for the HTC VIVE (a VR headset). Unlike professional 3D modeling VR applications on the market, such as Blocks by Google or Gravity Sketch, this application focuses on rapid 3D sketching: helping 3D designers express their ideas quickly, freely, and smoothly. To achieve that, I removed unnecessary functions and kept the core functions for 3D modeling, including six basic geometries as building blocks and four fundamental editing tools. Moreover, the realistic 3D environment design and natural interactions let designers be creative and focus on the work itself rather than on how to use the software.

 

Here is how I use the application. 

Design Philosophy


 

“less is more.”

This lightweight 3D modeling tool supports users' creativity in three ways:

Minimal modeling features that still support infinite combinations: basic geometries serve as building blocks, with editing tools for rotating, moving, resizing, and coloring.

 

The virtual environment design gives users a private, relaxing atmosphere.

It feels like working in your own room: when we are relaxed, we become creative.

 

Interactions are natural and efficient.

  • Selection: gaze to select, button to commit. Gaze is the most direct modality for expressing intention and can significantly improve selection efficiency in a VR application.

  • Manipulation: moving, rotating, and resizing map directly onto the user's hand movements, which makes them easy to learn.

  • Navigation: a context-aware menu (see below) helps users find information quickly while reducing distraction by hiding information that is irrelevant to the current state.

model menu.png
edit menu.png
color menu.png
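The gaze-to-select, button-to-commit mechanism described above can be sketched as a small piece of interaction logic. The real application was built in Unity (C#); the Python below is only an illustrative sketch, and all names in it are hypothetical:

```python
# Hypothetical sketch of the gaze-select / button-commit mechanism.
# Gaze only highlights a target; a controller button press commits it,
# which avoids accidental selections from stray glances.

class GazeSelector:
    """Track the object under the user's gaze; commit on button press."""

    def __init__(self):
        self.hovered = None   # object currently under the gaze ray
        self.selected = None  # object committed by a button press

    def on_gaze(self, target):
        # Gaze updates the highlight but never commits by itself.
        self.hovered = target

    def on_button_press(self):
        # The button commits whatever is currently being gazed at.
        if self.hovered is not None:
            self.selected = self.hovered
        return self.selected


selector = GazeSelector()
selector.on_gaze("cube")
selector.on_button_press()   # commits "cube"
selector.on_gaze("sphere")   # the highlight moves on; the selection does not
print(selector.selected)     # -> cube
```

Separating the highlight state from the committed state is what keeps gaze fast without making it error-prone.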
Design Process


GOAL

3D designers can express their 3D ideas quickly, freely, and smoothly

METHOD

User Story / Information Architecture

Sketch / Storyboard / Task Analysis

Low-fidelity Prototyping (Paper) / High-fidelity Prototyping (Google Cardboard)

Challenges

VR doesn’t simply add one dimension to the interface; it fundamentally complicates the designer’s work in the following ways:

  • User interfaces extend to the whole virtual world, which entails many challenges in terms of immersion, navigation, and usability.

  • Multimodal interaction: more input and sensory channels become available for interacting with the computer.

  • UX design tools built for 2D interaction design don’t fit VR well.

 

“How can I handle these challenges and make the right decision?”

 

User Story

User stories helped me figure out who my users are and how they would interact with this application. Defining user stories with vivid usage scenarios matters even more for a VR application than for typical 2D software, because interaction is no longer confined to a small screen: VR players interact with the whole virtual world. By defining the user story, I could understand users' specific problems and needs in context and, more importantly, what the user's mental model looks like.

 

User = Product designers, 3D artists, etc.

Purpose = express their 3D ideas quickly, freely, and smoothly

Context = The user can sit on a chair in front of a virtual table, or walk around in a virtual room or under the blue sky, to create 3D models.

Media = HTC VIVE, a VR headset with two handheld controllers.

Action =

  • Model creating, editing, and manipulating;

  • Set up their creative mood by changing the virtual environment.

Scenario.png

Based on the user story, I then decided which features to incorporate into the application.

 

Information Architecture

A product's information architecture embodies its mental model and reflects the user's conceptual model of how the application is used step by step. In VR, where every piece of information is located in 3D space, the relationships that link pieces of information together are spatial.

 

The figure below displays how I structured this application.

information.png

My rules for arranging these features are:

  • For this single-player application, the initial position of the user becomes the origin of the world.

  • A feature's position depends on its priority in the whole system. In general, information in front of the user is more important than information in the peripheral area, and information closer to the user is more important than information farther away.

 

information architecture.png

Once I had built a solid conceptual foundation for the application, I turned to visualizing the product through sketches and storyboards.

 

Sketch and Storyboard

When sketching this VR application, I paid the most attention to four things:

  • Virtual environment. How does the virtual environment look functionally and aesthetically?

  • User flow. How do users accomplish their tasks?

  • The menu. How can we help users access the information they need efficiently?

  • User Interaction behavior. How can users naturally interact with the virtual world?

The following sections explain how I used paper sketches to visualize my design ideas, from the application's high-level architecture down to specific menus and interaction behaviors.

 

Virtual environment design

Environment design shapes our first impression when entering the virtual world. In this case, I wanted an environment that makes users feel quiet and focused. The scene is a clean room with a desk near a window. Users can choose to stay in this room while it rains outside, or in a clean empty space, and do their work without being disturbed.

IMG_2375.jpg
IMG_2376.jpg

Environment design should be a faithful representation of the information architecture. Important features sit right in front of the user, while less important ones are placed behind; if users want those features, they simply turn around.

User flow

After the environment design, I dove into the stories that could happen in this environment — the story of building a 3D sketch in your room: creating a project, building models, taking a rest, and then checking previous projects placed on the shelves. The following figures are my sketches of how these features fit into the user interface.

storyboard.jpeg

Menu Design & User Interaction Behavior Design
The application's biggest goal is to help designers express their ideas naturally and smoothly, so it is important that they can find and use the modeling tools through menus as easily as possible. I conducted a task analysis to figure out which features are necessary for users to build 3D models:

  • Basic model creation: plane, sphere, hemisphere, cube, cylinder, prism, cone, pyramid, and the machine body.

  • Single-object operations: resize, rotate, move, color, mirror, array, delete, hide.

These features appear on the menus. In this application, users interact with the virtual world through the HTC VIVE's two controllers. But before designing the controller-based interaction behaviors, I first thought about how people manipulate virtual objects with their bare hands. This approach helped me create natural interaction flows for users. In the following figure, you can see that the selection mechanism is gaze-select-plus-touch-commit, which was derived from research results.

sketch for interaction 3D Vr.png

After rationalizing these design ideas, I went to the next stage of testing them on real users via prototyping.

 

Prototype

Few prototyping tools exist for VR applications, and most of them require advanced development skills, especially for high-fidelity prototypes. The purpose of prototyping is to communicate the design idea and then test it on real users. In this project, I made two prototypes to test five aspects of my application:

  • The environment design

  • The interaction with menus 

  • The using posture of the user

  • The selection mechanism (gaze select + hand commit)

  • The manipulation behavior design (move, rotate, resize)

 

For testing the environment, menu design, users' postures, and gaze

Goal:

  1. Test how users understand and feel about the virtual environment 

  2. Test whether the arrangement of the interactive components (menus) matches users' expectations for finding information

  3. Test users' working posture

  4. Test selection mechanism.

Method: Unity + Google Cardboard, testing the virtual environment and menus.

Here is the testing scene, which is a Google Cardboard view.

3d_environment.png

For the menu part, my initial plan was to put the menu on the table, inspired by how people grab things off a table with their hands. Based on an understanding of human body dimensions and field of view, I placed the menus within the most interactable zone, the yellow zone shown in the following figures.

posture.png
field of view.png

However, in the tests, most participants found it somewhat difficult to select menu items from a distance. I then changed the design to attach the menu to the controller in one hand.

menu_position.png

The posture and position of users. In the prototype, I assumed users would prefer to sit on a chair while using the application, because that is how they work in the real world, so the initial camera height was set to roughly adult seated eye height. Surprisingly, during the test, some participants preferred to stand up; they said standing let them use their bodies more freely, which made them more relaxed and creative. So in the final product, users can adjust the camera position based on the height of the headset.
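The final height behavior can be sketched simply: instead of a fixed seated-height camera, the camera rig follows the tracked headset height, so users may sit or stand. This Python sketch is illustrative only (the actual implementation was in Unity/C#), and the seated-height constant is an assumed value:

```python
# Sketch of the camera-height decision: follow the tracked headset height
# rather than pinning the camera at a fixed seated eye level.

SEATED_EYE_HEIGHT_M = 1.2  # assumed constant: rough adult seated eye level


def camera_height(headset_height_m, use_tracking=True):
    """Return the camera's world-space height in metres."""
    if use_tracking:
        # Final design: the camera follows the headset, so the user
        # can freely sit or stand.
        return headset_height_m
    # Original prototype behaviour: fixed seated height.
    return SEATED_EYE_HEIGHT_M


print(camera_height(1.65))                      # standing user -> 1.65
print(camera_height(1.15, use_tracking=False))  # prototype     -> 1.2
```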

For the gaze part: when I was doing this project, no VR headset incorporated eye tracking, so I couldn't test gaze interaction directly in VR. My approach to testing the power of gaze was to design two studies: one tested gaze in a selection task in a desktop application using a Tobii eye tracker; the other used head orientation as a proxy for eye movement to test the selection mechanism in VR.

 

As I expected, selection efficiency on the desktop improved when a gaze addon supplemented simple mouse selection ("gaze addon" here means using gaze to select and a mouse button click to commit). However, head-mounted selection in VR did not perform as well as the laser controller; I think this is because head movement cannot faithfully represent eye movement.

 

For testing manipulation behavior design

Goal: How do users manipulate 3D models? 

Method: paper prototype and guessability. Instead of telling participants how to manipulate the virtual objects, I first asked them how they would approach the interaction. Because Google Cardboard has no controllers to represent the movement of users' hands, I used a paper prototype instead.

resize_rotate.png

Even though my initial design used two-hand control, most users preferred single-hand operation. Thus, in later development, I used one controller for all manipulation tasks.

Implementation


TOOL

Unity 3D

HTC VIVE

Here is my project code source:  https://github.com/JaneYum/Unity_XR_Projects

Based on the design, I developed the application in Unity. Here are the action items:

  1. 3D environment building

  2. Models preparation

  3. Interaction development

    • create models

    • open model editing menu

    • hide model editing menu

    • move models

    • rotate models

    • change the color of the models

    • re-scale the models

 

Here is a video of the interactions. You can see that the manipulation of an object closely matches the movements of the controller.
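The direct-manipulation mapping behind this is simple: each frame, the grabbed model's position changes by exactly the controller's frame-to-frame positional delta (a 1:1 mapping), which is why the object appears to follow the hand. A minimal, language-agnostic sketch (the actual code is Unity/C#; the function name here is hypothetical):

```python
# Sketch of 1:1 direct manipulation: move the grabbed model by the
# controller's positional delta each frame. Rotation and scaling follow
# the same delta-per-frame pattern.

def apply_controller_delta(model_pos, prev_ctrl_pos, ctrl_pos):
    """Return the model's new position after following the controller."""
    dx = ctrl_pos[0] - prev_ctrl_pos[0]
    dy = ctrl_pos[1] - prev_ctrl_pos[1]
    dz = ctrl_pos[2] - prev_ctrl_pos[2]
    return (model_pos[0] + dx, model_pos[1] + dy, model_pos[2] + dz)


pos = (0.0, 1.0, 0.0)
pos = apply_controller_delta(pos, (0.0, 0.0, 0.0), (0.1, 0.0, 0.2))
print(pos)  # -> (0.1, 1.0, 0.2)
```

Using per-frame deltas (rather than snapping the model to the controller's absolute position) means the object can be grabbed anywhere without jumping to the hand.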

The Next


 

This project was the starting point of my VR/AR/MR journey; it revealed the great potential of this immersive technology and brought tons of design challenges. Going from 2D to 3D doesn't just add one dimension: VR/AR calls for users to evolve a new mental model in order to interact with digital products that have new appearances and capabilities. The challenges I encountered in this project raised three important questions:

  • How can we design a comprehensible UI system when the UI is effectively the whole virtual world?

  • How can we interact with this technology as naturally as possible? How does Multimodal interaction work?

  • How can designers express and test their VR/AR design ideas when the prototyping tools for VR/AR are so limited?

 

I didn't stop there. Over the following two years, I pursued research, internships, and projects, from academic to industrial, and kept exploring answers to these questions. I'm passionate about creating interfaces that help users easily use technology and communicate their intentions, ideas, and feelings to the computer in a more direct and visual way.

 

Later Research Project:

⭐️ Multimodal Interaction Research

⭐️ AR Interface Research

⭐️ AR/VR Design Tool Research
