InfinityCube
Product Design / Virtual Reality
Designed and developed a prototype that demonstrates the future of workspaces in VR.

TL;DR Version (Too Long; Didn't Read)
I have a shorter, less detailed version of this here on Behance. Feel free to have a look at it.
Index:
- Research
- Programming in Unity
- Executing The Behemoth
Problem Statement
The interaction methods for each application, game, and website in VR are almost completely different. There is no way for a user to know what to do in a given scenario or what the visual indicators mean. This is what I'm trying to solve: a common interface, an interaction methodology that gives users a system for understanding VR more intuitively. Essentially, what a GUI does in an Operating System.
Responsibilities
I focused on the usage of VR in task-based activities, more specifically workplace environments, and created a set of use-case scenarios that could serve as a baseline for a coherent system native to VR.
Success Definition
The project would be deemed a success if the interaction paradigms brought forward by the program were adopted with ease by target users.
Research
Primary Objective
Computers have been essentially the same for 30 years. We live in windows and are limited by the screens we have around us and our proficiency with them. That is no longer the case. With VR we can now have theoretically unlimited space and interact with our files and information in a much more natural way through spatial UI.
Secondary Objective
If you were to compare the workspaces from the beginning of the 21st century and now, you’d notice that apart from a shift in physical documents being digitized, there haven't really been any other changes made to aid productivity in this digital age. Although productivity is difficult to measure, there is some existing research that provides valuable information on it.
Project Timeline

A very ambitious projected timeline for the project.
Tasks of a GUI
One of the first things I did after gaining basic knowledge of the terms I was dealing with was to break down the tasks a GUI performs in an Operating System, along with other tasks the OS itself allows us to do. I was looking for tasks so obvious that they had become bare necessities for us, whether in a workplace or not. Some of these tasks were:
- Opening multiple applications or files simultaneously
- Playing music/videos
- Playing games
- Organizing apps/files (Using Drag & Drop, Copy & Paste, etc)
- Browsing the Internet (Email, research, social media, etc)
- Utility tasks (Calendar/Alarms/Reminders/Notifications)
I was completely new to the world of 3D. Making a functioning VR application was daunting to even think about. I had to learn a new piece of software, Unity, to make this work. I talk more about Unity in a later chapter, but one thing I learnt very early on was the hardware limitations I was facing and how interactions differed between HMDs.
Productivity in VR

In this day and age, where most of our work is done on computer screens, we're hugely reliant on the physical space available to us to create an environment we can work in. Most of the time, this environment is something we don't get to choose.
Virtual Reality is at a stage in terms of hardware where it can solve this. If done right, VR workspaces can be the future of work. In a single swing, it breaks location barriers, increases productivity, and makes collaboration simpler for teams. An infinite virtual workspace means more productivity, as fewer interruptions occur and less time and cognition are spent navigating occluded parts of the interface.
"Letting users spatially organize tasks... can lead to a 40% increase in productivity."
-Jody Medich, Designing VR Tools: the good, the bad, the ugly
"One example that I personally want, and that I think resonates with many people, is a virtual workspace, with completely configurable virtual displays, holograms, and the ability to switch between workspaces instantly. Other people could teleport in to talk and work with me, and I could teleport into their workspaces. I’d be much more productive, and work would be more fun—just like when I first got a personal computer.
In fact, there’s a direct analogy with the personal computer here. More than 40 years ago, JCR Licklider’s vision and Xerox PARC’s work to create the personal computer—especially that of the Computer Science Lab under the late Bob Taylor—led to the computing devices we all use today. That was the first great leap in human-oriented computing.
I believe that VR will be the second great leap. Instead of interacting with the digital world through flat screens, we’ll be able to live in the digital space whenever we want."
-Michael Abrash, Chief Scientist, Oculus
Workspaces
Our workplace is changing every single day. From assembly lines to cubicles to open co-working spaces, we've come a long way. But this is definitely not the end stage of its evolution.
Some of the biggest advantages of having a virtual workspace would be:
- Total Privacy - Imagine traveling for your next business meeting and preparing that presentation (containing confidential data) in VR with no one peeking over your shoulders.
- No distractions - For people working in open spaces or even in cubicles with noisy neighboring coworkers, a VR workspace would be the ideal solution.
- Convenience of Location - With VR, millennial employees who prefer to work from home or remote locations get to be productive and present at the same time.
- Increased productivity - As mentioned earlier, spatial organization of tasks and files leads to happier, more productive workers.
- Wildly collaborative - Hiring freelancers from remote locations and different countries becomes vastly more simple. Also, meetings and conference calls in VR help you feel more connected as users are more actively present.
- Training new employees - Jobs that require some sort of onboarding or training will be made simpler as VR can adapt to any work environment as necessary.
- Very natural accommodation of AR - Think of a scenario where you're tagging physical papers using AR and then organizing the digital files in VR.
- Unlimited virtual "real estate" - Last but not least, the biggest advantage has to be the very premise of VR: unlimited virtual space, which allows startups and companies with limited physical space to let their employees work with ease and at better speeds.
A workspace, at the end of the day, is nothing but the working environment you're in. A workspace for a student would be their classroom, whereas for an office worker it could be anywhere from their home to their cubicle. Types of workspaces for offices include:
- Open office layout
- Closed/Privacy layout
- Coworking space
- Home office
Each has its own pros and cons; I focused on the category of people with physical space restrictions, such as those in closed layouts and coworking spaces.


Text Input in VR
During my initial explorations of VR, I was struck by the fact that keyboard use and text entry were necessary but not natural, and people all around the world were voicing similar complaints. Through research I came to understand that in certain situations the user still needs a keyboard to interact with applications, particularly in productivity-driven or desktop scenarios, but also in games, social applications and content browsing.

The Logitech Bridge is a particularly interesting product to look at. When the Bridge Tracker disk is paired with the Logitech G physical keyboard, an exact replica of the keyboard appears in your VR environment. As simple as that. You type on your actual real-world keyboard, and it is tracked as-is in VR.
"We believe that a physical keyboard should be present, as it delivers essential tactile feedback and a universal experience that people value. Whether you are using a keyboard for gaming, communication or productivity, it is an effective and efficient tool. Besides letters, numbers and symbols, keyboards provide a range of modifier keys for more complex actions, all learned, perhaps painfully, and stored in your memory over years of use."
-Logitech
Locomotion in VR

In VR, the concept of standard locomotion control is turned on its head. Most of the time, traditional game controls don't work well in virtual reality games. Lateral movement with a thumbstick or a keyboard can trigger motion sickness in a lot of people. Your inner ear (the vestibular system) controls your sense of balance and spatial awareness; if what your inner ear perceives differs from what your eyes perceive (a vestibular mismatch), you can lose your balance or get dizzy. In extreme cases, a vestibular mismatch can even trigger nausea or vomiting.
The good folks at Google Daydream Labs have researched and documented some key findings to keep in mind when looking at locomotion in VR:
- Constant velocity: Locomotion in VR can cause motion sickness when there’s a conflict between a person’s vision and their sense of balance. For example, if you see images showing you accelerating through space, like on a roller coaster, but you’re actually sitting stationary in a room, then your vision and vestibular system will disagree. A way to mitigate this is to use constant velocity during locomotion. Although acceleration can be used to produce more realistic transitions, constant velocity is far more comfortable than acceleration in VR.
- Tunneling: Tunneling is a technique used with first-person locomotion (such as walking) where, during movement, the camera is cropped and a stable grid is displayed in your peripheral vision. This is analogous to watching first-person locomotion on a television set.
- Teleportation: Teleportation is a locomotion technique for apps using first-person perspective that allows you to near-instantaneously move to a target location. This technique reduces the simulator sickness that many people feel when the virtual camera moves. However, it also makes it harder for people to maintain spatial context: "where am I, and how did I get here?" We found there are subtle things that can ease the transition and improve context. For example, Google Street View on Daydream fades before and after teleportation. Also, when you teleport to a new location, the app quickly moves the entire scene toward you to convey directional motion. This effect is called "implied motion."
- Rotation: It's often tempting to design a VR experience where we assume that people will be either standing or sitting in a swivel chair. Unfortunately, hardware limitations or physical constraints may not allow for full 360-degree rotation. To make sure people can get where they want to go in a VR environment, consider giving them the ability to rotate themselves within the virtual space. Continuous and animated rotations tend to induce motion sickness. Instead, we've found that discrete, instantaneous rotations of about 10-20 degrees feel comfortable and provide sufficient visual context to keep people oriented.
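As a sketch of the last two findings, discrete rotation can be implemented as an instantaneous "snap" turn instead of an animated one. This is my own illustrative code, not from Daydream Labs; the class and field names are hypothetical, and it assumes Unity's legacy input axes.

```csharp
using UnityEngine;

// Hypothetical sketch of discrete "snap" rotation for a camera rig.
// Daydream Labs found instantaneous turns of about 10-20 degrees comfortable.
public class SnapRotator : MonoBehaviour
{
    [SerializeField] float snapAngle = 15f; // degrees per step, within 10-20
    bool stickCentered = true;              // debounce: one turn per flick

    void Update()
    {
        float input = Input.GetAxisRaw("Horizontal");
        if (Mathf.Abs(input) < 0.3f) { stickCentered = true; return; }
        if (!stickCentered) return;         // wait for the stick to recentre
        stickCentered = false;
        // Rotate instantaneously; animating the turn tends to induce sickness.
        transform.Rotate(0f, Mathf.Sign(input) * snapAngle, 0f);
    }
}
```

The debounce flag matters: without it the rig would rotate every frame the stick is held, which is exactly the continuous rotation the research warns against.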
Content Zones
Alex Chu of Samsung research gave some useful measurements related to the perception of depth at different distances in VR (Chu, 2014). Your eyes strain more to focus on objects as they get closer to your face until you are eventually cross-eyed. The distance that he gives where this starts to become noticeable is about 0.5-1 meters. Oculus recently began to recommend a minimum distance of 0.75 meters (Oculus, 2015). Between there and 10 meters is a strong sense of stereo depth and separation between elements. This gradually fades off and is less noticeable up to 20 meters away. After 20 meters, the stereo separation is essentially imperceptible, partially due to the resolution of the screen. As objects approach infinite distance, they approach a limit at which the two screens would be identical, pixel for pixel. Infinite distance is, essentially, monoscopic. This diagram illustrates the perception of depth as it relates to the DK2's field of view, based on Alex Chu's presented measurements:

According to that same presentation, people can comfortably rotate their heads horizontally 30° from the center and have a maximum rotation of 55°. I concluded that a rotation of 30° combined with the device's field of view gives an area in which a user can comfortably rotate their head and see elements, 77° to the side (94°/2 + 30°). Beyond that, combining the maximum rotation of 55° with the field of view gives an additional area where people can strain to see things in their periphery, but persistent content would not be comfortable to see on a regular basis, 102° to the side (94°/2 + 55°). After that, content behind the user could only be seen if they physically rotate their body, likely out of curiosity about the environment. By combining all of these measurements, Mike Alger created the following diagrams for comfortable content zones (Alger, 2015).

Using the same equation for other head-mounted displays such as the consumer Rift, Vive, or Gear VR yields nearly the same 20-meter distance every time. All other distances exist within the anti-aliasing and interpolation of a single pixel. Content beyond this distance of approximately 20 meters loses the benefit of depth perception and can thus be deemed the far horizon for meaningful content. This results in the following zones diagram (Alger, 2015). It is important to realize that this diagram is a 2D representation of a 3D concept; one has to visualize it in 3D to fully grasp the zones.
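The zone boundaries above follow from simple arithmetic on the field-of-view and head-rotation figures. A minimal sketch, using the DK2 numbers from the text (the class and constant names are my own):

```csharp
using System;

// Reproduces the comfortable-content-zone arithmetic described above
// (Chu, 2014; Alger, 2015), using the DK2 figures from the text.
class ContentZones
{
    const double FieldOfView = 94.0;          // DK2 horizontal FOV, degrees
    const double ComfortableRotation = 30.0;  // comfortable head turn
    const double MaxRotation = 55.0;          // maximum head turn

    static void Main()
    {
        double comfortable = FieldOfView / 2 + ComfortableRotation;
        double peripheral  = FieldOfView / 2 + MaxRotation;
        Console.WriteLine($"Comfortable content zone: ±{comfortable}°"); // ±77°
        Console.WriteLine($"Peripheral strain limit:  ±{peripheral}°");  // ±102°
    }
}
```

Swapping in another headset's horizontal FOV gives that device's zone boundaries, which is how the near-identical results for the Rift, Vive, and Gear VR fall out.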

Office ergonomics with relation to computers has been around long enough for some clearly defined numbers to emerge. The recommended viewing angle for longer working periods tends to be between 15°-50° downwards and at a distance greater than 0.6 meters, as illustrated in the diagram below from Dennis Ankrum's "Visual Ergonomics in the Office" (Ankrum, 1999).

Programming In Unity
It took immense amounts of effort and time to learn Unity from scratch. Not only was 3D game development new to me, C# syntax wasn't as close to C++ as I'd hoped. I'll briefly touch upon the stages I had to go through in Unity:
- Understanding Unity UI
- Game Dev terminology
- Game Physics
- C# Deep Dive
- Making a Basic Hoop Shoot game
- Understanding the Google VR SDK
- Making use of IBM Watson SDK
- Spatial Audio
- Mapping an external Bluetooth Controller
Executing The Behemoth
Branding
It is often advised, and even taught, to look at logos as people. We tend to give preference and pay more attention when someone refers to us by name. No one likes to be referred to by a vague description like "the guy with the black hair and non-existent jawline". Similarly, a logo does not have to literally describe what the product does, but rather should make the product more recognizable.
From an initial mind-mapping session I was able to lock down the aspect I focused on most throughout the project: infinite virtual space in VR. From there, I narrowed it down to three names:
- Infinite Loop
- Horizon VR
- Infinity Cube
I took the names Horizon VR and Infinity Cube forward, because Infinite Loop digressed from the intended meaning compared to the other two. I did some quick sketches to get started on the possibilities without trying to be too literal (mostly). Here are most of them:






I was looking to make the final logo have a certain amount of depth and signify "infinite" in some way or shape.


A loader animation for the chosen logo.
Onboarding
The process of onboarding is vital for creating a shallow learning curve and getting the user to come back to the app time and again. One of the major benefits of onboarding, especially in VR, is that it bridges the gap between what a first-time user expects and what the application can actually do.
Onboarding is the "action or process of integrating a new employee into an organization or familiarizing a new customer or client with one's products or services."
I started devising the onboarding journey by first deciding which core aspects and interactions a user should know before entering the actual home screen.
From there, I made a storyboard of the process that the user will follow and finally brought that into Unity to test. After a few feedback sessions with new users, I was able to pin down the timing of the instructions which I still feel could use some work.

User Flows
Making a user flow for VR is extremely difficult without a huge amount of testing and user-behavior observation. At the beginning, before setting the final context, all the separate functionalities were their own objects on the home screen.




Home Environment
The home screen is the crux of the entire project. This is where I propose the user spend most of their working hours. I went through a lot of research on ideal work environments and what makes them ideal, looking at what cubicles look like and how they serve their purpose.
I started by treating the home screen as an indoor room with a table, and I used baseballs as placeholders for the use-case scenarios I was working on (file explorer, context menu, etc.). After the initial feedback, I quickly understood the need to tear down the walls and stop thinking of the workspace as a literal indoor space.


I then added smaller touches like a real-time clock. Since this was a mobile VR experience, the smartphone users would normally check the time on was in use as the headset's display, so a way to view the time inside VR was important.
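An in-world clock like this is a small Unity script: a world-space text element refreshed about once a second. This is an illustrative sketch of the idea, not the project's actual code; the class and field names are my own.

```csharp
using System;
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of an in-world clock: a Text element on a world-space
// canvas, updated once per second with the device's current time.
public class WorldClock : MonoBehaviour
{
    [SerializeField] Text clockText;   // assigned in the Inspector
    float nextUpdate;

    void Update()
    {
        // Avoid rebuilding the string every frame; once a second is enough.
        if (Time.unscaledTime < nextUpdate) return;
        nextUpdate = Time.unscaledTime + 1f;
        clockText.text = DateTime.Now.ToString("HH:mm");
    }
}
```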



As per Google's VR design guidelines in their Cardboard Design Lab app, providing an element in the VR world that gives the user a sense of scale is very important. These trees act as that element, hence their different scales.

Finally, I added sticky notes that the user could drag and drop. On top of that, I decided to do away with typing in VR in any form and went with IBM Watson's Speech to Text service, which utilizes AI in a way that actually works in my particular context.

File Explorer
I don't think anyone can imagine a personal computer without a way to access and manage files. Even our mobile phones now have both third-party and built-in software to access files and folders. These are called file explorers.
I looked at how folders could be represented in their most basic form. I made them transparent so that the user can see the subfolders and files inside a folder.

I also drew on my earlier research into movement in VR and decided to use teleportation as the method for interacting with folders and files: the user literally goes inside them when they are opened.
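Teleporting "into" a folder can reuse the fade-before-and-after pattern from the locomotion research, which eases the transition and reduces discomfort. A hedged Unity sketch of that idea; the class, fields, and anchor setup are my own assumptions, not the prototype's code.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of opening a folder by teleportation: fade to black, move the
// camera rig to an anchor inside the folder, then fade back in.
public class FolderTeleporter : MonoBehaviour
{
    [SerializeField] Transform cameraRig;       // the user's rig root
    [SerializeField] CanvasGroup fadeOverlay;   // full-screen black overlay
    [SerializeField] float fadeTime = 0.25f;    // seconds per fade

    public void Open(Transform folderAnchor)
    {
        StartCoroutine(FadeTeleport(folderAnchor.position));
    }

    IEnumerator FadeTeleport(Vector3 target)
    {
        yield return Fade(0f, 1f);      // fade to black
        cameraRig.position = target;    // instantaneous move, hidden by fade
        yield return Fade(1f, 0f);      // fade back in
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            fadeOverlay.alpha = Mathf.Lerp(from, to, t / fadeTime);
            yield return null;
        }
        fadeOverlay.alpha = to;
    }
}
```

Hiding the instantaneous move behind the fade is what keeps the vestibular system from registering motion, which is why this is more comfortable than animating the camera into the folder.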



Context Menu
Since they were introduced as part of the GUI system in the mid-1970s by Dan Ingalls (Sawyer, 2012), context menus have been drop-down menus that display a small set of specific, contextual commands and options. Objects, be they icons, backgrounds, or text, need to support the context menu; not all objects have the option inherently.
For my context menu, I designed a radial menu triggered by interacting with an icon, with options that can be swapped to suit whatever the context is. In my case, since I was experimenting, I used the options to change the texture of the cube to the left of the menu.
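Laying out a radial menu reduces to placing N option objects evenly on a circle around the trigger icon. A minimal sketch of that layout step, with illustrative names and an assumed radius:

```csharp
using UnityEngine;

// Sketch of radial context-menu layout: option transforms spaced evenly
// on a circle around the menu's centre, starting at the top.
public class RadialMenu : MonoBehaviour
{
    [SerializeField] Transform[] options;   // one transform per menu option
    [SerializeField] float radius = 0.3f;   // assumed comfortable at arm's length

    void Start()
    {
        for (int i = 0; i < options.Length; i++)
        {
            // Start at the top (90 degrees) and proceed clockwise.
            float angle = Mathf.PI / 2f - i * 2f * Mathf.PI / options.Length;
            options[i].localPosition = new Vector3(
                Mathf.Cos(angle) * radius,
                Mathf.Sin(angle) * radius,
                0f);
        }
    }
}
```

Because the positions are local, the whole menu can be parented to the selected object and will follow it wherever the user has placed it in space.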


Wallpaper in VR
Wallpapers, or desktop backgrounds, are the images we have as the background of our GUIs, whether on computers, mobile phones or any other electronic device. The wallpaper has become so common that it is in our nature to expect a customizable background on our devices.

Notifications
A notification is a message you can display to the user outside of your application's normal UI. When you tell the system to issue a notification, it first appears as an icon in the notification area. To see the details of the notification, the user opens the notification drawer. Both the notification area and the notification drawer are system-controlled areas that the user can view at any time.

3D File Placement
Even though I initially wanted the context menu to be very similar to what we have in desktop computing systems, with options to duplicate, delete, and so on, it felt as though I wasn't utilizing the available space to its full potential. So I added options for changing the Z-depth of a file or object in the context menu itself, as it made the most sense not to introduce new buttons for this particular aspect.
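The Z-depth option amounts to nudging the selected object along the line from the user's head through the object, so it keeps its apparent direction while moving nearer or farther. A hedged sketch; the class and method names are my own:

```csharp
using UnityEngine;

// Sketch of a Z-depth adjustment: move the target along the head-to-object
// ray so its on-screen position is preserved while its distance changes.
public class DepthAdjuster : MonoBehaviour
{
    [SerializeField] Transform head;   // the user's camera transform

    public void Nudge(Transform target, float metres)
    {
        Vector3 dir = (target.position - head.position).normalized;
        target.position += dir * metres;   // positive pushes away, negative pulls closer
    }
}
```

Clamping the resulting distance to the comfortable range discussed in the Content Zones research (roughly 0.75 to 20 meters) would keep nudged objects from landing too close for the eyes or beyond useful stereo depth.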
Thank you for reaching this point. Give yourself a pat on the back.