The idea of computers that fix our gaze on images on a flat screen will one day be considered a brief blip in the history of computing. Look across the full sweep of that history and, for thousands of years, our conception of computation involved interacting with tangible objects: from numbers notched in a counting stick, to the Quipu, whose knots recorded statistical and narrative information, to abacuses, looms controlled by punched cards, differential gears, and counting machines. Yet when we think of computing hardware today, we immediately picture technology with microprocessors inside a rectangle of fixed dimensions that we look at, and some controllers laid flat on a desk.
As VR, AR and MR (virtual, augmented and mixed realities) begin to take centre stage, we will find that our constant tethering to 2D is just a blip in the long history of how humans use computers of various kinds. We’re heading towards a new era of spatial computing.
Clarifying some messy definitions
When you Google “spatial computing” you’ll get a range of different definitions, and some are easier to understand than others. The two best definitions I have found are:
“Spatial Computing is the practice of using physical space to send input to and receive output from a computer.”
“Spatial computing is human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.”
Computers are typically tied to machines with a fixed physical location, e.g. a desktop PC with a keyboard, mouse and flat screen. Even the name suggests a fixed location: on top of the desk. With spatial computing, machines no longer need to be tied to one place; instead, the user interacts with the machine physically, and the interface can occupy the space around us.
Enough of the science-fiction talk. What does this actually look like?
In practice, “spatial computing” is an umbrella term for variations on VR, AR and MR. The companies operating in the field are limited by the technology currently available, and offer products based primarily on AR glasses and mixed-reality applications.
Magic Leap, for example, markets its product as a “wearable spatial computer”. The Magic Leap 1 is a wearable headset with controllers that uses a variety of sensors and cameras to build an understanding of both its environment and the user, allowing digital objects to interact with the real world. This enables immersive, mixed-reality experiences in which digital objects respect real-world rules.
This may not seem a far cry from the MR applications we see today, but it represents a first step into the future. Why? It all comes down to the way different hardware will change how users interact with the technology.
The future of hardware and UI
Listing the hardware required for spatial computing is like writing a grocery list that ends with a solid gold toilet and a rocket to the moon. It goes something like this:
- Devices: VR Headset / AR Glasses / Some hybrid of the two
- Tracking and sensors, speakers and cameras
- Photogrammetry and 3D scanning
- Haptic gear: gloves, vests, bodysuits
Surely we don’t need all this crazy hardware? Not unless you want to be able to do this.
But it does explain how users will interact with computers in the future. Up until now, computers have forced the user to interact via typed commands or touch controls on a flat screen. The UI for spatial computing will consist of eye-controlled interactions, body and hand gestures, and voice controls.
A good rule of thumb for imagining these interactions is to ask yourself, “is this how I interact with non-digital tools?” Sitting down, typing on a keyboard, and using a mouse to point and click on a flat screen are actions unique to interfacing with a computer (in its current guise). Compared with our current conception of computer hardware, spatial computing will give users superpowers over the digital media they interface with.
The aim in spatial computing is to make the hardware so intuitive that it becomes almost invisible. While digital objects respect and respond to our immediate physical environment, the hardware, along with our interactions with it, should be “designed so well that the tool becomes part of the task.”
Why did we get hooked on 2D?
In autumn 1968, two revolutions in computing happened just a week apart. At the Fall Joint Computer Conference, Douglas Engelbart gave the Mother of All Demos, during which he revealed the pillars of computer interaction as we think of it today. He demoed the “oN-Line System” which showed windows with text, hypertext, graphics, typing commands on a keyboard, using a mouse and video conferencing. A week later Ivan Sutherland showcased The Sword of Damocles, also known as the first Head-mounted 3D Display System. The display was suspended from a counterbalanced robotic arm and ultrasonic sensors were used to track the head movement. The system generated binocular imagery, so the image appeared to float in mid-air and changed perspective as the wearer moved around it. Although Sutherland didn’t call it “virtual reality”, the Sword of Damocles is seen as the technological precursor to all things VR, AR and MR.
So why did we get stuck with the screen, keyboard and (much later) a mouse? Simply put, we didn’t have the computing power to split the system across several movable or wearable devices. It was far more feasible and cost-effective to keep all of the technology in one machine, in a fixed location, with a defined volume and resolution.
Spatial computing is now possible, so the transition away from 2D is inevitable. This means moving away from flat screens as the focus of our attention and interaction, towards complete freedom of movement: interacting in space, finding and controlling information with eye movements, hand and body gestures, and voice controls. We will move from responsive web pages to responsive mixed-reality spaces. This is already familiar to us; we spend all of our lives moving through 3D space, interacting with 3D objects. Like the cutlery drawer in your kitchen, there’s no need to label where the forks, knives, and spoons go. Once you open the drawer, it’s immediately apparent. This is one area where we all speak the same language.
The Gravity Sketch Design Philosophy
While we naturally think in three dimensions, we are trained to work within the constraints of a two-dimensional medium, forced to translate our ideas onto flat screens using time-consuming tools. There’s a big reason why all this matters so much to us: the principles behind spatial computing express the core of Gravity Sketch’s design philosophy. We use the message “think in 3D, create in 3D” to explain our product, but the phrase embodies why we created Gravity Sketch in the first place, and what we feel is wrong with the current conception of computing.
For us, design is problem-solving. Our design philosophy is predicated on the question: “What’s the easiest way to create the thing that exists in my head, and with the greatest fidelity to my concept?” With Gravity Sketch, we’re building our contribution to a future where UI is transparent, immediate and intuitive. Designers and artists will drive the adoption of new technologies across many industries, and we’re paving the way forward with tools that we have created to help them work in a flow state.
At face value, “spatial computing” is still a fancy term for grouping together a number of technologies that already exist. But go deeper, and you find that the term deserves its hype for what it defines and what it predicts — tools that bring the digital world closer to the physical world. Once you pick it up, you’ll feel like you have superpowers.