What Comes After Google Glass? (Part 2)

by nils  - June 12, 2012

Yesterday I posted about how reading might change in a world where all our interactions are mediated by an augmented reality device like Google Glass. But I also mentioned some potential pitfalls, and in this post I’ll discuss them, as well as some ideas for avoiding them.

Reading “Out There”

I see a number of potential problems with reading using augmented reality. These are based on observing my own behavior while reading and while doing other activities, and thinking about how the two might combine. In particular, in a world where your book is floating “out there” in front of you, you’re going to have problems with safety, with attention, and with distraction. If your reading device is the same thing you use to interact with the rest of the world, you’re going to be hard-pressed to just sit down, with nothing to do with your hands, stare off into space (that is, into the book floating in front of you), and pay attention to the book. This is especially difficult if the rest of the world is faintly visible through the book.

In fact, the great thing about a book, and about the Kindle, is that when you’re using it, you can’t do much else. It captures and focuses your attention. You can reach for a cup of coffee, but you really can’t drive, and you can’t walk effectively. You have to commit to reading.

Fundamentally, there is a haptic – tactile feedback – aspect of reading, even on the Kindle or iPad, that’s important to keeping you engaged. It gives you something to do with at least one of your hands, and that engagement with the hand is the cue to your consciousness that you need to pay attention to what you’re doing. These haptics also extend to the all-important question of navigating the book. With a real book, or with a Kindle or an iPad, you have a physical gesture on the item itself to turn the page, find the table of contents, and so on. And if you want to highlight a passage, share it, or go back a few pages to reread that last part, you need a way to do all of those things. When you’re interacting with thin air, these become disembodied gestures at best, and I suspect you’re not going to be able to do them with your eyes alone. And of course both books and iPads are opaque – the rest of the world may appear around the book, but not through it.

In the interview with Charlie Rose referred to in my earlier post, Sebastian Thrun demonstrated an interaction that involved reaching up to the Google Glass frame to push a button. But I don’t think that’s really going to work in the end. Not only is the gesture clumsy, because you can’t see your own hand at that point, but it’s also really obvious, where you might want the ability to be more subtle. And it’s only a single button – can you really get all the necessary interactions into a single button? Steve Jobs couldn’t – that’s why he introduced multi-touch for the iOS devices. Note that the most successful devices of all time – including the pencil, the book, and the iPhone – require visual engagement; they can’t be operated simply by touch.

But even the iOS devices have a problem: no tactile feedback for your actions. This is a big problem for me when I’m using the iPad or iPhone as an input device, for example. I’m a touch typist, but on the iPad there’s no way to feel whether I’m hitting the right keys, so I have to use my eyes, which slows me down.

A Proposal – A Smart Slate

Assuming my concerns are valid, how might you address them in a Google Glass era? You’re going to want something to interact with, something that has a physical presence and that perhaps can even react to your touch. What I’m imagining is a “smart slate” type of device, onto which Google Glass, or other devices like it, “project” the images for items that need a physical presence to be most useful: books, keyboards, “Minority Report”-style displays, and touch interfaces.

The glasses would keep track of the location of the slate, and always make sure the images are projected correctly for the slate’s current position and orientation. If the slate is moved, the images move with it. If the slate is tilted away, the image tilts. If the user swipes the slate, the page turns, or the table of contents is loaded, depending on where the swipe occurred. The slate could be instrumented to tell the glasses about the swipe, or the glasses could use a Kinect-like capability to detect the swipe visually. In a more advanced version, the slate could provide haptic feedback, using one of several technologies that are becoming available for programmatically changing the texture of a surface, such as this technology from Senseg, which may appear in Samsung smartphones soon.
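To make that loop a bit more concrete, here’s a minimal sketch in Python of how the glasses might keep a book “attached” to the slate. Everything in it is hypothetical: SlatePose, Swipe, BookRenderer, and the simulated event stream are illustrative stand-ins, not any real headset SDK. The flow is simply this: get the slate’s pose each frame, redraw the page to match it, and map swipes on the slate to page navigation.

```python
from dataclasses import dataclass

# Hypothetical types: stand-ins for whatever pose and gesture data a real
# headset SDK and instrumented slate would actually provide.

@dataclass
class SlatePose:
    position: tuple      # slate center in headset coordinates (x, y, z in meters)
    normal: tuple        # surface normal, so the page can tilt with the slate
    rotation_deg: float  # in-plane rotation of the slate

@dataclass
class Swipe:
    region: str     # e.g. "page-body" or "margin"
    direction: str  # e.g. "left" or "right"

class BookRenderer:
    """Keeps the current page 'attached' to the slate as it moves."""

    def __init__(self, pages):
        self.pages = pages
        self.current = 0

    def handle_swipe(self, swipe):
        # A swipe on the page body turns the page; a swipe in the margin
        # might open the table of contents instead (not shown here).
        if swipe.region == "page-body":
            if swipe.direction == "left":
                self.current = min(self.current + 1, len(self.pages) - 1)
            elif swipe.direction == "right":
                self.current = max(self.current - 1, 0)

    def frame(self, pose):
        # A real renderer would draw the page texture into the headset
        # display, transformed to match the slate's pose every frame.
        return (f"page {self.current + 1}/{len(self.pages)} "
                f"at {pose.position}, normal {pose.normal}, "
                f"rotated {pose.rotation_deg:.0f} degrees")

def simulated_events():
    # Stand-in for the tracking source: visual tracking by the glasses
    # (Kinect-style) or telemetry sent from an instrumented slate.
    yield SlatePose((0.00, -0.20, 0.50), (0.0, 0.7, -0.7), 0.0), None
    yield SlatePose((0.05, -0.20, 0.50), (0.0, 0.7, -0.7), 5.0), Swipe("page-body", "left")
    yield SlatePose((0.05, -0.25, 0.45), (0.0, 0.6, -0.8), 5.0), None

if __name__ == "__main__":
    book = BookRenderer(pages=["page one", "page two", "page three"])
    for pose, swipe in simulated_events():
        if swipe:
            book.handle_swipe(swipe)
        print(book.frame(pose))
```

The design point this sketch tries to capture is that the book’s state lives in the glasses; the slate only has to be trackable (and, ideally, report touches), which is what lets it stay simple and relatively dumb.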

This is an example of something I’ve called “rematerialization” – a play on Daniel Burrus’s recognition of “dematerialization” as a central driver of the future. With digital technology we have dematerialized books, but in reality they’ve been rematerialized as Kindles and iPads. Because we humans exist in “meat space,” we still need our “stuff” to exist in meat space, even if it’s not in quite the same form it used to be. And while our books may dematerialize even further, out of Kindles and iPads and into Google Glass, there’s still going to be a need for a meat-space interface for us to interact with them.

That’s What I Think – Now It’s Your Turn

What do you think? Are you looking forward to reading books floating in the air, or do you think there will still be a physical device when all is said and done?
