The Future (A Speculation)
This article was written 12 years ago in 2011, so I ask you not to judge the writing of my youth too harshly! The text is as it was originally published.
I am feverishly fascinated by the acceptance of technology and how humans choose to embrace it, and in recent months I have found myself becoming more excited by the dozens of discussions I have engaged in on the subject. I can’t help feeling we are approaching the foot of a mountain of change that will fundamentally improve the way we think about and use products and interfaces.
I think that, as an average consumer of a digital diet, it is very difficult to ever bridge the gap between a short-term understanding of what is achievable and what remains the world of The Jetsons.
The Interaction Problem
In Minority Report (2002), the protagonist John Anderton interacts with his devices using an array of super-slick, arm-slinging gestures, probably one of the most widely recognisable and futuristic ideas in the film.
It was very easy to imagine how that would translate into reality. In fact I was so inspired by the film that, within a year, the product I designed and prototyped for my A-Level Design & Technology project included my own attempt to bridge the human-digital divide.
This was my first real attempt to conceptualise a solution to these interaction inadequacies, and despite my youthful optimism eventually being dashed by Dick Powell (who, quite understandably, grasped the leaps needed to reach that point far better than an 18-year-old DT student), I still believed that we were fundamentally failing to address the gap.
But within a few years my dreams were starting to be realised: in 2006, new gesture-based technologies were publicly demonstrated by Jeff Han in his now-famous TED talk. The concepts were not new; it was simply a case of them being incorporated into products. And Jeff’s demonstration was by no means a full realisation of the dream of Anderton’s world of digital immersion. A year later, much of what we’d seen in that demonstration was suddenly brought into focus by the first truly successful touchscreen product, the iPhone.
By this time I was studying for my degree in Industrial Design and was aware of projects within my own university department that aimed to exploit physical interaction, modelling virtual products and environments through human gesticulation. It looked like momentum was building.
Touchscreen kiosks had been around for years and were used on millions of Point of Sale (POS) units, but until this point designers and technologists had failed to deliver a pleasurable or easy experience (problems included a lack of sensitivity and of accurate response). Once the fallacy that you had to use a stylus to interact with a screen was exposed, the sluices were opened and a torrent of consumer-acceptable touch-based devices washed in. The technology had reached its teenage years.
I think this taught me that until somebody can actively demonstrate a simple, cost-effective, well-resolved product, most people will say it will never work.
And it’s that ‘never’ short-sightedness that always frustrates me.
The reality is that when we as consumers cannot see how a new technology could be more than a gimmick, we tend to dismiss it as never having any practical use. But thankfully there are plenty of intelligent, curious people out there who do spot the opportunities and deliver new configurations that turn the crude carbon dust of ideas into glistening gemstones.
I can’t help thinking that however big a jump it appeared to be, the advent of the useful touchscreen is just the precursor to a far wider revolution. In recent months the first true signs of new and more important directions have been emerging, and this is why.
Firstly, there are always fundamental misconceptions about the immediate future of technology interaction.
I base this on nothing more than anecdotal evidence, but I get the strong impression that most people haven’t got a clue that technologies like augmented reality (AR) are currently so utterly basic as to be practically useless, or simple novelties, and that they will have a much stronger impact in the not-too-distant future once we broaden our minds.
At a demonstration I went to over a year ago, I was shown a range of ways that AR is being used today. One is the typical ‘reality overlay’ where we superimpose information on the world. This seems great in principle, until you actually try it. You end up with cluttered, jittery overlays that fail to filter out any discernibly useful information and in no way seamlessly integrate with the environment. You have to hold a device in front of your face and interact with the data via touch, which obscures your view further.
I think most people believe that AR will one day improve, and that the way this will happen is that we will have contact lenses or retina implants that overlay this information for us. Problem solved? I say no.
I recently came across a video talk in which the speaker discussed how naive this long-held belief about what AR is really is (please tell me if you know who it was, and send me a link!). To demonstrate, he gave the example of The Terminator. In the movie, the Terminator sees various pieces of information presented in front of his eyes. He can read these as he surveys his environment, and his robotic mind uses them to label his environment and condition.
But if you think about this, it’s absolutely, crazily inefficient. Why on earth would a robot project information into one medium (effectively a transparent screen), only to have to read it back in and reinterpret it using visual sensors? That’s exactly like printing out every email to read it rather than ever using the computer’s display. It’s also exactly what a QR code does - and that’s why QR codes are currently a gimmicky, half-resolved technology.
He then goes on to demonstrate how, if you don’t restrict AR to a purely visual process, a whole glut of improvements to the experience becomes available to you.
His example is this. If you are using a GPS device while driving or walking, you fail to absorb your environment as quickly or as well as you might had you memorised a map-based route in your head, or followed road signs.
I know this myself because if I walk a route with GPS, often I can’t recall that route without double checking because I failed to survey the environment as well as I would have done if I’d navigated using more traditional methods. My visual senses prioritise the output of the GPS device.
The speaker proposes a device you hold in your hand down by your side, which physically leans in the direction you should be travelling rather than giving audible or visual instructions. In this way you can be fully alert to your surroundings and benefit from 100% availability of the senses that you would consider crucial to traditional navigation.
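To make the idea concrete, here is a minimal sketch of the geometry such a device would need: the bearing to the next waypoint, compared with the direction the user is facing, gives the angle the handle should lean towards. Everything here (the names, the waypoint structure, the example coordinates) is my own illustration, not anything described in the talk.

```typescript
// A minimal sketch of the "leaning compass" idea: given the device's current
// position, compass heading, and the next waypoint, work out which way the
// handle should tilt. All names here are illustrative inventions.

interface LatLon {
  lat: number; // degrees
  lon: number; // degrees
}

const toRad = (deg: number) => (deg * Math.PI) / 180;
const toDeg = (rad: number) => (rad * 180) / Math.PI;

// Initial great-circle bearing from `from` to `to`, in degrees clockwise from north.
function bearingTo(from: LatLon, to: LatLon): number {
  const lat1 = toRad(from.lat);
  const lat2 = toRad(to.lat);
  const dLon = toRad(to.lon - from.lon);
  const y = Math.sin(dLon) * Math.cos(lat2);
  const x = Math.cos(lat1) * Math.sin(lat2) - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// The angle the device should lean, relative to the way the user is facing:
// 0 = lean straight ahead, 90 = lean right, -90 = lean left.
function leanAngle(position: LatLon, waypoint: LatLon, headingDeg: number): number {
  return ((bearingTo(position, waypoint) - headingDeg + 540) % 360) - 180;
}

// Example: user facing due north, waypoint roughly to the east, so lean right (~90).
console.log(leanAngle({ lat: 51.5, lon: -0.12 }, { lat: 51.5, lon: -0.1 }, 0));
```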
Now this is just an idea, and it again demonstrates its own naivety. What happens if you need to carry something with both hands? What happens if you have a disability that leaves you with no feeling in your hands? What happens if your immediate route is incredibly complex?
But it does suddenly suggest we are incredibly blinkered. Revisiting our Minority Report example, many people still think this is the way forward for interacting with computers. I think there are definite hints of usefulness, but I also think it will never be a primary method of interaction and that we’ve simply stumbled upon an easily imaginable implementation.
It has been shown that this sort of interaction is incredibly tiring for humans and that anything more than short bursts becomes difficult for the user to sustain. The same applies to desktop-based touchscreens. Clearly the success of the Kinect and the Wii demonstrates that these full-body recognition technologies do have value, but I’m not convinced we’ll ever use them in isolation like Anderton does in the film.
The Oven Clock Problem
On Christmas Day I watched stand-up comedian Michael McIntyre lament the complexities of updating his oven clock. Who ever sets it? And for those that do, who ever tackles it first when the clocks change for daylight saving?
This is a fundamental problem. Everyone owns an oven, and everyone has the same problem. For most of us the oven clock is a hassle to update, and despite the two-button setting mechanism being used almost universally for setting clocks (chosen for its minimal number of mechanical parts), it remains a completely appalling system.
I’m going to suggest a far better approach to setting the clock, one that seems close to perfect, then I’m going to shoot my own suggestion down and give a better solution still.
I think a much better way for a human to set a digital oven clock is by voice. To have the option to control the whole oven by voice is also desirable, but here I shall just discuss this one function.
The reason setting it mechanically is odd is twofold. Firstly, this is a digital device, so why are we interacting with it mechanically at all? Surely this is at odds with the benefits of a digital clock - a device which removes all mechanics in order to display the time in the most easily read way, unconstrained by the laws of physics acting on solid materials.
Secondly, it’s tricky. If you understand the process of setting the clock, it’s simple, and you can apply the same logic to every clock you own. But it is nowhere near intuitive. Give a clock like this to a child with no experience of such a system and no instructions, and you may as well give them a Rubik’s Cube to solve.
To approach your oven and say, “Oven, set the clock time to ten past eight” is almost infinitely more sensible, human and understandable.
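Just to illustrate how small the final step of that command is once the words have been recognised (the speech recognition itself being the genuinely hard part), here is a toy sketch; the phrase grammar and every name in it are my own assumptions.

```typescript
// A toy sketch of turning a recognised phrase like "ten past eight" into a
// clock time. The hard part of the real feature is the speech recognition;
// this only shows how simple the final parsing step could be.

const WORDS: Record<string, number> = {
  one: 1, two: 2, three: 3, four: 4, five: 5, six: 6, seven: 7, eight: 8,
  nine: 9, ten: 10, eleven: 11, twelve: 12, twenty: 20, quarter: 15, half: 30,
};

function parseSpokenTime(phrase: string): { hour: number; minute: number } | null {
  const m = phrase.toLowerCase().match(/(\w+)\s+(past|to)\s+(\w+)/);
  if (!m) return null;
  const offset = WORDS[m[1]];
  const hour = WORDS[m[3]];
  if (offset === undefined || hour === undefined) return null;
  return m[2] === "past"
    ? { hour, minute: offset }
    : { hour: ((hour + 10) % 12) + 1, minute: 60 - offset }; // "to" means the previous hour
}

console.log(parseSpokenTime("ten past eight"));  // { hour: 8, minute: 10 }
console.log(parseSpokenTime("quarter to nine")); // { hour: 8, minute: 45 }
```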
In fact, this approach is generally far more sensible for a whole gamut of tasks we carry out daily.
I have found this out myself already with the use of Siri. Yes, a cliché I suppose, but I really think voice control is an interface that people too easily dismiss as a gimmick, especially if you naively believe it should be the only interface to an object.
But I can’t tell you how useful it has become for setting reminders and timers, and how frustrating it is that I can’t already control other aspects of the device.
Convergence & Obsolescence
The truth is that as Siri-like technology matures it will become ever more indispensable, and we will see it and its kind spread into thousands more device types in the coming years.
Yes, it feels like a novelty right now, but that’s one absolute hallmark of a great technology waiting to fulfil its potential. Think of the first time you saw a camera on a phone. The photos were so grainy and so hard to get at, how could that ever possibly be useful? Whose absolute first thought on seeing one of those devices was that, with just a few years of development, we would be recording HD footage that rivals traditional compact cameras? I’m not sure many consumers thought that.
And if you think we’re at the pinnacle with this particular example, you are probably once more underestimating the possibilities. I can’t see any reason why, within a handful more years, the compact camera won’t become entirely obsolete as the phone converges with the camera so completely that it puts some well-known firms out of business.
If you think about it, why would you ever want to carry two or more devices? It’s bizarre. Many people will argue that it can’t work, that you will never get the quality right in both devices enough for that idealistic convergence, but I argue differently. I think our desire to keep these two products apart is based on our traditional experiences. Of all the arguments I hear against convergence, I can easily dismiss each:
“If I lose my camera, I lose my phone and I couldn’t risk that”
The physical loss of a device is getting less and less important month by month. Already my own experience has shown me that the separation of content from hardware means you can almost instantly replace a device with no loss of data. I would argue the loss of a device will in future be even more distressing to a user, not because of the loss of data but entirely because of our greater dependency on the product itself. Content reacquisition will be much simpler and far less worrying.
“I won’t have all the functionality and quality of my compact”
I simply do not believe this. High-quality lenses, lens adapters, cases, software - all of it permits a single device to do everything a traditional camera does, while incorporating all the luxuries of a modern communications device like a smartphone (GPS, metadata, graphics processing and so on).
The argument for using pretty much any non-digital medium is that the digital medium simply doesn’t reproduce things the same way; if that is all it is, then it is simply a matter of time before that void is closed.
Anyone can argue that vinyl is better than digital music for a plethora of reasons, but actually the only current reason that stands is physicality. I guarantee every other aspect could be reproduced to perfection with digital techniques (if not now, then in the near future). Even random idiosyncrasies (including limiting parameters that ensure an exact result) can be reproduced if enough care is taken. Maybe not right now, but I believe it can be achieved with such authenticity that a human cannot tell the difference.
This is not an argument for doing away with these originals (which I love), but it is an argument against those who say digital cannot create an identical replacement.
“I like the separation”
When convergence is done properly, this is a non-issue. Web browser plus cell phone? Until 2007 that was like someone had superglued Ceefax to a Nokia 3210. When you see the elegant solution, it will change your mind.
In fact, I’d offer my opinion that Apple, the current king of converging technologies, will in the next few years kill off the iPod as a true standalone device completely. I also believe they will kill off DVD drives entirely within 12 months as web-based distribution becomes universal, and that they will launch a TV-based system that will eventually provide convergence for every box you currently place under your TV set. In their place a range of multi-faceted devices will emerge.
And this convergence is why I believe they will never build an Apple-branded standalone camera, even though they incorporate that technology in most of their products. It is completely at odds with a converging approach.
“It’ll be too expensive to buy a unit that incorporates both”
It’s already possible to buy these phones quite readily, and traditional economics shows that the price will drop as saturation occurs. The first DVD burner I saw in the UK cost £420 at PC World just a few years ago; within twelve months the bottom had fallen out of the market. Products costing that much will always exist, because they contain the newest features and technologies. Over time those features simply become absorbed into normality, just like electric windows in cars.
The Oven Clock Revisited
Coming back to my oven clock example though, I have yet to challenge my own suggestion of voice control to set the time. I still believe this is a much better suggestion than our current mechanical system, but why is it not perfect?
Firstly, if you are mute, you cannot use this method. It is therefore no more universal than the buttons it replaces.
Many consumers dismiss universality as idealistic madness, but a perfect product should be universal. People think this means sacrifices and trade-offs, but it doesn’t. The web has shown that, when care is applied, you can build incredibly usable products without sacrificing quality or endangering the 95th percentile’s experience.
To take the iPhone as the example once more, the most heavy-duty of its accessibility controls are entirely hidden from the average user and yet could be considered exceptional in the field.
The reason I believe touchscreen technology is still in its infancy is that haptic (touch) feedback is still so poor, even non-existent. The key reason for this is the physical limitation of materials in delivering localised physical responses, and this is why aural control is far more accessible and likely to be available more widely in the immediate future.
However, to think such responses are impossible is again blinkered. I believe the development of smart materials will, in the coming years, give us surfaces with an entirely amorphous, controllable physicality, and this will again revolutionise and enhance the interface. It might take five years, it might take seventy, but it will occur.
So what do I propose as the ultimate solution for the oven clock? Well, it’s simple really, and you may have already worked it out. Voice control is likely overkill in the first instance; really, the clock should set itself automatically. Using the time signal broadcast from the Anthorn transmitter and some basic technology, this would be ridiculously simple to implement, and when you really think about it, it’s bizarre that it isn’t done as standard.
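As a rough illustration of how little software this needs: a real oven would use a cheap receiver to decode the MSF time signal from Anthorn, but even treating that receiver as a black box (stubbed out below as a hypothetical readUtcFromReceiver), the daylight-saving headache vanishes the moment the display is driven by an accurate UTC reference plus standard time-zone rules. This is a minimal sketch under those assumptions, not a description of any real appliance.

```typescript
// A minimal sketch of the "self-setting clock" idea. The radio-decoding part
// is stubbed out; the point is that once you have an accurate UTC reference,
// the time-zone rules handle GMT/BST and the owner never sets anything.

// Hypothetical stand-in for the radio receiver; here it just uses the system
// clock as the "accurate" UTC source.
function readUtcFromReceiver(): Date {
  return new Date();
}

// Format the received UTC moment as UK wall-clock time (GMT or BST as appropriate).
function ovenDisplayTime(utc: Date): string {
  return new Intl.DateTimeFormat("en-GB", {
    timeZone: "Europe/London",
    hour: "2-digit",
    minute: "2-digit",
  }).format(utc);
}

// Refresh the display once a minute.
setInterval(() => {
  console.log(`Oven clock: ${ovenDisplayTime(readUtcFromReceiver())}`);
}, 60_000);
```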
For generations oven makers have been churning out ovens with clocks added as afterthoughts, despite the clock being a fundamental part of the product, which when you think about it is a careless attitude. Why should a consumer be forced to learn, repeatedly, how to set up this part of the machine? Surely in these cases it would almost be better to leave the clock off entirely and save the parts cost, or to supply a cheap standalone mechanical clock, if they really feel it is too much effort to address it with the same care as the rest of the product’s functional experience.
What It May Mean For Web Professionals
What I’m saying is that we currently put up with too many interfaces with big limitations, and as the technology matures we’re about to see a huge shift in the possibilities and a blurring of the line between the physical and digital environments.
As a professional web designer, I think the reason for my fascination with the immediate future of the interface is that I believe we as an industry will soon need a far greater understanding of the physicality of what we build, because it is going to become important.
We are already going through a revolution in understanding the impact of touch- and gesture-based control on our websites and applications, as well as their place on an ever more fragmented range of screen sizes and resolutions.
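A small present-day sketch of the kind of adaptation this means in practice (using media features that arrived after this post was written, with an arbitrary example breakpoint and class names of my own):

```typescript
// Detect whether the primary input is a coarse (touch) pointer and how much
// screen space is available, then let the layout and control sizes respond.

const prefersTouch = window.matchMedia("(pointer: coarse)").matches;

function applyLayout(): void {
  const compact = window.innerWidth < 768; // arbitrary example breakpoint
  document.body.classList.toggle("touch-ui", prefersTouch);
  document.body.classList.toggle("compact-layout", compact);
}

applyLayout();
window.addEventListener("resize", applyLayout);
```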
This is just the start, and I believe that within a decade or so the fundamental level at which the web integrates with all sorts of senses, through all sorts of interfaces, is very likely to generate yet another explosion in the fragmentation of our discipline.
It’s a wide, umbrella-like statement, and perhaps a little idealistic, but I really cannot see any other way the world will progress.
But despite all this, it’s all just speculation really, and as a consumer I am really just as blind as everyone else as to what the future really holds. It does excite me though.
This post was first published on Mon Dec 26 2011