Well, another year, another CES, and unfortunately this year wasn't much different from the last: lots of TVs, lots of connected devices (I think LG's 3D TV display when you walk into their booth was exactly the same as last year), everyone battling it out for consumer attention through glitz and glamour.
But wait. It wouldn't be fair to just write off CES as an electronics superstore. If you looked closely enough, you could see some trends: trends that have been building for years and trends that are just coming to fruition.
The way we interact with content
Consumer behavior with content has been undergoing a steady transformation since the advent of the PC. Up until then, content consumption was very static: it was radio or print or TV. Then video game consoles appeared, followed soon by the first commercial PCs. Yeah, that's glossing over a lot, but you get the picture. Before the advent of digital, content consumption was very limiting. People were beholden to a few different channels. Digital was an enabler, and that enablement has been steadily transforming the ways we can interact with content. Enter CES 2014. There are devices like the Leap Motion controller that provide kinetic ways to manipulate and consume content. There are displays like LG's transparent TV that provide digital overlays to physical products. And then there are the Oculus Rift and Intel's VR efforts, bringing back true virtual reality as a way to spatially consume and manipulate content. Lots of companies, ranging from fitness technology to display manufacturers, were inventing unique methods for people to engage and interact with content. Perhaps the most interesting are the smartwatch manufacturers. Having recently purchased a Neptune Pine (I'm still waiting for it to arrive), I think smartwatches might represent the next phase of digital content consumption (or FoVC, if some of my previous prognostications come to fruition). How will content publishers have to tailor their experiences to meet the needs of these new interface mechanisms?
Perhaps the most interesting aspect of this trend is the crop of 3D printers and scanners that enable us to articulate content physically for the first time ever. Imagine watching a video, wanting something in it, and printing it out (for a cost). That is the ultimate in content consumption.
Connecting with content

Again, prior to digital, there were only a few ways we could connect with content. We could watch TV. We could tune the radio. We could pick up a magazine or book. The Internet, of course, radically altered that. Today it's possible to listen to the radio while sitting at a computer…or even through an Internet-enabled TV! But for the most part, doing so has been largely limited to devices intended to "consume" content: phones, tablets, PCs, TVs. What CES 2014 showed was a myriad of non-traditional devices connecting to content sources. Wireless speakers. Wireless DVRs. One of the standouts was the TiVo Roamio. This device not only enables consumption of Internet-based content and traditional broadcast (through coaxial) but also supports the distribution of that content through live streaming (over the Internet to authorized devices) and downloadable files. Content is no longer confined to specific device types. It is free-flowing through a network of devices that, if nothing else, shows us the Internet of Things is alive and kicking.
An interesting side note to this is the growing transparency of connectivity: every device that comes out seems to have connectivity built in, and we are coming to expect it now.
Extending the phone
Perhaps the most interesting trend to mature over the past few years is positioning the smartphone at the center of our "digital universe." Capable of providing us everything from messaging to full Internet connectivity for our other devices, smartphones are more than just phones and more than just computers. They are a hybrid device that enables us to both consume content and deliver it. What's more, it's the one device that we carry with us everywhere. That's probably why companies are looking at ways to extend phone functionality without replacing the phone. Look at Google Glass as the best example. But there are others. Sony recently launched their phone "cradle/receiver" for the car dash. This device enables someone to slide a smartphone into the cradle and have it act like a traditional in-dash receiver while it's seated there. Of course, a few years back Verizon launched their hybrid phone/tablet, and we are seeing more "tweener" devices like that from Intel and Microsoft ("a computer when you need it, a tablet when you want it"). It probably won't be long before we'll see a hybrid tablet/computer that takes a smartphone as its core, especially considering Nvidia's recent processor announcement that brings desktop power to a mobile device.
Sensors, Sensors Everywhere (i.e., The Quantified Self)
This is a new trend, appearing only in the past couple of years but exploding onto the CES scene. Wrist bands like the Nike+ FuelBand, Misfit Wearables, and FitBit, joined by new players like June and Lumo Lift, all monitor our daily activity in some way, converting steps taken and calories eaten into data for elegantly designed smartphone applications (I use a FitBit One, not the wrist version, but am often not diligent enough to keep it on my person). But there are also other devices, like Interaxon's Muse headband, that sense brainwave activity to gauge engagement and focus. What this reflects is the growing transparency of sensors. They are becoming so small and so light that they are getting embedded into everything (like headbands and even socks) simply because they can be. And, of course, those sensors can output a myriad of data: how far we walk, what we eat, how we sleep, how we sweat. Every aspect of our biological wellbeing can be quantified and delivered to an application. And given the parallel miniaturization of radio technologies (e.g., Bluetooth and Wi-Fi), these sensors are rapidly becoming a mesh network around our body (again pointing to a healthy IoT). It won't be surprising in the future to imagine all clothing being built with sensors, and clothing manufacturers offering mobile apps that track everything from body heat to sitting position (imagine the app as a little person with green, yellow, and red lights indicating sensor placement and sensor health).
The smartwatch resurgence

I would be remiss if I didn't acknowledge the resurgence of this category of devices. Smartwatches aren't new. In fact, I had a very early smartwatch, put out by Fossil, that ran Palm OS. And that was over 15 years ago.
But thanks to the trends I've identified above, smartwatches are relevant now. They are a way to extend our phone experience, they are a way to connect with content, and they are a new way through which to consume content. And there were tons of them at CES. Incumbents like Sony were showing off new versions of smartwatches they've had out for years, while upstarts like Pebble were showing new models and newcomers like Neptune were sporting their first versions.
It's clear that smartwatches are here to stay. In fact, there is quite a bit of talk as to whether or not they will outdo FoVC devices like Google Glass (simply on the comfort factor: more people are comfortable wearing watches than devices on their faces). I also wonder what lies in store for the wearables market. Can FitBit, Nike+, and Misfit Wearables survive when smartwatches will do everything they do? Who wants to wear more than one thing on their wrist? Or perhaps people won't mind wearing something on both wrists during the day while ditching the watch at night. What if my smartwatch had the sensor app I imagined above, so that my watch was managing the entire sensor network sewn into my clothes? Why would I need an extra band? Maybe it won't be the smartwatch that kills the fitness band market; maybe it will be the clothing.
What does the future portend?
What will CES 2015 look like? Or CES 2025? It's clear that these trends will continue. Another 10 years and we will be consuming and interacting with content in novel ways. Perhaps 3D holographic projection systems will finally become a commercial reality (as much as 3D printers are today). Combined with VR systems like the Oculus Rift, we may very well be able to "physically" enter our content, moving it around kinetically as we move ourselves within it. And it's clear that the phone isn't going away, but different ways to interact with it (and use it as an enabling device) are going to continue to emerge. I could see computers in 10 years being driven entirely by a smartphone. There is no reason why the smartphone can't be the storage, Internet access, computing, and communication hub for our lives. Of course, wireless range and power will naturally grow (even as they are today), making ancillary/complementary devices like smartwatches even more applicable.