Artificial Intelligence. Once it was the future; now we’re living it.
I remember sitting in a home economics class a mere three decades ago and being shown a picture of what a house thirty years hence would look like: streamlined kitchens with integrated appliances, automatic lights, screens in every room, a car on the driveway that needs no key, robots vacuuming the carpets. OK, so the last one didn’t catch on, but the others are ubiquitous. Yet this was a relatively linear vision of technological advancement – the things we already had, but working a bit better; there was no thought of AI. The illustration didn’t include remote-controlled heating systems, security cameras with facial recognition or fridges that do their own shopping. And I’m pretty sure that all those screens were TVs and possibly a lone desktop computer. How could we have envisaged the plethora of screens we now own?
The recent Consumer Electronics Show in Las Vegas showed that AI is starting to impact all areas of consumers’ lives, from ovens that connect to recipe apps and smart mirrors that help choose clothing or apply make-up, to a bath that fills itself with the perfect depth of water at the perfect temperature (without a human having to go into the bathroom to do it, obviously).
The expression ‘at the touch of a button’ is in danger of becoming passé. Who needs a button when you can just bark out commands to the nearest personal assistant? If technology predictions are right, these assistants will soon be baked into more and more devices, so that you can instruct your TV, talk to your cloakroom mirror and tell your car where to go. There might not even be such a thing as a quiet bathroom break any longer with the advent of Alexa in the toilet (complete with lights that flash along with the rhythm once you have asked it to play your favourite music).
What strikes us at Tapestry is that AI is cropping up more and more in Entertainment and Media. This is a regular research area for us, so we spend a lot of time thinking about relevant consumer behaviour and needs, and we are keen to understand which direction consumers want it to head in. Development here is focused on personalisation, using machine learning to provide a media experience tailored to the individual. At the Consumer Electronics Show, LG’s President and CTO Dr I.P. Park quipped that ‘currently you need to be smart to use a smartphone.’ Soon devices themselves will become smart, knowing what you want without you having to tell them – perhaps before you even realise what you want yourself.
This might sound like nirvana to some of us. Apparently the average adult makes around 35,000 decisions per day, so no wonder we can’t face our TV guide. And I can’t be the only one who stands with my phone in hand, poised to play music on SONOS, and not a single artist springs to mind. What a dream it would be if our television could pick the perfect show to match our mood at any given moment. And Spotify already helps us when our mind goes blank, with mood-led suggestions that save on thinking time and broaden our horizons.
On the other hand, digital ‘help’ can be the bane of our lives. We have all had the experience of doing even the most tentative online browse for a pair of jeans/conservatory/wart remover and then been chased round the internet by adverts for that same item long after we have bought it, lost interest or realised it wasn’t a wart after all. Similarly, supermarket customers regularly bemoan too much choice, but they’re not saying they want to delegate their entire shop to a personal shopper. Just as they know what they want to cook and what is likely to be left rotting in the fridge, so the entertainment consumer has a mental (albeit partly subconscious) list of ingredients that will satisfy their media appetites.
So the big question is: if consumers no longer need to do the thinking, how do we strike the balance between helpful AI and annoying AI? How do we make sure consumers feel in control and empowered by it, rather than at its mercy?
It seems to us that successful AI will work for the consumer in a number of ways, on both visible and invisible planes. It will give them choice and control, but it will be intelligent enough to monitor preferences and steer the consumer towards suitable content at an appropriate time (so no more persistent ‘suggestions’ that hound rather than inspire). An example of this is Google’s quest to improve search result accuracy by enabling users to upload pictures to Google Images, which then uses recognition technology to find similar results (images, video or text).
On an invisible level, as is already happening with technology developed by Netflix, AI can optimise streaming smoothness and picture definition so that high-quality video is delivered to users across a range of internet speeds and bandwidths.
Underpinning all this, of course, is the reliability of AI. Algorithms are commonly blighted by biased datasets, and AI is only as good as its component parts. The development of machine learning will hopefully reduce such problems, resulting in systems that can efficiently predict and satisfy consumer needs – something that Netflix has worked on recently to improve its content recommendations.
At Tapestry we’ll leave the technological nuts and bolts to the experts, but we know from developments so far that AI needs to be handled with care. We look forward to working with our clients to pin down the optimum role of AI in consumers’ lives.