
Key Takeaways

  • Amazon's September event revealed that Alexa is evolving with generative AI and large language models, making the voice assistant more conversational, adaptive, and helpful.
  • Alexa will be able to retain information from previous conversations and handle changing topics, reflecting how humans converse and providing a more natural experience.
  • The new features will allow users to have more fluid conversations with Alexa, eliminating the need for rigid language and precise device commands and creating a more personalised and humanised experience.

Alexa is one of the original and best digital assistants. Launched to support Amazon devices, Alexa is best known for interactions through the Echo speakers and, as such, has become a cornerstone of the smart home for many. The language used around digital assistants is evolving, with companies now talking more openly about artificial intelligence rather than softer terms of reference that might once have been used.

With that in mind, it was no surprise to hear Dave Limp, senior vice president of devices and services, talking about generative AI and large language models (LLMs) at Amazon's September event. While the language surrounding such systems may have become a little alien to casual listeners, Amazon is supercharging Alexa, evolving the familiar voice assistant to take it into the future.

The Alexa of the future will be more conversational, more adaptive, and ultimately, a lot more useful. Live demonstrations with beta software are always risky, but Dave Limp's on-stage demo revealed just how much of a change is coming. For starters, we're going to see an end to the continual use of the "Alexa" trigger word. While the current version of Alexa supports some ongoing contextual interactions, this will be much more pronounced in the future: Alexa will be able to use contextual information to hold a longer conversation and assimilate details along the way to take better final actions.

What's surprising is how well Alexa will cope with changing topics while retaining information from earlier in the conversation. This matters because it reflects how humans converse: someone goes off on what seems like a tangent, but it's actually related to the arc of the conversation. In the demo I witnessed, for example, the conversation was about an upcoming football game. After a bit of back and forth, it suddenly switched to BBQ sides - "What sides go with BBQ chicken?" - and, of course, Alexa provided answers. But when finally asked to write a message inviting friends over to the game, everything fell into place: the details of the game, the day it's taking place, and the details of the BBQ too.
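To make the idea concrete, here's a toy sketch of a conversation state that accumulates details across topic switches and then composes a message from all of them, as in the demo. Everything here (the `DialogState` class and its methods) is hypothetical and illustrative - it is not Amazon's implementation or API.

```python
# Illustrative only: a toy dialog state that retains details even when
# the conversation moves on to an apparent tangent, then pulls them all
# together for a final action. None of these names are real Alexa APIs.

class DialogState:
    def __init__(self):
        self.facts = {}

    def remember(self, topic, detail):
        # Retain details from each topic, even after the subject changes.
        self.facts[topic] = detail

    def compose_invite(self):
        # The final action draws on everything gathered so far.
        game = self.facts.get("game", "the game")
        day = self.facts.get("day", "soon")
        food = self.facts.get("food", "food")
        return f"Come over on {day} to watch {game} - we'll have {food}!"

state = DialogState()
state.remember("game", "the football game")      # opening topic
state.remember("day", "Sunday")
state.remember("food", "BBQ chicken and sides")  # the apparent tangent
print(state.compose_invite())
```

The point of the sketch is the last step: the "tangent" about BBQ sides isn't discarded, so the invitation can include it without being asked again.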

What's important here is the part that generative AI has to play. There's been a lot of talk about generative AI over the past year, and a lot of it has been around image generation. Amazon is doing that - it's on Fire TV, for example - but here we're talking about the ability to generate that message based on the contextual information that's just been discussed.


A new feature is evolving here, too - Alexa Let's Chat - where you'll be able to have something that's more like a conversation rather than simply asking for information. But there's also a greater ability for Alexa to work out what you're asking for without being given precise information.

This will be a game changer for smart home users. It means you won't have to use the sort of rigid language that you did in the past. Currently, if you want to adjust a connected device, you have to be precise with device and skill names. In my smart home setup, for example, if I want to turn on the heating, I have to tell Alexa to ask Hive to turn on the heating. In the future, the idea is that you can just say, "Alexa, I'm cold", to get the same response - the heating turned up.
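The shift described above - from exact commands to inferred intent - can be sketched in a few lines. This is a hypothetical illustration only: the dictionary below stands in for the language model, and none of the names or phrases are real Alexa or Hive APIs.

```python
# Hedged sketch of rigid commands vs. intent-style interpretation.
# The lookup tables are a toy stand-in for an LLM; not Amazon's code.

RIGID_COMMANDS = {
    # Old model: the exact skill invocation is required.
    "ask hive to turn on the heating": "heating_on",
}

INTENT_HINTS = {
    # New model: loose, natural phrases resolve to the same action.
    "i'm cold": "heating_on",
    "it's freezing in here": "heating_on",
    "i'm too warm": "heating_off",
}

def interpret(utterance: str):
    u = utterance.lower().strip()
    if u in RIGID_COMMANDS:          # exact match, old style
        return RIGID_COMMANDS[u]
    for phrase, action in INTENT_HINTS.items():
        if phrase in u:              # inferred intent, new style
            return action
    return None

print(interpret("Alexa, I'm cold"))  # heating_on
```

Both "ask Hive to turn on the heating" and "Alexa, I'm cold" land on the same action - which is exactly the convenience the new Alexa is promising.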

This extends to new devices - "Alexa, turn on my new light" - as well as to creating Routines. Currently, if you want to create an Alexa Routine, you have to walk through the process in the app, selecting the trigger, the devices involved and everything else. In the future, you'll just be able to tell Alexa what you want to happen, and a Routine will be created.
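Conceptually, that means turning one spoken sentence into the trigger-plus-actions structure you'd otherwise build screen by screen in the app. The parsing below is a deliberately naive stand-in for the language model, and the output format is invented for illustration - it is not Amazon's Routine schema.

```python
# Toy sketch: one spoken request becomes a Routine-like structure,
# instead of a multi-step setup in the app. Hypothetical format only.

def routine_from_request(request: str) -> dict:
    # e.g. "every morning at 7am, turn on the kitchen light and read the news"
    trigger, _, actions = request.partition(",")
    return {
        "trigger": trigger.strip(),
        "actions": [a.strip() for a in actions.split(" and ")],
    }

routine = routine_from_request(
    "every morning at 7am, turn on the kitchen light and read the news"
)
print(routine["trigger"])   # every morning at 7am
print(routine["actions"])   # ['turn on the kitchen light', 'read the news']
```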


Integrated into the new face of Alexa will be more pronounced mannerisms, laughter, expressions of surprise and little details to humanise the system. That should encourage people to talk to Alexa more - and the more interaction there is, the more Alexa learns and the more personalised the experience will become.

Best of all, because Alexa is a platform, the benefits come to the entirety of Alexa; this isn't limited to the latest Echo Show 8 or the new Echo Hub; this will be available for all Echo speakers all the way back to the 2014 original. That's an incredible offering for those with older devices because nowhere else do you have that kind of sophisticated evolution of the user experience. It's going to be rolling out first in the US, but we're sure it will be landing in other regions in 2024.

Where does that leave Alexa's rivals? Alexa is the dominant force in the smart home, and that mostly comes down to Amazon's commitment to the Echo ecosystem. It offers a much wider range of speakers and other devices than its rivals, Apple and Google. While Siri now has the HomePod speakers as a point of access beyond the Apple Watch and iPhone, Apple feels adrift when it comes to the smart home. Google has made greater endeavours with the Nest Hub and some speakers, but it has been a slow evolution, with Google focusing more on Android devices and on using AI to evolve its Workspace apps and search.

That leaves the smart home very much in Amazon's hands, and it feels right now as though Alexa has an unassailable lead.