Alexa devices will no longer offer a “do not send voice recordings” setting after March 28th. Future Alexa recordings must be sent to the Amazon cloud, though you can still ask Amazon to automatically delete voice requests after they’re processed.

The “do not send voice recordings” setting slightly increases Amazon customers' privacy by processing Alexa audio data on-device. It also reduces Amazon’s ability to utilize customer voice data for AI training or other purposes, and it may alleviate some customers' concerns about “spying.”

That said, I don’t want to place too much weight on this feature’s importance. “Do not send voice recordings” is only available on three Echo devices—the Echo Dot (4th Gen), Echo Show 10, and Echo Show 15—and it doesn’t provide 100% local processing. It simply transcribes your voice requests into text, which is then sent to Amazon and saved to the cloud.

Customers affected by this change will be automatically transitioned to the “don’t save recordings” setting, which deletes recordings from the cloud once they have been processed. If you want to manually review any audio recordings or text transcriptions that Amazon has saved to the cloud, check your Voice History panel. (I suggest that you regularly check Voice History regardless of your account preferences, as you won’t always know when Amazon makes changes to available settings or policies.)

“The Alexa experience is designed to protect our customers’ privacy and keep their data secure, and that’s not changing. We’re focusing on the privacy tools and controls that our customers use most and work well with generative AI experiences that rely on the processing power of Amazon’s secure cloud. Customers can continue to choose from a robust set of tools and controls, including the option to not save their voice recordings at all. We’ll continue learning from customer feedback and building privacy features on their behalf.”

Amazon has not provided a reason for the “do not send voice recordings” feature’s removal. However, the company’s statement to The Verge, shown above, suggests that “generative AI experiences” are to blame. An AI-enhanced Alexa may be able to understand tone of voice or inflection, which don’t really exist in written text, hence the need for voice recordings. And some Alexa+ features, like voice recognition, just can’t work without audio recordings.

I should also point out that Amazon uses voice recordings to train AI. Removing the option to withhold voice recording telemetry may be nothing more than a data grab. And I suspect that the setting’s name, “do not send voice recordings,” was less than ideal from Amazon’s perspective. It naturally implied that Alexa collects more data than it should.

In any case, Amazon has been repeatedly criticized for the way that it handles voice recordings and other private data. The company was recently sued by the FTC for allegedly retaining and utilizing children’s voice data without explicit parental consent—a potential violation of the Children’s Online Privacy Protection Act. In a separate federal lawsuit, Amazon was accused of using customers’ Ring camera footage (including footage from indoor cameras) to train algorithms. Both cases were settled to the tune of several million dollars.

If you want a more private smart home experience, consider setting up Home Assistant. The open-source Home Assistant Voice platform makes it easy to run voice commands locally, though you can also go for a super-customized setup by integrating Home Assistant with local LLMs. I realize that this isn’t an easy solution, but if you want a voice assistant that respects your privacy, you have to host it yourself.
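To give a feel for what “local” means in practice, here’s a minimal sketch of sending a text command to a self-hosted Home Assistant instance through its REST conversation endpoint. The address homeassistant.local:8123, the access token, and the example command are placeholder assumptions; Home Assistant Voice and the Assist pipeline normally handle the microphone side for you, so treat this as an illustration rather than a required step.

```python
# Minimal sketch: send a text command to a self-hosted Home Assistant
# instance via its REST API. The URL and token below are placeholders;
# create a long-lived access token under your Home Assistant user profile.
import requests

HA_URL = "http://homeassistant.local:8123"   # assumed local address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # placeholder

def send_command(text: str) -> dict:
    """Pass a plain-text request to Home Assistant's conversation agent."""
    response = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"text": text, "language": "en"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The request goes to your own server; whether it's processed fully
    # locally depends on the conversation agent you've configured.
    print(send_command("turn off the living room lights"))
```

Because the request goes straight to your own server, it never touches Amazon’s cloud; whether it stays entirely local from there depends on which conversation agent you’ve configured.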