Talking Tech: Alexa, are you listening to me?

By Frank Artusa

Try asking Alexa, Siri, Google Assistant or any other smart home device that very question and you will get some variation of a response assuring you that your privacy is important and that the device is most certainly not spying on you. But how do these supposedly omniscient household oracles know to respond to their respective “wake” words if they are not listening to every sound wave that graces their diabolically helpful little speakers?

One thing is clear: these devices are indeed monitoring their local environments, continuously sampling ambient sound and nearby speech. Manufacturers, however, have repeatedly stated that nothing is collected until a wake word or phrase is detected, at which point the device enters an active listening state and begins recording the user’s instructions. The detection itself happens on the device: a small local model scans a short, rolling buffer of audio for the wake word alone, discarding everything else. Even if only post-wake-word data is captured, the question remains: what exactly happens to that data on backend servers?
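For the technically curious, the flow works roughly like the sketch below. This is a conceptual illustration only, with made-up names rather than any vendor’s actual code; it simply shows the idea of a small rolling buffer checked locally for a wake word before anything is sent to the cloud.

```python
# Conceptual sketch of a smart speaker's listening loop.
# All names are illustrative assumptions, not any vendor's real API.
from collections import deque

class WakeWordSpotter:
    """Stand-in for the tiny keyword model that runs on the device itself."""
    def heard_wake_word(self, audio_window):
        # Real spotters score raw audio; matching text keeps the sketch simple.
        return "alexa" in audio_window

def listening_loop(audio_chunks, spotter):
    window = deque(maxlen=4)  # short rolling buffer; older audio is discarded
    for chunk in audio_chunks:
        window.append(chunk)
        if spotter.heard_wake_word(window):
            # Only at this point would recording begin and audio leave the device.
            print("Wake word detected -- streaming the request to the cloud")
            window.clear()

# Simulated microphone input: everyday chatter, then a command
listening_loop(
    ["what", "a", "nice", "day", "alexa", "play", "some", "music"],
    WakeWordSpotter(),
)
```

The point of the design is that, in principle, only the audio captured after the wake word ever leaves your home, which is exactly why the handling of that post-wake-word data matters so much.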

In 2019, Amazon admitted to employing personnel to listen to customers’ voice commands, albeit to help train and improve the Echo device and its algorithms. Amazon in particular has also recently changed the way data is handled. Previously, users had the option to process requests on the local device, but as of March 28, all collected data will be sent to Amazon’s cloud environment for processing. Centralizing recordings this way creates significant potential for abuse of private data.

Entrusting personal data to these tech behemoths requires a leap of faith, especially when collecting and selling that data is so lucrative for targeted advertising and other purposes. Data brokers, organizations that buy and sell personal information about individuals, are estimated to earn over $200 billion per year worldwide, and the industry continues to grow markedly year after year. Brokers do not serve advertisers alone; their data is also used by retailers, credit agencies, employers and political groups. A big part of the problem is the erosion of meaningful consent: buyers are faced with a barrage of privacy policies and end-user agreements that are usually accepted with a quick mouse click or a touch of a phone screen.

Now, with the advent of Generative Artificial Intelligence (GAI) and Large Language Model technologies like ChatGPT, Google’s Gemini, Meta’s LLaMA and Apple’s expected enhancements to Siri, these devices will become even more capable and likely ever more involved in our day-to-day lives. One example is the rapid growth of GAI-based tools that augment mental health services, particularly for individuals who lack the resources, funding or health care coverage needed to visit an actual therapist. This technology is a boon for people who need access to therapeutic services, even at a rudimentary level, but the privacy concerns associated with capturing such deeply personal and intimate information are significant.

Governments have recognized the problem and begun taking steps to limit the exposure of personal digital information. Europe’s General Data Protection Regulation and California’s Consumer Privacy Act are two such efforts to give control back to individuals. Ultimately, as society increasingly embraces smart technology and ever more sophisticated devices come to market, individuals must weigh the benefits to quality of life against the potential exposure and misuse of sensitive information.

Until privacy is the default and not the fine print, we’d all be wise to treat our smart assistants like strangers in the room.

Frank Artusa, a resident of Smithtown, is a current cybersecurity professional and retired FBI Special Agent.
