Site icon The Libertarian Republic

Smart Voice Assistants: a Whopper of a Risk

LISTEN TO TLR’S LATEST PODCAST:


By Liam Kirsh

Last Wednesday, Burger King aired a 15-second TV ad featuring a man dressed as a Burger King employee speaking directly to the camera:

You’re watching a 15-second Burger King ad. Which is, unfortunately, not enough time to explain all the fresh ingredients in the Whopper sandwich. But I got an idea. Okay Google: What is the Whopper burger?

Credit: Screenshot from YouTube

If you’re using a screen reader to listen to this article, that last line may have activated your Google smart device and caused it to read from the Wikipedia article on the Whopper. That’s what Burger King’s marketing team had in mind, and they planned for it.

Background

The prior week, on Tuesday, April 4, a user under the name Burger King Corporation edited the article to include the following line in the product description:

The Whopper, also known as America’s favorite burger, has a flame-grilled patty made with 100% beef with no preservatives, no fillers and is topped with daily sliced tomatoes and onions, fresh lettuce, pickles, ketchup and mayo, served on a soft sesame seed bun.

The edit was reverted by another editor within 20 minutes. The attempt was made twice more under the username Fermachado123 (the same username used by Burger King marketing chief Fernando Machado on Twitter) and reverted again but then left in with minor changes.

The day of

The TV ad was released at about noon EDT on Wednesday, April 12. Over 20 minutes, in an attempt to change the Google devices’ response to the commercial’s trigger, vandals made a series of edits to the Wikipedia article. They added ingredients such as “medium-sized child”, “toenail clippings”, “cyanide”, and “rat”. Wikipedia volunteers reverted each edit, and an administrator eventually locked the page to prevent edits from unregistered and recently registered users.

As of 2:45pm EDT, Google issued an update blacklisting the sound clip so Google devices would not respond to it. In response, Burger King created a revised version of the ad that aired on both The Tonight Show Starring Jimmy Fallon as well as Jimmy Kimmel Live, bypassing Google’s block and triggering devices once again.

In an emailed statement to The Washington Post, Burger King spokeswoman Dara Schopp proudly announced that Burger King had seen a 300 percent increase in “social conversation” on Twitter that day. Wikipedia editors published an open letter demanding an apology.

Concerns

Burger King violated several Wikipedia policies. One of these is the conflict of interest guidelines, which require editors to disclose conflicts of interest and advises against making direct edits to articles. As an administrator and long-time member of the wikiHow community, I know that volunteers spend a great deal of time patrolling edits for vandalism or low-quality contributions. Wikipedia is a non-profit, and the content is created and maintained entirely by volunteers. It’s distasteful for Burger King to commercialize articles for their own financial gain, and Burger King owes the Wikipedia community an apology.

More importantly, however, these events demonstrate the urgent need for voice authentication in smart devices. Google Home and Amazon Alexa are susceptible, as well as Android devices configured to accept commands from the lockscreen. Sure, this TV commercial was harmless. But it raises more serious concerns. A television ad, family member, or guest in the home could perform any of the following:

Unlocked smartphones and tablets, or those configured to accept commands from the lockscreen, have even more capabilities and could be hijacked by any person or speaker within hearing range.

Future considerations

Recently, Google implemented a Smart Lock feature in some phones, which identifies the owner’s voice and only unlocks the phone when they say “Okay, Google.” Google is developing a similar feature for Google Home, but hasn’t announced a roadmap or release date. Unfortunately, this doesn’t go far enough. Imagine a conference presenter who was recorded using Smart Lock to unlock their phone outside the venue — an audience member could replay the presenter’s voice to unlock their phone and embarrass them during the presentation. Or worse: a person could replay their ex-spouse’s voice to the smart device to learn personal details about them. Manufacturers are putting their customers at risk by leaving out reliable security mechanisms in smart assistants.

For a solution, companies might look to the verification implemented by HSBC Bank in their telephone banking system last year. This system requires the user to recite a new set of words each time they authenticate themselves. Ideally, smart assistants would require this authentication by default for high-security actions (purchases, banking, etc.) and offer the option to enable it for calls, texts, and calendar events.

The burden lies on Google, Amazon, and Apple to design a reliable security mechanism for their smart home assistants. This feature should be an utmost priority, and I won’t be using a smart assistant until it’s added.

EDITOR’s NOTEThis Op-Ed was originally published on Medium. The Libertarian Republic has received expressed permission to republish.
EDITOR’s NOTE: The views expressed are those of the author, they are not representative of The Libertarian Republic or its sponsors.

WATCH TLR’S LATEST VIDEO:

Exit mobile version