Security researchers from the University of Electro-Communications (Tokyo) and the University of Michigan have discovered a new class of injection attack that allows an attacker to inject arbitrary audio signals into voice assistants using light.
The new attack, dubbed “Light Commands,” exploits a vulnerability in MEMS (micro-electro-mechanical systems) microphones, allowing attackers to inject inaudible and invisible commands into voice assistants via the photoacoustic effect.
“We propose a new class of signal injection attacks on microphones based on the photoacoustic effect: converting light to sound using a microphone,” researchers stated.
To launch a Light Commands attack, an attacker transmits light modulated with an audio signal; the microphone converts the light back into the original audio signal. The researchers showed that attackers can remotely send invisible, inaudible commands to smart home devices such as Alexa, Portal, Google Assistant, and Siri. The underlying issue is that voice-controllable systems built around MEMS microphones typically do not authenticate the speaker.
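At a high level, the light channel is just amplitude modulation: the attacker varies a laser's optical power around a DC operating point in proportion to the audio waveform, and the microphone's diaphragm responds as though sound had arrived. The following minimal simulation sketches that mapping; the function name, the bias and depth values, and the 500 Hz test tone are all illustrative assumptions, not parameters from the researchers' paper.

```python
import numpy as np

def amplitude_modulate(audio, dc_bias=0.5, depth=0.4):
    """Map an audio waveform onto a light-intensity envelope.

    audio: samples in [-1, 1]. Returns intensity values in [0, 1],
    mimicking how a laser driver would vary optical power around a
    DC operating point. Values here are hypothetical, for illustration.
    """
    audio = np.clip(np.asarray(audio, dtype=float), -1.0, 1.0)
    intensity = dc_bias + depth * audio  # intensity tracks the waveform
    return np.clip(intensity, 0.0, 1.0)  # optical power cannot go negative

# Example: modulate a 500 Hz "command" tone sampled at 16 kHz onto light.
fs = 16_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)
light = amplitude_modulate(tone)
```

Because the intensity envelope is a linear copy of the audio waveform, a photosensitive microphone effectively receives the original command; no acoustic sound is ever produced.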
Once the attackers hijack the device, they can control smart home switches, operate smart doors, make online purchases, and unlock smart locks by stealthily brute-forcing the user’s PIN.
The researchers noted that exploiting the vulnerability requires no physical access or user interaction; all an attacker needs is a line of sight to the target device and its microphone ports. They demonstrated the Light Commands attack at distances of up to 110 meters.
With technology advancing day by day, cyber-attackers keep finding innovative ways into our devices. Recently, academic researchers from England and Sweden discovered that hackers can use a smartphone's microphone to steal the phone's password and gain access to the device's data.
The researchers found that malware can exploit the smartphone's microphone to steal the device's passwords and codes. In their report, they claimed to have demonstrated the first acoustic side-channel attack that recovers what users type on their touch-screen devices.
Recently, cybersecurity researcher Matt Wixey revealed that attackers can hack modern audio gadgets to emit deafening sounds. He found that custom-built malware deployed to connected speakers can force them to produce sound at dangerously high intensity, turning them into offensive cyber-weapons.
These developments are reminiscent of the early days of phone hacking in the 1980s when phone hackers (or “phreaks”) hacked into telecommunication systems. They used devices (or used their own whistling) that made sounds to emulate phone signals going across to switchboards and modems. The concept was called “phreaking”. It was done just to make free phone calls and have some innocent fun.