Posted in Security

AI can now steal passwords and banking details by ‘listening’. Here’s how

A recent report revealed an AI-driven tool that secretly steals passwords by eavesdropping on your keyboard inputs

Hackers using AI to listen
Image credit: Canva

Hackers can now decode your passwords simply by listening, with the help of AI, to the soft noises your keyboard makes as you type. Cornell University researchers have unveiled a novel technique through which AI tools can illicitly gather your data: by capturing keystrokes. Their paper outlines an AI-driven cyber-attack capable of pilfering passwords with an impressive 95 percent accuracy by eavesdropping on your keyboard input.

The researchers reportedly achieved this by training an AI model on the acoustic signature of keystrokes and deploying it on a nearby smartphone. The phone’s built-in microphone listened to keystrokes on a MacBook Pro, and the model reconstructed them with a striking 95 percent accuracy, the highest the researchers observed without resorting to a large language model.

The researchers also evaluated the model’s accuracy during a Zoom meeting, where keystrokes were recorded via the laptop’s microphone. During this assessment, the AI replicated the keystrokes with a 93 percent accuracy rate. When tested on Skype, the model’s accuracy stood at 91.7 percent.

Interestingly, the effectiveness of the attack wasn’t influenced by the volume of the keyboard itself. Rather, the AI model was fine-tuned on the waveform, intensity, and timing of each keystroke to enable accurate identification. For example, idiosyncrasies in typing style, like a slight delay in pressing one key compared to others, were factored into the AI model.
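The intensity and timing cues described above can be sketched in a few lines of code. The following is an illustrative toy, not the researchers’ actual pipeline: the function name, the simple energy-threshold detector, and the synthetic test signal are all assumptions made for demonstration (it uses numpy).

```python
import numpy as np

def detect_keystrokes(samples, rate, frame_ms=10, threshold=0.1):
    """Locate keystroke onsets in an audio signal via short-time energy.

    Toy sketch: a real attack would isolate keystrokes far more robustly
    before extracting waveform, intensity, and timing features.
    """
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    energy = np.array([np.sum(samples[i*frame:(i+1)*frame] ** 2)
                       for i in range(n)])
    active = energy > threshold * energy.max()
    # Rising edges of the "active" mask mark keystroke onsets.
    onsets = np.flatnonzero(active[1:] & ~active[:-1]) + 1
    return onsets * frame / rate  # onset times in seconds

# Synthetic one-second recording with three "keystrokes" as noise bursts.
rate = 16_000
signal = np.zeros(rate)
rng = np.random.default_rng(0)
for t in (0.1, 0.45, 0.8):
    i = int(t * rate)
    signal[i:i + 400] += rng.normal(0, 1, 400)

onsets = detect_keystrokes(signal, rate)
intervals = np.diff(onsets)  # inter-key timing, one cue such a model can learn
print(len(onsets), np.round(intervals, 2))  # → 3 [0.35 0.35]
```

The inter-key intervals are exactly the kind of typing-style idiosyncrasy, such as a slight delay before certain keys, that the article says was factored into the model.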

In a real-world scenario, this type of attack would likely manifest as malware installed on a phone or another nearby device equipped with a microphone. The malware would intercept microphone input, capture keystroke audio, and feed it to an AI model. The researchers employed CoAtNet, an AI image classifier, for their attack, training it on 25 recordings of each of 36 different keys pressed on a MacBook Pro.
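Because CoAtNet is an image classifier, each keystroke recording must first be turned into a picture. A minimal sketch of that step, assuming a plain STFT magnitude spectrogram stands in for the mel-spectrograms the researchers actually used (the function, frame sizes, and random stand-in clip are illustrative assumptions; it uses numpy):

```python
import numpy as np

def spectrogram(samples, frame=256, hop=128):
    """Turn a keystroke audio clip into a 2-D time-frequency 'image'.

    Illustrative only: the published attack feeds mel-spectrograms of
    isolated keystrokes to CoAtNet; this plain STFT stands in for that.
    """
    window = np.hanning(frame)
    frames = [samples[i:i + frame] * window
              for i in range(0, len(samples) - frame + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time steps)

# Stand-in for one keystroke recording; the training set described in
# the article would hold 36 x 25 such clips, each converted like this.
rng = np.random.default_rng(1)
clip = rng.normal(0, 1, 4096)
img = spectrogram(clip)
print(img.shape)  # → (129, 31): one "image" per key press for the classifier
```

Each such image, labelled with the key that produced it, becomes one training example for the classifier.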

However, acquiring a new keyboard won’t be a remedy. The attack doesn’t depend on the acoustic profile of any particular keyboard, so even quieter models remain susceptible.

Regrettably, this development adds to a series of fresh attack avenues made possible by AI tools, including ChatGPT. Last month, the FBI issued a warning about the perils associated with ChatGPT and its exploitation in criminal campaigns. Security researchers are also grappling with emerging challenges, such as adaptive malware capable of rapid transformation through tools like ChatGPT.