AI ‘looks like Barbie but may be Oppenheimer’ warns expert


Artificial intelligence (AI) poses “multifaceted” risks if used by those with hostile intentions – including state actors such as the “troll farm” bankrolled by Wagner Group founder Yevgeny Prigozhin, a cyber-technology expert has warned.

Suid Adeyanju, CEO of cybersecurity specialists RiverSafe, was speaking after the publication earlier this week of an alarming University of Surrey study indicating that AI can identify passwords with better-than-90-percent accuracy from the sound of keystrokes.

The study describes how researchers pressed 36 keys on a MacBook Pro, covering all of the letters and numbers, 25 times each, using different fingers and varying pressure.

The sounds were recorded both over a Zoom call and on a smartphone placed a short distance from the keyboard.

When trained on keystrokes recorded by a nearby phone, the classifier achieved 95 percent accuracy, the highest seen without the use of a language model.
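
To give a sense of how such an attack is built, below is a minimal sketch of an acoustic keystroke classifier. It is illustrative only, not the researchers' code: the MFCC features, the support vector machine and the assumed folder layout of labelled keystroke recordings are all simplifications chosen for the example.

```python
# Minimal sketch of an acoustic keystroke classifier, for illustration only.
# Assumption: one isolated keystroke per WAV file, organised as
# data/<key-label>/<clip>.wav (this layout is invented for the example).
import glob
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(wav_path: str) -> np.ndarray:
    """Load one keystroke recording and summarise it as averaged MFCCs."""
    audio, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time -> fixed-length vector

X, y = [], []
for path in glob.glob("data/*/*.wav"):
    X.append(extract_features(path))
    y.append(path.split("/")[-2])  # folder name is the key label

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, stratify=y, random_state=0
)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.1%}")
```

The published attack reportedly relied on a deep learning model trained on spectrogram images of each keystroke rather than a simple classifier like this; even the toy pipeline, though, shows how little is needed beyond a microphone and a set of labelled samples.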


Mr Adeyanju said: “This study should serve as a wake-up call about the true risks posed by artificial intelligence when the technology is hijacked by cyber criminals.

“Far too many organisations are rushing to adopt the technology without conducting even the most basic due diligence tests and in total disregard for standard security protocols.

“Over-enthusiastic executives should take note that AI may look like Barbie, but it could turn out to be Oppenheimer if the necessary cyber protections and regulatory procedures aren’t in place.”

He told Express.co.uk: “The actual risks of technology like AI are multifaceted, ranging from cyberattacks and security and privacy concerns to ethical dilemmas, economic inequality, and legal and regulatory challenges.”

From a cybersecurity perspective, AI-enabled cyberattacks were a pressing threat for businesses, with RiverSafe research indicating that 80 percent of chief information security officers (CISOs) believed it to be their biggest threat.

They also predicted that AI will outpace cyber defences, prompting businesses to take urgent action to bolster their security posture.

Mr Adeyanju continued: “AI can be misused by various threat actors, including cybercriminals and malicious individuals.

“One example is the use of AI to carry out phishing attacks. Phishing emails created by a threat actor unfamiliar with a language have tended to be easy to spot due to poor grammar, incorrect vocabulary and bad spelling.


“Such glaring errors were easily picked up by reasonably careful people as well as by automated defences.

“But with AI it is very likely that a phishing email will look genuine, leading more potential victims to click on malicious links and making such attacks highly effective and dangerous.

“The above example highlights the need for robust security measures and ethical considerations in the development and deployment of AI technologies.”

Asked about the potential for Prigozhin’s Internet Research Agency to make use of the rapidly developing technology for nefarious purposes, Mr Adeyanju added: “State actors are increasingly leveraging AI in their cybersecurity efforts.

“AI can be used by state-sponsored hackers to carry out sophisticated cyberattacks, including targeted phishing campaigns, data breaches, virtual kidnapping and malware design.

“State actors can also use AI to gather intelligence, analyse vulnerabilities in software and systems, and stay one step ahead of cybersecurity defences.”

Additionally, AI could be used to manipulate public opinion and spread false narratives, enabling state actors to influence political landscapes and even to destabilise governments, Mr Adeyanju pointed out.

He concluded: “The use of AI by state actors in cybersecurity poses significant challenges for defending against advanced and persistent threats.”
