"This is your CEO calling. Can you pay this invoice for me?"

Daphne Frik
20 December 2023 | 2-minute read

Risks & opportunities in generative AI

In many companies, artificial intelligence is gradually becoming ingrained in work processes. Generative AI tools such as ChatGPT make it easy to automate time-consuming tasks such as summarizing documents, creating marketing content, and handling customer service. Yet AI evolves at high speed, and the business sector needs to wake up: according to Forbes, generative AI tops the list of the biggest cybersecurity trends everyone must prepare for in 2024. Cybercriminals are increasingly incorporating AI into their attacks, ranging from deepfake social engineering attempts to automated malware.

Is my CEO calling me?

Deepfake technology uses machine learning to manipulate or generate convincing fake visual and audio content: images, sounds, and videos of people who do not exist or events that never happened. Deepfakes have been used not only to spread fake videos of politicians or to create nonconsensual fake pornography, but also to run scams.

Instead of sending the well-known “Hi employee, this is your CEO, I need help ASAP” email, cybercriminals can now use an AI-generated clone of a CEO’s voice to call employees and ask for the company’s credit card details, IBANs, or other sensitive data.

Automated malware in the wild

A second AI-driven risk is automated malware. Generative AI has put advanced programming techniques within reach of a wider audience, making it easier for cybercriminals to develop malware that can bypass endpoint detection and response (EDR) tools and other security measures.

Earlier this year, researchers from HYAS developed an AI-generated proof-of-concept malware, called BlackMamba, that was able to evade EDR security. It did so by using a large language model (LLM) to synthesize polymorphic keylogger functionality on the fly, dynamically modifying its benign code at runtime.

As BlackMamba is a research project tested only as a proof of concept, it poses no threat in the wild. Yet, the researchers noted, it is crucial that organizations remain vigilant against this new breed of malware.

AI Act

The fast pace of AI development has prompted calls for a collaborative approach. While the US, UK, and China are working on their own guidelines, European Parliament and Council negotiators reached a provisional agreement on the EU Artificial Intelligence Act on December 9, 2023. The regulation aims to “ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field.”

The proposals include safeguards on general-purpose AI, limits on the use of biometric identification systems by law enforcement, and bans on social scoring and on AI used to manipulate or exploit user vulnerabilities. The European Parliament will vote on the AI Act proposals in early 2024.

Reshaping a 135-billion-dollar industry

In 2024, AI will keep developing and reshaping the cybersecurity industry. The global market for AI-based cybersecurity products, worth about 15 billion US dollars in 2021, is projected to surge to roughly 135 billion by 2030, according to a research report from Acumen.

Fortunately, AI is advancing just as fast on the defensive side of cybersecurity. With AI, organizations can detect anomalies faster, automate incident response, and set up smart authentication. “If cyber attack and defense in 2024 is a game of chess, then AI is the queen – with the ability to create powerful strategic advantages for whoever plays it best,” Forbes concludes.
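
To make that anomaly-detection claim concrete, here is a minimal, hypothetical Python sketch using scikit-learn’s Isolation Forest. The login features, values, and thresholds are invented for illustration; they are not drawn from Forbes, Acumen, or any product mentioned in this article.

```python
# Minimal sketch: flagging anomalous login events with an Isolation Forest.
# All feature names and values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical features per login event:
# [hour_of_day, failed_attempts, megabytes_downloaded]
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),   # mostly office hours
    rng.poisson(0.2, 500),    # rarely a failed attempt
    rng.normal(50, 15, 500),  # typical download volume
])

# Train on historical, mostly benign traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login with many failed attempts and a large download.
suspicious = np.array([[3, 12, 900]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```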

In other words, AI gives companies powerful tools to stay ahead of cybercriminals – as long as they play the game well.