Microsoft Reveals North Korea and Iran Employing AI in Hacking Activities
According to Microsoft, US adversaries including Iran, North Korea, Russia, and China have begun using generative artificial intelligence to mount offensive cyber operations.
Microsoft announced that, working in partnership with OpenAI, it identified and disrupted numerous threats that sought to misuse their AI technology.
In a blog post, the company said the techniques it observed were in their early stages and not particularly novel, but emphasized the importance of exposing them publicly, given the potential for US rivals to use large language models to breach networks and conduct influence operations.
Cybersecurity firms have long used machine learning for defense, primarily to detect anomalous activity in networks. But criminal groups and offensive hackers use it as well.
The emergence of advanced language models, such as OpenAI's ChatGPT, has intensified that cat-and-mouse contest between attackers and defenders.
Microsoft, which has made a significant financial investment in OpenAI, recently released a report highlighting the potential impact of generative AI on malicious social engineering.
The report suggests the technology could lead to more sophisticated deepfakes and voice cloning. That is a particular concern in a year when more than 50 countries will hold elections, magnifying disinformation that is already widespread.
Microsoft Deactivates AI Accounts Linked to Cyber-Espionage Groups; GPT-4 Limited for Malicious Tasks
Microsoft offered several examples, stating that in every case all generative AI accounts and assets belonging to the named groups had been deactivated.
Kimsuky, a North Korean cyber-espionage group, used the models to research foreign think tanks that study the country and to generate content tailored to spear-phishing campaigns.
Iran's Revolutionary Guard used large language models for social engineering, troubleshooting software errors, and studying how intruders might evade detection in a compromised network.
That included crafting phishing emails, one posing as a message from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism. AI accelerates and scales up such email production.
Fancy Bear, the Russian GRU military intelligence unit, used the models to research satellite and radar technologies potentially relevant to the war in Ukraine.
The Chinese cyber-espionage group Aquatic Panda, which targets a broad range of industries, higher education institutions, and governments from France to Malaysia, interacted with the models in ways suggesting it is exploring how LLMs could augment its technical operations.
Maverick Panda, a Chinese group active for more than a decade, engaged with large language models to evaluate their usefulness for sourcing information on topics of interest, including sensitive matters, high-profile individuals, regional geopolitics, US influence, and internal affairs.
In its own blog post, OpenAI said its latest GPT-4 model chatbot offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what existing non-AI powered tools already provide.