February 22, 2024:
Hackers working in China, Russia, Iran, North Korea, and several other nations are using OpenAI's systems, and Microsoft and OpenAI believe they have used AI mostly to help with routine tasks.

Some of these hackers have ties to foreign governments and are using generative artificial intelligence in their cyberattacks. Instead of using AI to generate exotic attacks, as some in the tech industry feared, they have used it in mundane ways, like drafting emails, translating documents, and debugging computer code, the companies said. In short, the attackers use AI to be more productive.
Microsoft, which has committed $13 billion to OpenAI, is a close partner of the start-up. For example, the two companies shared threat information to document how five hacking groups with ties to China, Russia, North Korea, and Iran used OpenAI's technology, though they did not say which OpenAI technology was involved. OpenAI said it had shut down the groups' access after learning of the misuse.
Since OpenAI released ChatGPT in 2022, there have been concerns that hackers might weaponize such powerful tools to discover new and creative ways to exploit vulnerabilities. Like any general-purpose technology, AI can be put to illegal and disruptive uses.
OpenAI requires customers to sign up for accounts, but some users evade detection through techniques such as masking their location, which lets them put the technology to illegal or harmful uses. For example, a hacking group connected to Iran's Islamic Revolutionary Guards Corps (IRGC) used OpenAI's systems to research ways to avoid antivirus scanners and to generate phishing emails. One of the phishing emails pretended to come from an international development agency; another attempted to lure prominent feminists to an attacker-built website about feminism. In another case, a Russian-affiliated group tried to influence the war in Ukraine by using OpenAI's systems to research satellite communication protocols and radar imaging technology. Russia has long operated a large propaganda apparatus to attack and weaken its adversaries; AI is now one more tool at its disposal.
Microsoft tracks more than 300 hacking organizations, including independent cybercriminals as well as operations carried out by nation-states. Because OpenAI's systems are proprietary, the executives said, it was easier to track and disrupt the hackers' use of them. There are ways to identify whether hackers are using open-source AI technology, they added, but the proliferation of open systems makes that task harder.
When the work is open source, you cannot know who is using the technology or whether they have policies for its responsible use. Notably, Microsoft did not uncover any use of generative AI in a recent Russian hack of top Microsoft executives.
AI has been used increasingly in combat over the last decade, and as the technology improves it is used more effectively and more often. For example, a Ukrainian firm developed an AI system that can determine, with great accuracy, whether a group of soldiers in the distance is Ukrainian or Russian. This reduces instances of friendly fire, in which troops accidentally fire on their own side and cause casualties, an unfortunate aspect of modern war that occurs regularly but that few want to discuss. AI-assisted targeting makes such incidents less likely.
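The report does not describe how the Ukrainian system works, but a friend-or-foe identifier of this kind is typically a binary image classifier. Below is a minimal sketch in Python using PyTorch and torchvision, assuming a hypothetical fine-tuned ResNet-18; the model weights, file names, and class labels are all illustrative, not the firm's actual system:

```python
# Hypothetical friend-or-foe image classifier sketch (illustrative only).
# Assumes PyTorch/torchvision; weights and labels below are invented.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

CLASSES = ["ukrainian", "russian"]  # illustrative labels

# Standard ImageNet-style preprocessing for a ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_model(weights_path: str) -> torch.nn.Module:
    """Load a ResNet-18 with a 2-way head; weights_path is hypothetical."""
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def classify(model: torch.nn.Module, image_path: str) -> tuple[str, float]:
    """Return (label, confidence) for a single image of a soldier group."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return CLASSES[idx.item()], conf.item()

if __name__ == "__main__":
    model = load_model("friend_or_foe_resnet18.pt")   # invented filename
    label, conf = classify(model, "field_photo.jpg")  # invented filename
    # A real system would require a high confidence threshold before any
    # targeting decision, precisely to avoid friendly-fire errors.
    print(f"{label} ({conf:.1%} confidence)")
```

The key design point is the confidence output: rather than acting on the raw label, a deployed system would only trust classifications above a strict threshold, which is how such a tool reduces friendly-fire incidents instead of adding new ones.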