Amber Hickman |
In its latest Cyber Signals report, Microsoft has collaborated with OpenAI to share insights into how state-affiliated cybercriminals are using large language models and artificial intelligence to carry out their attack operations.
The firms are tracking a range of adversaries from nations such as Russia, North Korea and China to learn how they are using the technology for tasks such as researching victims’ industries, locations and relationships, as well as writing software scripts.
Microsoft has also shared the new principles it has put in place to protect users adopting its AI solutions. Under these principles, Microsoft will notify other AI service providers when it identifies a cybercriminal using their AI solutions, collaborate with stakeholders and be fully transparent about its findings and actions.
The report also explored how AI can help users improve their security, with a recent Microsoft survey finding that security analysts using Copilot for Security were 46 per cent more accurate and completed tasks 26 per cent faster.
“The world of cybersecurity is undergoing a massive transformation,” said Vasu Jakkal, corporate vice president of security, compliance, identity and management at Microsoft, in a blog post on the firm’s website. “AI is at the forefront of this change and has the potential to empower organisations to defeat cyberattacks at machine speed, address the cyber talent shortage and drive innovation and efficiency in cybersecurity. However, adversaries can use AI as part of their exploits, and it’s never been more critical for us to both secure our world using AI and secure AI for our world.”
Read the full Cyber Signals report on the Microsoft website.