COVER STORY

“largest cybersecurity engineering project in history” to address high-priority security tasks. The initiative is based on six pillars: protect identities and secrets; protect tenants and isolate systems; protect networks; protect engineering systems; monitor and detect cyberthreats; and accelerate response and remediation.

To lead by example, Microsoft is strengthening its own defences. As part of the ‘protect tenants and isolate systems’ pillar, Microsoft carried out a full inventory of its own environment in a process Chik refers to as a “thorough spring cleaning”, subsequently deleting 730,000 unused applications and removing 5.75 million inactive tenants. Microsoft also implemented multifactor authentication (MFA) to encourage secure-by-default practices with customers, a key principle of the initiative.

In a 2023 research paper, How effective is multifactor authentication at deterring cyberattacks?, Microsoft reveals that over 99 per cent of MFA-enabled accounts remained secure during its investigation. It also found that MFA reduces the risk of compromise by 99 per cent. “Based on these results, we strongly advocate for the default implementation of MFA in commercial accounts to increase security and mitigate unauthorised access risks,” writes Microsoft’s research team.

Microsoft has required MFA for any user signing into the Azure portal and the Microsoft Entra and Intune admin centres since October 2024, and enforced the mandate for the Microsoft 365 admin centre in February 2025. Phase two will roll out later in 2025, extending MFA enforcement to Azure CLI, Azure PowerShell, the Azure mobile app and infrastructure-as-code tools, with recommendations to migrate user-based service accounts to workload identities.

Safeguarding AI initiatives

In 2024 alone, generative AI usage jumped from 55 to 75 per cent among business leaders and AI decision-makers, according to IDC’s 2024 AI opportunity study.
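Why does MFA blunt over 99 per cent of account compromises? Because a stolen password alone is no longer enough: the attacker must also produce a short-lived code derived from a secret that never leaves the user’s device. As a minimal illustration (not Microsoft’s implementation, just the standard RFC 6238 time-based one-time password scheme that most authenticator apps use), the second factor can be sketched in a few lines of Python:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32-encoded shared secret provisioned
    when the user enrols an authenticator app.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the current 30-second time window.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def verify(secret_b32, submitted, at_time):
    """Server-side check: recompute the code, compare in constant time."""
    return hmac.compare_digest(totp(secret_b32, at_time), submitted)
```

Even if a phishing page captures the password, the six-digit code expires within 30 seconds and cannot be reused, which is what makes compromise so much harder for MFA-enabled accounts.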
As AI adoption accelerates, many organisations have begun using AI tools to answer customer service questions, automate repetitive or mundane tasks, speed up product development and more. “Over 95 per cent of organisations are implementing or developing an AI strategy, which necessitates the need for accompanying data protection and governance strategies,” says Vasu Jakkal, corporate vice president of Microsoft Security.

For example, global telecommunications provider Vodafone is using a virtual assistant called TOBi, powered by Microsoft Azure and Copilot, to handle its large number of customer enquiries. Lloyds Banking Group has developed an application with Microsoft Power Apps and Azure AI Services that lets its customers communicate with employees in their preferred language.

However, if unmonitored, these tools can leave organisations vulnerable to malicious prompt attacks, in which cybercriminals trick AI models into ignoring system rules. “There are two types of prompt attacks,” writes Vanessa Ho in the Microsoft blog post on ‘Safeguarding AI against jailbreaks’. “One is a direct prompt attack known as a jailbreak, like if the customer service tool generates offensive content at someone’s coaxing, for example. The second is an indirect prompt attack, say if the email assistant follows a hidden, malicious prompt to reveal confidential data.

“To help protect against jailbreaks and indirect attacks, Microsoft has developed a comprehensive approach that helps AI developers detect, measure and manage the risk. It includes Prompt Shields, a fine-tuned model for detecting and blocking malicious prompts in real time, and safety evaluations for simulating adversarial prompts and measuring an application’s susceptibility to them.”

Brad Smith presented testimony on behalf of Microsoft before the US House Homeland Security Committee in June 2024
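The two attack types above call for two different defences: screening what the user types (direct jailbreaks) and quarantining what the model reads (indirect injection hidden in emails or documents). The sketch below is purely illustrative; the pattern list and function names are my own, and production systems such as Prompt Shields use fine-tuned classifiers rather than keyword rules:

```python
import re

# Hypothetical examples of phrasing common in direct jailbreak attempts.
# A real detector is a trained model, not a regex list.
JAILBREAK_PATTERNS = [
    r"ignore\s+(all\s+|your\s+|previous\s+)*(instructions|rules)",
    r"disregard\s+the\s+system\s+prompt",
    r"pretend\s+you\s+have\s+no\s+restrictions",
]


def screen_user_prompt(prompt):
    """Flag a direct prompt attack: the user tries to override system rules."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in JAILBREAK_PATTERNS)


def quarantine_document(doc):
    """Mitigate indirect attacks: mark retrieved text as untrusted data
    so the system prompt can instruct the model to treat it as content
    to summarise, never as instructions to follow."""
    return "<untrusted_document>\n" + doc + "\n</untrusted_document>"
```

The design point is the separation: user input is checked before it reaches the model, while third-party content (the hidden prompt in an email, say) is delimited so the model’s instructions and its data are never mixed in the same channel.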