Microsoft’s AI Assurance Program, launched in June 2023, is designed to help customers ensure that the AI applications they deploy on Microsoft platforms meet legal and regulatory requirements for responsible AI. Antony Cook, vice president and deputy general counsel at Microsoft, is leading this initiative.
What are Microsoft’s core commitments to customers when it comes to the responsible use of artificial intelligence?
AI is creating unparalleled opportunities for businesses of all sizes and across every industry. At the same time, there are legitimate concerns about the power of this technology and its potential to be used to cause harm rather than provide benefits. We’ve been contemplating these issues for several years and realised that we could share our learnings with our customers around the world to accelerate their AI journeys. That’s why, in June, Microsoft announced three AI Customer Commitments to help customers create their own responsible AI programmes.
First, Microsoft will share what we are learning about developing and deploying AI responsibly and will assist our customers in learning how to do the same. That means sharing key documents we develop, such as our Responsible AI Standard, AI Impact Assessment Guide, Transparency Notes and more.
Second, Microsoft is creating an AI Assurance Program to ensure that the AI applications our customers deploy on our platforms meet the legal and regulatory requirements for responsible AI.
Third, Microsoft is supporting our customers as they implement their own AI systems responsibly. This includes creating a dedicated team of AI legal and regulatory experts worldwide to support our customers and launching a programme with selected partners – like PwC and EY – to assist our mutual customers in deploying their responsible AI systems.
And finally, we announced our Copilot Copyright Commitment in September 2023. This new commitment extends our existing intellectual property indemnity support to commercial Copilot services. Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.
We are still in the early days of this initiative, but the response so far has been positive. Customers want to implement responsible AI quickly to remain competitive in their industries and to accelerate progress on their key projects. Having our tools and learnings, along with the assurance provided by our Copilot Copyright Commitment, should help them do that even more efficiently.
Is there a role for the Microsoft partner ecosystem in delivering reliable and trustworthy AI solutions?
As an increasing number of governments and entities seek to leverage generative AI technology to enhance their services and offerings, it will be critical for organisations across the private, public and academic sectors to have access to assistance in deploying reliable, trustworthy AI solutions.
Several of our partners have already launched consulting practices to help customers evaluate, test, adopt and commercialise AI solutions, including creating their own responsible AI systems. We believe it’s critical for these capabilities to be developed and deployed at scale. Therefore, we hope that more partners worldwide will develop responsible AI consulting practices to help businesses of all sizes implement programmes for the responsible use of AI.
How do you believe Microsoft’s commitments and actions will empower customers to define robust strategies for AI adoption?
Our AI customer commitments will provide our customers with a starting point for their responsible AI implementation. We have created tools and resources that we have published on our Responsible AI website, which they can leverage immediately, starting at the earliest stages of their engineering planning and extending to the launch of AI technology. We began our own AI journey in 2017 and have incorporated our learnings into improvements in our processes. Therefore, our customers can benefit from our progress immediately.
Furthermore, our new AI Assurance Program will help customers ensure that the AI applications they create and deploy on our platforms meet the legal and regulatory requirements for responsible AI. The programme includes four specific elements relating to security, compliance and legal constraints, the first of which is regulator engagement support. We have extensive experience helping customers in the public sector and in highly regulated industries manage the spectrum of regulatory issues that arise from the use of information technology, and we plan to expand our assistance offerings to cover regulations related to AI.

For instance, in the global financial services industry, we worked closely for several years with both customers and regulators to ensure that digital transformation in the cloud could proceed in compliance with regulatory financial obligations. We want to apply the lessons from this work to regulatory engagement concerning AI. For example, financial services regulations require financial institutions to verify customer identities, establish risk profiles and monitor transactions to help detect suspicious activity – the so-called 'know your customer' requirements. We believe this approach can be applied to AI in what we are calling 'KY3C', an approach that imposes certain obligations to know one's cloud, customers and content. Microsoft will collaborate with customers to apply KY3C as part of our AI Assurance Program.
In addition, we will attest to how we are implementing the AI Risk Management Framework recently published by the National Institute of Standards and Technology (NIST), and we will share our experience of engaging with NIST's important ongoing work in this area. We will also convene customer councils to gather customers' perspectives on how we can deliver the most relevant and compliant AI technology and tools, and we will engage with governments to promote effective and interoperable AI regulation.
Looking ahead, how do you envision the landscape evolving over the coming years? How quickly do you anticipate the governance and regulation landscape will evolve and what is Microsoft’s strategy for adapting accordingly?
Brad Smith’s May 2023 report, Governing AI: A Blueprint for the Future, presents our proposals to governments and other stakeholders for appropriate regulatory frameworks for AI. We see a number of governments considering the best approaches to regulate AI use. In the USA, the White House issued a voluntary code of conduct in July 2023, focusing on safety and security in connection with AI. Other governments are also examining the required governance for AI. Governments play a pivotal role in shaping the path forward for AI and Microsoft welcomes the opportunity to engage with them to advance innovative AI that benefits society as a whole. We believe that industry, academia, civil society and government must collaborate to advance the state of the art and learn from one another.
Microsoft formed the Office of Responsible AI (ORA) to help put Microsoft’s AI principles into practice internally and to respond to evolving governance and regulatory policies. ORA’s public policy efforts involve helping to shape the new laws, norms and standards necessary to ensure that the potential of AI technology is realised for the benefit of society at large. We have gained valuable insights as we implemented our responsible AI practices and hope to help others, including governments and regulators, as they build their regulatory frameworks for the responsible use of AI.
To end our first conversation on an optimistic note, can you share some examples of responsible AI that demonstrate its potential to do good?
AI is a defining technology of our time, and we are optimistic about what it can and will do for people, industry and society. Microsoft believes AI can unlock solutions to the biggest challenges we face and has invested for many years in its AI for Good initiative, which includes efforts around sustainability, accessibility and healthcare. Through its AI for Good Lab, Microsoft brings together a team of world-class data scientists in Africa, South America and North America who apply AI to societal issues including humanitarian response, food security and conservation. For example, in partnership with Planet Labs, we have combined its high-quality satellite imagery with our AI technology to create Global Renewables Watch, which maps utility-scale solar and wind installations to evaluate the progress of every country's transition to clean energy and to track trends over time. In the war in Ukraine, the Lab has used AI to identify damage to critical systems and infrastructure in support of the International Criminal Court. In other projects, we have collaborated with world-renowned experts and non-profits such as SEEDS and the American Red Cross to predict cyclone damage and heat wave impacts in India, and to identify building damage following natural disasters such as the earthquakes in Turkey and Syria.
This article was originally published in the Autumn 2023 issue of Technology Record. To get future issues delivered directly to your inbox, sign up for a free subscription.