Safeguarding AI: The path to trustworthy technology

The pace of technology adoption is accelerating. Whereas users once took years to broadly adopt new technologies, now they’re jumping on new trends in a matter of months.

Take the evolution of phones, the internet, and social media, for example. It took 16 years for smartphones to be adopted by 100 million users and 7 years for the internet. However, Instagram caught on in just 2.5 years, and TikTok blew all of those numbers out of the water when it reached 100 million users in as little as 9 months. If you thought that was fast, wait until you hear about AI.

Generative AI is poised to be one of the most transformative technologies of our time. Compared to the technologies above, AI has taken headlines and everyday consumers by storm, with ChatGPT reaching the 100-million-user mark in just 2 months.

However, that rapid pace of adoption also highlights the importance of the secure adoption and development of AI, so that it doesn’t become a widespread vulnerability for corporations, consumers, and public entities alike. Read on for insights on how to adopt AI more responsibly and how you can leverage its advances in your organization.

What’s driving the rapid adoption of generative AI?

Generative AI marks an inflection point in our technology landscape, with core user benefits that make it more accessible and more useful for the everyday consumer. Just think about generative AI compared to legacy AI applications.

Traditional AI might be commonplace today, but it’s hidden deep inside technology in the form of tools like voice assistants, recommendation engines, social media algorithms, and more. These AI solutions have been trained to follow specific rules, do a particular job, and do it well, but they don’t create anything new.

By contrast, generative AI marks the next generation of artificial intelligence. It uses inputs such as natural language, photos, or text to create entirely new content. This makes generative AI highly customizable, with the ability to augment human skills, offload routine tasks, and help people derive more value from their time and energy.
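
To make that inputs-to-new-content loop concrete, here is a minimal sketch of a prompt-to-text call, assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name and prompt are illustrative placeholders, and any comparable generative AI service follows the same basic shape.

```python
# Minimal prompt-to-content sketch, assuming the OpenAI Python SDK (openai>=1.0)
# with OPENAI_API_KEY set in the environment. Model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a two-sentence status update on our security review."},
    ],
)

# Unlike a rules-based system, the reply is newly generated text, not a lookup.
print(response.choices[0].message.content)
```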

That said, it is also important to understand what generative AI is not. It is not a replacement for humans: it makes mistakes, it requires oversight, and it needs ongoing monitoring. Better still, it has the power to enable a more diverse talent pool in the cybersecurity industry, as it supports the work of security professionals and operations. As a collective cybersecurity community, we also need to ensure that it’s part of a secure, healthy ecosystem of technology and technology users.

The core components of responsible AI

One of the top concerns around generative AI today is its security. While data loss, privacy, and the threat of attackers are part of that fear, many potential adopters are also wary of the misuse of AI and of unwanted AI behaviors.

Generative AI may have only entered broader public consciousness in early 2023, but at Microsoft, our AI journey has been more than 10 years in the making. We outlined our first responsible AI framework in June 2016 and created an Office of Responsible AI in 2019. These milestones and others have given us deep insight into best practices for securing AI.

At Microsoft, we believe that the development and deployment of AI must be guided by the creation of an ethical framework. This framework should include core components like:

  1. Fairness – AI systems should treat all people fairly and allocate opportunities, resources, and information equitably to the humans who use them.
  2. Reliability & safety – AI systems should perform reliably and safely for people across different use conditions and contexts, including ones they were not originally intended for.
  3. Privacy & security – AI systems should be secure by design with intentional safeguards that respect privacy.
  4. Inclusiveness – AI systems should empower everyone and engage people of all abilities.
  5. Transparency – AI systems should be understandable and take into account the ways that people might misunderstand, misuse, or incorrectly estimate their capabilities.
  6. Accountability – People should be accountable for AI systems, with deliberate oversight guidelines that ensure human beings remain in control (see the sketch after this list).
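
As one concrete illustration of the accountability component, the sketch below shows a common human-in-the-loop pattern: AI output below a confidence threshold is never used until a person approves it. The function names and the 0.8 threshold are hypothetical assumptions for illustration, not a specific Microsoft implementation.

```python
# Hypothetical human-in-the-loop gate illustrating the accountability principle.
# `generate_draft` stands in for any generative AI call; the 0.8 threshold is
# an illustrative assumption, not a Microsoft-specified value.
from dataclasses import dataclass


@dataclass
class AIResult:
    content: str
    confidence: float  # model-reported or heuristic score in [0, 1]


def generate_draft(prompt: str) -> AIResult:
    # Stand-in for a call to a generative AI service.
    return AIResult(content=f"Draft response for: {prompt}", confidence=0.62)


def human_review(result: AIResult) -> bool:
    # In a production system this would route to a reviewer queue;
    # here we simply ask on the console.
    print(f"AI draft (confidence {result.confidence:.2f}):\n{result.content}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def respond(prompt: str) -> str | None:
    result = generate_draft(prompt)
    # Low-confidence output always escalates to a person, so a human
    # remains accountable for anything that ships.
    if result.confidence < 0.8 and not human_review(result):
        return None  # rejected: nothing is sent automatically
    return result.content
```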

Innovation supporting the depth and breadth of security professionals

At the recent Microsoft Ignite event, pivotal advancements in cybersecurity were unveiled, reshaping the landscape of digital security. One of the main developments, the recently launched Microsoft Security Copilot, stands as a testament to this evolution. This generative AI solution is engineered to decisively shift the balance in favor of cyber defenders. Built on an enormous data repository, encompassing 65 trillion daily signals and insights from monitoring more than 300 cyberthreat groups, it is designed to enhance the capabilities of security teams, providing them with a deeper, more comprehensive understanding of the cyberthreat landscape. The aim is clear: to empower these teams with superior analytical and predictive powers, enabling them to stay one step ahead of cybercriminals.

Further cementing Microsoft’s commitment to revolutionizing cybersecurity, the launch of the industry’s first AI-powered unified security operations platform marked another highlight of the event. Additionally, the expansion of Security Copilot across various Microsoft Security services, including Microsoft Purview, Microsoft Entra, and Microsoft Intune, signifies a strategic move to empower security and IT teams, enabling them to tackle cyberthreats with unprecedented speed and precision. These innovations, showcased at Microsoft Ignite, are not just upgrades; they are transformative steps toward a more secure digital future.

Want to learn more about secure AI and other emerging trends in cybersecurity? Check out Microsoft Security Insider for the latest insights and explore this year’s Microsoft Ignite sessions on demand.
