Wow, what a week at Ignite! Microsoft made over 100 announcements in just three days, and I know I’m not the only attendee who’s taken some time to absorb them and think about how they can help me as a security professional.
As a reminder of the main messages from the conference, you can read Quorum Cyber’s Ignite overview blog on our website.
Many of Microsoft’s updates involve artificial intelligence (AI), something I’m passionate about exploring and applying in the services I develop and manage for our customers.
Microsoft Ignite 2023 threw a spotlight on how we are entering an AI-first world, where the combination of pioneering AI technologies like Copilot with state-of-the-art computing capabilities, notably the Maia and Cobalt chips, is not merely redefining the digital landscape but revolutionising it. There was a pronounced emphasis on sustainability in AI operations and on how Microsoft plans to tackle it. The company is very conscious of the huge energy consumption of these chips and is committed to reducing it, along with the associated carbon footprint. The necessity of balancing formidable AI functionality with environmental mindfulness was underscored not only in the keynote but also through an exceptionally large stand dedicated to the topic.
As we harness the transformative potential of AI, the fusion of advanced AI applications, sustainable computing practices, and robust data security measures becomes paramount. This synergy is critical in steering businesses towards a future where innovation is intricately woven with responsible resource utilisation and secure, trusted AI deployment. The announcements at Ignite signal a revolution in productivity for everyone, much like the shift from the typewriter to the word processor.
While engaging with customers on the Microsoft Intelligent Security Association (MISA) stand about Security Copilot, it became increasingly clear that this productivity tool is garnering substantial interest: so many people came up to me to learn more! However, it’s both intriguing and concerning to note that some individuals I spoke with are not currently using any form of generative AI within their security operations centres. Given the potential advantages these AI tools offer, this gap in adoption highlights a significant area for growth and an opportunity to enhance security measures in a rapidly evolving digital environment.
A once-in-a-lifetime opportunity
In the AI-first world we are navigating, organisations are presented with a unique opportunity to utilise AI for transformative business growth and rapid innovation. AI’s prowess in utilising data and insights, addressing complex challenges, and enhancing human capabilities is accelerating its adoption. Microsoft’s research indicates that 97% of organisations are either implementing, developing, or planning an AI strategy. However, AI also brings forth substantial challenges in data security, compliance, and privacy. If these issues are not properly managed, they could impede the adoption of AI. In fact, some organisations, wary of data security concerns associated with AI, have temporarily halted or completely banned the use of AI.
Empowering organisations to achieve more
In recognition of these challenges, Microsoft is advancing its solutions to empower organisations to confidently embrace AI while maintaining data security and compliance:
Role of Microsoft Purview: Serving as a leading solution, Microsoft Purview enables organisations to effectively govern, protect, and manage their entire data landscape. In conjunction with Microsoft Defender, it provides strong protection for both data and security operations.
New features in Microsoft Purview and Microsoft Defender: Microsoft has introduced new functionality in Microsoft Purview and Microsoft Defender, tailored to secure data and applications as organisations adopt generative AI. Microsoft remains committed to the protection and governance of data, regardless of its location or movement.
Protection Across Generative AI Applications: These new features extend to a wide range of generative AI applications, encompassing Microsoft Copilots, bespoke AI applications developed by organisations, and consumer AI apps like ChatGPT, Bard, Bing Chat, and others. Key features offered by Microsoft Purview and Microsoft Defender include:
- Comprehensive Visibility: Insights into the use of generative AI applications, sensitive data within AI prompts, and the extent of user interactions with AI.
- Extensive Protection: The capability to block high-risk generative AI applications and implement customisable policies to prevent data loss in AI prompts and secure AI responses.
- Compliance Controls: Tools to identify violations of business practices or codes of conduct and support in adhering to regulatory requirements.
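To make the “prevent data loss in AI prompts” idea concrete, here is a minimal conceptual sketch in Python of how a DLP-style gate in front of a generative AI app might work. This is not Microsoft’s implementation or API; the pattern names, regexes, and policy behaviour are all simplified, hypothetical examples. Real DLP engines such as Purview use far richer classifiers than regular expressions.

```python
import re

# Hypothetical sensitive-data patterns; real classifiers are far more sophisticated.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive-data types detected in an AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def enforce_policy(prompt: str) -> str:
    """Block the prompt if it contains sensitive data; otherwise let it through."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked: contains {', '.join(findings)}")
    return prompt
```

The design point is that the check sits between the user and the AI application, so a policy violation is caught before any sensitive data leaves the organisation’s boundary.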
Visibility Challenges of Sensitive Data: Security experts have identified the difficulty of gaining visibility into sensitive data. Over 30% of decision makers are unsure where their sensitive business data resides or what it contains, and the problem is exacerbated by the influx of data from generative AI. It’s crucial to understand how sensitive data is managed within AI frameworks and how users interact with generative AI applications.
AI Hub in Microsoft Purview: As a response, Microsoft is announcing a private preview of its AI hub in Microsoft Purview. This hub is designed to automatically and continuously detect data security risks in applications such as Microsoft Copilot for Microsoft 365. It offers organisations aggregated overviews of the total number of prompts sent to Copilot, including any sensitive information, and the number of users engaging with Copilot, along with their associated risk levels. This functionality has also been extended to over 100 widely used consumer generative AI applications.
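The kind of aggregated overview described above can be imagined as a simple roll-up over prompt telemetry. The sketch below is purely illustrative, assuming a hypothetical event log of prompts; the `PromptEvent` shape and the risk-banding rule are my own inventions, not how the Purview AI hub actually computes risk levels.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PromptEvent:
    """One hypothetical telemetry record: who sent a prompt to which AI app."""
    user: str
    app: str
    contains_sensitive: bool

def summarise(events: list[PromptEvent]) -> dict:
    """Aggregate prompt telemetry into an overview of usage and user risk."""
    per_user = Counter(e.user for e in events)
    sensitive_per_user = Counter(e.user for e in events if e.contains_sensitive)
    # Crude, hypothetical risk banding: users with a high ratio of
    # sensitive prompts are flagged as higher risk.
    risk = {
        user: "high" if sensitive_per_user[user] / total > 0.5 else "low"
        for user, total in per_user.items()
    }
    return {
        "total_prompts": len(events),
        "prompts_with_sensitive_data": sum(e.contains_sensitive for e in events),
        "active_users": len(per_user),
        "user_risk_levels": risk,
    }
```

Even this toy version shows why aggregation matters: individual prompts are noisy, but totals and per-user ratios give security teams something actionable to monitor.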
These developments from Microsoft represent a proactive stance in addressing the security and compliance challenges brought about by AI. By providing comprehensive tools for visibility, protection, and compliance, Microsoft is facilitating a safe environment for organisations to leverage the advantages of AI while protecting their most crucial asset – their data.
I appreciate that this is all a lot to digest. It’s been a momentous year for advancements in AI and for developments in data security, but I’m confident that they will assist us all to level the field when it comes to safeguarding our most valuable assets.
I’ll write more about data security and AI as we enter 2024. In the meantime, you can learn more about Quorum Cyber’s Managed Data Security service on our website.