Embracing Artificial Intelligence in Cyber Incident Response

Published: 16th November 2023

Artificial intelligence (AI) is not a new concept; indeed, the term has been attached to a variety of marketing materials over the past five to ten years. Recent developments in generative AI, however, have produced a wave of new capabilities that have taken many people by surprise. As the sophistication and scale of cyber-attacks continue to rise, traditional cyber incident response approaches must evolve to keep pace, and AI is a technology that cannot be ignored.

Today’s AI, exemplified by OpenAI’s ChatGPT and Microsoft’s Copilot series of products, can be of particular benefit to incident response. In a world where dwell time is being reduced to minutes rather than hours, speed is absolutely of the essence, and AI can process vast amounts of information far faster than any human. An AI tool with direct access to information about your security posture could, for example, tell you within seconds how many endpoints in your network are vulnerable to a particular type of attack, or even how many have already been exploited – and then correlate that data with historical information about previous incidents, the seniority of the affected users, and the sensitivity of the data in their mailboxes, to present you with an overall risk score. This ability to analyse data almost instantly and draw deductions from it is a significant advantage in the ongoing battle to respond to incidents as quickly as possible.
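To make the correlation idea concrete, here is a minimal sketch of the kind of scoring such a tool might perform behind the scenes. Everything here is hypothetical: the Endpoint fields, the weights, and the 0–100 scale are illustrative assumptions, not a real product’s model.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    hostname: str
    vulnerable: bool          # exposed to the attack technique in question
    exploited: bool           # evidence of actual compromise
    prior_incidents: int      # historical incidents involving this host
    user_seniority: int       # 1 (junior) .. 5 (executive) - illustrative scale
    mailbox_sensitivity: int  # 1 (low) .. 5 (highly sensitive data)

def risk_score(ep: Endpoint) -> float:
    """Combine the signals into a single 0-100 score.

    The weights below are purely illustrative, not calibrated."""
    score = 0.0
    if ep.vulnerable:
        score += 30
    if ep.exploited:
        score += 40
    score += min(ep.prior_incidents, 3) * 5          # cap historical influence
    score += ep.user_seniority * 2
    score += ep.mailbox_sensitivity * 1.5
    return min(score, 100.0)

fleet = [
    Endpoint("laptop-014", vulnerable=True, exploited=False,
             prior_incidents=1, user_seniority=2, mailbox_sensitivity=2),
    Endpoint("exec-ws-03", vulnerable=True, exploited=True,
             prior_incidents=0, user_seniority=5, mailbox_sensitivity=5),
]

# The "within seconds" answers: how many hosts are exposed or compromised,
# ranked by the combined risk signal.
vulnerable = sum(ep.vulnerable for ep in fleet)
exploited = sum(ep.exploited for ep in fleet)
print(f"{vulnerable} vulnerable, {exploited} exploited")
for ep in sorted(fleet, key=risk_score, reverse=True):
    print(f"{ep.hostname}: risk {risk_score(ep):.0f}/100")
```

The value of an AI assistant is that it can perform this sort of aggregation across thousands of endpoints, and across far messier data sources, in the time it takes an analyst to open a console.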

This is not without risk, however. Today’s generative AI capabilities are not foolproof, although they may present data in a way that makes you think they are. Some AI implementations can be heavily influenced by small nuances in the way a question is asked, yet they are so eager to present what they ‘think’ is the answer that they rarely ask for clarification beforehand. This can lead to misleading results that ultimately waste time rather than save it.

Following on from that, the ‘black box’ nature of today’s AI implementations can also be problematic. The decision-making process is largely opaque, and without understanding how the model reached its conclusions it can be difficult to fully rely on the information it presents. This is particularly challenging when AI is used for automation, where confidence must be high that it will make the right decision at any given point in time – or, indeed, that it has not been influenced by an attacker.
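One common mitigation is to keep a human in the loop for anything the model is not sufficiently sure about. The sketch below assumes a hypothetical ProposedAction structure carrying a model-reported confidence value; the threshold and the actions themselves are illustrative, not a recommendation for any particular product.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "isolate host exec-ws-03"
    confidence: float  # model's self-reported confidence, 0.0-1.0
    rationale: str     # whatever explanation the model can surface

# Illustrative cut-off; in practice this is tuned to an organisation's risk appetite.
CONFIDENCE_THRESHOLD = 0.9

def dispatch(action: ProposedAction) -> None:
    """Execute only high-confidence actions; escalate the rest for review."""
    if action.confidence >= CONFIDENCE_THRESHOLD:
        print(f"AUTO: {action.description}")
    else:
        # Surface the rationale so an analyst can judge the 'black box' output
        print(f"REVIEW: {action.description} "
              f"(confidence {action.confidence:.2f}) - {action.rationale}")

dispatch(ProposedAction("isolate host exec-ws-03", 0.97,
                        "confirmed exploit telemetry"))
dispatch(ProposedAction("disable user account j.smith", 0.62,
                        "anomalous login pattern"))
```

Gating automation this way does not open the black box, but it does bound the damage an opaque or manipulated decision can do without a person signing it off.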

Far from the ‘AI’ buzzword of ten years ago, today’s AI can be a fantastic assistant to security teams across the globe. It is not a silver bullet, however, and it comes with its own risks and challenges that must be fully understood before it can truly be embraced.