How AI systems can go wrong and why they sometimes produce unexpected and unintended results.

At the recent Higher Education Partnership Network (HEPN South) event in London, Principal Incident Response Consultant Mark Cunningham-Dickie shared insights on why it’s vital to learn from the past if the higher education sector is to take advantage of advancements in artificial intelligence (AI).

With over 20 years of experience in the technology industry, including more than ten in technical roles for law enforcement and other government-funded organisations, Mark has worked on hundreds of cyber incidents as part of Quorum Cyber’s Incident Response (IR) team.

Many companies that develop AI technologies promise that it will deliver improved productivity and efficiency and transform the way people work in every industry. But in his 20-minute talk, titled Learning from AI’s failures while digitally transforming and securing your environment, Mark warned that anyone planning to implement and run AI needs to think and plan very carefully about what they want to achieve, what they intend to use AI for, and how it has been trained in the first place.

He gave some remarkable examples that demonstrated the power of AI but also showed how projects and initiatives can get out of hand when biased datasets are used to train the AI models or when appropriate policies and guardrails aren’t put in place.

Recruitment bias

Amazon was an early pioneer in using AI applications and began integrating machine learning into its hiring practices in 2014. The AI system reviewed hundreds of applications weekly, identifying patterns from a decade’s worth of data. However, it developed a gender bias, favouring male candidates and discriminating against women. Attempts to correct this bias failed and Amazon pulled the plug on the project in 2017.

Flaws in facial recognition

It’s well known that AI has also made tremendous progress in facial recognition – but it has serious weaknesses in this area too. Studies have shown that facial recognition systems have a false positive rate 100 times higher when identifying black people than white people. These inaccuracies can lead to significant civil rights issues.

“Making assumptions that are obvious to humans but aren’t to machines, that do not understand the human experience, is problematic,” said Mark. “The challenge in these instances is detecting them and remediating them before they cause major problems.”

Helping medical professionals accelerate diagnoses

However, AI’s pattern recognition skills have proved a phenomenal aid in medicine, especially in detecting and treating conditions of the eye and skin. AI can perform on a par with, or even surpass, dermatologists in identifying malignant melanomas from images of skin lesions. Similarly, AI has been effectively used to detect pneumonia, particularly through the analysis of chest X-rays. Overall, AI has helped to provide “faster, more accurate diagnoses, better patient care, better operational efficiency, and personalised medicine.”

But Mark warned that AI systems can take unexpected actions. While an AI system will follow its programmed instructions to the letter, it might not go about its task in the same way a human would. In one case, when engineers programmed an AI tool to score as many points as possible in a racing car game, the tool didn’t race as humans would have done – it found it could rack up points by other means, crashing its car again and again while still maximising its score.
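To make the point concrete, here is a minimal, hypothetical sketch of that kind of “reward hacking”. It is not taken from Mark’s talk or any real game: the action names and point values are invented for illustration, and the only thing it demonstrates is that an agent which greedily maximises a stated score can pick a strategy the designer never intended.

```python
# Toy illustration of reward hacking: the agent maximises the numeric score
# it was given, not the designer's intent. All values below are invented.

# Designer's intent: finish the race quickly.
# Actual reward signal: points accumulated per minute of play.
ACTIONS = {
    "race_cleanly": 120,    # completes laps and hits checkpoints
    "loop_and_crash": 300,  # circles a cluster of respawning point pickups,
                            # crashing repeatedly but scoring faster
}

def best_action(reward_table: dict[str, int]) -> str:
    """A purely greedy agent: choose whatever maximises the stated reward."""
    return max(reward_table, key=reward_table.get)

if __name__ == "__main__":
    choice = best_action(ACTIONS)
    print(f"Agent chooses: {choice}")  # -> loop_and_crash
    print("The score keeps rising, but the race is never actually run.")
```

The gap between “score as many points as possible” and “win the race” is exactly the kind of assumption that is obvious to a human but invisible to the machine.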

How does this relate to the education sector?

As Mark’s examples show, all industries and institutions, including higher education, need to ensure they are prepared for the power of AI rather than taking the plunge and dealing with the consequences later.

Microsoft Security Copilot is now generally available and is the industry’s first generative AI solution designed to help security and IT professionals. Copilot is informed by large-scale data and threat intelligence, including more than 78 trillion security signals processed by Microsoft each day, and is coupled with large language models to deliver tailored insights and guide next steps.

Quorum Cyber has embedded Security Copilot into its managed services to help cyber security analysts investigate incidents more deeply and faster than ever. We’re also using it to summarise and report on more detailed and complex incidents that might be related to other cyber-attacks, to improve efficiency, productivity, and clarity.

To help IT professionals understand what they require to utilise AI capabilities, Quorum Cyber has developed a Microsoft Security Copilot Readiness Workshop to provide insights into Microsoft’s advanced security solutions and their integration with Security Copilot.

Talk to our team about how you can ensure you are best prepared to step forward with Security Copilot.

Talk to us about securing your systems today

If you’re worried about protecting your research, intellectual property, and reputation, talk to us about how we can protect your organisation.
