As first published in Housing Technology.
Cyber security remains a major concern for organisations in every sector. The rapid advances in generative artificial intelligence (Gen AI) in recent years have, if anything, made the ongoing challenge of protecting assets and data harder, not easier.
The most common way cybercriminals target housing associations is through phishing attacks. Gen AI makes it simpler and quicker to create convincing fake videos, voice messages, and emails, and even low-skilled criminals can now get their hands on tools that help them trick employees at housing providers and their third-party suppliers.
Accidental data loss
There’s also the risk of accidental data loss by employees. OpenAI’s ChatGPT and other large language models (LLMs) are a particular concern: employees may be tempted to paste confidential data into a chatbot to complete their work faster, but for the most part neither they nor the organisation they work for will have any idea what happens to that data afterwards.
These models retain what they are shown and may use it later; every piece of data makes them better informed, and therefore more useful. If an employee puts sensitive data into a chatbot, it could resurface later, anywhere in the world, in a response to another user. AI does not respect confidentiality unless those rules are built into the system.
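One practical safeguard against this kind of accidental leakage is to scrub obvious sensitive values from text before it ever reaches a public chatbot. The sketch below is a minimal, hypothetical illustration of the idea, not a production data-loss-prevention tool; the pattern names and regexes are assumptions for illustration, and a real deployment would rely on a dedicated DLP product rather than ad-hoc rules like these.

```python
import re

# Hypothetical patterns for a few common kinds of sensitive data.
# A real DLP tool would cover far more cases than these illustrative regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(prompt: str) -> str:
    """Mask sensitive values in a prompt before it leaves the organisation."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Example: the tenant's email address never reaches the external service.
safe_prompt = redact("Draft a reply to the tenant at jane@example.com")
```

A filter like this would typically sit in a proxy or browser extension between staff and the chatbot, so the policy is enforced centrally rather than relying on each employee to remember it.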
While this may seem like a major new headache, these tools simply add another risk to manage. As the adage goes, necessity is the mother of invention: many of the major technology players are already working on solutions to this problem, and while AI technology has leapt forward, so have the tools to detect AI-generated content and protect users against it.
Business and reputational risks
As more AI technologies are developed, launched, and licensed, the issue will proliferate, and some of them will inevitably be controlled by actors far less benign than OpenAI, the creator of ChatGPT. The world of AI presents very significant business and reputational risks, and businesses and other organisations should act now to protect themselves against this very real problem.
Game-changing solutions to protect your data
At Quorum Cyber, we believe that while developments in AI tools might aid adversaries in the short term, the newest wave of tools will help security professionals to level the playing field in the longer term. We’re at the forefront of exploring how advanced AI tools can empower cyber security professionals. Our work is reshaping the way we approach digital threats. By integrating the latest AI tools, we not only accelerate the process of identifying and mitigating risks but also deepen our understanding of the threat landscape.
Education is extremely important to help housing associations’ employees to effectively use AI tools internally to protect their IT estate and their data, reducing the dependence on public AI models like ChatGPT or Claude. In the near future, an AI-empowered workforce will be a major asset in the fight against cybercrime and accidental data loss.
Discover how we can help protect you
Our complete range of services is designed to protect any organisation before, during, and after any kind of cyber-attack. Our 350+ certified and experienced team members already provide managed security, data security, and professional services, including Offensive Security, Cyber Resilience Assessments (CRA), Incident Response Preparedness, and Incident Response Retainers, to hundreds of customers around the globe.
Contact us today to find out how to strengthen your cyber security posture and defend your organisation from cyber-attacks.