Published: 23rd May 2023 | In: Insights
It can’t have escaped anyone’s notice that the world appears to have recently woken up to the arrival of AI, and in particular AI-driven search – or generative AI services, to give them their correct name. The news is awash with stories predicting the end of life on earth as we know it, with dramatic consequences forecast for education, healthcare, dating (!) and any aspect of our lives that involves producing written content. Widespread coverage has reported that ChatGPT, the most widely established AI-powered chatbot, has already passed the final MBA exam at Wharton, one of the USA’s finest business schools.
The consequence of this media coverage is that many of us have enthusiastically logged onto ChatGPT, or its more recently introduced competitors, to ‘have a go’ and see what it can do. The results are usually mixed, and most of this is quite benign – but there is no doubt the seeds are being sown for very rapid, widespread use of the technology. We are already aware that people working in companies and public sector organisations are using ChatGPT to make their working lives easier. Its ability to summarise reports, compare data, or write halfway-decent copy in response to a specific brief is very enticing to overworked executives, just as it is to undergraduates who are already using it to provide at least the backbone of their essays (complete with referencing). It’s very good at what it does, and sometimes the results are indistinguishable from human-generated content. Sometimes, dare we say it, they’re better.
A very significant security risk
At Quorum Cyber, we are of course excited by the potential of this emerging technology. But for us as a cyber security company, ChatGPT and its competitors are a cause for concern – not because we’re worried that our people will use the technology to make their working lives easier, but because AI-powered chatbots present a very significant security risk. Put simply, employees may be tempted to put confidential data into the chatbot, and those employees, and the company they work for, will have absolutely no idea what the chatbot will do with that data. AI works by harvesting everything it is shown and saving it for later use. It becomes better informed, and therefore more useful, every time someone gives it data. So, if one of your employees has put your confidential sales forecasts into ChatGPT and a competitor later asks about them, there is no guarantee at this stage that they won’t be surfaced. AI is no respecter of confidentiality.
Business and reputation risks
As competitor AI technologies are developed, launched and licensed, the issue will proliferate, and some of them will inevitably be controlled by powers who are less benign than OpenAI, the creators of ChatGPT, seem to be. There are very significant business and reputation risks presented by the entire world of AI, and businesses and other organisations should act now to protect themselves against this very real problem.
Three rules to reduce risk now
Here are three things that organisations should put in place immediately:
- Adapt your staff handbook’s IT policy to ensure that it covers this problem – that you have in place a well-explained set of rules about what your employees can and can’t upload to generative AI solutions. It must be seen to be on a par with putting confidential information out on social media, for example.
- Identify the data in your organisation that should be protected – data that should not be shared with ChatGPT without raising a red flag. There is likely to be an awful lot of this.
- Put in place monitoring technology. At its most basic, we can provide technologies that monitor uploads to ChatGPT’s website, and we can replicate this on competitor platforms. This at least gives you visibility of what is going out of the digital door, and raises that all-important red flag if our monitoring software doesn’t like what it is seeing. At the more sophisticated end, we can use machine learning algorithms to identify patterns in the uploaded data that flag potential risks and highlight any wrongdoing by an employee.
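To make the third rule concrete, here is a minimal, illustrative sketch of the pattern-matching idea behind basic upload monitoring – checking outbound text against a rule set before it reaches a chatbot. The pattern names and rules here are our own simplified examples, not the rules a production data loss prevention tool would use:

```python
import re

# Illustrative patterns only; a real deployment would use a curated,
# organisation-specific rule set maintained by the security team.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
    ),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt an employee might be tempted to paste into a chatbot
prompt = "Summarise this CONFIDENTIAL sales forecast for cfo@example.com"
print(flag_sensitive(prompt))
```

In practice this kind of check would sit in a web proxy or endpoint agent rather than in the employee’s own workflow, and the more sophisticated machine-learning approach mentioned above would replace fixed regular expressions with models trained on the organisation’s own data.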
The problem with ChatGPT is that it still all feels like a bit of fun – people at parties talking about how they asked it to produce a standard shopping list in the style of a Shakespeare sonnet. The time has come for the world to wise up to the risks.
If you’re concerned about ChatGPT, you might like to read our related blog, ‘How to protect your data privacy and security when using ChatGPT’.
To learn more about how we can help you to protect your data, please visit our dedicated Compliance services page on our website.