At Quorum Cyber, we keep a close eye on how technology is changing so we can better protect our customers. In previous newsletters, we have mentioned how we work closely with Microsoft on their tools that utilise Artificial Intelligence (AI) and integrate tightly into their security ecosystem. Although that is important, we also monitor what happens in the wider industry, and this month we took a deeper look at a paper from an ex-OpenAI employee, Leopold Aschenbrenner, who, after reportedly being fired for leaking information, wrote a 165-page manifesto.
This paper discusses the potential of a 'Superintelligence' that few people know about, and provides indicators of the trajectory of this advancing technology. It is a lengthy paper, but the highlights we found most interesting mirror the conversations we have with customers today, specifically around security, future use cases, and implications. Let's dive in…
Security concerns and potential risks
The paper covers the security risks that most experts discuss today, and the difficulty of applying regulations or standards to such technology, much like at the advent of the internet in the 1990s. Aschenbrenner describes the secretive elements hidden away from the public, calling them Artificial General Intelligence (AGI) secrets. Labs in which only select personnel are building AGI, yet with few safeguards in place, present an extreme espionage risk, particularly from adversaries in other countries.
The need for government involvement
Aschenbrenner argues for government involvement to ensure the safety of these AI advances. This was also a key topic of discussion at the University of Edinburgh, where Graham Hosking, Solutions Director, explored in depth the potential for regulation and government involvement, and at what cost. A balance needs to be struck between regulation and the stagnation of a technology that can be used for the good of humanity. Aschenbrenner argues that private businesses looking to start up in AI are equipped for neither the security standards nor the scale that AI superintelligence will require. Left unchecked, only the private firms already developing AI will hold real power, perhaps one day enough to overpower governments themselves.
Espionage examples
The paper discusses numerous historic incidents that illustrate how foreign actors could carry out espionage against AI labs, or even lure AI engineers out of their current roles with higher salaries and senior positions in other countries, with the aim of taking control of any AGI solution.
Economic and industrial advancement
If you have been watching the breaking news on Nvidia over the past three weeks, you will start to see where this paper may hold a few truths. For example, the paper talks about the expansive compute and electricity requirements needed to further the advancement of AI. Even today, Nvidia's new Blackwell GPU chips can each draw around 1kW of power, roughly the same as a toaster. Multiply that by the expansion required to meet demand, even allowing for algorithmic efficiencies, and power supply will need to grow rapidly to keep up with the race to develop AGI. The paper also projects that, by the end of the decade, the U.S. will need a substantial increase in power production, and provides detailed graphs comparing the trend of power demand over recent decades with the very sharp requirements of the extremely near future.
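To put that 1kW figure in context, here is a rough back-of-envelope sketch. The per-GPU draw comes from the Blackwell figure above; the cluster size and datacentre overhead (PUE) are illustrative assumptions on our part, not numbers from the paper.

```python
# Back-of-envelope power estimate for a hypothetical AI training cluster.
# The ~1 kW per-GPU draw reflects the Blackwell figure mentioned above;
# the cluster size and PUE (power usage effectiveness) are assumptions.

GPU_POWER_KW = 1.0   # approximate draw per Blackwell-class GPU
NUM_GPUS = 100_000   # hypothetical cluster size (assumption)
PUE = 1.3            # datacentre overhead for cooling etc. (assumption)

it_load_mw = GPU_POWER_KW * NUM_GPUS / 1_000  # IT load in megawatts
total_mw = it_load_mw * PUE                   # facility-level demand

print(f"IT load: {it_load_mw:.0f} MW")              # -> 100 MW
print(f"Total facility demand: {total_mw:.0f} MW")  # -> 130 MW
```

Even under these conservative assumptions, a single cluster would draw something on the order of a small power station's output, which illustrates why power production features so heavily in the paper's projections.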
Technological and algorithmic advances
We all know that over the last two years, AI has been one of the most rapidly progressing technologies. The paper likens earlier versions of ChatGPT to a pre-schooler, whereas the models we use today in GPT-4 are the equivalent of a secondary school student. These improvements have come in such a short space of time; imagine where we will be in another two years. Aschenbrenner describes iterative improvements through which, in a potential future, these models will no longer be in school but will act as fully trained engineers or researchers. With that level of advancement, what would stop such an AI system from cloning itself to conduct its own research? Consider how humans reason today: we do not just make up an answer on the spot; we walk through the problem step by step to make sure the answer is correct. An AI that could do the same, at scale, would lead to an intelligence explosion, where AI systems rapidly surpass human intelligence and capabilities. We would then need to trust the AI, as we would not be able to keep up.
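That 'walk through the answer' idea already has a simple, practical analogue in how today's models are prompted. The sketch below, which is our illustration rather than an example from the paper, contrasts asking a model for an answer directly with asking it to reason step by step (often called chain-of-thought prompting). It assumes the openai Python package and an OPENAI_API_KEY environment variable, and the model name is purely illustrative.

```python
# A minimal sketch of the "walk through the answer" idea: the same question
# asked directly versus with an explicit step-by-step instruction.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 09:40 and arrives at 13:05. How long is the journey?"

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct answer: the model "makes up an answer on the spot".
print(ask(question))

# Step-by-step: the model is asked to reason through the problem first,
# mirroring how a person would check their working.
print(ask(question + " Think through the steps before giving the answer."))
```

In practice, the step-by-step variant tends to be more reliable on multi-step problems, which is exactly the kind of behaviour the paper expects to compound as models improve.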
Future prediction and implications
The timeline from AGI to superintelligence is expedited, according to Aschenbrenner. He believes that AGI could be achieved by 2027, with superintelligence, and the intelligence explosion described above, following shortly after. Superintelligence is expected to provide a decisive economic and military advantage, potentially leading to new forms of warfare and geopolitical power shifts. Aschenbrenner also covers the risks of losing control over these AI systems, which could lead to catastrophic outcomes. Ensuring that superintelligence is aligned with human values, and remains under control, is going to be a critical challenge. The paper highlights that current alignment techniques may not scale to superhuman AI systems, necessitating innovative approaches to ensure safety and reliability.
Conclusion
The paper, albeit one person's view, underscores the urgency of addressing the security concerns, the economic stakes, and the sheer pace of the race to develop the ultimate superintelligence. It calls for a coordinated effort between government and the private sector to secure AI advancements. We all need to be aware of both AI for good and the potential risks associated with this innovative technology. It is here to stay.
A link to the paper, 'Situational Awareness', can be found here.