Some thoughts on AI risks

Since I’ve been working on my AI for Earth grant and with the UK AI for Good community, I’ve been asked countless times about the potential threats of artificial intelligence to human civilisation. In some ways, nervousness and cynicism around AI technologies are not surprising, since the media generally focuses on job security or the risks of human slavery or extermination by superintelligent systems. There are real reasons to be concerned, and certain aspects of our current social and economic norms are genuinely under threat. However, there are strong arguments that the risks are mostly outweighed by opportunities to make the world more equitable and more efficient, with broad utilitarian benefits. The AI future is often framed as a utopia by enthusiasts and a dystopia by cynics; the truth is much more nuanced. Here are what I think are the most important talking points:

  1. Automation will replace humans in many jobs. However, many new jobs will also be created. Most people would agree that it is a good thing that the development of, for example, the printing press, the steam engine, mechanised engineering or the computer was not abandoned because of potential job losses. These technologies created new modes of employment and had net benefits for human society (although perhaps the balance has started to tip in the face of industry-driven climate change and tech-related social and socio-political problems). Furthermore, we cannot reasonably expect corporations to forgo technological advances that offer major cost savings and competitive advantages. That said, our current social structures are not equipped to support sudden mass lay-offs in the industries most vulnerable to automation, such as trucking and retail, and suggestions that these displaced workers can simply be retrained for high-tech jobs seem unrealistic. A well-managed system of universal basic income would solve part of the problem, especially if it were funded at least in part by tax revenue from the large corporations that will enjoy the efficiencies and enhanced profitability of the very automation that caused the job losses, with much of the remainder coming from savings elsewhere in the welfare system. The bigger challenge may not be the financial security of displaced workers, but ensuring they remain purposeful and fulfilled.
  2. More automation frees humans to enjoy more leisure and to be more creative and active. I agree with this notion to an extent. Automation may offer this to people who already have the financial security that affords leisure. However, making automation beneficial to those with debts, or who live hand to mouth, will require a fundamental rethink of the welfare system: if automation makes people less integral to production and value creation, companies have less incentive to pay them for their time. Even if financial security could be provided through a universal basic income-style programme, it does not necessarily follow that people liberated from their jobs will use their time peacefully or productively. The argument that they would places a lot of faith in human nature. It is possible that more time and less responsibility would be a toxic combination, leading to boredom and then to contrived conflict, disruptive behaviour and vice.
  3. AI != AGI. There is a critical distinction to be drawn between AI (Artificial Intelligence) and AGI (Artificial General Intelligence). AI is real, current technology: it comprises algorithmic techniques that allow a machine to get better at a specific task, or set of tasks, by being exposed to more data or by responding to a scheme of rewards and punishments (a toy illustration follows this list). AGI, on the other hand, is an approximation of human cognition that would surpass human-level skill at any arbitrary task. It is not real technology; it is a potential future technology that today remains firmly in the realm of science fiction. For many AI researchers AGI represents the ultimate goal, but there is no clear framework for achieving it. How does one curate quality training data for general skilfulness? How does one navigate qualitative context, ambiguity of purpose and human paradoxes? These questions are difficult even to define, never mind to express algorithmically in code. We are at least decades away from AGI being technologically feasible. Existential risks posed by AGI, such as slavery or extermination by superintelligent systems, therefore remain science fiction, and they distract us from the real but surmountable societal, cultural and economic challenges posed by rapidly advancing AI.
  4. AGI will break out. There is a common argument that true AGI would necessarily outsmart us, and that we would be powerless to react if it became nefarious. This seems simplistic. Engineers spend endless hours obsessing over the safety implications of their creations; many things never get built because of even extremely low-probability safety concerns. Yet the conversation is usually dominated by academics, tech CEOs and so-called thought leaders rather than by the hands-on designers and practitioners actually building these technologies. While it would be arrogant to believe we could control a system that is by definition our intellectual superior, it is equally strange to assume that it would be intent on harming us or escaping into the wild.
  5. AI progress will stall. While we are in a boom period for AI technology, a large fraction of the new technology being deployed is actually old technology, tweaked and suddenly made scalable by big computing power. AI is predominantly statistics, pattern recognition and data modelling done at scale, without the behaviour being explicitly programmed. No one can say precisely how a well-performing neural network arrives at its answers; the trained model is effectively a black box (the second sketch below makes this concrete). My somewhat contrarian opinion is that once the low-hanging fruit has been picked and the gains offered by increasing computational power become marginal, criticism of this black-box problem will catch up with the excitement of the AI boom, and the technology will plateau for a while, possibly a long while, until there is a major, paradigm-shifting technological jump.
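
To make the distinction in point 3 concrete, here is a minimal sketch of what current AI actually is: a model at one specific task that measurably improves as it is exposed to more data. The library choice (scikit-learn), dataset and parameters are my illustrative assumptions, not anything prescriptive.

```python
# Illustrative sketch (assumes scikit-learn is installed): a model at a
# *specific* task gets better as it sees more training data -- this, not
# general intelligence, is what today's AI amounts to.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger slices of the same training set and watch
# the held-out accuracy climb.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```

The resulting model is useless at anything other than classifying 8x8 digit images; swapping the task means starting again with new data, which is exactly the gap between AI and AGI.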
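
And here is the black-box point from item 5 made concrete: even with full access to a trained neural network, all we can inspect are matrices of numbers. Again, a hedged sketch with assumed, illustrative parameters rather than a definitive demonstration.

```python
# Illustrative sketch: a small neural network classifier whose learned
# "knowledge" is nothing more than opaque weight matrices.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X, y)

# The model performs well, but its entire "explanation" is these arrays:
print(f"training accuracy: {net.score(X, y):.2f}")
for i, w in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
# Nothing in those numbers maps onto a human-readable rule for why any
# particular digit was classified the way it was.
```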

With all that said, it is very welcome that conversations about AI and AGI safety are happening now. Some of these discussions are urgent, such as how to organise our societies to cope with automation displacing human workers and with AI-led economies. Others are important but less urgent, such as how to control the existential risks posed by future AGIs. Having these conversations now offers the rare opportunity for philosophy, regulation and education to keep pace with technology.
