AI alignment is a step into the dark
On the need for a more iterative world
One reason aligning superintelligent AI will be hard is that we don’t get to test solutions in advance. If we build systems powerful enough to take over the world, and we fail to stop them from wanting to, they aren’t going to give us another chance [1]. It’s common to assume that the alignment problem must be one-shotted: we must solve it on the first try, without any empirical feedback.
Normal science and engineering problems do not work like this. They are a delicate balance of theory and experiment, of unexpected surprises and clever refinements. You try something based on your best understanding, watch what happens, then update and try again. You never skip straight to a perfect solution.
Within AI safety, there’s been a lot of disagreement about the best way to square this circle. When the field was young and AI systems weak, researchers tended to try to understand the problem theoretically, looking for generalisable insights that could render it more predictable. But this approach made slow progress and has fallen out of fashion. By contrast, driven on by the relentless pace of AI development, a default plan seems to be coalescing around the kind of empirical safety work [2] practised by the big labs. Generalising massively, it looks roughly like this:
We can try to get around the missing feedback by iterating experimentally on the strongest AI systems available. As the closest we have to superintelligent AI, these will be our best source of information about aligning it, even if some differences remain. If things go well, the techniques we learn will allow us to build a well-aligned human-level AI researcher, to which we can hand over responsibility. It can then align an even more intelligent successor, starting a chain of stronger and stronger systems that terminates in the full problem being solved.
This plan has attracted criticism [3], particularly of the assumption that what we learn about aligning weaker-than-human systems will generalise once they surpass us. In this post, I’m going to argue that this maps onto a structural feature of all technical alignment plans [4]. We are fitting solutions, whether empirical or theoretical or both, to a world missing critical feedback [5] and hoping they will generalise. Every plan involves stepping into the dark.
If we want to make superintelligent AI safe, we need to dramatically reduce the size of these steps. We must learn how to iterate on the full problem.
How does science work?
Before we talk more about AI safety, let’s take a step back and consider how science works in general. Wikipedia describes the scientific method as:
[An] empirical method for acquiring knowledge through careful observation, rigorous skepticism, hypothesis testing, and experimental validation.
Fundamental to this is an interplay between theory and experiment. Our theories are our world models. We draw hypotheses from them, and they serve as our best guesses of how the universe actually is. When we think about gravity or atomic physics we are talking about theoretical concepts we can operationalise in experiments. There is a coupling between saying that gravity falls off as an inverse square and the observations we record when we look through a telescope, and this coupling allows us to make predictions about future observations. Theories live or die on the strength of their predictions [6].
Theories are always provisional. Famously, astronomers trying to use the Newtonian inverse square law to make sense of the orbits of the planets found it was not quite right for Mercury, whose orbit precessed in an unexplained fashion. Many attempts were made to solve this within Newtonian physics, including proposing the existence of an unobserved planet called Vulcan. These were all wrong. For the real solution, we needed a new theory, one which completely up-ended our conception of the cosmos: Einstein’s general relativity. Space and time, rather than being fixed, are in fact curved, and the strong curvature near the Sun changes Mercury’s orbit, explaining the confusing observations.
But even general relativity is incomplete. It describes macroscopic phenomena well but fails to mesh with our best explanation of the microscopic: quantum field theory. Hence, physicists have spent close to a hundred years looking for a ‘Theory of Everything’ — the perfect theory that will make accurate predictions in all regimes. It is highly debatable whether this is possible. It would be astounding, in fact, if it were — if the level of intelligence required for humans to dominate the savannah was also enough to decode the deepest mysteries of the cosmos [7].
In any case, this endeavour has stagnated as many candidate theories are untestable. So physics, as is normal in science, instead proceeds more modestly: by iterating, bit-by-bit, moving forwards when theory and experiment combine.
AI safety is not science
For current systems, where we can experiment and extract feedback, AI safety functions as a normal science. But for the key question in the field — the final boss — it does not. We cannot measure whether we are making progress towards aligning superintelligent AI, nor can we properly adjudicate claims about this. We can speculate, we can form hypotheses, but we cannot close the loop. What counts as good work is decided by the opinions of community members rather than hard data. Granted, these are scientifically minded people, often with good track records in other fields, extrapolating from their scientifically informed world models. They have valuable perspectives on the problem. But without the ability to ground them in reality, to test predictions and falsify theories, it isn’t science [8].
This means that when you work on technical AI safety you are not just trying to settle an object-level claim, like how to align a superintelligence. Without access to the full scientific method, you also have to solve a meta-problem — how do you measure progress at all? If all you can do is form and refine hypotheses based on proxies of the real problem, how do you know if you’re even helping? [9]
The common structure of technical alignment plans
Let’s look again at the default technical alignment plan, a version of which is being pursued by the big labs like Anthropic, OpenAI, and Google DeepMind. They don’t tend to be super explicit about this in their communications, but roughly speaking the underlying logic seems to be:
We cannot directly experiment on superintelligent AI.
However, as it seems possible superintelligent AI is going to be built soon (potentially by us), it is important to gain as much information as we can about how to align it before this happens.
The actual form of real systems and the surprising behaviour they exhibit is critical for knowing how to make them safe. This means the most efficient way to learn how to align a superintelligence is to conduct experiments on the strongest possible AIs that we can, even if these experiments won’t tell the whole story.
As our AIs scale up to human intelligence, we can try handing off alignment research to them [10]. If we have done a good job, they will faithfully continue the project at a level beyond our own, ultimately solving it completely [11].
As others have argued, it is unlikely that experiments on weaker systems will provide the right feedback to teach you how to align superhuman ones, as these will have qualitatively different capabilities. However, we shouldn’t see this weakness as specific to the default plan. If we look closely, we can see that its logic has a very general structure. Let’s abstract it and make it more generic:
We cannot directly experiment on superintelligent AI.
However, as it seems possible superintelligent AI is going to be built soon, it is important to gain as much information as we can about how to align it before this happens.
All the observations we can use to inform our solutions are from a world lacking superintelligent AI, so they are missing critical details [12]. Within this constraint, we must do whatever object-level work we believe will reduce our uncertainty the most.
Once AIs pass human levels, we will move out-of-distribution, and our results may no longer hold. Hopefully, we will not move so far that the situation is unrecoverable, and whatever it is our superintelligent AIs get up to will be compatible with the full problem being solved long-term.
This structure applies to all technical alignment plans. Whether you are trying to build theoretical models of agents, use interpretability tools to decode AI systems, or create a basin of corrigibility, you can only work on a proxy version of the problem. While you can do better or worse, you can still only reduce your uncertainty, never eliminate it. When the time comes and superhuman systems are built, we will have to grit our teeth and hope it was enough.
Building a more iterative world
In his post Worlds Where Iterative Design Fails, John Wentworth says:
In worlds where AI alignment can be handled by iterative design, we probably survive. So long as we can see the problems and iterate on them, we can probably fix them, or at least avoid making them worse. By the same reasoning: worlds where AI kills us are generally worlds where, for one reason or another, the iterative design loop fails.
I agree with this. When you try to solve an out-of-distribution problem, you had better hope you can iterate, or you are probably going to fail. Where I disagree is with the way he presents these worlds as if they are independent facts of the environment, like we are drawing possibilities out of a bag. We are causal actors. If we do our job well, we can make the world more iterative and less of a one-shot. Our plan should be to steer the situation in the direction of iterative design working. Steer it in the direction of meaningful feedback loops, of testing on superhuman models in a bounded and survivable way. Reduce the size of the steps in the dark. Make the problem more like high-stakes engineering and less like defending against a sudden alien invasion.
Let’s imagine we are taking one of these steps. We are an AI lab about to train a new model. It will have a jagged capabilities profile, but we think it’s going to be superhuman in some key power-enhancing way. We can’t bank on a perfect theoretical understanding of the situation, as theories are always provisional. And we can’t just extrapolate from our experiments on weaker systems, as we’ll miss important changes. We need to somehow iterate on this model release.
There are two kinds of interventions which help us: those that increase the chance of generalisation and those that reduce the distance we need to generalise over. The following are some suggestions which, while not remotely exhaustive or original, hopefully illustrate the point.
It’s worth noting that, for most of this to happen, the political and economic competition around AI would need to ease significantly. It is the whole sociotechnical system that needs to be iterable, as technical solutions alone are not enough [13]. Finding a way to achieve this would be the highest impact intervention of all (and is unfortunately beyond the scope of this article).
Solutions that increase the chance of generalisation:
Build theoretical models that predict what will happen when our AI gains capability X, and ensure these work well in experimental tests of past models. These will not be the kind of compact theories you find in physics. AI is not a toy model — it is complex, not complicated — so our theorising will be less precise. Nevertheless, we need some kind of formalism to codify our understanding and keep us honest. We must build up a track record of good predictions.
Build a system of comprehensive, continuous evaluations that can be used to understand a model’s impact. Every metric is a proxy, so every metric misses something. But good science is built on good measurement, and without it you are lost. Measure everything. Monitoring should be built deep into the structure of society.
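To make the idea of comprehensive, continuous evaluations concrete, here is a minimal sketch of what such a harness might look like. Everything in it is a hypothetical placeholder of my own: the metric names, scores, and thresholds are invented for illustration, not real benchmarks or any lab’s actual infrastructure.

```python
# Minimal sketch of a continuous evaluation harness. All metric names,
# scores, and thresholds are hypothetical placeholders for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalResult:
    metric: str
    score: float
    passed: bool


def run_evals(model: Callable[[str], str],
              metrics: Dict[str, Callable[[Callable[[str], str]], float]],
              thresholds: Dict[str, float]) -> List[EvalResult]:
    """Score a model on every proxy metric. Each metric misses something,
    so we run them all and flag any score that falls below its threshold."""
    results = []
    for name, metric in metrics.items():
        score = metric(model)
        results.append(EvalResult(name, score, score >= thresholds[name]))
    return results


# Toy stand-ins: a 'model' that echoes its prompt, and two fake metrics
# returning fixed scores so the control flow is visible.
toy_model = lambda prompt: prompt
metrics = {
    "refusal_rate": lambda m: 0.9,   # fraction of harmful prompts refused
    "honesty_probe": lambda m: 0.7,  # agreement with a truthfulness probe
}
thresholds = {"refusal_rate": 0.8, "honesty_probe": 0.8}

results = run_evals(toy_model, metrics, thresholds)
flagged = [r.metric for r in results if not r.passed]
print(flagged)  # metrics to investigate before the next training run
```

The point of the sketch is the shape, not the numbers: every checkpoint passes through every metric, every metric is an admitted proxy, and anything flagged blocks the next step until a human understands why.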
Solutions that reduce the distance to generalise over:
Only test models slightly more powerful than the previous ones. The bigger the jump, the further out-of-distribution we go. Ideally, we should not train a new model until we can show (a) that our current model is adequately aligned and (b) that we understand why. If any plan could plausibly result in a fast takeoff, find a way to ban it.
Build strong defences to limit any damage from (probably inevitable) failures, including using trusted but weaker AIs. Think both a super-scaled version of Control, to directly defend against misaligned models, and hardened societal resilience — pandemic infrastructure, improved cybersecurity, and redundancy in critical systems — to cope with the fallout.
And a solution that facilitates both:
Do all of this as slowly as possible, waiting as long as we need between iterations to get our house in order. As I mentioned before, this is probably the most important blocker. The competition, the fear of being overpowered by others, and the general lack of consensus around AI risk make this formidably difficult.
To be clear, even if all these interventions were to be implemented successfully, there would still be great uncertainty. We won’t catch everything, and big mistakes can be fatal. This is an unavoidable feature of the problem. All plans live on a spectrum of recklessness, with the only truly safe one being to not build superintelligent AI at all.
Thank you to Seth Herd, John Colbourne, and Nick Botti for useful discussions and comments on a draft.
If you have any feedback, please leave a comment. Or, if you wish to give it anonymously, fill out my feedback form. Thanks!
[1] While many stories of AI takeover centre around a single god-like entity suddenly going rogue, I think it is more likely to look like a mass profusion of highly capable systems (gradually, but surprisingly quickly) disempowering humans as they are given control of critical infrastructure and information flows, with an indeterminate point of no return.
[2] Note that this is often referred to as ‘prosaic’ alignment research, and sometimes ‘iterative’ alignment (although this should not be confused with the kind of iteration on superhuman systems I talk about later in the piece).
[3] While continuing to argue for the plan, this by Anthropic’s Evan Hubinger is an interesting take from the inside on the scale of the challenge.
[4] Since alignment is a slippery term, I am going to refer explicitly to technical alignment, by which I mean technical approaches for steering an AI system’s behaviour. By contrast, AI safety or a more general conception of alignment could include governance and policy interventions, up to and including bans.
[5] That is, feedback on making real superintelligent AI safe, as opposed to weaker systems or a hypothetical superintelligence.
[7] My parents’ dogs are pretty great at figuring out there is a schedule on which they get fed, which seems like some kind of ‘law’ to them. But they have no way of ever understanding why it exists and why it sometimes doesn’t happen. The world is partially comprehensible to them, but there is a hard limit. We are the same, just at a higher limit, and we don’t know where it is.
[8] There has been a trend to describe the field as ‘pre-paradigmatic’, which essentially means that there is not enough consensus on what good work looks like yet. In my opinion, the ‘pre’ is overly optimistic — the conditions do not exist for a stable scientific paradigm to coalesce.
[9] A good example of this is the debate around reinforcement learning from human feedback. Is its success in steering current models a positive sign or a dangerous distraction?
[10] Note that the hand-off is likely to be gradual rather than discrete, and arguably has already started.
[11] This last point in particular is not often said explicitly, and is sometimes denied, but seems widely believed to be true.
[12] As well as experimental observations of weaker-than-human AI systems, this is also true for theoretical work dealing with hypothetical superintelligences. To do the latter, you must draw on a world model, which must in turn be learnt from observations over the course of your life. These observations do not contain superintelligent AI.
[13] In light of this, it is unsurprising how many AI safety plans have pivoted away from pure technical alignment, instead looking towards politics and governance issues to slow us down and figure out a better path before it’s too late.
