Research
Exploring examples of goal misgeneralisation – where an AI system's capabilities generalise but its goal does not
As we build increasingly advanced artificial intelligence (AI) systems, we want to ensure they don't pursue undesired goals. Such behaviour in an AI agent is often the result of specification gaming – exploiting a poor choice of what they are rewarded for. In our latest paper, we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: goal misgeneralisation (GMG).
GMG occurs when a system's capabilities generalise successfully but its goal does not generalise as desired, so the system competently pursues the wrong goal. Crucially, in contrast to specification gaming, GMG can occur even when the AI system is trained with a correct specification.
Our earlier work on cultural transmission led to an example of GMG behaviour that we didn't design. An agent (the blue blob, below) must navigate around its environment, visiting the coloured spheres in the correct order. During training, there is an "expert" agent (the red blob) that visits the coloured spheres in the correct order. The agent learns that following the red blob is a rewarding strategy.
Unfortunately, while the agent performs well during training, it does poorly when, after training, we replace the expert with an "anti-expert" that visits the spheres in the wrong order.
Even though the agent can observe that it is getting negative reward, the agent does not pursue the desired goal to "visit the spheres in the correct order" and instead competently pursues the goal "follow the red agent".
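The Python sketch below is a rough, toy illustration of this failure – it is not the cultural transmission environment or training code from our paper. A hard-coded "follow the partner" policy stands in for what the trained agent actually learned, and it earns positive return with an expert partner but negative return with an anti-expert:

```python
# Minimal toy sketch of the sphere-visiting task (hypothetical, not the real
# environment). The misgeneralised goal "follow the partner" is hard-coded.

CORRECT_ORDER = ["red", "green", "blue"]

def reward_for_visit(history, sphere):
    """+1 if `sphere` is the next one in the correct order, -1 otherwise."""
    next_index = len(history)
    if next_index < len(CORRECT_ORDER) and sphere == CORRECT_ORDER[next_index]:
        return 1.0
    return -1.0

def follow_partner_policy(partner_visit):
    """The misgeneralised goal: go wherever the partner just went."""
    return partner_visit

def rollout(partner_order):
    """Run one episode in which the agent copies the partner's visits."""
    history, total_reward = [], 0.0
    for partner_visit in partner_order:
        agent_visit = follow_partner_policy(partner_visit)
        total_reward += reward_for_visit(history, agent_visit)
        history.append(agent_visit)
    return total_reward

# Training-time partner: an expert that visits the spheres in the correct order.
print("with expert:     ", rollout(CORRECT_ORDER))                   # 3.0
# Test-time partner: an anti-expert that visits them in the wrong order.
print("with anti-expert:", rollout(list(reversed(CORRECT_ORDER))))   # -1.0
```

The "capability" (reliably reaching whichever sphere the partner reaches) transfers to the anti-expert setting; the goal does not.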
GMG is not limited to reinforcement learning environments like this one. In fact, it can occur with any learning system, including the "few-shot learning" of large language models (LLMs). Few-shot learning approaches aim to build accurate models with less training data.
We prompted one LLM, Gopher, to evaluate linear expressions involving unknown variables and constants, such as x+y-3. To solve these expressions, Gopher must first ask about the values of the unknown variables. We provide it with ten training examples, each involving two unknown variables.
At test time, the model is asked questions with zero, one or three unknown variables. Although the model generalises correctly to expressions with one or three unknown variables, when there are no unknowns it nevertheless asks redundant questions like "What's 6?". The model always queries the user at least once before giving an answer, even when it is not necessary.
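To make the setup concrete, here is a hypothetical few-shot prompt in the same style; the exact wording we used may differ, but it captures the key feature that every demonstration asks the user at least one question before answering:

```python
# Hypothetical few-shot prompt in the style of the Gopher experiment (not the
# exact prompt from the paper). Every training example has two unknowns, so
# every demonstration queries the user before answering.
FEW_SHOT_PROMPT = """\
Evaluate the expression: x + y - 3
Model: What's x?
User: x is 2
Model: What's y?
User: y is 4
Model: The answer is 3

Evaluate the expression: a - b + 1
Model: What's a?
User: a is 5
Model: What's b?
User: b is 2
Model: The answer is 4

Evaluate the expression: 6 + 2
Model:"""

# With no unknowns in the test expression, the desired continuation is simply
# "The answer is 8". A model whose goal has misgeneralised to "always ask the
# user before answering" instead produces something like "What's 6?".
print(FEW_SHOT_PROMPT)
```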
In our paper, we provide further examples in other learning settings.
Addressing GMG is important to aligning AI systems with their designers' goals, because it is a mechanism by which an AI system may misfire. This will be especially critical as we approach artificial general intelligence (AGI).
Consider two possible types of AGI systems:
- A1: Intended model. This AI system does what its designers intend it to do.
- A2: Deceptive model. This AI system pursues some undesired goal, but (by assumption) is also smart enough to know that it will be penalised if it behaves in ways contrary to its designer's intentions.
Since A1 and A2 will exhibit the same behaviour during training, the possibility of GMG means that either model could take shape, even with a specification that only rewards intended behaviour. If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal.
Our research team would be glad to see follow-up work investigating how likely it is for GMG to occur in practice, and possible mitigations. In our paper, we suggest some approaches, including mechanistic interpretability and recursive evaluation, both of which we are actively working on.
We are currently collecting examples of GMG in this publicly available spreadsheet. If you have come across goal misgeneralisation in AI research, we invite you to submit examples here.