Radical Optionality
An Essay on AI Governance


"If capacity-building begins only once risks and benefits are unmistakable, the decisive window for proportionate action will likely have closed."
Authors: Christoph Winter & Charlie Bullock
Contents
  I. The Challenge of Regulating Transformative AI
  II. Let the Market Handle It
  III. Anticipatory Governance & the Precautionary Principle
  IV. The Case for Radical Optionality
  V. Policy Details
  VI. Objections
  VII. Conclusion
  VIII. References

Humanity may soon face the most consequential regulatory challenge in history: governing artificial intelligence systems powerful enough to precipitate a societal transformation on par with the Industrial Revolution, but compressed into years rather than generations. A number of prominent voices in tech and academia have predicted that such a transition is a distinct possibility in the near- to medium-term future. The prospect of "transformative AI"[1] presents policymakers with an unprecedented dilemma—overregulation could stifle innovation and forfeit the potential benefits of the technology, while a failure to regulate appropriately could have disastrous implications for public safety and national security.

In this essay, we make the case for an approach to AI governance that we call "radical optionality." In brief, radical optionality entails avoiding overregulation in the short term while building up the government's capacity to regulate competently when and if it becomes clear that regulation is needed. This approach is intended to give democratic governments access to as many options as possible by increasing the quality of the information and the regulatory authorities available to potential regulators. Rather than relying on highly uncertain predictions about the future progress of AI capabilities research, radical optionality would equip governments with a set of tools that would allow for informed and competent regulatory responses to a broad range of developments.

The argument for focusing on optionality is simple, and—if you accept a couple of reasonable assumptions—compelling. These assumptions are:

  1. That there is a real possibility of transformative AI systems being invented in the next, say, fifteen years or so;
  2. That the benefits of transformative AI will likely outweigh its risks, especially if sensible governance measures are implemented; and
  3. That significant uncertainty exists as to how and when transformative AI will be developed and what the best way to govern it will be.

Justifying the first assumption is beyond the scope of this paper. Whether "AGI" or "superintelligence" or "powerful AI" or "transformative AI" will ever arrive, and when, are questions that have been debated extensively elsewhere, and we're not optimistic about our ability to shed new light on the subject. But if you believe that transformative AI is possible, and you're optimistic about its potential value to humanity, we hope to demonstrate that the case for radical efforts to maximize optionality is overwhelmingly strong.

I — The Challenge of Regulating Transformative AI


The importance of getting the regulatory response to a truly transformative technology right is obvious. The complexity of the problem may not be. Scholars who study the regulation of emerging technologies have long acknowledged the difficulties posed by the "pacing problem." In brief, technological progress often occurs at such a rapid pace that laws, regulations, and the legal system are unable to adapt quickly enough. This makes it difficult for policymakers to effectively govern emerging technologies. In the AI context, this problem is compounded by the fact that AI systems are in some ways uniquely difficult to understand, and may possess capabilities that even the people who created them are initially unaware of. In other words, AI governance involves decision making under extreme uncertainty about the future capabilities of the technology, the nature and severity of the risks it might pose, and the benefits it might offer.

By some measures, AI model performance has been improving exponentially, and some researchers believe that this trend will continue in the coming years. One particularly exciting and concerning prospect is the possibility of recursive self-improvement. As increasingly capable AI systems are developed, perhaps with superhuman programming and/or research abilities, these systems might facilitate the development of even more capable systems, which might facilitate the development of still more capable systems, and so on. An early version of this phenomenon is arguably already occurring, and it may become increasingly relevant as AI systems' programming abilities approach the level of human experts.
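
A stylized toy model of our own (an illustration, not a forecast) shows why this feedback loop matters so much. If capability improves at a rate proportional to capability itself, the result is ordinary exponential growth; if more capable systems also accelerate the research that improves them, growth becomes hyperbolic:

\[
\dot{c} = r\,c \;\Longrightarrow\; c(t) = c_0 e^{rt}
\qquad\text{versus}\qquad
\dot{c} = r\,c^2 \;\Longrightarrow\; c(t) = \frac{c_0}{1 - r\,c_0\,t}.
\]

The first curve grows forever but never blows up; the second diverges at the finite time \(t^* = 1/(r\,c_0)\), with most of the growth compressed into the final stretch. Even a weak feedback from capabilities to the pace of capabilities research can therefore dramatically shorten the window available for a considered regulatory response.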

Because human beings are psychologically disinclined to accept the implications of exponential growth, the exponential progress of AI capabilities research further compounds the difficulty of governing transformative AI. Historically, institutions have often failed to grapple with the reality of exponential trends until it was too late to respond effectively. For instance, some epidemiologists in early 2020 dismissed COVID-19 on the grounds that it was, at the time, less prevalent than the flu, and the International Energy Agency systematically underestimated solar power growth for over a decade, repeatedly predicting that it would level off or decrease when in fact the industry maintained roughly 25% annual growth. Wrongly assuming that exponential trends will continue indefinitely can be equally harmful, though: economists believe that over-investment due to optimistic extrapolations of Japan's booming economic growth in the 1980s was partially responsible for the decades of economic stagnation that followed.
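
To see how quickly that solar trend compounds: 25% annual growth multiplies an industry nearly tenfold in a decade, since

\[
1.25^{10} \approx 9.3,
\]

so a forecaster who keeps predicting that the curve will level off isn't making a small error at the margin but missing close to an order of magnitude per decade.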

All this is to say that we have every reason to believe that regulating AI effectively will be unusually difficult even in comparison to past efforts to regulate emerging technologies, which have been far from universally successful.

II — Let the Market Handle It


In light of these challenges, how should the government regulate transformative AI? One possible answer is that it shouldn't. Libertarian writers like Adam Thierer have made the case for a culture of "permissionless innovation" for AI development, in which the role of government would be limited to enforcing existing laws and facilitating industry self-regulation with "soft law" tools such as voluntary standard-setting.

The appeal that this view has for techno-optimist proponents of innovation isn't hard to understand. Historically, governments have often struggled to regulate emerging technologies in a way that does more good than harm. If you squint, calls for regulation to preemptively mitigate speculative future AI risks look a lot like historical calls to (over)regulate nuclear energy, which arguably led to disastrous economic and environmental consequences in the form of missed opportunities to generate cheap and abundant clean energy. Hundreds of thousands of lives have likely been lost prematurely due to air pollution that could have been prevented by removing the regulatory barriers that prevent nuclear energy from being cost-effective. And the respective trajectories of the tech industries of Europe and the United States over the last few decades are like something out of an Ayn Rand novel; they give rise to the same instinctive sense of bewildered contempt in the breast of the libertarian observer as a satellite photograph of the Korean peninsula at night. From this perspective, it's tempting to argue that the US shouldn't follow the lead of a continent that has succeeded only in regulating its own tech industry out of existence,[2] and should side with the builders and visionaries rather than the takers and bureaucrats.

But, ironically, support for this techno-optimist perspective typically depends (at least tacitly) on skepticism about the future trajectory of AI capabilities. Again, consider the example of nuclear power. However well libertarian objections to the overregulation of nuclear power plants have aged, it's much harder to make the case that the advent of the nuclear age required no new laws or regulations whatsoever. Presumably, reasonable people can agree that a laissez-faire approach to regulating the private acquisition and possession of nuclear weapons would be inadvisable. "Every private individual should be allowed to buy as many nuclear weapons as they want, free from government interference" is a very difficult view to defend with a straight face, no matter how strong your pro-market priors.

A truly transformative general-purpose AI system would likely have significant military applications, perhaps even as significant as the military applications of nuclear fission. To the best of our knowledge, no one anticipates that warfighting will be a uniquely difficult domain in which to find uses for highly capable AI systems; to the contrary, most national security commentators predict that the importance of AI on the battlefield will continue to rapidly increase. If this is the case, we should not expect a totally laissez-faire approach to AI governance to be any more practically or politically feasible than a laissez-faire approach to the governance of nuclear weapons. In other words, a truly transformative dual-use[3] technology will almost certainly require a nonzero amount of regulation.

A crucial difference between nuclear weapons and transformative AI is that nuclear weapons unquestionably exist, while transformative AI is still only a possibility. Promulgating detailed regulations addressing nuclear weapons before they were even being developed would likely have been foolish, given the well-documented difficulties that regulators have historically had in predicting the future course of technological progress. But at some point, once a transformative dual-use technology is actually under development, regulation becomes unquestionably necessary—and at that point, all stakeholders have a mutual interest in ensuring that regulations are competently designed and enforced. Insufficient government regulatory capacity could lead to hamfisted and overly harsh regulation down the line, once stakeholders realize that society is about to be fundamentally transformed. Worse yet, an unprepared government might regulate incompetently, harming industry without helping the public.

If that's the case, then, given the stakes, wouldn't it be a good idea to start preparing? We think it would. That's the basic idea behind radical optionality. Even if "let the market handle it" is typically sound wisdom, that heuristic alone can't fully resolve issues like this one where the national security implications of a particular technology are likely to necessitate some degree of government oversight at some point. And given the scale of the costs and benefits at issue, "let the market handle it until these national security issues manifest themselves, if they ever do" isn't an adequate solution either. If there are steps that can be taken to increase the regulatory capacity of the relevant government bodies without significantly inhibiting innovation, it's critical that we take them as soon and as well as possible.

III — Anticipatory Governance & the Precautionary Principle


On the other side of the aisle/pond, some will object to radical optionality from the opposite direction, on the grounds that avoiding substantive regulation in the short term is too risky. Those concerned primarily with risks from transformative AI rather than benefits might argue that the regulatory response to the possibility of transformative AI should observe the precautionary principle, or that regulators should try to predict the future of AI progress and implement an anticipatory governance approach.

In its most extreme form, the precautionary principle dictates that any action which might pose a risk to public health or safety should be prohibited unless the party wishing to undertake the action can prove that the action is not dangerous. Because transformative AI, like any other revolutionary technology, almost certainly would come with risks as well as benefits, this "hard" precautionary principle would prohibit development for the foreseeable future. This version of the precautionary principle is simply bad policy, because it ignores the possibility that regulation might itself cause more harm in expectation than the risks that the regulation is intended to address. However, there are also a number of less unreasonable alternative formulations. The EU, for example, endorses a version of the precautionary principle that involves conducting a cost-benefit analysis that takes both the costs of regulating and the costs of failing to regulate into account; this is the precautionary principle that the EU AI Act's Code of Practice for General-Purpose AI invokes. And some scholars have argued that the precautionary principle is justified in cases where there is a real danger of truly catastrophic harm, because even a very low probability of a cost that can be said to be "infinite" (such as the extinction of all life on the planet) outweighs even a very high probability of a very substantial finite benefit.
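
In expected-value terms (our formalization, not the scholars' own notation), the catastrophic harm argument holds that for any probability of catastrophe \(p > 0\), however small,

\[
\mathbb{E}[\text{net value}] = (1 - p)\,B - p \cdot \infty = -\infty,
\]

so that no finite benefit \(B\), however large and however likely, can tip the balance in favor of development.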

A precautionary principle that incorporates cost-benefit analysis might not be inconsistent with radical optionality, under reasonable assumptions about the costs of preemptively regulating emerging technologies. And the catastrophic harm argument, while sound in theory, doesn't apply well to the real-world AI governance context. For one thing, it isn't clear that restricting innovation in liberal democracies would lower the overall long-term probability of catastrophic outcomes, given that AI research would almost certainly continue to be conducted in authoritarian states abroad. For another, we're not convinced that the potential benefits associated with transformative AI are more finite, in any meaningful way, than the potential costs. To take one obvious example, AI systems in the right hands could prevent a catastrophe that would otherwise have occurred, such as a nuclear war or a global pandemic. More generally, it's plausible that a wealthier society with more access to intelligence will be more willing and able to invest in averting catastrophe, meaning that growth and progress may in fact be anti-correlated with catastrophic risk.

There's no scientifically certain way of determining, from our current vantage point, whether these hypothetical benefits outweigh the hypothetical risks. In the face of this kind of uncertainty, relying on rough heuristics may be the best we can do. The precautionary principle is an application of one possible heuristic—namely, that policy should treat changes as likely to be harmful unless they can be proven harmless. The almost equally rough heuristic that we prefer is based on the observation that throughout history new technologies have typically (though not invariably) produced a net benefit for society in the long term.[4]

Unlike precautionary principle approaches, anticipatory governance approaches to AI governance don't rely on techno-pessimist assumptions. Instead, proponents of anticipatory governance are optimistic about the ability of regulators to predict the course of technological progress and implement effective regulations before they're needed. The great advantage of this approach is that it can prevent harms before they happen. It's not unreasonable to expect an ounce of prevention to be worth a pound or more of cure.

Our objection to an anticipatory governance approach is based on the observation that predicting the future trajectory of technological progress is difficult, and that governments have historically been terrible at it. We've written about this problem in more detail elsewhere, but essentially, we think that the history of regulation of emerging technologies shows that attempts to address problems that don't yet exist and may not exist for many years to come often result in legal regimes that are ineffective or even counterproductive. The hands-off approach that the U.S. government took when the Internet was coming into existence, for example, holds up better in hindsight than the same government's attempt to anticipatorily regulate home taping via the Audio Home Recording Act of 1992, which the advent of the personal computer rendered mostly obsolete almost as soon as it was passed.

Some degree of anticipation is necessary, of course. Even our suggestion that governments should focus on building capacity is based on predictions—we predict that transformative AI systems may be developed and that, if developed, they will likely be dual-use. Decision making under uncertainty is necessary and inevitable, and radical optionality is a formula for minimizing uncertainty, not for eliminating it. But we think there's a great deal to be gained by minimizing uncertainty and making informed decisions.

IV — The Case for Radical Optionality


Instead of regulating or failing to regulate, governments can prepare to regulate in a way that will improve their ability to respond to a wide range of possible scenarios, foreseen or unforeseen. This can be done by building strong regulatory institutions, equipping them with appropriately flexible authorities, and ensuring that they have access to the information they'll need to respond competently and decisively if it ever becomes clear that regulation is needed to address some intolerable risk to public safety or national security. Unlike measures inspired by the precautionary principle, the policy measures involved would impose only negligible burdens on AI companies and would have a negligible impact on innovation. Instead, the costs of an optionality-maximizing approach would be measurable in taxpayer dollars.

This is where the "radical" comes in. Simply focusing on maintaining optionality is, if anything, a rather moderate proposal. But our suggestion is not simply that governments should focus on maintaining optionality by building capacity. Rather, we argue that governments should be willing to spend an almost unlimited amount of money, effort, and political capital on maximizing optionality.

This follows from the scale of the problem—again, the premise from which we're starting is that there is a distinct possibility of a transition at least as significant as the Industrial Revolution occurring over the course of the next few years or decades. If governments genuinely accept that dual-use transformative AI systems may arrive in the near- or medium-term future, the logical consequence is that virtually anything they can do to even marginally improve the odds of the transition going well will be cost-justified. The only non-negligible costs at issue, then, are costs to innovation. The government should be wary of counterproductive interventions, but not at all concerned with the actual pecuniary cost of any realistic measure that seems likely to have net-positive results. Even if there's a 95% chance that the money spent on a given policy measure is wasted, a 5% chance of some positive impact in terms of mitigating the risks or realizing the benefits of the most important invention in human history would mean that the costs were justified a thousand times over in expectation.
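
To spell out that arithmetic with illustrative numbers of our own choosing: a measure costing \(C\) is justified in expectation whenever \(p \times B > C\), where \(p\) is the probability that the measure has any positive impact and \(B\) is the value of that impact if it does. With \(p = 0.05\), "justified a thousand times over" corresponds to assuming \(B \approx 20{,}000\,C\), since

\[
0.05 \times 20{,}000\,C = 1{,}000\,C.
\]

For a preparatory measure costing, say, tens of millions of dollars, set against a transition whose stakes are plausibly measured in trillions, that assumption is easy to satisfy.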

Of course, there may come a time when preparation ceases either to be necessary or to be sufficient. If progress in AI capabilities research plateaus and it becomes clear that transformative systems are not on the horizon, the policies we're suggesting would mostly be redundant. On the other hand, if transformative AI systems are imminent and dire risks to public safety are manifesting, a substantive regulatory response could become necessary (and politically inevitable).

Consider former OpenAI employee Leopold Aschenbrenner's series of essays on "Situational Awareness." Aschenbrenner proposes that it is necessary and "inevitable" for the U.S. government, motivated by national security concerns, to prohibit private companies from working on transformative AI systems and instead invest trillions of dollars in a government-run "AGI Manhattan Project," sometime within the next three years or so. Unsurprisingly, this take generated a fair amount of criticism as well as its share of approbation. In another influential recent piece, the crypto pioneer Vitalik Buterin proposed a way of avoiding a future in which the development of transformative AI happens in a closed, centralized, national-security-focused manner. His solution was a philosophy that he calls defensive acceleration, or "d/acc," which would focus on the development of defense-favoring technologies.

These two perspectives are profoundly different, but both grapple with the same problem: balancing the need to encourage innovation against the need to mitigate risks. Radical optionality is a strategy for doing exactly this—or rather, a strategy that takes advantage of the fact that safety and innovation aren't necessarily conflicting values. The appeal of this approach is that it promises to work well in a wide variety of futures. Regardless of whether you agree with Aschenbrenner or Buterin (or, as will be the case for most people, with neither) about the right way to handle the development of transformative AI when it happens, the correct course of action at this point in time is to avoid overregulating for now while preparing our institutions for the challenges that may lie ahead.

In the scenario Aschenbrenner predicts, the U.S. government will eventually and suddenly realize the necessity of urgent action and, like a student pulling an all-nighter before an exam, throw together the most complex public project in human history at the last minute. We don't necessarily agree with Aschenbrenner's timelines (he estimates that the project might begin as soon as 2027) or his apparent certainty that the very specific course of action he predicts will unfold in exactly the way he anticipates. But if he does turn out to be correct, the project he contemplates will produce better outcomes if the government is better prepared to spring into action—if it has access to useful information, qualified personnel, and flexible governance mechanisms. And if, like Buterin, you want to avoid a future where transformative AI's development is centralized and securitized, an open, democratic future is more likely to come about if governments have good options available to them other than springing into action to avoid imminent disaster.

Whether you're optimistic or pessimistic about transformative AI, the stakes are high enough and the uncertainty deep enough that building governance capacity now is justified.

Policy Recommendations

Steps that can and should be taken as soon as possible to increase regulatory capacity without creating significant barriers to innovation:

  1. Information-Gathering Authorities (Reporting & Transparency): Implement well-designed transparency and reporting requirements that allow governments to develop expertise in securely collecting, analyzing, and sharing information about frontier AI systems.
  2. Whistleblower Protections (Secure Disclosure Channels): Ensure employees at frontier AI companies can report information about risks to public safety or national security, without fear of retaliation, to appropriate government offices.
  3. Information Sharing (Inter-Governmental Coordination): Establish channels for securely sharing appropriate information about model capabilities and risks between governments and close allies.
  4. Flexible Rules & Definitions (Adaptive Regulatory Frameworks): Create regulatory definitions that can be updated more rapidly and reliably than definitions baked into statutes, reducing the risk of obsolescence.
  5. Assessments & Evaluations (Pre-Deployment Testing): Build capacity for pre-deployment testing of frontier models, including voluntary and mandatory government testing regimes.
  6. Securing Model Weights (National Security Priorities): Promulgate comprehensive voluntary standards for physical and cybersecurity throughout the frontier AI development supply chain.
  7. Hiring & Talent (Institutional Capacity): Recruit and retain top-tier talent for AI governance roles through new hiring authorities, competitive compensation, and creative approaches to building a deep reserve of expertise.
  8. Avoiding Premature Preemption (Regulatory Architecture): Preserve state regulatory capacity as a second-best option until a coherent federal framework is in place, rather than preempting state laws and replacing them with nothing.

V — Policy Details

Information-Gathering Authorities


One top priority for preserving optionality is the implementation of well-designed information-gathering authorities. Like corporations, LLMs, and other complex systems, government agencies thrive on a diet of information; it's been said that "information is the lifeblood of good governance." It's a generally accepted fact, supported by reams of legal scholarship[5] as well as by basic common sense, that governmental decision makers make better decisions when they have access to better information. By beginning to gather information about frontier models now, governments can develop expertise in securely collecting, analyzing, and sharing information about advanced AI systems. On the other hand, if information-gathering doesn't begin until the advent of transformative AI systems makes it undeniably necessary, the agencies charged with collecting and processing information will lack institutional expertise and make worse decisions. This would impose costs, not just on the public, but also on AI companies whose sensitive and valuable information would be processed less securely and efficiently.

Information-gathering authorities can be grouped into two broad categories: transparency requirements, which mandate that companies publish certain information about their models publicly, and reporting requirements, which require companies to share information with a government agency. Transparency requirements have the advantage of allowing academics, civil society organizations, independent researchers, and industry groups to review the disclosed material, increasing the total resources that can be brought to bear on analyzing disclosures. But requiring companies to reveal trade secrets or other sensitive business information publicly carries costs to innovation. The advantage of reporting requirements is that broader and more detailed disclosures can be required without forcing companies to reveal trade secrets or other competitively sensitive information to the public.

Most of the significant items of AI safety legislation that have been proposed or enacted to date consist primarily of either reporting or transparency requirements. The EU AI Act, for example, is operationalized by a Code of Practice that contains a number of reporting requirements. And the most significant state AI bills in the U.S., such as California's SB 53 and New York's RAISE Act, consist primarily of transparency requirements. This is a sensible approach, because any more ambitious substantive regulation introduced in the future will benefit from being designed and implemented by better-informed actors, and information-gathering authorities are typically minimally burdensome and easy to enforce.

Ultimately, transparency and reporting requirements are complementary. Governments can maximize optionality by using reporting requirements to ensure that risk-relevant information is collected by government offices that can be trusted to securely process it, while using transparency requirements to facilitate public access to non-sensitive information that companies can produce without incurring competitive harm.

State governments generally lack the institutional capacity to securely process and make use of the more detailed information that reporting requirements are well-suited to producing. Therefore, while state transparency requirements are a decent substitute for a federal transparency framework (as long as the state requirements are harmonized so as to avoid a patchwork that would impose significant compliance costs on companies), reporting requirements should be implemented at the federal level. Previously, the U.S. federal government collected information about frontier models via reporting requirements administered by the Bureau of Industry and Security. Notably, the fundamental idea of this kind of information-gathering didn't meet with any objection from the companies subject to the rule—although companies like OpenAI and Anthropic suggested changes to the nature and frequency of the requirements and emphasized the importance of securely handling the reported information.

Whistleblower Protections


Whistleblower protections are another important tool for increasing government access to information about frontier AI systems. Ideally, whistleblower protections would ensure that employees at frontier AI companies could report information about risks posed by frontier AI systems, without fear of retaliation, to an appropriate government office that has expertise in handling sensitive information securely. Well-designed whistleblower protections do not burden innovation to any significant degree, as they are extremely light-touch and impose virtually no positive obligations on affected companies.

Currently, most frontier AI company employees are entitled to the whistleblower protections of California law, which protects employees from retaliation for reporting violations of any state, federal, or local law or regulation. But there's still a need for additional federal whistleblower legislation to universalize these protections and to protect the secure disclosure, to a designated federal agency, of information about significant risks to public safety or national security even when no law has been broken.

The AI Whistleblower Protection Act, a bipartisan bill recently introduced by Senator Chuck Grassley of Iowa, would fill this gap in existing whistleblower protections by prohibiting retaliation against whistleblowers who disclose information about "substantial and specific" dangers to public health, public safety, or national security to an appropriate government agency. Passing that bill, or something like it, would be a solid first step towards building the kind of U.S. government capacity for securely gathering and processing information about risks from frontier AI systems that is needed to maintain optionality.

Information Sharing


Relatedly, the importance of securely gathering and sharing information within and between governments, and between governments and outside stakeholders, shouldn't be underestimated. The government's role as a coordinator and facilitator of discussions between a variety of stakeholders is difficult to entirely replace with private governance mechanisms.

It might appear that the U.S. has nothing to gain from international information-sharing, given that AI innovation to date has primarily taken place in the U.S. But that's a shortsighted view of things. In the long run, the U.S. stands to gain from sharing and receiving some information about model capabilities with and from close allies, as when the UK AI Safety Institute shared the results of its pre-deployment testing of Anthropic's Claude 3.5 Sonnet with its U.S. counterpart. Wholesale objections to any degree of international cooperation simply don't make sense unless they come from a place of skepticism about the possibility of transformative AI.

Even "Situational Awareness," which posits not only the likelihood but the absolute certainty of superintelligence being developed exclusively by and for the U.S. national security enterprise, recognizes the importance of a "tighter alliance of democracies" for pooling resources and protecting supply chains. Anthropic CEO Dario Amodei's essay "Machines of Loving Grace" suggests an "entente strategy" for bringing together a "coalition of democracies" to exercise control over the AI supply chain and distribute the benefits of powerful AI in order to promote democracy. In these and every other sensible proposal for internationally coordinating AI governance efforts, the first step is to establish channels for securely sharing appropriate information about model capabilities and risks.

Better information-sharing within government is also critical. The reason we call it "radical" optionality is that we want to preserve optionality even with respect to very drastic regulatory options, if drastic developments should make them necessary. But coordinated whole-of-government responses to emergencies are likely to require a degree of coordination between agencies with varying expertise and resources that will be impossible to set up on short notice unless some sort of framework for efficiently and securely distributing sensitive information is already in place and in use. The better informed the government is, the less heavy-handed its response in such emergency situations will have to be.

Flexible Rules & Definitions


The importance of flexible, adaptable rules to the effective governance of emerging technologies is well established. Prematurely implementing rigid rules increases the risk of misspecification, and flexible rules are less likely to be rendered obsolete by technological progress.

Another crucial component of regulatory flexibility is the creation of flexible definitions. For example, many recently proposed AI laws apply only to "frontier models" or an equivalent term. However, defining "frontier model" often proves a more difficult task than you might expect. Early efforts generally relied on a compute threshold. But a statutory definition that relied on a simple compute threshold would rapidly become obsolete as low-compute models become radically more capable and as the cost of compute decreases. This being the case, it's generally a good idea to preserve optionality by leaving the task of defining "frontier model" to a regulatory agency that can promulgate and regularly update a regulatory definition.

California's SB 1047 illustrates the importance of this point. If SB 1047 had not been vetoed, it would have placed requirements on "covered models," and defined "covered model" to include only models that cost in excess of a hundred million dollars to train. This cutoff would have made sense in the summer of 2024, when SB 1047 was passed, but it would have been rendered more or less obsolete mere months later by models like the one released in January 2025 by the Chinese startup DeepSeek, which apparently cost less than six million dollars to train but still had state-of-the-art capabilities upon release.
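
A minimal sketch (hypothetical code, with thresholds loosely modeled on the figures above rather than on any statute's actual text) of the difference between a definition frozen in a statute and one maintained by an agency:

```python
# Hypothetical illustration: why fixed statutory thresholds age badly.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    training_cost_usd: float  # estimated cost of the final training run
    training_flop: float      # total training compute (illustrative)


def is_covered_statutory(m: Model) -> bool:
    """A definition frozen into the statute at enactment (mid-2024)."""
    return m.training_cost_usd > 100_000_000


# An agency-maintained definition: the thresholds live in a regulation
# that can be re-promulgated as the technology changes.
REGULATORY_DEFINITION = {"cost_usd": 100_000_000, "flop": 1e26}


def is_covered_regulatory(m: Model, rule=REGULATORY_DEFINITION) -> bool:
    """Covered if the model crosses either currently applicable threshold."""
    return (m.training_cost_usd > rule["cost_usd"]
            or m.training_flop > rule["flop"])


# A DeepSeek-style model: reportedly under $6M to train, yet frontier-capable.
cheap_frontier = Model("hypothetical-frontier", 6_000_000, 3e24)

print(is_covered_statutory(cheap_frontier))  # False: the statute misses it
# The agency, by contrast, can simply lower its thresholds by rule:
updated_rule = {"cost_usd": 5_000_000, "flop": 1e24}
print(is_covered_regulatory(cheap_frontier, updated_rule))  # True
```

The point isn't the particular numbers but where they live: re-promulgating a regulation takes months, while amending a statute can take years, if it happens at all.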

The EU AI Act provides another case study on the importance of flexibility. The Act has been praised for its inclusion of sophisticated updating mechanisms and other adaptability-increasing features. However, while the Act "delegates substantial authority to amend annexes and procedural regimes, … core definitional frameworks remain frozen."[6]

Assessments & Evaluations


Governments around the world generally seem to realize the importance of evaluating and assessing frontier models. Like information-gathering, government testing of models is both instrumentally and intrinsically valuable; in addition to producing potentially valuable information about existing models, it provides an opportunity for agencies to develop expertise in conducting and securely sharing information about assessments and evaluations.

Radical optionality doesn't dictate a particular answer to the question of whether governmental testing should be mandatory or voluntary. That said, any testing regime should include ample opportunity for the relevant agency to conduct pre-deployment, as well as post-deployment, testing of frontier models. Building capacity for pre-deployment testing is important because, at some point in the future, certain models may pose serious risks to public safety. Security flaws detected prior to the public release of a model can be addressed in a variety of ways, but the option space for addressing problems detected after a model is deployed is significantly reduced.

Securing Model Weights & Algorithmic Secrets


In recent months, there's been increasing recognition of the importance of lab security. Aschenbrenner dedicates an entire essay to this topic in Situational Awareness, concluding that security at frontier AI companies is currently so poor that there's little realistic chance of protecting model weights and algorithmic secrets from foreign adversaries in the face of a serious and sustained attempt to acquire them. Keeping frontier AI research secure is critical to retaining optionality.

There are a number of measures that governments should take as soon as possible. Perhaps most importantly, the U.S. federal government should promulgate comprehensive voluntary standards for physical and cybersecurity throughout the frontier AI development supply chain. One promising way to encourage compliance with these standards is to make compliance with them a condition of federal grants and contracts, as the Department of Defense currently does with its Cybersecurity Maturity Model Certification program.

Securing information against state-level espionage efforts is one area where certain government institutions have significantly more expertise than just about any private entity. Setting good standards, and creating efficient and secure channels for sharing security information, should therefore be viewed as a national security priority.

Hiring


The most important factor in building governmental capacity for AI governance is recruiting and retaining top-tier talent. Ideally, governments should be focused on acquiring more elite talent than is required to meet the demands of the current regulatory landscape, because the existence of a deep reserve of talent increases optionality.

Currently, however, the governments of both the EU and U.S. are struggling to hire highly qualified employees to fill critical AI-related positions. Many factors contribute to this ongoing failure: private sector compensation packages can exceed government salaries by orders of magnitude, government hiring processes in the U.S. are outdated and inefficient, and legislatures and agencies generally lack the sense of urgency that the political moment calls for.

Increasing funding and enabling higher pay is necessary, but not sufficient. New hiring and contracting authorities are needed. In the U.S., this could mean establishing a "reserve corps" of private-sector experts who could be called in to advise the government in an emergency, or reforming the Intergovernmental Personnel Act to allow it to be used to draw on private-sector AI talent. In the EU, it could mean reducing bureaucratic delay in the hiring process.

Avoiding Premature & Overbroad Preemption


Ultimately, regulating frontier AI systems in the United States should primarily be the responsibility of the federal government. Uniform federal requirements are, in principle, preferable to a patchwork of state regulations. At some point, it will become necessary for the federal government to preempt some state AI regulations. The question is not whether preemption should ever occur, but rather when and how.

Radical optionality provides a useful perspective on this problem. Consider the moratorium that was introduced as part of the recent reconciliation bill before being stripped out by a 99-1 vote in the U.S. Senate. This moratorium would have prohibited states from enforcing any law regulating "AI," broadly defined, for ten years.[7] In our opinion, this moratorium was ill-advised for a number of reasons, including its effect on optionality.

It is difficult to pass federal legislation in the United States. Practically speaking, a broad preemption bill would be both unlikely to be reversed and unlikely to be followed by any substantial federal AI legislation in the near future. In other words, preempting state AI legislation and replacing it with nothing radically reduces the available regulatory option space. State transparency bills such as SB 53 in California and New York's RAISE Act are a second-best option; eliminating the second-best options without replacing them with any federal framework, while acknowledging that a federal framework is necessary, would be incredibly foolish.

Essentially, radical optionality suggests that we should wait to preempt state AI laws until we know what we're preempting and why. This means that preemption should take place after a federal approach to a given regulatory problem has been decided, not before.

It's worth noting that the approach suggested by radical optionality is the approach that has been taken for the regulation of every past emerging technology in the history of the U.S. The approach that the moratorium attempted to take—preempting nearly all state laws regulating a given technology before any federal law had been passed—was totally unprecedented. Trying to answer the question of how regulatory responsibilities should be distributed before the contours of the problem are even clear is like trying to put together a jigsaw puzzle while blindfolded.

VI — Objections


Objection #1 — Giving the Government a Hammer

It could be argued that increasing the capacity of government agencies to take dramatic regulatory action will increase the likelihood of dramatic regulatory action being taken, perhaps prematurely or unwisely. Giving regulators hammers, in other words, might lead them to hallucinate nails. This is a legitimate concern. That said, attempting to prevent overregulation by intentionally hamstringing regulators seems short-sighted. Instead, we should guard against the risk of overzealous regulators by ensuring that democratically elected branches of government have meaningful oversight capabilities.

Consider the specific interventions recommended: information-gathering authorities, whistleblower protections, risk assessments and evaluations, personnel recruitment measures, and measures to increase lab security. These aren't weighty substantive authorities that lend themselves to abuse; they're common-sense ways of increasing regulatory capacity so that action can be taken when and if the appropriate authorities decide that action is needed.

Objection #2 — Democratic Legitimacy

It could also be argued that preserving optionality might come at an unacceptable cost to democratic legitimacy. Flexibility and democratic legitimacy are often in tension, and at the end of the day a balance has to be struck between them. Because powerful AI systems threaten to "reshape the delicate balance between state capacity and individual liberty that sustains free societies," it's vitally important for the processes by which AI is governed to be legitimate and responsive to public concerns. But failing to adequately prepare government institutions increases the risk of extremely undemocratic outcomes, like the "Manhattan Project" scenario that Aschenbrenner predicts. Maintaining optionality is a way of increasing the odds that society will have the time and resources necessary to choose between options in a democratically legitimate way.

Objection #3 — Private Governance Is All You Need

Dean Ball has argued that AI governance should depend on private governance mechanisms "just as much, if not more than" government laws and regulations. This may well be correct, but it does not mean that government institutions have no role to play.

The advantages of private governance mechanisms in the AI governance context are numerous: they are more flexible, they can take advantage of the resources and personnel of the sophisticated actors developing frontier AI models, and it will likely be easier to incorporate AI tools into private governance institutions than into government agencies. But a fundamental feature of the leading proposals for private governance is a highly competent and well-resourced government office charged with overseeing the private regulators. In other words, even these proposals acknowledge that private governance alone is insufficient.

Since transformative AI would profoundly affect the lives of everyone in the world, it simply isn't acceptable that important decisions about how its benefits and risks should be weighed should be left solely to the companies creating it. This isn't a matter of trusting or not trusting AI labs, it's simply a recognition of the reality that private for-profit companies should, do, and in many cases are legally obligated to prioritize the interests of their shareholders above the interests of the general public.

No amount of ingenuity in designing novel private governance structures is likely to create an adequate substitute for governments' monopoly on violence, or to confer democratic legitimacy on private corporations that do not and should not primarily serve the public interest. It's true that government institutions are often cumbersome, bureaucratic, and slow to act, but this just means that laying the groundwork to minimize delay and maximize competence is all the more important.

VII — Conclusion


Recall the three basic assumptions from the beginning of this essay. We assume that there is a real possibility that transformative AI will be developed within the next fifteen years or so; we predict that the benefits of transformative AI could outweigh its risks, especially if sensible governance measures are implemented; and we don't think anyone knows for sure what the capabilities and tendencies of transformative AI will be or what will be the best way to govern it.

If you generally agree with the above, then you should support an approach to AI governance that focuses on maintaining optionality. Instead of attempting to anticipate exactly how transformative AI will be developed and what it will be capable of, governance efforts should focus on ensuring that, when and if important decisions need to be made, governments have the institutional capacity to make them well. This can be accomplished by taking steps that will be helpful regardless of when and how transformative AI is developed, without burdening innovation to any significant extent. The concrete measures we've suggested, from establishing procedures and channels for governmental information-gathering to securing model weights and algorithmic secrets, would be a good start.

VIII — References


[1] Defined as "AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution." See Ross Gruetzemacher and Jess Whittlestone, The Transformative Potential of Artificial Intelligence, 135 Futures 102884 (2022).
[2] Regulation isn't the only factor that has contributed to the relative stagnation of Europe's tech industry, and may not even be the primary cause, although it very likely plays a role. See Ian Hogarth, Can Europe Build Its First Trillion-Dollar Start-Up?
[3] "Dual-use" technologies have peaceful civilian applications, but can also be used for military, terroristic, or otherwise harmful purposes. The U.S. government has described advanced general-purpose AI models with certain potentially dangerous capabilities as "dual-use foundation models."
[4] Of course, this benefit hasn't always resulted from market forces alone. Government intervention to address harmful externalities has played an important role, as when the depletion of Earth's ozone layer was successfully reversed as a result of the Montreal Protocol.
[5] See Matthew C. Stephenson, Information Acquisition and Institutional Design, 124 Harv. L. Rev. 1422 (2011); Cary Coglianese et al., Seeking Truth for Power, 89 Minn. L. Rev. 277 (2004).
[6] The AI Office states in its Q&A concerning GPAI models that the "specific circumstances in which a downstream entity becomes a provider of a new model is a difficult question with potentially large economic implications."
[7] The final version of the moratorium would have lasted for only five years and exempted some state laws from preemption, including "generally applicable laws." It also applied only to states that accepted certain federal broadband infrastructure funds.