AI Governance

Structuring Corporations for Safety from Technology, and Ourselves

Boston Nyer
August 17, 2023
5 min read

“I tend to think that most fears about A.I. are best understood as fears about capitalism.”

That’s Ted Chiang, the award-winning science fiction author, speaking on Ezra Klein's podcast. He offers a thought experiment:

“How much would you fear [Artificial Intelligence] if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.”

The answer: less than we do now.

AI safety leaders often talk about the “Alignment Problem” - the idea that AI could become so powerful that, if given bad incentives, it would behave in ways detrimental to society.

But perhaps the thing that’s misaligned isn’t the AI. Maybe what is misaligned is the system the AI is built to serve.

Maybe the misalignment is with capitalism.

[Image: the problem is capitalism]

Fine-tuning and optimizing the algorithms that power tools like ChatGPT is, of course, crucial, but the core of AI safety extends beyond the technology itself.

After all, humans have inflicted countless atrocities and untold suffering with technologies far more primitive.

In order to build AI safely, what we must actually guardrail is us.

Governance for AI Safety

In order to properly guardrail AI, we must first build the right structures and incentives, and that starts with governance.

For most organizations, that basically means one thing: building a better board.

As with most nonprofits we serve, our board is a crucial part of our operation, and the single most important piece in our governance puzzle.

But our board is also not the right group to govern AI safety for us.

We love our board members, but they are not experts in AI. And that’s okay - most people aren’t!

Plus, our board members are busy people. They often have other jobs (or at least other engagements), and their role on our board extends far beyond AI - they’re accountable for governing our entire operation.

So for something as crucial as AI, we want experts, and we want them focused.

Enter the Board of Trustees.

A Trust That Governs The Board

You would be forgiven if you were confused about the difference between a board of trustees and a board of directors.

The differences are technical in nature and the terms are often used imprecisely.

[Image: board of trustees vs. directors]

For our purposes, the primary difference is simple: our board of directors oversees operations and finances. Our board of trustees oversees AI.

Think: Directors = Operations, Trust = AI.

Without diving into the technical legalese, the basic structure is as follows:

  • On issues relating to AI safety, the Board of Directors is required in the bylaws to refer the decision to the Board of Trustees.
  • The Board of Trustees holds a legally binding mandate to support long-term ethics, good governance, and accountability on issues related to AI safety. For AI issues, they are required to vote in accordance with these values.
  • Critically, the Board of Trustees is not to consider the health of our business when making their decision and voting.
  • In order to preserve aligned incentives, the Board of Trustees holds no financial affiliation with the organization.
  • Whatever decision the Board of Trustees reaches on issues of AI safety is required to be adopted by the Board of Directors and cannot be amended or overruled.
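
For readers who think in code, here is a minimal sketch of that routing rule in Python. Every name in it (Issue, BoardOfTrustees, BoardOfDirectors) is hypothetical, invented to illustrate the bylaws described above rather than any real system:

    from dataclasses import dataclass

    @dataclass
    class Issue:
        description: str
        relates_to_ai_safety: bool

    class BoardOfTrustees:
        # Votes purely on ethics, good governance, and accountability.
        # The health of the business is deliberately out of scope.
        def decide(self, issue: Issue) -> str:
            return f"Trustees' ruling on: {issue.description}"

    class BoardOfDirectors:
        # Oversees operations and finances, but is bound by the bylaws
        # to refer AI safety issues to the Trust.
        def __init__(self, trust: BoardOfTrustees):
            self.trust = trust

        def decide(self, issue: Issue) -> str:
            if issue.relates_to_ai_safety:
                # Mandatory referral: the directors cannot amend or
                # overrule whatever the Trust decides.
                return self.trust.decide(issue)
            return f"Directors' ruling on: {issue.description}"

    # An AI safety question is routed to the Trust automatically.
    board = BoardOfDirectors(trust=BoardOfTrustees())
    print(board.decide(Issue("Ship the new model?", relates_to_ai_safety=True)))

The point of the sketch is the one-way flow: on AI safety issues, the directors' decide method has no way to modify what the Trust returns.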

So, we have designed a Board of Trustees that is empowered to make the hard decisions. It’s what we want them to do.

[Image: corporate governance with a trust]

The intention with this structure and design is to empower Trustees to make decisions that run upstream of the incentives of capitalism.

To choose ethics over profits.

Building a Great AI Safety Board

In order for the Board of Trustees to have any hope of succeeding, it must be made up of actual experts in AI and AI Safety.

But again, we are dealing with people. And whenever there are people, there are incentives. So our Trustees must meet three requirements:

  • No equity: Trustees can’t hold a financial stake in the organization.
  • Expertise: Trustees must be genuine experts in AI and AI safety.
  • Focus: ideally, this kind of governance is what they do.

Scaling The Trust

One clear flaw in this plan emerges quickly: the actual logistics of constructing a trust like this are daunting. Finding people who are expert enough in AI AND willing to donate their time is hard.

So, how do we stretch our pool of AI knowledge? By distributing the resources of the Trustees.

That is, one Trust, composed of excellent trustees, can govern many businesses.

[Image: scaling the AI safety trust]

The simple reality is that while AI is developing rapidly, the issues companies have run into (at least so far) are largely similar. A single board can issue guidance on the known issues, and that guidance will apply across a broad swath of current AI businesses.

The Final Word

As a technology, AI is still in its infancy. But capitalism is not.

As we sprint into the next chapter of technological evolution, we must remember that for now at least, the people making all the decisions are still human. And we must design our systems of governance with them in mind.
