Inside Philanthropy

Who's Funding What & Why
Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security

Tate Williams | March 20, 2019


The Open Philanthropy Project is one of the biggest practitioners of effective altruism, a rationalist approach to giving that starts with a wide range of issues and then attempts to pick priorities based on the total good possible per dollar granted.

The cause probably most associated with effective altruism is global health, where it’s comparatively easy to measure how funding impacts lives. Open Philanthropy has given millions in recent years to causes such as preventing malaria and controlling schistosomiasis, a disease caused by parasitic worms.

But the logic of effective altruism has also led the organization down some unexpected paths. For example, if there’s even a small chance that giving can prevent a global catastrophe that could impact generations of humanity, wouldn’t that be warranted?

That’s the premise that led Open Philanthropy to explore the potential risks of artificial intelligence, one of two topics within its “global catastrophic risks” initiative (the other being biosecurity). The organization has given over $110 million to date to the issue, including its largest grant on the topic—$55 million to establish a new research and policy analysis center at Georgetown focused on AI.

That recent grant makes the Open Philanthropy Project potentially the largest funder backing oversight of the fast-moving field of AI, an area of giving that’s drawn interest from donors like Elon Musk and Reid Hoffman in recent years. The grantmaker has a particular focus on long-term and geopolitical risks as AI becomes more ubiquitous. The newly established Center for Security and Emerging Technology (CSET) will provide research and analysis to policymakers on AI and other technologies.

“We think that the future of AI will have enormous benefits to humanity, and also pose some significant risks,” says Senior Research Analyst Luke Muehlhauser. “Open Philanthropy is really focused on that long-term consideration in AI development, and making sure that humanity can seize those benefits, and avoid the risks, and make sure the benefits are broadly distributed.”

Of course, that’s a profoundly unpredictable space, and oversight efforts are vastly overshadowed by the corporate and government money going toward advancing AI technology. This makes the outcomes of Open Philanthropy’s investments in AI quite uncertain. But Muehlhauser explained the demand the grantmaker is trying to meet and why it’s worth the risk.

Rewards and Risks

Artificial intelligence has become a hot topic for funders, with massive private sector investment pouring into research and philanthropists coming at it from many angles. 

We’ve been documenting this giving, with standouts including the late Paul Allen, auto and tech companies, and a bunch of grants to boost universities in the field. Our friends over at the Chronicle of Philanthropy totaled up some $583 million in donations toward this space since 2015. 

Particularly fascinating are efforts to explore and highlight potential negative consequences. That’s included a philanthropy-backed center anchored at MIT and Harvard, looking at the legal and ethical concerns of AI, and how the technology might impact areas like criminal justice and democratic norms. 

Open Philanthropy Project CEO Holden Karnofsky first took an interest in the issue around 2007, and over time, went from skeptic to convert. “I previously saw this as a strange preoccupation of the [effective altruism] community, and now see it as a major case where the community was early to highlight an important issue,” he wrote in 2016. 

The organization’s decision-makers blog at length about the evolution of their thinking on issues, part of the effective altruism goal of starting out agnostic about causes and reasoning your way toward grantmaking decisions. The outfit is the result of a merger between Good Ventures, the philanthropy of Dustin Moskovitz and Cari Tuna, and GiveWell, a nonprofit that evaluates causes for donors based largely on effective altruism principles. Those principles are often debated, meaning different things to different people, but Open Philanthropy considers itself an effective altruist organization based on its overall goals.

Nearly everything about the organization comes across as highly rational, and fittingly, Muehlhauser isn’t one to throw around doomsday scenarios about AI. But that’s partially due to the amount of uncertainty involved.

“Because AI developments are moving so rapidly and AI could be a very general purpose technology, and could transform a lot of different parts of society, there are huge benefits there, and also huge risks,” he says. 

One argument for concern he cites is the paper “Technology Roulette,” by national security consultant Richard Danzig. Danzig makes the case that decision-makers in security tend to pursue technological superiority, but that doesn’t necessarily lead to greater security in the case of AI and other tech, as it expands the risk of accidents, unanticipated effects, misunderstandings and sabotage.

“The multinational reliance on ever-advancing technological capabilities is like loading increasing numbers of bullets into increasing numbers of revolvers held to the head of humanity. Even if no one ever wants a gun to go off, inadvertent discharges may occur,” Danzig writes. 

The Case for Taking on Catastrophe

Open Philanthropy made the potential risks of AI a priority for giving in 2016. In addition to the latest grant establishing CSET at Georgetown, the funder supports research fellowships and a number of universities and institutes working on the topic.

Such a potentially distant and uncertain threat might seem an unlikely priority for a funder that takes such a calculated perspective. And the Open Philanthropy Project does give quite a lot to the effective altruism standby of global health and development, which, at more than $327 million in giving, is its largest focus area to date. 

Muehlhauser explains that all of the organization’s decisions hinge on three criteria—importance, tractability and neglectedness. When it came to AI, he says, Open Philanthropy judged it to be an issue that was very important, and yet mostly neglected by the funding community. The tractability—whether they could make a difference—was a lot less certain, and still is, he says.

“Because we’re focused on those long-term issues and those global issues, we always have some uncertainty about how much impact we can really have,” he says. 

Ultimately the team deemed that taking on AI was worth it, based on another principle they embrace called “hits-based giving.” This is a VC-like approach that assumes much of its funding won’t have the intended effect, which is OK as long as some funding hits in a big way. 

“We are open to things that have even potentially a quite small chance of having a positive impact, so long as if that impact happened, it would be large enough,” Muehlhauser says. So the idea is that Open Philanthropy balances some giving with uncertain outcomes against grants that have more tangible impacts.
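The expected-value logic behind hits-based giving can be sketched with some simple arithmetic. The numbers below are hypothetical, purely for illustration, and are not Open Philanthropy's actual figures or model:

```python
def expected_impact(probability_of_success: float, impact_if_success: float) -> float:
    """Expected impact of a grant: chance of success times payoff if it succeeds."""
    return probability_of_success * impact_if_success

# Hypothetical grants, in arbitrary units of "good done."
safe_grant = expected_impact(0.9, 100)       # near-certain, modest impact -> 90.0
long_shot = expected_impact(0.01, 20_000)    # 1% chance, enormous impact -> 200.0

# On expected value alone, the long shot outperforms the safe bet,
# even though it fails 99 times out of 100.
print(long_shot > safe_grant)  # prints: True
```

This is why a portfolio of mostly "missed" grants can still come out ahead, as long as the occasional hit is large enough.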

And if you’re wondering (as I did) why climate change wasn’t a more obvious global catastrophe to pick as a priority, Open Philanthropy’s spokesman points out that they have actually given millions related to climate change, and deem it a major issue that could become a larger focus in the future. But in choosing initial priorities, other issues stood out as more neglected and tractable.

A Gap Between Tech and Policy

Another factor Open Philanthropy takes into account is how much of a logical fit a topic is for philanthropy specifically. In the case of AI, the profit motive means the private sector is going to go full throttle on advancing the tech. Meanwhile, though, if we’re looking far enough out beyond election cycles, there’s not as much incentive for government to prioritize concerns about the long-term consequences of AI’s rise. 

Policymakers have a hard time even grasping what the threats posed by AI may look like, which was a big motivator behind the Georgetown grant. “We noticed over the last couple of years that there was a lot of demand for advice about AI policy in D.C.,” Muehlhauser says. 

The concept for CSET came from its founding director, Jason Matheny, who formerly worked for the national intelligence R&D agency IARPA. The idea is to study security impacts of technologies like AI and provide a level of analysis to policymakers that previously had not existed. 

At this stage, part of Open Philanthropy Project’s goal regarding AI is simply being, well, open about where it might go. Its theory of change is extremely simple, and “hopefully, properly humble,” Muehlhauser says—fund great people in the field and things will probably go better than if they didn’t. 

There’s been a lot written about the concept of effective altruism, including some fierce criticism. I tend to have mixed feelings about the movement, which often feels too prescriptive and top-down. At the end of the day, I also find that it actually has a lot more in common with traditional philanthropy in terms of goals—taking risks, finding a niche, having an impact, etc.—than the way it’s sometimes portrayed.

At the same time, it is that openness to possibilities—from animal welfare to runaway tech—and the ongoing transparent analysis of its own choices that make the Open Philanthropy Project such a compelling endeavor.

