In Defense of Effective Altruism

The Worst Form of Ethics, Except For All The Others

Note: I like to take on hard challenges with my writing. This is probably the worst possible month to try to defend effective altruism, with SBF being convicted of fraud and OpenAI falling apart. But the best time to defend something valuable is when everyone else is rushing to attack it. So here’s that defense:

This would’ve been a lot easier in August 2022.

  1. William MacAskill had just released What We Owe The Future, wowing hosts like Trevor Noah and Tim Ferriss with his passionate arguments for effective altruism. Elon Musk even recommended MacAskill’s book, saying “Worth reading. This is a close match for my philosophy.”

  2. Sam Bankman-Fried was still worth $24B and promising to donate it all to EA causes.

  3. And the biggest criticism was that EAs were too focused on AI safety.

Flash forward 15 months, and the EA movement is in trouble.

  1. MacAskill’s long-termism was criticized for ignoring the present.

  2. SBF’s EA-coded largesse ended up being cover for massive fraud.

  3. And now EA is even taking the blame for OpenAI firing Sam Altman.

RIP: Effective Altruism (2011-2023). Right?

“The measure of a man’s life is not the number of his breaths, but the action he takes.” – attributed to Aristotle

EA isn’t new. It’s just a modern rebranding of the ancient ethical principle of consequentialism.

We’ve been having this debate for centuries.

Fundamentally, there are three main schools of ethics:

  • Virtue ethics: judge actions by the character they express (Aristotle).

  • Deontology: judge actions by rules and duties (Kant).

  • Consequentialism: judge actions by their outcomes (Bentham, Mill).

In practice, most of the world, including our free markets, works on consequentialism (i.e. EA).

  • A company’s stock is not valued by God or virtue, but by a prediction of future value.

  • Techno-optimism, e/acc, etc. are also consequentialist movements.

Sure, it’s time to build. But what are we building?

Before you say “everything”, beware:

“Mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.” – Elon Musk

We can build nuclear power and nuclear weapons, engineer new vaccines and viruses, and create AI with the power to advance or destroy humanity.

Safetyism is not the path forward. It’s weak virtue ethics.

With nuclear, nations got the weapons but we didn’t get the power.

This can’t happen with biology and AI.

But we shouldn’t Leeroy Jenkins our way into the future either.

“I certainly don’t think all gas, no brakes toward the future. But I do think we should go to the future… And maybe relative to most people who work on A.I., that does make me an accelerationist. But compared to those accelerationist people, I’m clearly not them. So, I think you want the CEO of this company to be somewhere in the middle — which I think I am.” – Sam Altman

Decisions that affect the future of humanity deserve nuance.

How do we get more AI knowledge and less AI authoritarianism?

More biologics and fewer bioweapons?

There has to be a way to weigh these existential risks against their rewards.

  • What if we all focused on maximizing our net positive impact on the world?

    • Congrats, we just invented effective altruism again.

We need to return EA’s focus to maximizing the impact of our charity and work.

  • That starts with addressing our public and private failures.

  • It also means passionately fighting for what’s working in EA.

In EA’s defense, I offer four arguments, from negative to positive:

This ended up being one of my longest essays ever, clocking in at 2,369 words! And I could’ve written much more. If there’s anything you think I should add, please reply to this email and let me know!

Thanks,
Neil
