Episodes
Published 04/21/24
Forgive the sound quality on this episode; I recorded it live in front of an audience on a platform floating in a lake during the 2024 solar eclipse. This is a standalone essay by David Chapman on metarationality.com. How scientific research is like cunnilingus: a phenomenology of epistemology. https://metarationality.com/going-down-on-the-phenomenon You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If...
Published 04/17/24
What Is The Role Of Intelligence In Science? Actually, what are “science” and “intelligence”? Precise, explicit definitions aren’t necessary, but discussions of Transformative AI seem to depend implicitly on particular models of both. It matters if those models are wrong. https://betterwithout.ai/intelligence-in-science Katja Grace, “Counterarguments to the basic AI x-risk case”. https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/   What Do Unusually Intelligent...
Published 04/07/24
Radical Progress Without Scary AI: Technological progress, in medicine for example, provides an altruistic motivation for developing more powerful AIs. I suggest that AI may be unnecessary, or even irrelevant, for that. We may be able to get the benefits without the risks. https://betterwithout.ai/radical-progress-without-AI What kind of AI might accelerate technological progress?: “Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe....
Published 03/10/24
Recognize that AI is probably net harmful: Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe. https://betterwithout.ai/AI-is-harmful Create a negative public image for AI: Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for...
Published 02/18/24
“Apocalypse now” identified the corrosive influence of new viral ideologies, created unintentionally by recommender systems, as a major AI risk. These may cause social collapse if not tackled head-on. You can resist. https://betterwithout.ai/spurn-artificial-ideology Announcement tweet for the Opening Awareness, Opening Rationality discussion group starting on February 1: https://twitter.com/openingBklyn/status/1751314312415567956 Document with more details: ...
Published 02/04/24
Current AI practices produce technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of specific risks of current AI technology, and can lead to safer technologies.   https://betterwithout.ai/fight-unsafe-AI You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the...
Published 01/14/24
The technologies underlying current AI systems are inherently, unfixably unreliable. They should be deprecated, avoided, regulated, and replaced. https://betterwithout.ai/mistrust-machine-learning You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Music is by Kevin MacLeod. This podcast is under a Creative...
Published 12/31/23
Gaining unauthorized access to computer systems is a key source of power in many AI doom scenarios. That is easy now: there are scant incentives for serious cybersecurity, so nearly all systems are radically insecure. Technical and political initiatives must mitigate this problem. https://betterwithout.ai/cybersecurity-vs-AI   You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks   If you like the show,...
Published 12/17/23
Practical Actions You Can Take Against AI Risks: We can and should protect against current and likely future harmful AI effects. This chapter recommends practical, near-term risk reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments. https://betterwithout.ai/pragmatic-AI-safety End Digital Surveillance: Databases of personal information collected via internet surveillance are a main resource for...
Published 12/10/23
This concludes the "Apocalypse Now" section of Better Without AI.   AI systems may cause near-term disasters through their proven ability to shatter societies and cultures. These might cause human extinction, but are more likely to scale up to the level of twentieth-century dictatorships, genocides, and world wars. It would be wise to anticipate possible harms in as much detail as possible.   https://betterwithout.ai/incoherent-AI-apocalypses   You can support the...
Published 12/03/23
Who is in control of AI? - It may already be too late to shut down the existing AI systems that could destroy civilization. https://betterwithout.ai/AI-is-out-of-control What an AI apocalypse may look like - Scenarios in which artificial intelligence systems degrade critical institutions to the point of collapse seem to me not just likely, but well under way. https://betterwithout.ai/AI-safety-failure   This episode mentions the short story "Sort By Controversial" by Scott...
Published 11/12/23
"In this audiobook... A LARGE BOLD FONT IN ALL CAPITAL LETTERS SOUNDS LIKE THIS."   Apocalypse now - Current AI systems are already harmful. They pose apocalyptic risks even without further technology development. This chapter explains why; explores a possible path for near-term human extinction via AI; and sketches several disaster scenarios.   https://betterwithout.ai/apocalypse-now   At war with the machines - The AI apocalypse is now.   https://betterwithout.ai/AI-already-at-war ...
Published 11/06/23
Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.   https://betterwithout.ai/fear-AI-power   For much of the AI safety community, the central question has been “when will it happen?!” That is futile: we don’t have a coherent description of what “it” is, much less how “it” would come about. Fortunately, a prediction wouldn’t be useful anyway. An AI...
Published 10/22/23
Many people call the future threat “artificial general intelligence,” but all three words there are misleading when trying to understand risks.   https://betterwithout.ai/artificial-general-intelligence   AI may radically accelerate technology development. That might be extremely good or extremely bad. There are currently no good explanations for how either would happen, so it’s hard to predict which, or when, or whether. The understanding necessary to guide the future to a good outcome...
Published 10/15/23
Thanks for your patience while I ran Fluidity Forum. We now resume "Better Without AI" by David Chapman.   Speculations about autonomous AI assume simplistic theories of motivation. They also conflate those with ethical theories. Building AI systems on these ideas would produce monsters. https://betterwithout.ai/AI-motivation   Coherent Extrapolated Volition ...
Published 10/08/23
It’s a mistake to think that human-like agency is the only dangerous kind. That risks overlooking AIs causing agent-like harms in inhuman ways. https://betterwithout.ai/diverse-agency#fn_meme_critics You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks   If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a...
Published 09/18/23
Most apocalyptic scenarios involve an AI acting as an autonomous agent, pursuing goals that conflict with human ones. Many people reject AI risk, saying that machines can’t have real goals or intentions. However, agency seems nebulous, and subtracting “real” agency from the scenario doesn’t seem to remove the risk.   https://betterwithout.ai/agency   A video in which white blood cells look as if they have agency:   https://www.youtube.com/watch?v=3KrCmBNiJRI   The US National Security...
Published 09/10/23
We have a powerful intuition that some special mental feature, such as self-awareness, is a prerequisite to intelligence. This causes confusion because we don’t have a coherent understanding of what the special feature is, nor what role it plays in intelligent action. It may be best to treat mental characteristics as in the eye of the beholder, and therefore mainly irrelevant to AI risks. https://betterwithout.ai/mind-like-AI You can support the podcast and get episodes a week early, by...
Published 09/04/23
Scary AI: Apocalyptic AI scenarios usually involve some qualitatively different future form of artificial intelligence. No one can explain clearly what would make that exceptionally dangerous in a way current AI isn’t. This confusion draws attention away from risks of existing and near-future technologies, and from ways of forestalling them. https://betterwithout.ai/scary-AI Superintelligence: Maybe AI will kill you before you finish reading this section. The extreme scenarios typically...
Published 08/27/23
We now begin narrating the book Better Without AI, by David Chapman.   https://betterwithout.ai/only-you-can-stop-an-AI-apocalypse   You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Published 08/20/23
Published 05/22/23
Published 05/14/23