Episodes
Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3
Published 10/23/18
Ted interviews Jacob Ward, former editor of Popular Science and journalist at many outlets. Links: Jake's article about the book he's writing, Black Box; Jake's website, JacobWard.com; implicit bias tests at Harvard. We discuss the idea that we're currently using narrow AIs to inform all kinds of decisions, and that we're trusting those AIs way more than […]
Published 09/05/18
Sane or insane?
Published 07/23/18
We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!
Published 07/06/18
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3
Published 05/03/18
There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
Published 04/19/18
We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
Published 04/05/18
Ted gave a live talk a few weeks ago.
Published 03/26/18
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
Published 03/16/18
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?
Published 03/02/18
Timeline For Artificial Intelligence Risks. Peter's Superintelligence Year predictions (5% / 50% / 95% chance): 2032 / 2044 / 2059. You can get in touch with Peter at HumanCusp.com and [email protected]. For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3
Published 02/13/18
SpectreAttack.com http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3
Published 01/30/18
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be. Links: Wikipedia's list of cognitive biases; Alpha Zero; Virtual Reality. Recorded January 7, 2018; originally posted to Concerning.AI. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3
Published 01/16/18
If the Universe Is Teeming With Aliens, Where Is Everybody? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3
Published 01/02/18
Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and clearly making a positive difference in lots of people's lives right now. Longer term, she doesn't see much to worry about.
Published 12/19/17
Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.
Published 12/05/17
We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a three-number system for talking about our own predictions and asked our community on Facebook to play along […]
Published 11/21/17
Great voice memos from listeners led to interesting conversations.
Published 11/07/17
We continue our miniseries about paths to AGI. Links: Sam Harris's podcast episode about the nature of consciousness; the Robot or Not podcast. See also: 0050: Paths to AGI #3: Personal Assistants; 0047: Paths to AGI #2: Robots; 0046: Paths to AGI #1: Tools. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3
Published 10/24/17
Rodney Brooks's article: The Seven Deadly Sins of Predicting the Future of AI
Published 10/10/17
Third in a series about the future of current narrow AIs.
Published 09/25/17
Read After On by Rob Reid before you listen, or because you listen.
Published 09/11/17
This is our second episode thinking about possible paths to superintelligence, focusing on one kind of narrow AI each show. This episode is about embodiment and robots. It's possible we never really agreed about what we were talking about and need to come back to robots. Future ideas for this series include: personal assistants (Siri, Alexa, etc.); non-player characters; search engines (or maybe those just fall under tools); social networks or other big data / working on completely different time /...
Published 09/05/17
For show notes, please see https://concerning.ai/2017/08/29/0048-ai-xprize-and-thrival-festival-special-mini-episode/
Published 08/29/17
How might we get from today's narrow AIs to AGI? This episode's focus is tools.
Published 08/22/17