Episode 15 - AI Controllability, AGI, and Possible AI Futures with Roman Yampolskiy
Description
For this episode I was very pleased to be once again joined by Roman Yampolskiy. Dr. Yampolskiy is a professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering at the University of Louisville in Kentucky and has authored dozens of peer-reviewed academic papers and several books. In this discussion, I first asked my guest about the recent AGI-21 conference organised by Ben Goertzel’s SingularityNET, held in San Francisco from the 15th to the 18th of October, to which he remotely contributed. Roman summarised his presentation on AI Controllability, an incredibly important topic from an AI risk standpoint, but one that has not received nearly enough attention. The conference provided a neat segue into the topic comprising the bulk of our discussion, namely AGI, or artificial general intelligence. I threw plenty at my interviewee, primarily perspectives gleaned from papers, books, and interviews I had recently encountered, but Roman parried most of my challenging salvos with impressive aplomb. I then shifted focus to some provocative possible future scenarios, both positive and negative, involving AI systems gaining greater intelligence and competency. Lastly, we ventured onto more personal terrain as I asked Roman about his family’s move to the United States, his interest in computers, his intellectual influences, and the secret to his astonishing productivity.

Roman Yampolskiy’s page at the University of Louisville: http://cecs.louisville.edu/ry/
List of Yampolskiy’s papers at ResearchGate: https://www.researchgate.net/profile/Roman-Yampolskiy
Yampolskiy’s ‘AI Risk Skepticism’ paper: https://www.researchgate.net/publication/351368775_AI_Risk_Skepticism
AGI Control Theory presentation at AGI-21: https://www.youtube.com/watch?v=Palb2Ue_RjI
‘Human ≠ AGI’ paper: https://arxiv.org/ftp/arxiv/papers/2007/2007.07710.pdf
‘Personal Universes’ paper: https://arxiv.org/ftp/arxiv/papers/1901/1901.01851.pdf
‘Here’s Why We May Need to Rethink Artificial Neural Networks’ by Alberto Romero: https://towardsdatascience.com/heres-why-we-may-need-to-rethink-artificial-neural-networks-c7492f51b7bc
‘Evil Robots, Killer Computers, and Other Myths’ by Steve Shwartz: https://www.aiperspectives.com/evil-robots
Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1
Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious
More Episodes
If I were to hazard a guess, the odds are far more likely that someone has heard of climate change than of the Anthropocene. Use of the term has exploded since it was coined by Nobel Prize-winning chemist Paul Crutzen in 2000, particularly in academic circles, but also including...
Published 12/01/21
This episode of Skeptically Curious features the second interview with Dr. Kent Kiehl, one of the foremost contemporary experts on psychopathy, and author of The Psychopath Whisperer. This exceptional book recounts Dr. Kiehl’s illustrious career while also serving as a lucid guide to the latest...
Published 11/03/21