Description
For this episode I was delighted to be joined by Dr. Roman Yampolskiy, a professor of Computer Engineering and Computer Science at the University of Louisville. Few scholars have devoted as much time to seriously exploring the myriad threats potentially inherent in the development of highly intelligent machines as Dr. Yampolskiy, who established the field of AI Safety Engineering, also known simply as AI Safety. After a preliminary inquiry into his background, I asked Roman Yampolskiy to explain deep neural networks, a type of artificial neural network. One of the most important topics in AI research is what is referred to as the Alignment Problem, which my guest helped to clarify. We then moved on to his work on two other vitally significant issues in AI, namely understandability and explainability. I then asked him to provide a brief history of AI Safety, which, as he revealed, built on Eliezer Yudkowsky’s ideas of Friendly AI.

We discussed whether interest in the risks of AI is growing among researchers, the perverse incentive that exists among those in this industry to downplay the risks of their work, and how to ensure greater transparency, which, as you will hear, is worryingly far more difficult than many might assume, given the inherently opaque way deep neural networks perform their operations. I homed in on the issue of massive job losses that increasing AI capabilities could potentially engender, as well as my perception that many who discuss this topic downplay the socioeconomic context within which automation occurs.

After I asked my guest to define artificial general intelligence, or AGI, and superintelligence, we spent considerable time discussing the possibility of machines achieving human-level mental capabilities. This part of the interview was the most contentious and touched on neuroscience, the nature of consciousness, mind-body dualism, the dubious analogy between brains and computers that has been all too pervasive in the AI field since its inception, as well as a fascinating paper by Yampolskiy proposing to detect qualia in artificial systems by testing whether they perceive the same visual illusions as humans. In the final stretch of the interview, we discussed the impressive language-based system GPT-3, whether AlphaZero is the first truly intelligent artificial system, as Garry Kasparov claims, the prospects of quantum computing for achieving AGI, and, lastly, what my guest considers the greatest AI risk factor: “purposeful malevolent design.” While this far-ranging interview, with its many concepts raised and names dropped, sometimes veered into weeds some might deem overly specialised or technical, I nevertheless think there is plenty to glean about a range of fascinating, not to mention pertinent, topics for those willing to stay the course.
Roman Yampolskiy’s page at the University of Louisville: http://cecs.louisville.edu/ry/
Yampolskiy’s papers: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en
Roman’s book, Artificial Superintelligence: A Futuristic Approach: https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1482234432
Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1
Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious