October 28, 2009 Episode



Eliezer Yudkowsky (co-founder and research fellow of the Singularity Institute for Artificial Intelligence) is today's featured guest.

Topics: the Singularity and the creation of Friendly AI; his estimate of the probability of success in making a Friendly AI; and why achieving AI through evolutionary software might be monumentally dangerous. He also discusses human rationality, including: the percentage of people today who can be considered rational; his own efforts to increase that number; how listeners can pursue greater rationality in their own thinking; the benefits of greater rationality; and how much success can be expected in this pursuit.

Hosted by Stephen Euin Cobb, this is the October 28, 2009 episode of The Future And You. [Running time: 30 minutes] (This interview was recorded on October 4, 2009 at the Singularity Summit in New York City.)

Eliezer Yudkowsky is an artificial intelligence researcher concerned with the Singularity and an advocate of Friendly Artificial Intelligence. He is the author of the Singularity Institute for Artificial Intelligence publications Creating Friendly AI (2001) and Levels of Organization in General Intelligence (2002). His most recent academic contributions include two chapters in Oxford philosopher Nick Bostrom's edited volume Global Catastrophic Risks.

Aside from research, he is notable for explaining technical subjects in non-academic language, particularly on rationality, as in his article An Intuitive Explanation of Bayesian Reasoning. Along with Robin Hanson, he was one of the principal contributors to Overcoming Bias, a blog sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found LessWrong.com, a community blog devoted to refining the art of human rationality.