
Harry Potter and the Methods of Rationality by Eliezer Yudkowsky






"Find whatever you're best at; if that thing that you're best at is inventing new math of artificial intelligence, then come work for the Singularity Institute. If the thing that you're best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute, where it will be used by other people." - Eliezer Yudkowsky on how to save the human race

Yudkowsky believes he has identified a "big problem in AI research": without our evolutionary history, we cannot assume an AI would care about humans or ethics. Believing such AI is imminent, he has taken it upon himself to create a Friendly AI (FAI), one that won't kill us, inadvertently or otherwise. "AI foom," short for "recursively self-improving Artificial Intelligence engendered singularity," comes from these ideas:

- An AI-based singularity will occur soon.
- AIs would want to improve themselves, and humans would want them to improve themselves.
- Human-level or better AIs could not be imprisoned or threatened with having their plugs pulled, because they would talk their way out of the situation. Interestingly, Yudkowsky claims to have tested this idea by role-playing the part of a supersmart AI, with a lesser mortal playing the jailer. However - the reader may note the beginning of a pattern here - the transcripts are unpublished.
- AIs will, at some point, gain virtually unlimited control over the physical world through nanotechnology.

His friend, the transhumanist philosopher and existential risk theorist Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, wrote Superintelligence: Paths, Dangers, Strategies, a book on the above ideas, in 2014.







