LessWrong.

LessWrong (aka Less Wrong) is a discussion forum founded by Eliezer Yudkowsky, focused on rationality and futurist thinking. It is operated by the Machine Intelligence Research Institute.

History.

According to the LessWrong FAQ, the site developed out of Overcoming Bias, an earlier group blog focused on human rationality. Overcoming Bias originated in November 2006, with artificial intelligence (AI) theorist Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. LessWrong has been closely associated with the effective altruism movement; the effective-altruism-focused charity evaluator GiveWell has benefited from outreach to LessWrong.

Roko's basilisk.

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures anyone who does not work to bring the system into existence. The thought experiment came to be known as "Roko's basilisk," based on Roko's claim that merely hearing about it would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, later writing that he did so because although Roko's reasoning was mistaken, the topic should not be publicly discussed in case some version of the argument could be made to work. Discussion of Roko's basilisk was banned on LessWrong for several years thereafter.

Media coverage.

LessWrong has been covered in Business Insider and Slate, and core concepts from LessWrong have been referenced in columns in The Guardian. It has been mentioned briefly in articles related to the technological singularity and the work of the Machine Intelligence Research Institute (formerly called the Singularity Institute), and has also been mentioned, in a positive light, in articles about online monarchists and neo-reactionaries. [1]

Jargon and community.

LessWrong uses an extensive set of in-group jargon and memes.
There are also international meetup groups around the world for people who subscribe to the associated ideas. In recent years the association with the main site has grown looser, and the broader community is often referred to as the "Rationalist movement".

Current status.

LessWrong is currently far less active than at its 2012 peak, with many core contributors having gone on to form their own blogs or otherwise join what is commonly known as the LessWrong diaspora. [2]

Eliezer Yudkowsky.

Eliezer Yudkowsky is a research fellow of the Machine Intelligence Research Institute, which he co-founded in 2001. He is mainly concerned with the importance of, and obstacles to, developing a Friendly AI, including foundations such as a reflective decision theory that could describe fully recursive self-modifying agents that retain stable preferences while rewriting their source code. He also co-founded LessWrong, writing the Sequences, long series of posts dealing with epistemology, AGI, metaethics, rationality, and related topics.

Tag: Eliezer Yudkowsky.

I read The Sequences (and lived to tell the tale)

It’s hard to say any one thing about Rationality: From AI to Zombies, the 1600-page compendium of blog posts by Eliezer Yudkowsky that has become known as “The Sequences.” I can see why Yudkowsky wields so much influence in certain circles: The Sequences are dense with sharp observations about human irrationality and clever strategies for thinking more clearly, evidence of Yudkowsky’s sincerity and intensity. The book has many shortcomings, but I want to convey my overall positive feelings about it before proceeding to pick nits. It’s good to get comfortable with Bayes’ theorem, good to understand the ways in which human reasoning predictably fails, and good to be energized by the pursuit of truth. An abbreviated, heavily edited version of Rationality: From AI to Zombies would be an ideal introduction to this ethos.

No “Straw Rationalist”

It would be an ideal introduction because it is clearly written and nearly comprehensive, but also because Yudkowsky avoids many of the pitfalls of rationalism as it exists in my observation (or perhaps my imagination). Indeed, Yudkowsky spends a lot of time either distancing himself from or arguing against the “straw rationalist.” The Sequences do not advocate a simplistic adherence to formal reasoning: Yudkowsky admits from the start that it is impossible to make a rigorous probabilistic inference in most situations. The important thing is to keep your eye on the prize: “Rational agents should WIN.” Learning the mathematics of probability is crucial, since without study people tend to make incorrect inferences and adopt losing strategies. But the author is happy to pick up any tool at his disposal and to reject formal reasoning if it leads to an undesirable outcome, as it does for the two-boxer in Newcomb’s Problem. The goal is to win! There’s something honest about that.

In addition, Yudkowsky acknowledges that “you need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.” He repeatedly warns against the dangers of “motivated skepticism,” the practice of subjecting beliefs that you don’t want to agree with to a higher degree of scrutiny than is justified, arguing that aspiring rationalists may be especially susceptible to this error as they sharpen their critical skills. He also warns that understanding others’ biases can be a “fully general counterargument”: a way of discrediting others without engaging with their ideas.
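Two claims above can be made concrete with a little arithmetic: that untrained probabilistic intuition produces incorrect inferences, and that the two-boxer in Newcomb’s Problem comes out behind. The sketch below is mine, not the book’s; the base rate, test accuracies, and predictor accuracies are illustrative assumptions, and the Newcomb payoffs follow the standard $1,000 / $1,000,000 formulation.

```python
# Illustrative sketch only: all numbers below are assumptions,
# not figures taken from the book.

# 1. Bayes' theorem on a rare-condition screening test. Even a fairly
#    accurate test yields a modest posterior when the base rate is low;
#    base-rate neglect is the kind of incorrect inference at issue.
base_rate = 0.01        # P(condition)
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(condition | positive) = {posterior:.3f}")  # ~0.161, not 0.95

# 2. Newcomb's Problem: expected dollars for each strategy as a function
#    of the predictor's accuracy. The opaque box holds $1,000,000 only
#    if the predictor expected one-boxing.
def expected_payoff(one_box: bool, accuracy: float) -> float:
    if one_box:
        return accuracy * 1_000_000
    return 1_000 + (1 - accuracy) * 1_000_000

for acc in (0.5, 0.9, 0.99):
    print(f"accuracy {acc}: one-box {expected_payoff(True, acc):,.0f}, "
          f"two-box {expected_payoff(False, acc):,.0f}")
# One-boxing wins in expectation whenever accuracy exceeds 0.5005,
# which is the sense in which the one-boxer WINs.
```

At 90% predictor accuracy, for instance, the one-boxer expects $900,000 against the two-boxer’s $101,000: the “rational agents should WIN” point in miniature.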
Yudkowsky is also appropriately wary of the moral dangers of rejecting conventional wisdom. “Funny thing, how when people seem to think they’re smarter than their ethics, they argue for less strictness rather than more strictness,” he observes. This seems to me a very apt diagnosis of “cynical consequentialism,” in which someone argues that a moral obligation is a poor trade-off and therefore not utility-maximizing in a larger sense. The cynical consequentialist puts no effort into investigating the counterfactual, but assumes when convenient that moral effort is a limited resource in danger of depletion, rather than a virtue that can be strengthened (or even a resource in oversupply). Thus a very high burden of proof is placed entirely on the demands of conscience: a kind of moral motivated skepticism. In Yudkowsky’s language, the cynical consequentialist is trying to try to be moral, rather than honestly trying to do what’s right:

It’s far easier to convince ourselves that we are “maximizing our probability of succeeding,” than it is to convince ourselves that we will succeed. Almost any effort will serve to convince us that we have “tried our hardest,” if trying our hardest is all we are trying to do.

Better a Hypocrite than a Doofus.

Now it cannot honestly be said that The Sequences are free from the errors in reasoning that they describe. One particularly striking example is the book’s treatment of evolutionary psychology. In “The Tragedy of Group Selectionism,” Yudkowsky gravely warns that:

Evolution does not open the floodgates to arbitrary purposes… If you start with your own desires for what Nature should do… and then rationalize an extremely persuasive argument for why Nature should produce your preferred outcome for Nature’s own reasons, then Nature, alas, still won’t listen.

Yet entirely speculative pseudo-evolutionary tidbits are sprinkled throughout the book, often as explanations for weak points in human reasoning. For instance, in “Politics is the Mind-Killer,” Yudkowsky explains people’s tendency to sacrifice rigor to emotion when discussing politics by claiming that “the evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death.” On one view this is a harmless or even contentless statement. But it is certainly not the place of a rationalist to make baseless pseudoscientific pronouncements. If I were to describe this in Yudkowsky’s own words, I would say that at best this claim serves as a “curiosity-stopper,” preventing inquiry into the nature of political polarization without providing a justified explanation for it. At worst it is a “black-swan bet” that the question doesn’t really matter. Doesn’t it? Perhaps political dysfunction is not as constant throughout time and between societies as Yudkowsky takes it to be.