LessWrong.

LessWrong (aka Less Wrong) is a discussion forum founded by Eliezer Yudkowsky focused on rationality and futurist thinking. It is operated by the Machine Intelligence Research Institute.

History.

According to the LessWrong FAQ, the site developed out of Overcoming Bias, an earlier group blog focused on human rationality. Overcoming Bias originated in November 2006, with artificial intelligence (AI) theorist Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. LessWrong has been closely associated with the effective altruism movement; the effective-altruism-focused charity evaluator GiveWell has benefited from outreach to LessWrong.

Roko's basilisk.

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures anyone who does not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's argument that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, later writing that he did so because although Roko's reasoning was mistaken, the topic should not be publicly discussed in case some version of the argument could be made to work. Discussion of Roko's basilisk was banned on LessWrong for several years thereafter.

Media coverage.

LessWrong has been covered in Business Insider and Slate, and core concepts from LessWrong have been referenced in columns in The Guardian. It has been mentioned briefly in articles related to the technological singularity and the work of the Machine Intelligence Research Institute (formerly called the Singularity Institute), and has also been mentioned in a positive light in articles about online monarchists and neo-reactionaries. [1]

Jargon and community.

LessWrong uses an extensive set of in-group jargon and memes.
There are also international meetup groups around the world for people who subscribe to the associated ideas. In recent years the association with the main site has grown looser, and the broader network is often referred to as the 'Rationalist movement'.

Current status.

LessWrong is currently far less active than at its 2012 peak, with many core contributors having gone on to start their own blogs or otherwise join what is commonly known as the LessWrong diaspora. [2]

Eliezer Yudkowsky.

Eliezer Yudkowsky is a research fellow of the Machine Intelligence Research Institute, which he co-founded in 2000. He is mainly concerned with the importance of, and obstacles to, developing a Friendly AI, including work on a reflective decision theory that would lay a foundation for describing fully recursive self-modifying agents that retain stable preferences while rewriting their own source code. He also co-founded LessWrong, writing the Sequences, long series of posts dealing with epistemology, AGI, metaethics, rationality and so on.

I read The Sequences (and lived to tell the tale)

It's hard to say any one thing about Rationality: From AI to Zombies, the 1600-page compendium of blog posts by Eliezer Yudkowsky that has become known as "The Sequences." I can see why Yudkowsky wields so much influence in certain circles: the Sequences are dense with sharp observations about human irrationality and clever strategies for thinking more clearly, evidence of Yudkowsky's sincerity and intensity. The book has many shortcomings, but I want to convey my overall positive feelings about it before proceeding to pick nits. It's good to get comfortable with Bayes' theorem, good to understand the ways in which human reasoning predictably fails, and good to be energized by the pursuit of truth. An abbreviated, heavily edited version of Rationality: From AI to Zombies would be an ideal introduction to this ethos.

No "Straw Rationalist"

It would be an ideal introduction because it is clearly written and nearly comprehensive, but also because Yudkowsky avoids many of the pitfalls of rationalism as it exists in my observation (or perhaps my imagination). Indeed, Yudkowsky spends a lot of time either distancing himself from or arguing against the "straw rationalist." The Sequences do not advocate a simplistic adherence to formal reasoning: Yudkowsky admits from the start that it is impossible to make a rigorous probabilistic inference in most situations. The important thing is to keep your eye on the prize: "Rational agents should WIN." Learning the mathematics of probability is crucial, since without study people tend to make incorrect inferences and adopt losing strategies (a worked example of the kind of inference at stake follows below). But the author is happy to pick up any tool at his disposal and to reject formal reasoning if it leads to an undesirable outcome, as it does for the two-boxer in Newcomb's Problem. The goal is to win! There's something honest about that. In addition, Yudkowsky acknowledges that "you need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes." He repeatedly warns against the dangers of "motivated skepticism," the practice of subjecting beliefs that you don't want to agree with to a higher degree of scrutiny than is justified, arguing that aspiring rationalists may be especially susceptible to this error as they sharpen their critical skills. He also warns that understanding others' biases can be a "fully general counterargument"–a way of discrediting others without engaging with their ideas.
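To make the point about probabilistic inference concrete, here is a minimal worked example of the kind of base-rate problem the book's Bayesian material is aimed at; the scenario and numbers (a 1% prevalence, an 80% detection rate, a 9.6% false-positive rate) are illustrative assumptions of mine, not figures quoted from the Sequences.

% Assumed illustrative figures: P(D) = 0.01, P(+ | D) = 0.8, P(+ | not D) = 0.096
\[
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
\;=\; \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99}
\;\approx\; 0.078
\]

Untrained intuition tends to anchor on the 80% detection rate, while the correct posterior is roughly 8%, an order of magnitude lower; that gap is the sort of predictable inference error the review is pointing at.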
Yudkowsky is also appropriately wary of the moral dangers of rejecting conventional wisdom. "Funny thing, how when people seem to think they're smarter than their ethics, they argue for less strictness rather than more strictness," he observes. This seems to me a very apt diagnosis of "cynical consequentialism," in which someone argues that a moral obligation is a poor trade-off and therefore not utility-maximizing in a larger sense. The cynical consequentialist puts no effort into investigating the counterfactual, but assumes when convenient that moral effort is a limited resource in danger of depletion, rather than a virtue that can be strengthened (or even a resource in oversupply). Thus a very high burden of proof is placed entirely on the demands of conscience–a kind of moral motivated skepticism. In Yudkowsky's language, the cynical consequentialist is trying to try to be moral, rather than honestly trying to do what's right:

It's far easier to convince ourselves that we are "maximizing our probability of succeeding," than it is to convince ourselves that we will succeed. Almost any effort will serve to convince us that we have "tried our hardest," if trying our hardest is all we are trying to do.

Better a Hypocrite than a Doofus.

Now it cannot honestly be said that The Sequences are free from the errors in reasoning that they describe. One particularly striking example is the book's treatment of evolutionary psychology. In "The Tragedy of Group Selectionism," Yudkowsky gravely warns that:

Evolution does not open the floodgates to arbitrary purposes… If you start with your own desires for what Nature should do…and then rationalize an extremely persuasive argument for why Nature should produce your preferred outcome for Nature's own reasons, then Nature, alas, still won't listen.

Yet entirely speculative pseudo-evolutionary tidbits are sprinkled throughout the book, often as explanations for weak points in human reasoning. For instance, in "Politics is the Mind-Killer," Yudkowsky explains people's tendency to sacrifice rigor to emotion when discussing politics by claiming that "the evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death." Now, in one view this is a harmless or even contentless statement. But it's certainly not the place of a rationalist to make baseless pseudoscientific pronouncements. If I were to describe this in Yudkowsky's own words, I would say that at best this claim serves as a "curiosity-stopper," preventing inquiry into the nature of political polarization without providing a justified explanation for it. At worst it is a "black-swan bet" that the question doesn't really matter. Doesn't it? Perhaps political dysfunction is not as constant throughout time and between societies as Yudkowsky takes it to be. It would then be interesting to investigate what causes the dysfunction we currently experience, and how more effective societies navigate the issue. What's the difference between hypocrisy and an honest failure to live up to one's principles? There's something perverse about hitting people over the head with their high standards, since it seems to punish them for aiming high, but lower standards encourage trying-to-try in place of trying. I think Rationality gets more points for encouraging clear thinking than it loses for sometimes missing its own target: better a hypocrite than a doofus. But I also think that selective rigor is a problem in the LessWrong-sphere.
The fact is that the community attracts people with particular intuitions about the world, and these intuitions often escape scrutiny. I frequently see posters and commenters appealing to a handful of Grand Narratives which make sweeping claims of the sort that should terrify a rationalist. One pervasive Grand Narrative is that America is undergoing a rapid political and technological decline relative to other countries because of political polarization (or a particular political ideology). I won't attempt to assess this claim here. I just want to point out how difficult it would be to have justified belief in such a thing. Whether any specific failure indicates "decline" is hard to determine without a detailed understanding of the situation: what is baseline performance? What trade-offs and constraints are in effect? What other consequences are there? Unweaving the complex web of causes that brought about a situation is exponentially harder, and rarely seriously attempted. The point is not that these beliefs fail to meet some high standard of evidence and are therefore not kosher; rather, I think they are very likely wrong.

It's Tacky All The Way Down.

Despite the positives, I really struggled with the tone and general aesthetic of the book. I was especially unmoved by the snippets of heavily allegorical fiction and by the author's extensive discussion of his own intelligence. Writing this, I'm inspired to do a bit of soul-searching. Isn't discussing the style of a book a poor substitute for engaging with its contents? In my defense, I'm discussing the style explicitly, not mixing it into a conversation about substance. But why not just have the conversation about substance? Well, the aesthetic experience of reading the book constitutes a large part of its impact on my life. If the author chose to include fiction in his book, then he must have thought the fiction would serve his purpose, and it's relevant that I disagree. Similarly, the author makes his distaste for "fake humility" clear, opening himself up to dialogue about this aspect of the book. Admittedly, there's something about the idea of "good taste" that bothers me. Part of me considers Brideshead Revisited the pinnacle of good taste, not in spite of but because of its adulation of aristocracy. On the flip side, I often perceive as tacky opinions that I agree with, simply because I know they are widely held. I would like to think that with effort I can make aesthetic judgments based on an internal sense of proportion, rather than a desire to affiliate myself with certain cultural currents, but I'm not sure this is true. It frequently seems like "taste" functions as a general appeal to status–good taste means whatever the people I look up to think it means. And this is wrong even on purely aesthetic grounds, since pandering is not at all beautiful. With this caveat, I find didactic fiction off-putting. I appreciate authors who carefully pick apart the complexity of life, and I dislike it when authors don't think this is worth doing. Similarly, I value humility, and while it is true, as Yudkowsky points out, that humility in the form of doubting one's judgments is more valuable than humility in the form of public self-deprecation, I think it is unsafe to rate oneself above the rest of humanity. At the very least, doing so encourages one to disregard others' arguments, a criticism which frequently applies to Rationality.

A Good Book, a Bad Book, and Several Other Books.
As I have said, I think a heavily edited version of Rationality would be a great and valuable book. If I were to attempt such a project, I would center the book around Yudkowsky's introduction to Bayes' theorem and his survey of behavioral-economics research about human irrationality. I would also include much of the discussion of the social and psychological obstacles to rationality, which I found compelling if speculative. I would even include the discussion of religion, including the portion on zombies, because while his reasoning here is far from watertight, Yudkowsky articulates the challenge better than anyone else I have read. Above all, I would strive to capture the unreserved honesty of a book that asks its readers, "without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion–then how dare you believe anything?"

Meta-Sequences: Introduction & Criteria.

I have offered bounties to anyone who can identify a precedent, in mainstream philosophy, for an idea advanced by Eliezer Yudkowsky as his own. These bounties are in the service of a larger accounting.

Background & Motivation.

LessWrong rationalists and mainstream philosophers are two tribes made up of generally intelligent & knowledgeable people, focused on answering many of the same questions, and each broadly dismissive of the other's intellectual production. To a naive observer, it is unclear whether, or which of, these mutual dismissals is warranted. The rationalists' claim is that mainstream philosophy is too overrun by junky thinking and bad incentives to reliably "get it right" with any regularity. They point to various institutional & social structures that might encourage dragging out debate unnecessarily, and believe that a few unaccounted-for biases (e.g. map/territory confusion) undergird many of mainstream philosophy's "confusions." Members frequently cite "junky" thinking from philosophy, though this treatment is obviously far from systematic. The claim of those defending mainstream philosophy is typically not advanced by mainstream philosophers themselves, who, broadly speaking, have significantly less engagement with LessWrong rationalists' ideas than rationalists do with theirs. Instead, a self-elected representative of this tribe will claim that Yudkowsky's ideas are not meaningful contributions to philosophical discourse. The more moderate versions of this claim concede that he has made contributions to decision theory (in the form of "timeless" decision theory, or TDT) and to the nascent philosophy of AI. More aggressive versions claim his entire intellectual corpus is an unwitting reinvention, or merely confused. Examples of claims in this vein:

"The only original thing in LW is the decision theory stuff and even that is actually Kant." (src)

"Alright, I've read a bit more into Less Wrong, and I believe I finally have acquired a fair assessment of it: It's the number 1 site for repackaging old concepts in Computer Science lingo & passing it off as new. And hubris.
Also Eliezer Yudkowsky is a pseudointellectual hack." (src)

"Eliezer Yudkowsky is a pseudointellectual and the sequences are extremely poorly written, poorly argued, and are basically poorly necromanced mid 20th century analytic philosophy." (src)

It strikes me that discerning whether, or the extent to which, each camp is correct has an important bearing on our understanding of autodidacticism and of more traditional educational modes (where one is first "steeped" in the discourse's approaches & beliefs before attempting to answer its unresolved questions). For example, when is it "cheaper" to reinvent rather than to search for & discover? (And to what extent is the answer a function of philosophy's signal-to-noise ratio & general accessibility?) In what ways might there be advantages to "starting blind," similar to how we think of hillclimbing & the relative ability of someone at the foot of the hill, vs. at its peak, to "escape" a discourse's current local maximum and find some other, higher peak?

(Sidebar: I haven't been fully clear on whether this project concerns Yudkowsky's ideas or the ideas of LessWrong or the ideas of rationalists—an ambiguity which may seem more problematic to an outsider than it would to a member of LessWrong proper. Yudkowsky's sequences are perceived as the backbone of the LessWrong style of thought, and limiting the inquiry to his writings is an imperfect but, in my opinion, reasonable proxy for understanding the rationalist community's intellectual output as a whole. However, I may end up covering non-Yudkowsky rationalist ideas so long as they are perceived, by the LessWrong community, as being both original to it and meaningfully "right" or "useful.")

A second motivation for this project is contingent on what I end up discovering. In 2011, Dave Chalmers commented on LessWrong:

As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources. (The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

That no one took Chalmers up on his request was a missed opportunity for both communities to enter into discourse. Hopefully, this series can correct that.

Project.

From here on out, I will work systematically through both the Sequences and the highlight reel of contemporary philosophy in order to understand the relationships between their ideas. I cannot be comprehensive in my reading of contemporary philosophy, as that is a lifetime project. Instead I will rely heavily on the bounty system, and on the recommendations of knowledgeable insiders more generally, to point me toward relevant texts. I will fill in any known gaps in my knowledge with reference to respected secondary sources, such as the Stanford Encyclopedia of Philosophy. From the Sequences I'll attempt to build a list of the ideas or concepts they present.
(Subjective discretion as to what constitutes a concept or idea is inevitable; oh well.) I’ll then work through each item on the list, using bounties & my own research to understand and communicate each idea’s “status” in the mainstream philosophical community: whether it has been advanced in a similar form before, whether it is widely accepted or dismissed, and the contemporary stances around the idea (challenges, rebuttals, qualifications).