He is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California.

LWers have a very specific jargon they use over and over that serves little functional purpose, which is probably the most distinctly LW thing there is.

If you could forecast a systematic departure between what those two explanations predict in terms of immediate next moves, you would know an instrumental inefficiency in AlphaGo.

This is one of those texts where lack of substance is obscured in difficult prose, where obvious ideas are expressed poorly in order to seem more profound. This is prose that begs you to believe how much of an authority it is, while obscuring the triviality of the arguments it actually makes. People like Bostrom in real philosophy departments are actually MUCH more coherent than Yudkowsky!

I'm really interested in reading a story where a rational main character has to come to terms with a completely irrational and surreal world (i.e.

Why are you so free, and so great in your circumference, when other children your age are small and constrained?

Why are so few writers any good at humor?

But you can't relate the style of AlphaGo's computation to the style of AlphaGo's move in any way that systematically departs from just reiterating "that output happened because it will lead to a winning board later."

It also suffers from the same mistakes.

I still disagree with a lot of them, but at least they are making a good-faith effort and trying to construct real, intellectually honest arguments (they don't always succeed, but it can be hard).

Introduce your trivial point with a headline that no reader will immediately understand.

Eliezer Yudkowsky, as always, was very clever and humorous in his examination of cognitive bias and how downright insane human beings are really wired to be.

Personally, I think he has a fair amount of interesting things to say, and gets credit for caring about / working on the value alignment problem more than ten years before mainstream academia decided it was worth consideration.

The October architecture was simple and, so far as I know, incorporated very little in the way of all the particular tweaks that had built up the power of the best open-source Go programs of the time.

He takes his lack of engagement with the academic community at large as a good thing, without realizing that it leaves him woefully misinformed (often).

While you imagined yourself a child, Mr. Potter, you were a child.

But like any arrogant kid, he applies the specific knowledge he has in one domain to other fields he knows nothing about, especially to his favourite subjects of artificial intelligence (and quantum physics, but I won't get into that).

In a post written the day before the election, Yudkowsky expanded on both themes, providing a simple explanation of how many of the …

The primary competing proposal was to literally destroy punitive revenue (exchange for bank reserves, burn the bank reserves). I wonder now and then.

This paragraph doesn't strike me as insightful at all. Groan.

11-year-old canon Harry wasn't mature enough to comprehend it, at that point. It's a patch to explain why canon and HPMOR Harry couldn't see Thestrals despite having seen, with his eyes, his parents dying as an infant: he didn't comprehend it as an infant.

I will say this much, Mr. Potter: You are already an Occlumens, and I think you will become a perfect Occlumens before long.

Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer best known for popularizing the idea of friendly artificial intelligence. The crank beliefs he holds are subtle, however, and require some explanation.

Anyone we can imagine, we can be; and the true difference about you, Mr. Potter, is that you have an unusually good imagination.

Its members are supposed to be developing their art of rationality, a.k.a. the search for truth.

As such, working out how to create a Friendly AI (FAI) — one that won't kill us, inadvertently or otherwise…

Yudkowsky is an unstudied crank. Making fun of Yudkowsky is pretty popular in the ML community, which is partially fair: he is definitely a little nuts. People have a problem with him because he's a self-proclaimed expert who quite frequently doesn't really understand what he's talking about.

But for such as you and I, anyone we can imagine, we can be, in reality and not pretense.

I got the impression that line was a patch to explain why canon Harry didn't see Thestrals until after Cedric in book 4, despite Quirrell in book 1.

Identity does not mean, to such as us, what it means to other people.

LessWrong is a locus of self-deception, not empiricism. I think it depends a fair amount on the online forum, e.g.

He writes fanfic and is super into polyamory and doesn't have a degree and cares a lot about quantum mechanics and doesn't do any direct research in the field.

Crank (n.): a pejorative term used for a person who holds an unshakable belief that most of his or her contemporaries consider to be false.

The various talks he gives at rationality conferences are mostly spot on (except for his weird Bayesian tribalism).

The problem is that there is no reason to assume an AI would give a damn about humans or what we care about in any way at all, given that it won't have a million years as a savannah ape or a billion years of evolution in its makeup to build up any morality or aversion to killing us.

A playwright must contain his characters; he must be larger than them in order to enact them within his mind.

He also believes AI is imminent. Yet there are other existences you could support, larger existences, if you wished.

Frankly speaking, his saving grace is probably the fact that most people don't know much about him aside from being the rationalist guy who wrote Harry Potter fanfiction, because otherwise he's kind of narcissistic.

waking up as a bug in The Metamorphosis). However, the premise certainly doesn't have to be about an MC transforming into something else—just something that explores similar themes of alienation and anxiety in a world that no longer makes sense. Beautiful.

Judging by the October architecture, after their big architectural insight, DeepMind mostly started over on the details (though they did reuse the widely known core insight of Monte Carlo Tree Search).
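For readers unfamiliar with the "widely known core insight" mentioned above, the bare Monte Carlo Tree Search loop (select, expand, simulate, backpropagate) can be sketched on a toy game. This is a minimal illustration only, with all names invented for the example; AlphaGo's actual search adds learned policy priors and a value network on top of this skeleton. The game here is a tiny Nim variant: take 1 or 2 stones from a pile, and whoever takes the last stone wins.

```python
# Toy UCT-style Monte Carlo Tree Search on a tiny Nim game
# (take 1 or 2 stones; taking the last stone wins).
# Illustrative sketch only -- not DeepMind's implementation.
import math
import random

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile          # stones remaining (the state)
        self.parent = parent
        self.move = move          # move that led into this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who made self.move

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.pile and m not in tried]

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploit + explore).
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(pile):
    # Random playout; returns 1 if the player to move now wins.
    turn = 0
    while pile > 0:
        pile -= random.choice([1, 2] if pile >= 2 else [1])
        if pile == 0:
            return 1 if turn == 0 else 0
        turn ^= 1
    return 0  # empty pile: the previous mover already won

def mcts_best_move(pile, iterations=3000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried child, if any.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation from the new state.
        result = rollout(node.pile)  # win for player to move at node
        # 4. Backpropagation, flipping perspective each level.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result  # credit the player who moved in
            result = 1 - result
            node = node.parent
    # Recommend the most-visited move, the usual MCTS convention.
    return max(root.children, key=lambda ch: ch.visits).move
```

With a pile of 4, leaving 3 stones is a guaranteed win under perfect play, and even plain random rollouts steer the search there; the deep point in the surrounding discussion is that this whole loop only justifies a move as "it leads to winning boards later," which is exactly why the move's "style" resists any finer-grained explanation.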