
Open thread, Dec. 05 - Dec. 11, 2016

1 MrMind 05 December 2016 07:52AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] David Allen vs. Mark Forster

1 entirelyuseless 05 December 2016 01:04AM

Beware of identifying with schools of thought

4 ChristianKl 05 December 2016 12:30AM

As a child I decided to do a philosophy course as an extracurricular activity. In it the teacher explained to us the notion of schools of philosophical thought. According to him, classifying philosophers as adhering to either school A or school B is typical of Anglo thought.


It deeply annoys me when Americans talk about Democrat and Republican political thought and suggest that you are either a Democrat or a Republican. The notion that allegiance to one political camp is supposed to dictate your political beliefs feels deeply wrong.


A lot of Anglo high schools do policy debating. The British do it a bit differently than the Americans, but in both cases it boils down to students having to defend a certain side.

Traditionally there's nearly no debating at German high schools.


When writing political essays in German schools, there's a section where it's important to present your own view. Your own view isn't supposed to be one that you simply copy from another person. Good thinking is supposed to provide a sophisticated perspective on the topic that synthesizes arguments from different sources instead of following a single source.


That's partly because German intellectual thought has the ideal of 'Bildung'. In Imprisoned in English, Anna Wierzbicka tells me that 'Bildung' is a particularly German construct and the word isn't easily translatable into other languages. The nearest English word is 'education'; 'Bildung' can also be translated as 'creation'. It's about creating a sophisticated person who is more developed than the average person on the street who doesn't have 'Bildung'. Having 'Bildung' signals high status.


According to this ideal you learn about different viewpoints and then you develop a sophisticated opinion. Not having a sophisticated opinion is low class. In liberal social circles in the US, a person who agrees with what the Democratic party does at every point in time would have a respectable political opinion. In German intellectual life that person would be seen as a credulous, low-status idiot who failed to develop a sophisticated opinion. A low-status person isn't supposed to be able to fake being high status by memorizing the teacher's password.


If you ask me the political question "Do you support A or B?", my response is: "Well, I want neither A nor B. There are these reasons for A, there are those reasons for B. My opinion is that we should do C, which solves those problems better and takes more concerns into account." Saying that I'm in favour of A isn't a way to signal status, because A isn't the high-status option.


How does this relate to non-political opinions? In Anglo thought, philosophic positions belong to different schools of thought. Members of one school are supposed to fight for their school being right and being better than the other schools.


If we take the perspective of hardcore materialism, a statement like "One of the functions of the heart is to pump blood" couldn't be objectively true, because it's teleological: the notion of function isn't made up of atoms.


From my perspective as a German, there's little to be gained by subscribing to the hardcore materialist perspective. It makes a lot of practical sense to say that such a statement can be objectively true. I get the more sophisticated view of the world that I want to have: not only statements about arrangements of atoms can be objectively true, but also statements about the functions of organs. That move is high status in German intellectual discourse, but it might be low status in Anglo discourse because it can be seen as being a traitor to the school of materialism.


Of course that doesn't mean that no Anglo accepts that the above statement can be objectively true. On the margin, German intellectual norms make it easier to accept the statement as objectively true. Following Hegel, you might say that thesis and antithesis come together into a synthesis, instead of either thesis or antithesis winning the argument.


The German Wikipedia page for "continental philosophy" tells me that the term is commonly used in English philosophy. According to the German Wikipedia it's mostly used derogatorily. From the German perspective the battle between "analytic philosophy" and "continental philosophy" is not a focus of the debate. The goal isn't to decide which school is right but to develop sophisticated positions that describe the truth better than answers that you could get by memorizing the teacher's password.


One classic example of an unsophisticated position that's common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In Is semiotics bullshit? PhilGoetz stumbles upon a professor of semiotics who claims: "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis."


That's seen as a strong violation of how reasoning based on logical positivism is supposed to work. It violates the memorized teacher's password. But is it true? To answer that we have to ask what 'logical basis' means. David Chapman analyses the notion of logic in Probability theory does not extend logic. In it he claims that in academic philosophical discourse the word 'logic' usually means predicate logic.


Predicate logic can make claims such as:

(a) All men are mortal.

(b) Socrates is a man.

Therefore:

(c) Socrates is mortal.


According to Chapman the key trick of predicate logic is logical quantification. That means every claim has to be able to be evaluated as true or false without looking at the context.
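To make the quantification point concrete, the syllogism can be rendered as a machine-checkable proof. Here is a minimal sketch in Lean (my own illustration, with all identifiers invented for the example):

```lean
-- The classic syllogism in predicate logic. Quantification lets premise (a)
-- be stated once and evaluated without reference to any particular context.
variable (Person : Type)
variable (Man Mortal : Person → Prop)
variable (socrates : Person)

-- (a) All men are mortal. (b) Socrates is a man. ⊢ (c) Socrates is mortal.
example (h1 : ∀ x, Man x → Mortal x) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```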


We want to know whether a chemical substance is safe for human use. Unfortunately our ethical review board doesn't let us test the substance on humans. Fortunately they allow us to test the substance on rats. Hurray, the rats survive.


(a) The substance is safe for rats.

(b) Rats are like humans

Therefore:

(c) The substance is safe for humans.


The problem with `Rats are like humans` is that it isn’t a claim that’s simply true or false.

The truth value of the claim depends on what conclusions you want to draw from it. Propositional calculus can only evaluate the statement as true or false; it can't judge whether the analogy is appropriate, because that requires looking at the deeper meaning of `Rats are like humans` in the context we care about.
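To spell out the missing step (again my own sketch, not the post's): predicate logic can only validate the rat inference if `Rats are like humans` is strengthened into a universally quantified bridge premise, and it is precisely that premise which has no context-free truth value:

```lean
variable (Substance : Type)
variable (SafeForRats SafeForHumans : Substance → Prop)
variable (ourSubstance : Substance)

-- The inference is valid only given `bridge`, a premise that predicate logic
-- must treat as true or false independently of context -- exactly what
-- "Rats are like humans" fails to be.
example (bridge : ∀ s, SafeForRats s → SafeForHumans s)
        (h : SafeForRats ourSubstance) : SafeForHumans ourSubstance :=
  bridge ourSubstance h
```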


Do humans sometimes make mistakes when they try to reason by analogy? Yes, they do. At the same time they also come to true conclusions by reasoning through analogy. Saying "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis." sounds fancy, but if we reasonably define the term logical basis as being about propositional calculus, it's true.


Does that mean that you should switch from the analytic school to the school of semiotics? No, that's not what I'm arguing. I argue that just as you shouldn't let tribalism influence you in politics by identifying as a Democrat or Republican, you should keep in mind that philosophical debates, just like policy debates, are seldom one-sided.


Daring to slay another sacred cow, maybe we also shouldn't go around thinking of ourselves as Bayesians. If you are on the fence on that question, I encourage you to read David Chapman's splendid article referenced above:

 

Probability theory does not extend logic

Some thoughts on double crux.

2 ProofOfLogic 04 December 2016 06:03PM

[Epistemic status: quite speculative. I've attended a CFAR workshop including a lesson on double crux, and found it more counterintuitive than I expected. I ran my own 3-day event going through the CFAR courses with friends, including double crux, but I don't think anyone started doing double crux based on my attempt to teach it. I have been collecting notes on my thoughts about double crux so as not to lose any; this is a synthesis of some of those notes.]

This is a continuation of my attempt to puzzle at Double Crux until it feels intuitive. While I think I understand the _algorithm_ of double crux fairly well, and I _have_ found it useful when talking to someone else who is trying to follow the algorithm, I haven't found that I can explain it to others in a way that causes them to do the thing, and I think this reflects a certain lack of understanding on my part. Perhaps others with a similar lack of understanding will find my puzzling useful.

Here's a possible argument for double crux as a way to avoid certain conversational pitfalls. This argument is framed as a sort of "diff" on my current conversational practices, which are similar to those mentioned by CCC. So, here is approximately what I do when I find an interesting disagreement:

 

  1. We somehow decide who states their case first. (Usually, whoever is most eager.) That person gives an argument for their side, while checking for understanding from the other person and looking for points of disagreement with the argument.
  2. The other person asks questions until they think they understand the whole argument; or, sometimes, skip to step 3 when a high-value point of disagreement is apparent before the full argument is understood.
  3. Recurse into step 1 for the most important-seeming point of disagreement in the argument offered. (Again the person whose turn it is to argue their case will be chosen "somehow"; it may or may not switch.)
  4. If that process is stalling out (the argument is not understood by the other person after a while of trying, or the process is recursing into deeper and deeper sub-points without seeming to get closer to the heart of the disagreement), switch roles; the person who has explained the least of their view should now give an argument for their side.

Steps 1-3 can have a range of possible results [using 'you' as the argument-giver and 'they' as the receiver]:
  • In the best case, they accept your argument, perhaps after a little recursion into sub-arguments to clarify.
  • In a very good case, the process finds a lot of common ground (in the form of parts of the argument which are agreed upon) and a precise point of disagreement, X, such that if either person changed their mind about X they'd change their mind about the whole. They can now dig into X in the same way they dug into the overall disagreement, with confidence that resolving X is a good way to resolve the disagreement.
  • In a slightly less good case, a precise disagreement X is found, but it turns out that the argument you gave wasn't your entire reason for believing what you believe. IE, you've given an argument which you believe to be sufficient to establish the point, but not necessary. This means resolving the point of disagreement X only potentially changes their mind. If it goes against you, you may find that your argument fails, in which case you'd give another argument.
  • In a partial failure case, all the points of disagreement show up right away; IE, you fail to find any common ground for arguments to gain traction. It's still possible to recurse into points of disagreement in this case, and doing so may still be productive, but often this is a sign that you haven't understood the other person well enough or that you've put them on the defensive so that they're biased to disagree.
  • In a failure case, you keep digging down into reasons why they don't buy one point after another, and never really get anywhere. You don't make contact with anything which would change their mind, because you're digging into your reasons rather than theirs. Your search for common ground is failing.
  • In a failure case, you've made a disingenuous argument which your motivated cognition thinks they'll have a hard time refuting, but which is unlikely to convince them. A likely outcome is a long, pointless discussion or an outright rejection of the argument without any attempt to point at specific points of disagreement with it.

I think double crux can be seen as an attempt to modify the process of 1-4 in a way which attempts to make the better outcomes more common. You can still give your same argument in double crux, but you're checking earlier to see whether it will convince the other person. Suppose you have an argument for the disagreement D:

"A.

A implies B.

B implies C.

C implies D.

So, D."

In my algorithm, you start by checking for agreement with "A". You then check for agreement with "A implies B". And so on, until a point of disagreement is reached. In double crux, you are helping the other person find cruxes by suggesting cruxes for them. You can ask "If you believed C, would you believe D?" Then, if so, "If you believed B, would you believe D?" and so on. Going through the argument backwards like this, you only keep going for so long as you have some assurance that you've connected with their model of D. Going through the argument in the forward direction, as in my method, you may recurse into further and further sub-arguments starting at a point of disagreement like "B implies C" and find that you never make contact with something in their model which has very much to do with their disbelief of D. Also, looking for the other person's cruxes encourages honest curiosity about their thinking, which makes the whole process go better.
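As a concrete contrast between the two search orders, here is a rough sketch (my own, not CFAR's; `they_accept` and `would_change_mind` are hypothetical stand-ins for actually asking the other person):

```python
# Locating the disagreement in "A. A implies B. B implies C. C implies D. So, D."

STEPS = ["A", "A implies B", "B implies C", "C implies D"]

def forward_search(they_accept):
    """My original method: walk the argument forwards and recurse into the
    first disputed step -- which may never touch their real reasons for
    disbelieving D."""
    for step in STEPS:
        if not they_accept(step):
            return step  # first point of disagreement
    return None  # they accept every step, hence the conclusion

def backward_search(would_change_mind):
    """Double-crux style: walk backwards from D, asking "if you believed X,
    would you believe D?", stopping once we lose contact with their model of D."""
    cruxes = []
    for claim in ["C", "B", "A"]:
        if not would_change_mind(claim, "D"):
            break  # no longer connected to their disbelief in D
        cruxes.append(claim)
    return cruxes  # candidate cruxes, nearest to D first
```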

Furthermore, you're looking for your own cruxes at the same time. So, you're more likely to think about arguments which are critical to your belief, and much less likely to try disingenuous arguments designed to be merely difficult to refute.

A quote from Feynman's Cargo Cult Science:

The first principle is that you must not fool yourself—and you are the easiest person to fool.  So you have to be very careful about that.  After you’ve not fooled yourself, it’s easy not to fool other scientists.  You just have to be honest in a conventional way after that. 

 

I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being.  We’ll leave those problems up to you and your rabbi.  I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist.  And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

 

This kind of "bending over backwards to show how maybe you're wrong" (in service of not fooling yourself) is close to double crux. Listing cruxes puts us in the mindset of thinking about ways we could be wrong.

On the other hand, I notice that in a blog post like this, I have a hard time really explaining how I might be wrong before I've explained my basic position. It seems like there's still a role for making arguments forwards, rather than backwards. In my (limited) experience, double crux still requires each side to explain themselves (which then involves giving some arguments) before/while seeking cruxes. So perhaps double crux can't be viewed as a "pure" technique, and really has to be flexible, mixed with other approaches including the one I gave at the beginning. But I'm not sure what the best way to achieve that mixture is.

[Link] [Secular Solstice UK] We all have a part to play

2 Raemon 04 December 2016 05:51PM

[Link] A Few Billionaires Are Turning Medical Philanthropy on Its Head

0 ike 04 December 2016 03:08PM

[Link] Construction of practical quantum computers radically simplified

0 morganism 03 December 2016 11:49PM

[Link] This AI Boom Will Also Bust

4 username2 03 December 2016 11:21PM

[Link] Crowdsourcing moderation without sacrificing quality

7 paulfchristiano 02 December 2016 09:47PM

[Link] When companies go over 150 people......

2 NancyLebovitz 02 December 2016 07:57PM

[Link] Contra Robinson on Schooling

4 Vaniver 02 December 2016 07:05PM

Weekly LW Meetups

0 FrankAdamek 02 December 2016 04:47PM

Question about metaethics

3 pangel 02 December 2016 10:21AM

In a recent Facebook post, Eliezer said:

You can believe that most possible minds within mind design space (not necessarily actual ones, but possible ones) which are smart enough to build a Dyson Sphere, will completely fail to respond to or care about any sort of moral arguments you use, without being any sort of moral relativist. Yes. Really. Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other.

And so I think part of the metaethics sequence went over my head.

I should re-read it, but I haven't yet. In the meantime I want to give a summary of my current thinking and ask some questions.

My current take on morality is that, unlike facts about the world, morality is a question of preference. The important caveats are:

  1. The preference set has to be consistent. Until we develop something akin to CEV, humans are probably stuck with a pre-morality where they behave and think over time in contradictory ways, and at the same time believe they have a perfectly consistent moral system.
  2. One can be mistaken about morality, but only in the sense that, unknown to them, they actually hold values different from what the deliberative part of their mind thinks it holds. An introspection failure or a logical error can cause the mistake. Once we identify ground values (not that it's effectively feasible), "wrong" is a type error.
  3. It is OK to fight for one's morality. Just because it's subjective doesn't mean one can't push for it. So "moral relativism" in the strong sense isn't a consequence of morality being a preference. But "moral relativism" in the weak, technical sense (it's subjective) is.

I am curious about the following :

  • How does your current view differ from what I've written above?
  • How exactly does that differ from the thesis of the metaethics sequence? In the same post, Eliezer also said: "and they thought maybe I was arguing for moral realism...". I did kind of think that, at times.
  • I specifically do not understand this : "Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other.". Unless "better" is used in the sense of "better according to my morality", but that would make the sentence barely worth saying.

 

[Link] Optimizing the news feed

8 paulfchristiano 01 December 2016 11:23PM

Which areas of rationality are underexplored? - Discussion Thread

11 casebash 01 December 2016 10:05PM

There seems to actually be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought it might be worthwhile to open a thread where people can suggest how we can expand the scope of what people write about, so that we have sufficient content.

Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.

Making intentions concrete - Trigger-Action Planning

19 Kaj_Sotala 01 December 2016 08:34PM

I'll do it at some point.

I'll answer this message later.

I could try this sometime.

For most people, all of these thoughts have the same result. The thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.

What kinds of thoughts would help avoid this problem? Here are some examples:

  • When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
  • If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
  • When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
Picking a specific time or situation to serve as the trigger of the action makes it much more likely that it actually gets done.

Could we apply this more generally? Let's consider these examples:
  • I'm going to get more exercise.
  • I'll spend less money on shoes.
  • I want to be nicer to people.
These goals all have the same problem: they're vague. How will you actually implement them? As long as you don't know, you're also going to miss potential opportunities to act on them.

Let's try again:
  • When I see stairs, I'll climb them instead of taking the elevator.
  • When I buy shoes, I'll write down how much money I've spent on shoes this year.
  • When someone does something that I like, I'll thank them for it.
These are much better. They contain both a concrete action to be taken, and a clear trigger for when to take it.

Turning vague goals into trigger-action plans

Trigger-action plans (TAPs; known as "implementation intentions" in the academic literature) are "when-then" ("if-then", for you programmers) rules used for behavior modification [i]. A meta-analysis covering 94 studies and 8461 subjects [ii] found them to improve people's ability to achieve their goals [iii]. The goals in question included ones such as reducing the amount of fat in one's diet, getting exercise, using vitamin supplements, carrying on with a boring task, determination to work on challenging problems, and calling out racist comments. Many studies also allowed the subjects to set their own, personal goals.

TAPs were found to work both in laboratory and real-life settings. The authors of the meta-analysis estimated the risk of publication bias to be small, as half of the studies included were unpublished ones.

Designing TAPs

TAPs work because they help us notice situations where we could carry out our intentions. They also help automate the intentions: when a person is in a situation that matches the trigger, they are much more likely to carry out the action. Finally, they force us to turn vague and ambiguous goals into more specific ones.
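Since the article glosses TAPs as "if-then" rules for programmers, a toy sketch may make the trigger-matching explicit (my own illustration; the situation strings and rules are invented):

```python
# TAPs as literal if-then rules: each trigger is a predicate on the current
# situation; when a trigger matches, its action is the thing to do.

taps = [
    (lambda s: "stairs" in s, "climb them instead of taking the elevator"),
    (lambda s: "buying shoes" in s, "write down this year's shoe spending"),
    (lambda s: "did something I like" in s, "thank them for it"),
]

def matching_actions(situation: str) -> list[str]:
    """Return the actions whose triggers fire in this situation."""
    return [action for trigger, action in taps if trigger(situation)]

print(matching_actions("I see stairs ahead"))
# -> ['climb them instead of taking the elevator']
```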

A good TAP fulfills three requirements [iv]:
  • The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when before four exactly?). [v]
  • The trigger is consistent. The action is something that you'll always want to do when the trigger is fulfilled. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups each time when you leave the kitchen. [vi]
  • The TAP furthers your goals. Make sure the TAP is actually useful!
However, there is one group of people who may need to be cautious about using TAPs. One paper [vii] found that people who ranked highly on so-called socially prescribed perfectionism did worse on their goals when they used TAPs. These kinds of people are sensitive to other people's opinions about them, and are often highly critical of themselves. Because TAPs create an association between a situation and a desired way of behaving, it may make socially prescribed perfectionists anxious and self-critical. In two studies, TAPs made college students who were socially prescribed perfectionists (and only them) worse at achieving their goals.

For everyone else however, I recommend adopting this TAP:

When I set myself a goal, I'll turn it into a TAP.

Origin note

This article was originally published in Finnish at kehitysto.fi. It draws heavily on CFAR's material, particularly the workbook from CFAR's November 2014 workshop.

Footnotes

[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7), 493.

[ii] Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta‐analysis of effects and processes. Advances in experimental social psychology, 38, 69-119.

[iii] Effect size d = .65, 95% confidence interval [.6, .7].

[iv] Gollwitzer, P. M., Wieber, F., Myers, A. L., & McCrea, S. M. (2010). How to maximize implementation intention effects. Then a miracle occurs: Focusing on behavior in social psychological theory and research, 137-161.

[v] Wieber, Odenthal & Gollwitzer (2009; unpublished study, discussed in [iv]) tested the effect of general and specific TAPs on subjects driving a simulated car. All subjects were given the goal of finishing the course as quickly as possible, while also damaging their car as little as possible. Subjects in the "general" group were additionally given the TAP, "If I enter a dangerous situation, then I will immediately adapt my speed". Subjects in the "specific" group were given the TAP, "If I see a black and white curve road sign, then I will immediately adapt my speed". Subjects with the specific TAP managed to damage their cars less than the subjects with the general TAP, without being any slower for it.

[vi] Wieber, Gollwitzer, et al. (2009; unpublished study, discussed in [iv]) tested whether TAPs could be made even more effective by turning them into an "if-then-because" form: "when I see stairs, I'll use them instead of taking the elevator, because I want to become more fit". The results showed that the "because" reasons increased the subjects' motivation to achieve their goals, but nevertheless made TAPs less effective.

The researchers speculated that the "because" might have changed the mindset of the subjects. While an "if-then" rule causes people to automatically do something, "if-then-because" leads people to reflect upon their motives and takes them from an implementative mindset to a deliberative one. Follow-up studies testing the effect of implementative vs. deliberative mindsets on TAPs seemed to support this interpretation. This suggests that TAPs are likely to work better if they can be carried out as consistently and with as little thought as possible.

[vii] Powers, T. A., Koestner, R., & Topciu, R. A. (2005). Implementation intentions, perfectionism, and goal progress: Perhaps the road to hell is paved with good intentions. Personality and Social Psychology Bulletin, 31(7), 902-912.

Downvotes temporarily disabled

16 Vaniver 01 December 2016 05:31PM

This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.

 

The best place to track changes to the codebase is the github LW issues page.

[Link] Hate Crimes: A Fact Post

7 sarahconstantin 01 December 2016 04:25PM

December 2016 Media Thread

4 ArisKatsaris 01 December 2016 07:41AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] "Decisions" as thoughts which lead to actions.

1 ProofOfLogic 01 December 2016 12:47AM

[Link] What they don’t teach you at STEM school

8 RomeoStevens 30 November 2016 07:20PM

Debating concepts - What is the comparative?

5 casebash 30 November 2016 02:46PM

In order to get a solid handle on a proposal, it isn't enough to just know what the world will look like if you adopt the proposal. It is also very important to know what the current situation or counter-model is; otherwise the proposal may provide less of a benefit than expected. Before I begin, I'll note that this article is about the comparative; I'll write another article on being comparative later, though probably under the name 'being responsive', since that will be less confusing.

One of the best ways to think about what it means to be comparative is that you want to indicate what the two worlds will look like. For example, conscription may provide us with more troops, and everyone might agree that troops are important for winning wars, but we also need to look at what the status quo is like. If the country already has enough troops or allies, then the difference in the comparative might not be that significant. When we ignore the comparative, we can often get caught up in a particular frame and fail to realise that the framing is misleading. At first, being able to better win wars might sound really, really important. But when we ask whether we need to be better at winning or whether we are already good enough, we might quickly realise that it doesn't really matter. As can be seen here, there is no need to wait for the other team to bring arguments before you can start being comparative.

Here the first speaker in 'This house supports nationalism' provides a good example of being comparative. He explains that he doesn't see the comparative as some utopian cosmopolitan society: people will always choose a particular form of identity. He says that this should be nationalism; not ethno-nationalism, but rather a form of nationalism based on shared values. He argues that this is advantageous since everyone in a nation can opt into this identity, and hence it avoids the divisions that occur when people opt into specific identities such as race or gender. The speaker also talks about how nationalism can energise the nation, but if the speaker had only talked about this, the argument would have been weaker. In this case, thinking about what the world would otherwise look like allows you to make nationalism more attractive, since we can see that the alternatives are not particularly compelling.

As another example, consider a debate about banning abortion. Imagine that the government stands up and talks about why they think the fetus is a person and hence it should be illegitimate to abort it. However, their argument will not be as persuasive if they fail to deal with the comparative, which is that many women will get abortions whether or not it is legal, and these abortions will be more dangerous. In this debate, the comparative weakens the affirmative case, but knowing it also allows the government to pre-emptively respond to this point. This objection is also common knowledge, so unless it is responded to, the case will likely be rejected outright.

So, as we have seen, being comparative allows you to be more persuasive and to think in a more nuanced manner about an issue. It provides the language to explain to a friendly argumentation opponent how you think their argument could be improved or why you don't think that their argument is very persuasive.

 

 

 

 

[Link] Things "Meta" Can Mean

5 ProofOfLogic 30 November 2016 09:52AM

Terminology is important

4 casebash 30 November 2016 12:37AM

As rationalists, we like to focus on substance over style. This is a good habit; unfortunately, most of the public will swallow extremely poor reasoning if it is expressed sufficiently confidently and fluently. However, style is also important when it comes to popularising an idea, even if it is within our own community.


In order for terminology to be useful, a few conditions need to be met:

  • Firstly, the term needs to either be more nuanced or more concise than explaining the same concept in ordinary language. I tend to see it as a bad thing when terms are created just to signal that a person is a member of the in-group.

  • Secondly, the speaker needs to remember the term. If a term is hard to pronounce or it has a complex name, then the speaker may be unable to use it, even if they would like to be able to.

  • Lastly, the person who is hearing the term for the first time should be able to connect it to its meaning. If they are constantly having to pause to remember what the term means, then it will be harder for them to figure out your meaning as a whole. In the best case, a person can guess what the term means before it is even explained to them.


I believe that a lot of the value that Less Wrong and the rationalist community have provided is not just new concepts, but the language that allows us to describe them. The better a term scores on each of the above factors, the more the term will be used and the more we can rely on other speakers within the community also adopting it. A key part of what draws people to the rationalist community is being able to have a conversation from a certain baseline that doesn't end up getting dragged down in the way that would be typical outside the community. Instead of getting trapped in an argument at the level of the base assumptions, it allows a conversation to go deeper and become more nuanced.

 

Given all of this, I believe that further developing terminology is a key part of what our project should be. I will begin by writing a series of articles on debating terms which I wish were part of our common vocabulary. I would like to encourage people to reread old Less Wrong articles and consider whether the concepts have been given a clear and memorable name, and if not, to write a new article arguing in favour of a new term. We need to figure out ways of producing more content, and I believe that a reasonable number of quality articles could be produced this way. Failing this, if you have a concept that needs a name, I would suggest writing a post arguing why the concept is important and providing examples of when it might be useful; then other people may feel compelled to try to think of a term. My first effort in this direction will be to steal some concepts from debating. Here is my first article - What is the comparative?

[Link] What if jobs are not the solution, but the problem

0 morganism 29 November 2016 11:21PM

[Link] Micropayments on the web, a realistic model

0 morganism 29 November 2016 11:17PM

Articles in Main

3 Vaniver 29 November 2016 09:35PM

Hi all,

We shut off Main back in February to force everything into Discussion, and I still think the Main/Discussion split should be replaced (by better use of the tagging system, or by different subreddits based more on the style of interaction that people are looking for than on the content, or so on), but (as mentioned then) we're going to be using Main for posts we want to make sure get into the RSS feed.

This is awkward because everything else is in Discussion, and Main is still a weird visibility zone, where stuff in Main that isn't Promoted is sort of in limbo. There are ways to improve this long-term, but in the short term, it looks like there are some easy options:

1. Open Thread comments linking to new promoted Main posts

2. Linkposts that point to new promoted Main posts

3. Something else?

 

(As said before, this should hopefully be a temporary measure; if we add promoted posts in Discussion to the site RSS (github issue), then those posts will show up in Discussion and in the RSS and everything is great.)

Recent AI control posts

11 paulfchristiano 29 November 2016 06:53PM


Over at medium, I’m continuing to write about AI control; here’s a roundup from the last month.

Strategy

  • Prosaic AI control argues that AI control research should first consider the case where AI involves no “unknown unknowns.”
  • Handling destructive technology tries to explain the upside of AI control, if we live in a universe where we eventually need to build a singleton anyway.
  • Hard-core subproblems explains a concept I find helpful for organizing research.

Building blocks of ALBA

Terminology and concepts

Epistemic Effort

27 Raemon 29 November 2016 04:08PM

Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a randomized controlled trial
[Edit: the intention with these examples is for it to start with things that are fairly easy to do to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard to fake evidence and exposure to falsifiability"]

A few reasons I think this (most of these reasons are "things that seem likely to me" but which I haven't made any formal effort to test - they come from some background in game design and reading some books on habit formation, most of which weren't very well cited)
  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status") but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine" reward but then realized I honestly don't know if that's the mechanism, but "small internal brain happy feeling")
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, 2 paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time so I did a couple simpler techniques. But even that I think provided a lot of value.
Results of thinking about it for 5 minutes.

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (i.e. randomly trying something on a couple of double-blinded people and calling it a randomized controlled trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the risk of some people ending up less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run a followup informal survey asking "did you do this? Did it work out for you? Do you have feedback."
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:


Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after Epistemic Effort. If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:
  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might you peel away or refine to make the idea stronger? Are there individual premises you might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea? 

Why GiveWell can't recommend MIRI or anything like it

1 Bound_up 29 November 2016 03:29PM

There's an old joke about a man, head down, slowly walking in circles under the light of a street lamp. It is dark and he has lost his wallet.

A passerby offers his assistance and asks what they're looking for. A second does the same.

Finally, this second helper asks "Is this where you lost it?"

"No," comes the reply.

"Then why are you looking over here?"

"Because this is where the light is!"

 

The tendency to look for answers where they can be measured or found may also be present in psychological research on rats. We don't really look at rats for psychological insight because we think that's where the psychological insights are; that's just the only place we can look! (Note: I know looking at rats is better than nothing, and we don't only look at rats.)

 

Likewise, GiveWell. They've released their new list of seven charities they recommend donating to. Six are efforts to increase health in a cheap way, and the last is direct money transfers to help people break out of poverty traps. In theory, these are the most cost-efficient producers of good in the world.

 

Except, not really. Technological research, especially AI, or perhaps effective educational reform, or improving the scientific community's norms might very well be vastly more fruitful fields.

 

I don't think these are all missing from GiveWell's list only because they don't measure up, but because, by GiveWell's metrics, they can't be measured at all! GiveWell has provided, perhaps, the best of the charities that can be easily measured.

 

What if the best charities aren't easily measurable? Well, then they won't just not be on the list, they can't be on the list.

How can people write good LW articles?

7 abramdemski 29 November 2016 10:40AM

A comment by AnnaSalamon on her recent article:

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles and only a little on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.

Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.

I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.

A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or the world and try to make a model of it, try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.

[Link] Expert Prediction Of Experiments

9 Yvain 29 November 2016 02:47AM

[Link] Newcomb's problem divides philosophers. Which side are you on?

-1 Morendil 28 November 2016 06:34PM

Using a Spreadsheet to Make Good Decisions: Five Examples

10 peter_hurford 28 November 2016 05:10PM

I've been told that LessWrong is coming back now, so I'm cross-posting this rationality post of interest from the Effective Altruism forum.

-

We all make decisions every day. Some of these decisions are pretty inconsequential, such as what to have for an afternoon snack. Some of these decisions are quite consequential, such as where to live or what to dedicate the next year of your life to. Finding a way to make these decisions better is important.

The folks at Charity Science Health and I have been using the same method to make many of our major decisions for the past four years -- everything from where to live to deciding to create Charity Science Health itself. The method isn't particularly novel, but we definitely think it is quite underused.

Here it is, as a ten step process:

  1. Come up with a well-defined goal.

  2. Brainstorm many plausible solutions to achieve that goal.

  3. Create criteria through which you will evaluate those solutions.

  4. Create custom weights for the criteria.

  5. Quickly use intuition to prioritize the solutions on the criteria so far (e.g., rating each as high, medium, or low).

  6. Come up with research questions that would help you determine how well each solution fits the criteria.

  7. Use the research questions to do shallow research into the top ideas (you can review more ideas depending on how long the research takes per idea, how important the decision is, and/or how confident you are in your intuitions).

  8. Use the research to rerate and rerank the solutions.

  9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable.

  10. Repeat steps 8 and 9 until sufficiently confident in a decision.
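As a rough sketch of steps 3-5 and 8 (my own toy example; all criteria, weights, and ratings below are invented placeholders, not Charity Science's actual numbers):

```python
# Score each candidate solution as a weighted sum of criterion ratings,
# then rank. Re-rating after research and re-running implements step 8.

weights = {"cost_effectiveness": 3, "scalability": 3, "strength_of_evidence": 2}

ratings = {  # intuitive ratings: high = 3, medium = 2, low = 1
    "option_1": {"cost_effectiveness": 3, "scalability": 1, "strength_of_evidence": 2},
    "option_2": {"cost_effectiveness": 2, "scalability": 3, "strength_of_evidence": 1},
}

def score(option: str) -> int:
    return sum(weights[c] * ratings[option][c] for c in weights)

for option in sorted(ratings, key=score, reverse=True):
    print(option, score(option))  # option_2 scores 17, option_1 scores 16
```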

 

Which charity should I start?

The definitive example for this process was the Charity Entrepreneurship project, where our team decided which charity would be the best possible charity to create.

Come up with a well-defined goal: I want to start an effective global poverty charity, where effective is taken to mean a low cost per life saved comparable to current GiveWell top charities.

Brainstorm many plausible solutions to achieve that goal: For this, we decided to start by looking at the intervention level. Since there are thousands of potential interventions, we placed a lot of emphasis on plausibly high effectiveness, and chose to look at GiveWell's priority programs plus a few that we thought were worthy additions.

Create criteria through which you will evaluate those solutions / create custom weights for the criteria: For this decision, we spent a full month of our six-month project thinking through the criteria. We weighted criteria based on both importance and the expected variance between our options. We decided to strongly value cost-effectiveness, flexibility, and scalability. We moderately valued strength of evidence, metric focus, and indirect effects. We weakly valued logistical possibility and other factors.
 

Come up with research questions that would help you determine how well each solution fits the criteria: We came up with the following list of questions and research process.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: Since this choice was important and we were pretty uninformed about the different interventions, we did shallow research into all of the choices. We then produced the following spreadsheet:

Afterwards, it was pretty easy to drop 22 out of the 30 possible choices and go with a top eight (the eight that ranked 7 or higher on our scale).

 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable / Repeat steps 8 and 9 until sufficiently confident in a decision: We then researched the top eight more deeply, with a keen eye to turning them into concrete charity ideas rather than amorphous interventions. When re-ranking, we came up with a top five and wrote up more detailed reports -- SMS immunization reminders, tobacco taxation, iron and folic acid fortification, conditional cash transfers, and a poverty research organization. A key aspect of this narrowing was also talking to relevant experts, which we wish we had done earlier in the process, as it could quickly eliminate unpromising options.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: As we researched further, it became clearer that SMS immunization reminders performed best on the criteria: highly cost-effective, with a high strength of evidence and easy testability. However, the other four finalists are also excellent opportunities, and we strongly invite other teams to invest in creating charities in those four areas.

 

Which condo should I buy?

Come up with a well-defined goal: I want to buy a condo that is (a) a good place to live and (b) a reasonable investment.
 

Brainstorm many plausible solutions to achieve that goal: For this, I searched around on Zillow and found several candidate properties.

Create criteria through which you will evaluate those solutions: For this decision, I looked at the purchasing cost of the condo, the HOA fee, whether or not the condo had parking, the property tax, how much I could expect to rent the condo out, whether or not the condo had a balcony, whether or not the condo had a dishwasher, how bright the space was, how open the space was, how large the kitchen was, and Zillow’s projection of future home value.
 

Create custom weights for the criteria: For this decision, I wanted to turn things roughly into a personal dollar value, where I could calculate the benefits minus the costs. The costs were the purchasing cost of the condo turned into a monthly mortgage payment, plus the annual HOA fee, plus the property tax. The benefits were the expected annual rent plus half of Zillow's expectation for how much the property would increase in value over the next year, to be a touch conservative. I also added some more arbitrary bonuses: +$500 if there was a dishwasher, +$500 if there was a balcony, and up to +$1000 depending on how much I liked the size of the kitchen. I also added +$3600 if there was a parking space, since the space could be rented out to others, as I did not have a car. Solutions would be graded on a benefits-minus-costs model.

Quickly use intuition to prioritize the solutions on the criteria so far: Ranking the properties was straightforward; I could plug in numbers directly from the property data and the photos.

 

Property | Mortgage | Annual fees | Annual increase | Annual rent | Bonuses | Total
---------|----------|-------------|-----------------|-------------|---------|-------
A        | $7452    | $5244       | $2864           | $17400      | +$2000  | +$9568
B        | $8760    | $4680       | $1216           | $19200      | +$1000  | +$7976
C        | $9420    | $4488       | $1981           | $19200      | +$1200  | +$8473
D        | $8100    | $8400       | $2500           | $19200      | +$4100  | +$9300
E        | $6900    | $4600       | $1510           | $15000      | +$3600  | +$8610
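As a sanity check, here is a minimal sketch of the benefits-minus-costs arithmetic applied to the table above (assuming the "Annual increase" column already reflects half of Zillow's projection, as described in the weighting step):

```python
# A minimal sketch of the benefits-minus-costs model described above.
def condo_score(mortgage, fees, increase, rent, bonuses):
    # Benefits (rent + halved projected increase + bonuses)
    # minus costs (annual mortgage payments + annual fees).
    return (rent + increase + bonuses) - (mortgage + fees)

# Property A from the table:
print(condo_score(mortgage=7452, fees=5244, increase=2864,
                  rent=17400, bonuses=2000))  # 9568, matching +$9568
```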

  

Come up with research questions that would help you determine how well each solution fits the criteria: For this, the research was simply to visit the properties and confirm my assessments.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: Pretty easy; not much changed when I actually went to investigate.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: For this, I just ended up purchasing the highest ranking condo, which was a mostly straightforward process. Property A wins! 
 
This is a good example of how easy it is to re-adapt the process and how you can weight criteria in nonlinear ways.
 

How should we fundraise? 

Come up with a well-defined goal: I want to find the fundraising method with the best return on investment. 

Brainstorm many plausible solutions to achieve that goal: For this, our Charity Science Outreach team conducted a literature review of fundraising methods and asked experts, creating a list of 25 different fundraising ideas. 

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: The criteria we used here were pretty similar to the criteria we later used for picking a charity -- we valued ease of testing, the estimated return on investment, the strength of the evidence, and the scalability potential roughly equally. 

Come up with research questions that would help you determine how well each solution fits the criteria: We created this rubric of questions:

  • What does the research say about it (e.g., expected fundraising ratios, success rates, necessary prerequisites)?
  • What are some relevant comparisons to similar fundraising approaches? How well do they work?
  • What types/sizes of organizations is this type of fundraising best for?
  • How common is this type of fundraising, in nonprofits generally and in similar nonprofits (global health)?
  • How would one run a minimum-cost experiment in this area?
  • What is the expected time, cost, and outcome for the experiment?
  • What is the expected value?
  • What is the expected time cost to get the best time-per-dollar ratio (e.g., would we need 100 staff or a huge budget to make this effective)?
  • What further research should be done if we were going to run this approach?

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After reviewing, we were able to narrow the 25 down to eight finalists: legacy fundraising, online ads, door-to-door, niche marketing, events, networking, peer-to-peer fundraising, and grant writing.
 
Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We did MVPs of all eight of the top ideas and eventually decided that three of the ideas were worth pursuing full-time: online ads, peer-to-peer fundraising, and legacy fundraising.
 
 

Who should we hire? 

Come up with a well-defined goal: I want to hire the employee who will contribute the most to our organization. 

Brainstorm many plausible solutions to achieve that goal: For this, we had the applicants who applied to our job ad.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: We thought broadly about what qualities a good hire would have, and decided to heavily weight values fit and prior experience with the job, and then roughly equally value autonomy, communication skills, creative problem solving, the ability to break down tasks, and the ability to learn new skills.
 
Quickly use intuition to prioritize the solutions on the criteria so far: We started by ranking hires based on their resumes and written applications. (Note that to protect the anonymity of our applicants, the following information is fictional.)
 

Person | Autonomy | Communication | Creativity | Break down | Learn new skills | Values fit | Prior experience
-------|----------|---------------|------------|------------|------------------|------------|------------------
A      | High     | Medium        | Low        | Low        | High             | Medium     | Low
B      | Medium   | Medium        | Medium     | Medium     | Medium           | Medium     | Low
C      | High     | Medium        | Medium     | Low        | High             | Low        | Medium
D      | Medium   | Medium        | Medium     | High       | Medium           | Low        | High
E      | Low      | Medium        | High       | Medium     | Medium           | Low        | Medium
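One way to turn qualitative ratings like these into a single comparable number per candidate is sketched below; the Low/Medium/High point values and the doubled weights on values fit and prior experience are illustrative assumptions, not our actual scoring:

```python
# Illustrative mapping of qualitative ratings to points.
LEVEL = {"Low": 1, "Medium": 2, "High": 3}
WEIGHTS = {
    "values_fit": 2, "prior_experience": 2,   # heavily weighted
    "autonomy": 1, "communication": 1, "creativity": 1,
    "break_down": 1, "learn_new_skills": 1,   # roughly equally valued
}

def candidate_score(ratings):
    """ratings: criterion name -> "Low"/"Medium"/"High" for one candidate."""
    return sum(WEIGHTS[c] * LEVEL[r] for c, r in ratings.items())

# Candidate A from the table above:
a = {"autonomy": "High", "communication": "Medium", "creativity": "Low",
     "break_down": "Low", "learn_new_skills": "High",
     "values_fit": "Medium", "prior_experience": "Low"}
print(candidate_score(a))  # 16
```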

 

Come up with research questions that would help you determine how well each solution fits the criteria: The initial written application was already tailored toward this, but we designed a Skype interview to further rank our applicants. 

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After our Skype interviews, we re-ranked all the applicants. 

 

Person | Autonomy | Communication | Creativity | Break down | Learn new skills | Values fit | Prior experience
-------|----------|---------------|------------|------------|------------------|------------|------------------
A      | High     | High          | Low        | Low        | High             | High       | Low
B      | Medium   | Medium        | Medium     | Medium     | Low              | Low        | Low
C      | High     | Medium        | Low        | High       | High             | Medium     | Medium
D      | Medium   | Low           | Medium     | High       | Medium           | Low        | High
E      | Low      | Medium        | High       | Medium     | Medium           | Low        | Medium

  

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: While “MVP testing” may not be polite to extend to people, we do a form of it by offering our applicants one-month trials before converting to a permanent hire.

 

Which television show should we watch? 

Come up with a well-defined goal: Our friend group wants to watch a new TV show together that we’d enjoy the most. 

Brainstorm many plausible solutions to achieve that goal: We all each submitted one TV show, which created our solution pool. 

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, the criterion was each participant’s enjoyment, weighted equally. 

Come up with research questions that would help you determine how well each solution fits the criteria: For this, we watched the first episode of each television show, and then each of us ranked them all. 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We then watched the winning television show, which was Black Mirror. Fun! 

 

Which statistics course should I take? 

Come up with a well-defined goal: I want to learn as much statistics as fast as possible, without having the time to invest in taking every course. 

Brainstorm many plausible solutions to achieve that goal: For this, we searched around on the internet and found ten online classes and three books.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, we heavily weighted breadth and time cost, weighted depth and monetary cost, and weakly weighted how interesting the course was and whether the course provided a tangible credential that could go on a resume.
 
Quickly use intuition to prioritize the solutions on the criteria so far: By looking at the syllabi and tables of contents and reading around online, we came up with some initial rankings:
 
 

Name | Cost | Estimated hours | Depth score | Breadth score | How interesting | Credential level
-----|------|-----------------|-------------|---------------|-----------------|------------------
Master Statistics with R | $465 | 150 | 10 | 9 | 3 | 5
Probability and Statistics, Statistical Learning, Statistical Reasoning | $0 | 150 | 8 | 10 | 4 | 2
Critically Evaluate Social Science Research and Analyze Results Using R | $320 | 144 | 6 | 6 | 5 | 4
http://online.stanford.edu/Statistics_Medicine_CME_Summer_15 | $0 | 90 | 5 | 2 | 7 | 0
Berkeley stats 20 and 21 | $0 | 60 | 6 | 5 | 6 | 0
Statistical Reasoning for Public Health | $0 | 40 | 5 | 2 | 4 | 2
Khan stats | $0 | 20 | 1 | 4 | 6 | 0
Introduction to R for Data Science | $0 | 8 | 3 | 1 | 5 | 1
Against All Odds | $0 | 5 | 1 | 2 | 10 | 0
Hans Rosling doc on stats | $0 | 1 | 1 | 1 | 11 | 0
Berkeley Math | $0 | 60 | 6 | 5 | 6 | 0
OpenIntro Statistics | $0 | 25 | 5 | 5 | 2 | 0
Discovering Statistics Using R by Andy Field | $25 | 50 | 7 | 3 | 3 | 0
Naked Statistics by Charles Wheelan | $17 | 20 | 2 | 4 | 8 | 0
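Cost and hours are "lower is better" criteria, so they need converting before they can be combined with the 1-10 scores. Here is a minimal sketch of one way to do that; the numeric weights and the normalization constants are illustrative assumptions, not the scheme we actually used:

```python
# A minimal sketch: weights of 3 (heavy), 2 (moderate), 1 (weak) per the text.
def course_score(cost, hours, depth, breadth, interest, credential):
    cost_score = 10 - min(cost / 50, 10)   # cheaper is better; $500+ scores 0
    time_score = 10 - min(hours / 15, 10)  # shorter is better; 150+ hours scores 0
    return (3 * breadth + 3 * time_score      # heavily weighted
            + 2 * depth + 2 * cost_score      # moderately weighted
            + 1 * interest + 1 * credential)  # weakly weighted

# "Master Statistics with R" from the table above:
print(course_score(cost=465, hours=150, depth=10,
                   breadth=9, interest=3, credential=5))  # ≈ 56.4
```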

 

Come up with research questions that would help you determine how well each solution fits the criteria: For this, the best we could do was to try a little of each of our top class choices, while avoiding purchasing the expensive ones unless the free ones did not meet our criteria. 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: Only the first three felt deep enough. Only one of them was free, but luckily we were able to find a way to audit the two expensive classes. After a review of all three, we ended up going with “Master Statistics with R”.

Crowdsourcing a novel

2 Arithine 28 November 2016 10:55AM

Hello all. I know I have not been an active member of this community, but I am currently working my way through From AI to Zombies (having been brought over from HPMOR). 

 

I am writing a novel set post-Singularity and think that it would be a great idea to collaborate with some of the folks here. It might be useful to try to imagine a friendly AI to work through some of the challenges. 

 

If anyone is interested, here are the very beginnings of a rough draft. 

 https://docs.google.com/document/d/1vl5lK4EZOfRnnSF7IZWQ2zGvsv-hoCdpnk7dPH2RsIw/edit?usp=drive_web

 

... I hope I didn't embarrass myself too much. 

[Link] Finding slices of joy

3 Kaj_Sotala 28 November 2016 10:10AM

Open thread, Nov. 28 - Dec. 04, 2016

3 MrMind 28 November 2016 07:40AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Be Like Stanislov Petrov

0 Evan_Gaensbauer 28 November 2016 06:04AM

Increase Your Child’s Working Memory

2 James_Miller 27 November 2016 09:57PM

I continually train my son’s working memory, and I urge parents of other young children to do likewise.  While I have succeeded in at least temporarily improving his working memory, I accept that this change might not be permanent and could end a few months after he stops training.  But I also believe that while his working memory is boosted so too is his learning capacity.

 

I have a horrible working memory that greatly hindered my academic achievement.  I was so bad at spelling that they stopped counting it against me in school.  In technical classes I had trouble remembering what variables stood for.  My son, in contrast, has a fantastic memory and even twice won his school’s spelling bee.

 

My son and I had been learning different programming languages through Codecademy.  While I struggle to remember the required syntax of different languages, he quickly gets this and can focus on higher level understanding.  When we do math learning together his strong working memory lets him concentrate on higher order issues rather than having to worry about remembering the details of the problems and the various relevant formulas.

 

You can easily train a child’s working memory.  It requires just a few minutes a day, can be done low tech or on a computer, can be optimized for your child to get him in flow, and easily lends itself to a reward system.  Here are some of the training techniques I have used:  

 

  • I write down a sequence and have him repeat it.
  • I say a sequence and have him repeat it.
  • He repeats a sequence backwards.
  • He repeats the sequence with slight changes, such as adding one to each number and “subtracting” one from each letter (e.g. C becomes B).
  • He repeats a sequence while doing some task, like touching his head every time he says an even number and his knee every time he says an odd one.
  • Before repeating a memorized sequence, he must play repeat-after-me where I say a random string.
  • I draw a picture and have him redraw it.
  • He plays N-back games.
  • He does mental math requiring keeping track of numbers (e.g. 42 times 37).
  • I assign numerical values to letters and ask him math operation questions (e.g. A*B+C).
  • I write down words, numbers, and phrases on index cards, place the index cards in different places in a room, have him memorize what's on each card, turn over the cards, then ask him questions about what's on a card, or ask him to identify the location of a certain card.
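Several of these drills are easy to put on a computer. Below is a minimal sketch of a random-sequence generator for the repeat-back exercises; the sequence length and the mix of letters and digits are arbitrary choices:

```python
# A minimal sketch of a sequence-recall drill generator.
import random
import string

def make_sequence(length=6):
    """Return a random mix of uppercase letters and digits to read aloud."""
    symbols = string.ascii_uppercase + string.digits
    return [random.choice(symbols) for _ in range(length)]

seq = make_sequence()
print("Repeat this:", " ".join(seq))
print("Now backwards:", " ".join(reversed(seq)))
```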

 

The key is to keep changing how you train your kid, so you have more hope of improving general working memory rather than just the very specific task you are training. So, for example, if you say a sequence and have your kid repeat it back to you, vary the speed at which you talk on different days, and don’t just use one class of symbols in your exercises. I learned this after my son insisted that I repeat sequences at the same speed.

 

 

Stand-up comedy as a way to improve rationality skills

4 Andy_McKenzie 27 November 2016 09:52PM

Epistemic status: Believed, but hard to know how much to adjust for opportunity costs 

I'm wondering whether stand-up comedy would be a good way to expand and test one's "rationality skills" and/or just general interaction skills. One thing I like about it is that you get immediate feedback: the audience either laughs at your joke, or they don't. 

Prominent existential risk researcher Nick Bostrom used to be a stand-up comedian:

For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a little bit of stand-up comedy on the vibrant London pub and theatre circuit.

It was also mentioned at the London LW meetup in June 2011:

Comedy as Anti-Compartmentalization - Another pet theory of mine. I was puzzled by the amount of atheist comedians out there, who people pay to see tell them that their religion is absurd. (Yes, Christian comedians exist too. Search YouTube. I dare you.) So my theory is that humour serves as a space where patterns and data from different fields are allowed to be superimposed on one another. Think of it as an anti-compartmentalization habit. Due to our brain design, compartmentalization is the default, so humour may be a hack to counter that. And we reward those who do it well with high status because it's valuable. Maybe we should have transhumanist/rationalist stand-up comedians? We sure have a lot of inconsistencies to point out.

Diego Caliero thinks that there would be good material to draw upon from the rationalist community.

Does anyone have any experience trying this and/or have thoughts on whether it would be useful? Also, does anyone in NYC want to try it out? 
