2012-11-24

mindstalk: (robot)
I just finished Thinking, Fast and Slow by Daniel Kahneman. It is long, with many chapters discussing many cognitive biases, making it hard to summarize in my usual "slog through and take notes" way. Lucky you! It was quite interesting, though. One of those books that makes you think twice about democracy, except as Kahneman has pointed out, there's no guarantee the experts are any better.

One theme is a set of contrasts: System 1 vs. System 2, where 1 is fast, automatic, parallel, associative, and heuristic, basically systems of perception, memory and recognition; 2 is slow, serial, more logical, conscious, able to direct the attention of 1, and lazy. Another is Econs vs. Humans, rational decision makers vs. real people. A third, near the end, is the experiencing self vs. the remembering self. E.g. we tend to remember a painful episode not by the total pain, at least as modeled by a simple integral, but by the average of the peak pain and the last pain.

(So you can subject people to 60 seconds of their hand being in cold water, vs. 60 seconds of their hand being in equally cold water plus 30 more seconds of slightly less cold water, and they will choose to repeat the second experience over the first, because they remember the lower pain at the end, even though objectively it seems an entirely worse experience.)
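The peak-end arithmetic can be made concrete with a toy calculation; the pain levels and 0-10 scale below are invented for illustration, not from the actual experiment:

```python
# Toy illustration of the peak-end rule vs. total (integrated) pain.
# Pain is sampled once per second on a made-up 0-10 scale.
short_trial = [8] * 60             # 60 s of cold water at pain level 8
long_trial  = [8] * 60 + [5] * 30  # same, plus 30 s at milder level 5

def total_pain(samples):
    # "Objective" pain: a simple integral (sum) over time.
    return sum(samples)

def remembered_pain(samples):
    # Peak-end rule: average of the worst moment and the final moment.
    return (max(samples) + samples[-1]) / 2

print(total_pain(short_trial), total_pain(long_trial))            # 480 vs 630
print(remembered_pain(short_trial), remembered_pain(long_trial))  # 8.0 vs 6.5
```

The longer trial is objectively worse (more total pain) yet is remembered as milder, which is why people choose to repeat it.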

Looking at http://en.wikipedia.org/wiki/List_of_biases_in_judgment_and_decision_making I'm reminded that major ones he talks about are anchoring, priming, framing effect, halo effect, base rate neglect, availability heuristic, endowment effect and loss-aversion, focusing effect (particularly by the remembering self, at the expense of the future experiencing self), impact bias, peak-end rule (what causes the cold water result).

Focusing: if you ask someone in Chicago how happy people in California are, the Chicagoan will think of the climate as a salient feature, focus on that, and expect Californians to be happier. In fact most Californians take the weather for granted, and aren't obviously any happier. Similarly people who don't know a paraplegic will expect one to be pretty unhappy after a year, whereas they tend to learn to cope and have close to normal levels of happiness.

If you ask people how happy they are, the current weather tends to have a big effect. Unless you first ask them what the weather is; then in considering their happiness, the weather is salient and they control for it.

Loss-aversion: given a chance to bet on a fair coin, heads they win $20, tails they lose $10, there's a tendency for humans to avoid the bet; losses are more painful, despite the nice expected value. If offered a chance to bet on the result of 100 coin tosses, most everyone would jump at that. Kahneman notes that this is too narrow minded: life is a whole series of small diverse bets with positive expected value, and it'd be very costly to systematically avoid them just because they look like diverse and unconnected bets, unlike 100 identical coin tosses. This feels relevant to me, who tends to be proudly risk-averse.

Framing: ask corporate executives about a fair chance to double or halve their capital, and most will avoid it. Their CEO would love for them to all take such a bet, as he can see the aggregate benefit to the company. Narrow framing vs. broad framing, and this actually goes back to the previous example: narrow framing is "how do I feel about the potential small loss here", broad framing is "what life policy should I have to such bets in general?"

"Some people are more like their System 1, others are more like their System 2."

Keith Stanovich breaks up System 2 into an algorithmic mind -- slow thinking and demanding computation, IQ test performance, ability to switch tasks quickly and efficiently -- and a rational mind, or what Kahneman calls 'engaged', which is about reflectivity and resistance to biases, or the ability to recognize when biases are likely and thus the ability to slow down and think more. Someone can be intelligent, yet highly subject to bias; I couldn't help thinking of Intelligence and Wisdom in D&D.

This book also had the thing I mentioned recently, where asking people to think like a trader changes their behavior (in particular, makes them less loss averse in an experiment), which prompted me to think "f-ck! who would have thought of that as a requirement for human-equal AI? Intelligence is Hard."

***

My subject said two books. I've just started the second one, which is Simple Heuristics That Make Us Smart, by Gerd Gigerenzer and Peter Todd and others. You'd think that'd be a similar research program to Kahneman and Tversky's, but there's apparently a fair bit of discord. I'd heard of Todd and his heuristics program back at IU -- he was there and I took a class -- so I recognized him when Kahneman mentioned them briefly in a footnote, saying they focused more on statistical simulation, that their evidence for actual psychological use was limited and disputed, and that for all its flaws, there's no need for System 1 to be frugal; it's built to use vast quantities of information while still being fast. This in 2012.

The other book, written in 1999, mentioned Kahneman and Tversky almost right away, with frequent sniping about how they focus on biases and deviations from a supposed perfectly rational ideal, while ignoring the ecological adaptedness and accuracy of fast and frugal heuristics. I've read a few chapters, and it's been an interesting reflective exercise to watch my biases at work. I liked Kahneman's book and his "no need to be frugal" criticism seemed plausible, so I come in biased against this work. The tone seems pettier, so there's a halo effect -- I don't like that, so I'm disposed to not like the content. And IMO it's an uglier book, particularly in the font, so that's the halo effect again.

As for the actual content, the first part was about the recognition heuristic and their famous example. If you're asked to judge which of two cities is larger, and you don't know, but you recognize that you've heard of one of them, it's a good bet to say that one is larger. Strikingly, you can do better by "knowing less": Americans might have more pairs of cities they've heard of and thus are stumped by, while Germans are more likely to have just heard of the biggest US cities. And they had a computer model that did best when taught the first 23 of 83 German cities that Americans recognized, even with other cues to help decide between pairs of recognized cities. (Basically, those cues were less accurate than just recognition, when applicable.) The next chapter talks about how recognition did better in picking a stock portfolio for 6 months of 1999 than almost any other strategy; they do acknowledge some of the potential pitfalls there.
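The recognition heuristic itself is simple enough to sketch; the city list, the populations, and the "recognized" set below are all my own invented stand-ins, not the book's 83-city data:

```python
# A minimal sketch of the recognition heuristic for the city-size task.
# Rule: if exactly one city of the pair is recognized, guess that it is larger.
population = {  # rough illustrative figures
    "Berlin": 3_600_000,
    "Munich": 1_500_000,
    "Bielefeld": 330_000,
    "Herne": 155_000,
}
recognized = {"Berlin", "Munich"}  # what a hypothetical American has heard of

def recognition_pick(a, b):
    """Return the guessed-larger city, or None if the heuristic is silent."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # both or neither recognized: need some other cue

print(recognition_pick("Berlin", "Herne"))     # Berlin -- and correct here
print(recognition_pick("Bielefeld", "Herne"))  # None: neither recognized
```

The "knowing less" result falls out of the `None` branch: someone who recognizes too many cities is silent (or must fall back on weaker cues) on exactly the pairs where partial ignorance would have decided.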

My response was that yes, the recognition heuristic seems plausible and sensible in that situation (the cities one), but how common is that in the real world? And in their emphasis on "fast and frugal", and desire for clear computational models, they dismiss some alternatives, like familiarity. They seem to say that's too vague to consider, or to form part of a research program, yet it seems obvious to me that if I recognize both cities but feel I've heard of one of them more, then I'll bet on that one and likely do well, and that familiarity -- number of associations, sense of prototypicalness, or just a vague sense of hearing of it a lot -- should not be out of bounds for a cognitive research program, even if it would take more work to evaluate and model.

I just realized that Kahneman talked a lot about substitution effects -- faced with a hard question, like "how happy are you with your life", System 1 substitutes an easier question, like "how happy am I right now". And the recognition heuristic would be just that. Memory doesn't return the size of a city, but recognition (or familiarity) is something, and can be substituted in.
