
Daniel Kahneman in conversation

Ben Newell and Daniel Kahneman

I think that most of the things we believe, we believe for reasons that have very little to do with logic or reasoning.

Daniel Kahneman

Nobel Prize-winning psychologist and bestselling author of Thinking, Fast and Slow, Daniel Kahneman joined Ben Newell, Professor of Cognitive Psychology at UNSW Sydney, to discuss his work ahead of the release of his latest book. A psychologist whose work on the foundations of behavioural economics was recognised with the Nobel Prize in Economics, Kahneman has had an enormous impact on our understanding of how we think, and of how and why we make good and bad decisions.

In his new book, Noise: A Flaw in Human Judgment, Kahneman (with Olivier Sibony and Cass Sunstein) explores our susceptibility to ‘noise’ – the random factors and mental distractions that interfere with the judgements and decisions of organisations and individuals. Although we increasingly acknowledge the impact of bias, noise is even more common, yet there is little awareness of it. Can we reduce both noise and bias to make better decisions?

Transcript

Ann Mossop: Welcome to the UNSW Centre for Ideas podcast, a place to hear ideas from the world's leading thinkers and UNSW Sydney's brightest minds. I'm Ann Mossop, Director of the UNSW Centre for Ideas. The conversation you're about to hear is between Nobel laureate Daniel Kahneman and UNSW Professor Ben Newell and was recorded live. We hope you enjoy it.

Ben Newell: Welcome to Daniel Kahneman in Conversation. My name is Ben Newell and I'm a Professor of Cognitive Psychology here at UNSW Sydney. This event is presented by the UNSW Centre for Ideas. Throughout the discussion, you can comment on Facebook, use the live chat on YouTube, or post on Twitter using the hashtag UNSWIdeas. I'd like to acknowledge the Bedegal people, the traditional custodians of this land. I would also like to pay my respects to Elders both past and present, and extend that respect to other Aboriginal and Torres Strait Islander people who are present here today. This event celebrates the launch of the book Noise, written by Daniel Kahneman with Olivier Sibony and Cass Sunstein. It is my very great pleasure to introduce Daniel Kahneman. Daniel Kahneman is best known for his work with Amos Tversky on human judgement and decision making, for which he was awarded the Nobel Prize in Economics in 2002. Kahneman has also studied a number of other topics including attention, the memory of experiences, wellbeing, counterfactual thinking and behavioural economics. He published Thinking, Fast and Slow in 2011, which has sold more than 7 million copies worldwide. He is currently the Eugene Higgins Professor of Psychology at Princeton University. Today, Professor Kahneman is joining us from New York. Danny, thank you so much for speaking with me today.

Daniel Kahneman: My pleasure.

Ben Newell: I'd like to begin by asking you a question that some of our audience might wonder about. How does a psychologist end up being awarded the Nobel Prize in Economics?

Daniel Kahneman: Well, of course, you know, luck must be involved. And luck was involved. Amos Tversky and I studied judgement and decision making, which are topics that economists are interested in. Decision theory, which we were working on, is considered one of the foundations of economic theory. And a paper that we published on decision making appeared in the major theory journal of economics, Econometrica. We didn't publish it because we wanted to influence economics; that was not our objective. It was just the major journal for publishing papers on decision making. But economists paid attention to the work. It influenced a few people; it influenced, in particular, Richard Thaler, who is now known as the guru of behavioural economics, who got a Nobel Prize four years ago, and who is my best living friend. We influenced him, we published a few articles together, he created behavioural economics, and I got too much credit for what he did. So that's how it happened.

Ben Newell: I think that's a very, very modest response. A lot of that analysis and discussion came out of thinking about whether or not people are rational. And one question that I always wonder about is: to what extent does it make sense to describe people as rational versus irrational, given that the word has so many different meanings to different people?

Daniel Kahneman: Well, I think it makes very little sense. I very rarely use the word rational, and I never use the word irrational. Rationality is a technical concept in decision theory. It's a logic: it tells you how your beliefs and your preferences must be organised to be internally consistent and coherent. And this is how rationality is defined. And rationality as defined is completely impractical. A finite human mind simply cannot meet the requirements of rationality as defined in decision theory. So the question of whether people are rational or not is, in some sense, not even an interesting question. Rationality is a convenient assumption for economists, and it is true that when Amos Tversky and I did our work, more than 40 years ago, economists believed in rationality as a useful description of how people think. They didn't believe their spouses were rational, they didn't believe their deans were rational – that was a comment that Amos Tversky always made. But they believed that people in general are rational. And the work that we did was considered a critique of that idea, of that assumption, that people in general are rational. And it's an easy target. I mean, it was very easy to show that people do not satisfy the extremely demanding axioms of rationality, the logic of rationality. So that's what I think about rationality.

Ben Newell: Okay. I wanted to turn now, briefly, to discuss your previous book, Thinking, Fast and Slow, which has been incredibly popular and influential. One of the central ideas – indeed, the characters, really – in that book were system one and system two: the two systems that produce our judgments and decisions. And this idea has become immensely popular. I find myself talking to doctors, lawyers, government, firefighters, who will now use Kahneman’s two systems in their discussions. My question is, do you think that this systems dichotomy has been taken too literally? And that what might have started as a useful characterisation has ended up potentially oversimplifying the complexities of human cognition?

Daniel Kahneman: Oh, absolutely. I mean, there is a familiar rule that psychologists are taught very early, which is not to explain the behaviour of people by the behaviour of little people inside their brain. Homunculi, they're called. And this is a no-no when you're doing psychological theory. And it's a no-no that I very deliberately violated. I didn't invent system one and system two; both the ideas and the terms existed when I started working on the topic. I borrowed them and developed them. But I deliberately chose this image of agents with personalities, system one and system two. They have propensities, and they have traits, and they do things, and they interact with each other. And this is clearly an oversimplification. What I really meant to say – and I was very explicit in the book about what I did mean – is that there are two main types of processes. And even that is an oversimplification. But there is one process that is rapid, and automatic, and effortless. And there is another process that is effortful, and controlled, and in general slower. So there are those two kinds of processes. And all I did was, in effect, say: the processes of type one, think of them as if an agent called system one is producing them. And similarly for type two. It turns out – and that's the reason that I did it – that people find it very easy to think about agents. Thinking about agents is a lot simpler and easier and more compelling than thinking about categories or types. So system one and system two, as systems of thinking, are much easier to deal with, and much easier to think about, than type one and type two. However – and here I agree with you completely – people have taken me much too literally. Quite a few people seem to believe that I suggest there are two actual systems in the brain, and that these two systems fight it out or interact with each other. And I get questions like, do dogs have system two? Or what is system two in infants? Things that really make very little sense, because that description, as I think you were indicating, has been oversimplified. So part of the appeal of the book is that oversimplification: it made it easy for people to think about a distinction that is real and important, which is a distinction between two fundamentally different ways, I think, in which thoughts and ideas come to mind. Some ideas happen to you: like two plus two, and then four just happens to you as a thought. And other ideas you've got to produce, like 24 times 17; you'd have to work at it to figure out – I think the answer is 408.

Ben Newell: I believe you. 

Daniel Kahneman: But most people would have to figure it out. I happen to remember it. And that is work. And that is really very different. What happens to you when you are filing an income tax form, or when you're computing multiplications, or when you're deliberately searching your memory – that is a very different state of mind than the state of mind in which you are when you're responding to two plus two, or responding emotionally when somebody says 'your mother', which evokes an emotion, or the word 'vomit', which evokes an emotion. Those are immediate, they happen to you. So that's system one and system two, or type one and type two. And there are costs and benefits. I think knowing about system one and system two is better than not knowing that there are those two types of thinking, but at the same time, oversimplification should be discouraged, if possible.

Ben Newell: Yeah, I guess one of my concerns is that with the distinction between system one and system two – and you write a little bit about this in the new book, which we'll get to in a moment – there can be a kind of laziness in the use of biases and errors, and almost an abdication of responsibility, where people say, oh, that's my system one, there's nothing I can do about it. And it kind of reinforces, perhaps in a self-fulfilling way, this notion of irrational, error-prone humans, which can then paint too negative a picture. That’s one worry I have.

Daniel Kahneman: I think you're giving the book much too much credit. I don't think it has any influence on how people think, good or bad. I don't think it helps people think much better. And I don't think that it makes people more tolerant of their own intuitive thoughts. That is a criticism I would not accept, I think. I think it is true that a lot of thinking is automatic and intuitive, and not founded on logical reasoning, and yet completely convincing and compelling. I think that most of the things that we believe, we believe for reasons that have very little to do with logic or with reasoning. So in that sense, I may be more extreme in assuming that the role of type one processes, or the role of system one, is larger than perhaps you do.

Ben Newell: Okay. One more question on the two systems, about the contrast between the automatic and the more deliberative. Deliberative thinking is often described as effortful and hard – you know, the thing that's involved in doing our tax returns – but there are also instances in which we seek out that effortful thought. I'm thinking of mental games that we like to play: crosswords, Sudoku. And some of your very early work was on attention and mental effort. I wonder, what are your thoughts about the value of mental effort versus the cost of mental effort?

Daniel Kahneman: Well, it's absolutely clear that mental effort is one of the major sources of joy for many people. The state of flow, you know – that sort of extraordinary state in which people are totally absorbed in what they're doing, to the point of forgetting themselves because they're so absorbed – is a marvellous state to be in. And it's a very effortful state. People are intensely concentrated and intensely focused and intentional; their mind is not wandering, they're working, and they're enjoying the work. So that is certainly the case. It is also true that when people are not deliberately and intentionally challenging themselves, the law of least effort tends to govern. That is, if there is an easy way and a harder way of getting to a goal, we have a preference for the easier way, and that is true both in the mental context and in the physical context, I believe.

Ben Newell: Okay, I'd like to turn now to your new book, Noise. You can see on the camera that I've been reading it intently and marking various different sections. So, just to begin, you write provocatively that wherever there is judgement, there is noise – and more of it than you think. I'm going to delve into the different types of noise in a moment. But just to start off, can you describe what you mean by noise, and how it differs from the more familiar concept of bias?

Daniel Kahneman: Well, ‘noise’ is a complicated concept, as we'll see, because there are several forms of noise. But the form of noise that we are most interested in, and that motivated the writing of this book, is system noise. And this is not a phenomenon within one person. This is variability across people. This is variability in a system that produces judgements. And there are many such systems. The judicial system produces sentences. The underwriting system in an insurance company produces evaluations of risk and sets premiums. The patent system grants patents to some inventions and denies them to others. The emergency room in a hospital is a system for producing diagnoses and treatments. Now, those systems are populated by different people who fulfil the same roles: there are different judges passing sentences, different underwriters, different ER physicians. And what we would want, facing such a system, is for it to speak with one voice, ideally. You would not want the defendant's sentence – the time that the defendant will spend in prison – to be determined by which judge happened to be responsible for the case that day. Somebody who approaches an insurance company and asks for a premium really does not want the premium to be determined by a lottery. And so system noise is a problem, and system noise can be viewed as a source of errors. And here, maybe I should elaborate for just a minute. The concept of noise in judgement is borrowed from measurement noise. Altogether, I view judgement as a species of measurement. Measurement noise arises when you're trying to measure the same thing – the same weight, the same length of line – with an instrument, repeatedly: you really want the measurements to be as close as possible, you want variability to be as small as possible. That variability is called noise, and there is an immediate analogy between measurement noise and system noise within an organisation. So that's the concept of noise. It's completely different from the concept of bias. Bias is a concept within individual psychology. That is, we think of bias as a psychological process, and we can detect or identify bias sometimes in a particular error, in a particular judgement. But we cannot identify noise in a particular judgement. Noise is a characteristic of a set of judgements; it's a statistical concept. And in that way, it's very different from bias.
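
The distinction can be made concrete with a small numerical sketch. The underwriting scenario and all numbers below are invented for illustration, not taken from the book: bias is the average error of a set of judgements of the same case, while noise is the scatter of those judgements around their own mean, which is measurable even when the true value is unknown.

```python
import statistics

# Hypothetical premiums quoted by five underwriters for the SAME case;
# the "true" correct premium is assumed, for illustration, to be 1000.
true_value = 1000
judgments = [1150, 900, 1300, 1050, 850]

errors = [j - true_value for j in judgments]

# Bias: the average error of the system (a property of the mean judgment).
bias = statistics.mean(errors)       # +50 here: systematic over-charging

# Noise: the scatter of the judgments around their own mean, a property
# of the set that can be measured even if the true value is unknown.
noise = statistics.stdev(judgments)  # ~183.7 here

print(f"bias  = {bias:+.1f}")
print(f"noise = {noise:.1f}")
```

In the book's terms, overall error (mean squared error) decomposes into the square of bias plus the square of noise, which is why a noisy system can be just as costly as a biased one.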

Ben Newell: And do you think that the fact that it is a statistical concept is one of the reasons why it has remained obscure? You write about it in the book as being obscure in the public consciousness; it's not something that's widely discussed. Is that because it's a less tangible kind of…

Daniel Kahneman: You know, one of the themes that I developed in my previous book, Thinking, Fast and Slow, was that people have a preference for thinking causally and about particular events and objects, and they have a lot of difficulty thinking statistically – thinking about statistical, non-causal properties of ensembles of objects. And that maps very precisely onto thinking about bias and thinking about noise: bias you can see in an individual error, and bias is really causal, whereas noise is inherently statistical. And I think for that reason, bias is much easier to think about, and noise is quite difficult to think about. And so it tends to go undetected and undiscussed, and that was the motivation for writing the book.

Ben Newell: And in the book, you distinguish between several different types of noise: system noise, level noise, pattern noise. I wanted to talk a little bit about pattern noise for a while, if we may. I understood pattern noise to have two different aspects to it: a stable pattern noise and a transient pattern noise. So to start with the stable pattern noise – that's the kind of idiosyncrasy we have as individuals; I think at one point you talk about it as a judgement personality. What I wonder is how we reconcile our desire for creativity, for individual differences in opinion and in thinking, with this need to eliminate the unwanted variability, the noise. So for example, in a hiring decision, we might like to have different opinions from different people. How do we deal with that balance?

Daniel Kahneman: Well, there are contexts in which we clearly want diversity – we want diversity of opinions, and we are really not interested in uniformity. We don't want all film reviewers to have exactly the same opinion. So there are many contexts in which diversity is desirable, and we define noise as undesirable variability; that is, noise is variability where you don't want it. In the context of hiring, for example, you really have to distinguish two different aspects of the problem. You could have several people involved in hiring a candidate, and if each of them brings a separate angle – one of them is an expert on subject matter, and another is a psychologist who evaluates the person's characteristics, or whatever – then they are bringing different inputs to the decision. This is very different from a situation in which you have an individual who is interviewing a candidate and is making the decision to hire, and another individual could face the same candidate and make a different decision. That is noise. It's not diversity that we want. So we want the final decision to be the same, but we very frequently want different people to provide different inputs. And when they provide different inputs, we simply don't call it noise. Noise is unwanted variability in a final, integrated judgement or decision.

Ben Newell: So thinking of it in that way puts the onus on the organisation, or the system as a whole, to define the situations in which variability is desirable versus undesirable.

Daniel Kahneman: Absolutely. And altogether – I mean, this is the orientation that I have in general, and I had it, I think, even in the previous book – it's much easier to improve the thinking and the decision making of organisations than to improve the thinking and the decision making of individuals. Organisations think slowly, they have procedures, you can intervene in the procedures, you can standardise procedures, and there is a chance of improving things that really doesn't exist when you're trying to improve your own thinking.

Ben Newell: It's an interesting thought that changing the way an organisation thinks is easier than changing the way an individual thinks. I can think of university committees where that doesn't appear to be the case, but…

Daniel Kahneman: Well, you know, I'm not saying that’s easy, I'm just saying that changing individuals is even harder. 

Ben Newell: Right.

Daniel Kahneman: That achieving real change is even harder.

Ben Newell: So the second element of pattern noise that you talked about is the transient, occasion noise. And the idea here is that there are irrelevant features of the context or the situation that nonetheless influence judgements. You give an example in the book of a judge potentially being more lenient on a Monday if their football team won on the weekend. You also discuss some of the work of my colleague, Joe Forgas, on how mood affects people's judgements. I wonder, how concerned should we be about these irrelevant features, which we're potentially unaware of, influencing our judgement?

Daniel Kahneman: Well, I think the general picture is this: take the example of judges passing sentences. The first thought that comes to people's mind when they think about noise is that some judges are more severe than others – that, on average, there are differences in their biases. That's one type of noise. But judges also differ within themselves: there are variations from one day to the other. And we should be concerned about that. The defendant's fate really should not be determined by, you know, the football results of the previous weekend, or by the judge's current mood. So none of these sources of noise is really acceptable, and some of them are easier to cope with than others.

Ben Newell: If I might try and relate this to something else. One of the thoughts that I had, when thinking about occasion noise – these contextual factors that affect our behaviour whilst remaining somewhat outside our awareness – is that it put me in mind of some of the studies that you talked about a great deal in the earlier book, Thinking, Fast and Slow: the high-profile studies in which people's behaviour was said to be influenced by features they were unaware of. I'm thinking of the social priming type studies, where, perhaps, you know, I read about an old person and then I walk more slowly down the corridor. And I can see by the reaction on your face that you know where this question is headed. The inability to replicate some of these standout studies led you, in a letter a few years back, to warn of an impending train wreck for the discipline, and subsequently, across multiple disciplines, we've seen these waves of replication attempts – attempts to see what's real in our science. I'm fascinated to know whether you think we've emerged from that wreckage, or avoided the wreck. What's your current thinking on these sorts of studies?

Daniel Kahneman: That's a dramatic change of topic. It is true that when I wrote Thinking, Fast and Slow, I was very impressed by the literature on priming. Those were subtle, fascinating effects, where a small change in context seemed to have a significant effect on behaviour. And I think now that I was gullible. I believed in these results, although if I had looked more carefully, I would have seen that the studies were individually fairly weak, with small samples, and the effects were too large to be true, in some sense. So yes, I became disillusioned, and that really hurts, because I had put a lot of faith in it. And so I wrote a letter – which, by the way, was not intended to be published – to people in the priming field. I still believed in priming when I wrote that letter; I believed in it much more than I do now, in priming as a strong effect. And I asked them to get their act together and to replicate themselves, so that people would believe them, because it was clear that people were already failing to replicate priming. I think the current state of play is quite interesting. The researchers in priming have never admitted that they were wrong. Other people have failed to replicate them very consistently. But the main thing that's happened is that, in part as a result of this, and in part as a result of the whole issue of replication in other sciences, not only in psychology, the science of psychology has improved enormously over the last decade. I think standards have become much higher. Sample sizes are larger, people are much more careful about their reasoning and about pre-registering their studies, and the methodological quality of psychological research – it's hard to believe how much it has changed. You know, the letter that we're talking about, the one that was not intended to be published, was written in 2012. And in nine years, the field has really changed completely – no thanks to my letter, but because of other events that were already happening within the field, and in the context of replication more generally.

Ben Newell: So you have a very positive outlook, then, on the future of the discipline – you think that it's now headed much more in the right direction?

Daniel Kahneman: I mean, I think it's remarkable how much has happened in a very short time. The kinds of problems that gave rise to unreplicable findings have really been tackled. There was a very important element of self-deception – this was not fraud, but researchers allowed themselves degrees of freedom in interpreting their results. And, you know, I know because I did it myself; I caught myself having made those errors. Today, people are much stricter with themselves, and they have to be public about the precautions that they take in carrying out their research. So I think the priming scandal is a relatively minor event, and the methodological advance in the discipline is a major event. That's what's really happened over the past decade.

Ben Newell: Okay. Thank you for allowing me that segue, or that brief tangent. I'm going to return now to the issues that come up in the new book, in Noise. A central feature of your career has been trying to understand the benefits and pitfalls of intuitive judgement. And many of us often like to rely on what we think of as our intuition when we're making these kinds of judgements. Intuition is a term which is perhaps hard to define. I like the definition that you often use, which is Herbert Simon’s: that it is nothing more and nothing less than recognition. But in the book you write that intuition should not be banned, but it should be informed, disciplined and delayed. This might seem at odds with the fast and automatic way that people often claim to use intuition. So why do you think we need to delay? And why is it that the internal signal delivered by intuition is so seductive? And if I might just append a question submitted by a current UNSW student, Ifran Mohammed: why is it that sometimes, when we focus and eliminate all kinds of noise, we don't arrive at a solution, but when we do something else, the solution can suddenly turn up, like a eureka moment? Is that an example of what you think of as intuition?

Daniel Kahneman: Well, that's a separate question; let's return to it. I will probably forget it by the time I finish answering the first question, but you'll have to forgive me for that. The definition of intuition as recognition is the technical definition. The definition that captures people's attention is that intuition is knowing something without knowing how you know it. It's a subjective feeling of confidence that there is something that you know, or something that you understand, although you cannot quite justify it. And quite often there is no question about it: there are intuitions that people have which are truly marvellous, and they are very rapid in many cases. My favourite example of that is talking to your spouse on the telephone. You can tell your spouse's mood on the telephone from the first word that you hear. That is a lot of practice, and we're rarely wrong: you can tell whether he or she is happy or angry or depressed. We can tell an awful lot from one word. That's intuition, and it's marvellous intuition – it is usually correct. Many professionals have intuitions that are marvellous. Chess players can look at a chessboard and have an immediate intuition about the correct move; physicians can make a diagnosis, you know, from across the room in some cases, and be very likely correct. So intuition can be marvellous, and it's a characteristic of fast thinking that it happens quickly. However, that subjective sense of having an intuition – you can get it without justification. You can get it when what is actually happening is that you're going wrong: you're following a mistaken heuristic or rule of thumb, and some inconsequential or uninformative bit of information has led you astray. The characteristic of intuitive thinking is very high subjective confidence. And a fundamental fact about human thinking, I think, is that confidence is only imperfectly correlated with accuracy. So it's true that we're confident when our intuitive thinking is right, but we're also confident when our rapid thinking is wrong. So the recommendation in the current book was explicit: delay intuition. And if you want, you know, I can describe the origin of that idea. But I don't know whether this is where you want me to go?

Ben Newell: No, I would like to hear where that idea comes from, and how we know when we should delay it, I suppose. And how to do…

Daniel Kahneman: Well, in my mind, the idea goes back a very long time, to when I was a lieutenant in the Israeli army. That was in the 1950s, and I'm really embarrassed to say how long ago it was. I was assigned the task of setting up an interview system for combat recruits, which was really intended to evaluate the suitability of recruits for combat units. There was an interview system in existence, which was the standard unstructured interview, where the interviewer spoke to an individual and tried to form a general impression, and eventually got the sense that he or she knew the individual and could tell how good a soldier that individual would be. And that actually had very low validity. People had high confidence in their intuitions, and they were essentially useless. And this is very common: it's well known that an unstructured interview produces a lot of confidence in the interviewer, and very poor validity on average. Now, the system that I devised – under the influence of an important psychologist, Paul Meehl, who had just published a book on that – involved the interviewer giving six scores to the recruit on different characteristics: on punctuality, on sociability. And I had a characteristic you wouldn't use now, masculine pride. There were six of them. And the idea was for the interviewer to ask factual questions, leading to a score on each of these six attributes. My initial plan was that we would just take the average of these six scores, and that would be the best guess as to how well a recruit would do. The interviewers, who were on the job and who had been using an unstructured mode of interviewing, were furious with me. And they were furious because they wanted that sense of intuition, they wanted the sense of getting to know someone, and they wanted to use their clinical ability. I remember being told by one of them: you're turning us into robots. That is, by this sort of mechanical way of doing it. And so I compromised, and my compromise was I told them: you do things the way I told you, you get those six scores. But once you have completed the six scores, close your eyes, and make an intuitive judgement – how good a soldier will that person be? Now, a few months later, we had the results of that study, and the results were that we were doing a lot better than the unstructured interview. Today, that's not a surprise at all. But what was surprising to me then was that the final intuitive judgement – close your eyes and make a judgement – was highly valid, and it added validity beyond the six traits that had been evaluated. So the lesson I drew from that was that intuition does work, but you want to resist forming an intuition too quickly; you want to resist, in the more recent terms, fast thinking. You want to collect the information, you want to develop a profile of the case, and then you can allow yourself an intuitive judgement. And this, I realised many decades later, can be generalised to decision making. So when you're choosing between different investments, you can think of options as candidates, and apply the same logic to options as you would to candidates.
By which I mean that any option can be characterised in terms of the set of attributes that make that option more or less desirable. You can evaluate each of these attributes in a fact-based way, collect a series of scores, and create a profile for that option. And then, and only then, allow yourself an intuitive, global evaluation of that option. So that's the idea of delaying intuition. And I think there is a fair amount of support for it; certainly, it's the major recommendation in the new book. So it was personally quite satisfying to go back to an idea that I had developed 60 years earlier and use it again.
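
The structure Kahneman describes – independent, fact-based scores on a fixed set of attributes, mechanically averaged, with the global intuitive judgement deferred until the profile is complete – can be sketched in a few lines of Python. This is an illustrative sketch only: the two named attributes come from the conversation, while the remaining attribute names, the scoring scale, and the equal weighting are invented assumptions, not the original army protocol.

```python
# Illustrative sketch of "delayed intuition" in a structured interview.
# "punctuality" and "sociability" are from the conversation; the other
# attribute names, the 1-9 scale, and equal weighting are assumptions.

ATTRIBUTES = ["punctuality", "sociability", "attribute_3",
              "attribute_4", "attribute_5", "attribute_6"]

def profile_score(scores: dict[str, int]) -> float:
    """Mechanical average of the six fact-based attribute scores."""
    missing = [a for a in ATTRIBUTES if a not in scores]
    if missing:
        # Intuition is delayed: no global judgement is allowed
        # until every attribute has been assessed separately.
        raise ValueError(f"profile incomplete, still need: {missing}")
    return sum(scores[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

recruit = {"punctuality": 7, "sociability": 5, "attribute_3": 6,
           "attribute_4": 8, "attribute_5": 4, "attribute_6": 6}

print(f"mechanical profile score: {profile_score(recruit):.1f}")  # 6.0
# Only now - "close your eyes" - does the interviewer add a global
# intuitive judgement, informed by the completed profile.
```

In Kahneman's account, it was this final, delayed intuitive judgement, made only after the mechanical profile was complete, that added validity beyond the averaged scores.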

Ben Newell: Yeah, I found that theme running through the book very… I guess maybe I was getting an internal signal from the coherence, the cohesiveness, of the argument there. You made a comment just now about the resistance to being turned into robots. And one of the decision hygiene strategies that you talk about in the book – ways to avoid noise, or clean up noise – has to do with relying more on algorithms. There's a rise of advisor systems, of algorithms, everywhere at the moment. One of our audience members, UNSW alumna Leonor Gouldthorpe, asks: do you believe that artificial intelligence, relying on algorithms, will be able to match the way that humans think? We also had a question about how best we should work with algorithms – how should humans interact with them?

Daniel Kahneman: Well, you know, this is a very loaded topic, and artificial intelligence is going to, I think, produce major problems for humanity in the next few decades. But with respect to judgement, there is a long history of research comparing human judgement to rules. The rules can be very simple, but the essential aspect of those rules is that they are noise-free. That is, when you apply an algorithm or a rule to the same case on different occasions, you are going to come up with exactly the same answer. That's noise-free. It turns out there is so much noise in human judgement that, for that reason alone, rules tend to be superior to human judgement in many cases, and in some cases vastly superior. And that was true even before artificial intelligence: applying simple statistical rules, even imperfect statistical rules, just applied consistently, will do better than people. Now, there is a history of trying to combine rules and intuition – that is, of providing people with the inputs of, say, artificial intelligence, or some statistical analysis. And one of the best known examples of that was in chess. In 1997, Garry Kasparov, who was then the world champion in chess, was defeated by IBM's Deep Blue. That was a very important moment in the competition between artificial intelligence and human judgement. And Garry Kasparov, who is quite opinionated, as well as a brilliant man, really did not like the style of the computer that defeated him. He felt that there was something mechanical, robot-like, in the style of Deep Blue. And his idea, which he maintained for several years, was that the optimal way to play chess would be by combining a very good chess player with an artificial intelligence assistant. Over the last decade, something has happened. Chess playing by artificial intelligence has improved to such a degree that programs beat the world champion very easily, and artificial intelligence no longer needs a human to play chess. It's even the case that the most recent chess programs have been developed in such a way that they don't build on human knowledge at all. You just take an artificial intelligence, teach it the rules, and have it play itself millions of times until it develops its own style of playing, its own way of playing. The most recent program that I know about is a program called AlphaZero. And the very striking thing about AlphaZero is that it's highly creative. When you watch the games of AlphaZero against another AI, the reigning computer world champion, Stockfish, the striking thing about the games is how beautiful they are. And recently the same Garry Kasparov said: AlphaZero plays as I do, except better. That is, it now has a style that is the human creative style at its very best, and it's perfect. Now, this is not going to happen immediately in every domain. Chess is a very powerful domain, relative, say, to the kinds of decisions that a chief executive makes. But you can see the handwriting on the wall. That is, when it becomes possible *glitch in sound* problems in a regular way, and to accumulate data about those problems, then you are going to have an artificial intelligence that at first is going to be almost as good as people, and very quickly will not need people. And that is going to happen. We can already see it happening in some domains, and it's unclear in what other domains it will happen.
I would not bet against it in almost anything. Except – and that's the critical point – our recommendation in Noise is not to go to algorithms, for two reasons: partly because people hate them and oppose them, but mainly because they're not ready. It feels like we are going to be using human judgement for generations. And so the first immediate task is to improve human judgement. The work on algorithms will proceed in parallel, and how the interaction between algorithms and people – between artificial intelligence and real intelligence – will play out is going to be one of the interesting and important events, you know, in human history, I think, over the next century.

Ben Newell: You hint in the book at the need for this combination. Traditionally, it has been that the judge needs to figure out what the right variables are to look at, and then the algorithm will add those things together – following in the tradition of Robyn Dawes and other central figures in our field. How do we train judges to know what variables to look at? You speak a little bit about actively open-minded thinking as a measure of how judges can be more appreciative of the factors that need to go into a judgement.

Daniel Kahneman: Well, that's a multipart story. When you're applying, for example, artificial intelligence to a problem like whether or not to grant bail to defendants – whether they have to be in jail, or whether they can be out of jail while waiting for trial – it turns out that this is a decision where, using objective data, data that are only one part of what the judge has, an AI will do better than judges. That's because judges are noisy, and because judges are actually susceptible to pattern noise, as well as being more or less lenient. So in those cases, it's not even an issue of selecting what the variables are; it's making use of the variables that are available, and AI is going to be very good at that. And people are not bad at selecting the best variables, so that is something they can do. Actively open-minded thinking – I think of it as a way of resisting intuition. It's a way of resisting the confidence that comes with intuitive thinking. Staying open-minded is really very close to delaying intuition: delay intuition as long as you can, accumulate information, and then, only when you have as much information as you can have, use your intuition and reach a judgement.

Ben Newell: One final strategy, I suppose, that I just wanted to touch on is the notion of reducing noise by aggregating opinions across different people – the idea of the wisdom of crowds. Now, you talk about how that can be a useful strategy, but it can also amplify noise in certain situations. So do we know when we should be relying on more heads than one?

Daniel Kahneman: Well, you always should rely on more heads than one, if you can, because noise has an important characteristic: it's a statistical phenomenon. And if you aggregate many independent observations – and the key here is that they're independent – you are going to reduce noise. We know exactly at what rate you're going to reduce noise: noise drops by the square root of the number of observations, so that averaging independent judgements divides the noise by the square root of their number, and it approaches zero when you have enough data. So aggregation of independent observations is always good. Where things can go wrong is when the judgements are not independent. When you put people in a room and allow them to discuss and form an opinion through discussion, then sometimes you will amplify noise rather than reduce it. And that is because, if there are people there whose confidence is very high – and they're not necessarily right – then when they speak confidently, they will overwhelm the judgement of others; they will be given more weight than they actually deserve. And to the extent that they influence the final judgement, the final judgement could go wrong. So the key is independence. If you have independent judgements, aggregation is good. If you don't have independence, you have a problem.
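
That square-root rate can be checked with a short simulation – an illustrative sketch, not from the conversation; the true value and noise level below are arbitrary. For n independent judgements, each with noise (standard deviation) σ, the noise of their average is σ/√n.

```python
import random
import statistics

random.seed(0)
sigma = 10.0        # noise (std dev) of a single judgment; arbitrary
true_value = 100.0  # the quantity being judged; arbitrary

def average_of_n_judgments(n: int) -> float:
    """One aggregated judgment: the mean of n independent noisy ones."""
    return statistics.mean(random.gauss(true_value, sigma) for _ in range(n))

for n in (1, 4, 16, 100):
    # Noise of the aggregate, estimated over many trials;
    # theory predicts sigma / sqrt(n).
    trials = [average_of_n_judgments(n) for _ in range(10_000)]
    print(f"n={n:4d}: simulated noise = {statistics.stdev(trials):5.2f}, "
          f"theory = {sigma / n ** 0.5:5.2f}")
```

The caveat Kahneman raises is baked into this sketch: the random draws are independent by construction. Correlated judgements – the room where one confident voice sways the rest – would shrink noise much more slowly, or not at all.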

Ben Newell: And in the book, you have a very helpful protocol – the mediating assessments protocol – that maps out exactly how to maintain that independence among your different judges.

Daniel Kahneman: The mediating assessments protocol is actually derived directly from the interviewing system I described earlier: you make assessments of characteristics like punctuality and sociability – or, you know, assets and liabilities if you're talking about investments – and ultimately you evaluate the profiles. So it's the idea I described earlier.

Ben Newell: I want to finish on a broader, more general question. Arguably, your work, its influence, and the development and application of behavioural economics have been one of the most significant exports of psychological knowledge to society in recent decades. The rise of behavioural insights teams around the world is testament to this impact. So, armed with all this knowledge – and here a staff member at UNSW, Daniel Luao, asks – how can we harness all of these insights about human judgement to bring about the urgent changes we need to address major societal issues, such as climate change, or indeed reactions to the pandemic? Do you have any final thoughts on those broader topics?

Daniel Kahneman: Well, I think it's very clear that behavioural economics has been found useful. Nudge units exist in very large numbers in many countries, and that is because the evidence has shown, by and large, that the interventions suggested by behavioural economics tend to work. But it's essential to remember that these are limited interventions. What is characteristic of nudging as a school is that those are inexpensive interventions. They're simple interventions, they're easy, and they do not involve coercion. They involve nudging – changing the situation, making it easier for people to make some decisions rather than others. So behavioural economics is not going to solve climate change. If anything is going to solve climate change, it's going to be society with all its might, changing rules, imposing rules. And coercion is certainly going to have to be involved. So nudging by itself is not going to solve the major problems. Nudging can, in some cases, solve particular problems, and in all cases make it easier and smoother to apply policies – by making it easier for people to understand, by compelling bureaucrats to explain themselves in a language that people can understand, and by taking into account what will evoke resistance and what will not. So behavioural economics is very useful at the margin, but when we're talking about major human problems, it is making a marginal contribution.

Ben Newell: Okay, well, hopefully we can all use the fantastic insights that you've offered us today – thinking about our own judgement, and thinking about how to improve the judgement of those around us – and eventually these will lead to better overall outcomes for the world, we hope. I'd like to thank you very much for your insights and for a wonderful conversation. I'd like to encourage you all to buy the book – it's called, simply, Noise – and to look out for future exciting events at the UNSW Centre for Ideas. Danny, thank you so much.

Daniel Kahneman: Thank you, it was a pleasure.

Ann Mossop: Thanks for listening. For more information, visit centreforideas.com, and don't forget to subscribe wherever you get your podcasts.

Speakers
Daniel Kahneman

Daniel Kahneman is best known for his work with Amos Tversky on human judgment and decision making, for which he was awarded the Nobel Prize in Economics in 2002. Kahneman has also studied a number of other topics including the memory of experiences, attention, well-being, counterfactual thinking, and behavioural economics. He published Thinking, Fast and Slow in 2011, which has sold more than seven million copies worldwide. A new book titled Noise: A Flaw in Human Judgment (with Olivier Sibony and Cass R. Sunstein) will be published in May 2021.

Ben Newell

Ben Newell is a Professor of Cognitive Psychology and Deputy Head of the School of Psychology at UNSW Sydney. His research focuses on the cognitive processes underlying judgment, choice and decision making, and their relationship to environmental, medical, financial and forensic contexts. He is the lead author of Straight Choices: The Psychology of Decision Making. He is currently an Associate Editor of Psychonomic Bulletin & Review, and serves on the editorial boards of Thinking & Reasoning, Decision, Journal of Behavioral Decision Making and Judgment & Decision Making. Ben is a member of the inaugural Academic Advisory Panel of the Behavioural Economics Team of the Australian Government.
