Sunday, 17 September 2017

Deconstructing cultural divides

Talking about cultures is difficult: a culture has many facets, so where does one begin analysing it? A good start is to break a culture down into the eight dimensions identified by Erin Meyer in her book The Culture Map.

Meyer (a professor at INSEAD) is a good, methodical writer, and there is much to learn from reading her book - plus, it's a very amusing read, as she recounts many anecdotes from her professional life. In this post, I want to review each of the eight dimensions Meyer has identified and offer my own perspective on them - especially as they relate to the workplace.

Communicating: low- vs high-context cultures
The first way in which cultures may differ is how explicit their members are in their communication. Americans tend to spell everything out - to the extent that for an American, sarcasm can only be expressed by modulating one's voice (something that non-Americans perceive as very annoying at best or patronising at worst). A Briton, in contrast, delivers sarcasm totally deadpan - and may well be misunderstood by Americans.

In Meyer's terms, British culture is more high-context than American: its members rely on shared memes, behaviours and frames of reference when communicating, and they expect their interlocutors to pick up on subtle cues, so that they do not need to spell everything out. Yet, Britons themselves are very low-context in comparison to other cultures - especially Asian ones:

[chart: Meyer's Communicating scale, from low-context to high-context cultures]

It is true that some cultures are more explicit than others, but I think most people overplay such differences. The most notable example of this is Malcolm Gladwell's take on Korean Air's 1997 plane crash: as he said in an interview, "the single most important variable in determining whether a plane crashes is not the plane, it's not the maintenance, it's not the weather, it's the culture the pilot comes from."

Gladwell's thesis, as outlined in his book Outliers, is that the high-context nature of Korean culture and language makes Korean pilots more likely to crash planes: Koreans, he writes, are more deferential to authority and tend to rely on suggestion and subtle cues instead of direct communication, especially when talking to superiors. As a result, a first officer or flight engineer will not directly challenge the captain, even if he notices something's wrong - instead, he will try to indirectly communicate his unease, by making statements such as "it's raining heavily" or "the radar is useful", meaning (according to Gladwell) "you have no visibility, do not attempt to land the plane using your eyes only" and "look at the radar, use that instead" respectively.

The problem with this thesis is that it is totally wrong, as a Korean blogger has shown. It exaggerates cultural differences between Korean and American pilots, and it downright misrepresents what actually went down in the '97 crash (for example, Gladwell suggests that if Korean pilots were forced to communicate in English, the number of crashes would be reduced; but the pilots in the '97 crash did actually use English a lot of the time). As the blogger notes, this inclination to interpret individual humans' actions based on culture is overly simplistic, distracts from fully understanding an issue, and destroys individual agency.

Consider the following real dialogue that Meyer provides as an example of cultural misunderstanding:

A: It looks like some of us are going to have to be here on Sunday to host the client visit.
B: I see.
A: Can you join us on Sunday?
B: Yes, I think so.
A: That would be a great help.
B: Yes, Sunday is an important day.
A: In what way?
B: It is my daughter's birthday.
A: How nice. I hope you all enjoy it.
B: Thank you. I appreciate your understanding.

A walked away from this conversation thinking that B would come in on Sunday; B thought that A had let him off the hook. It's true that B never explicitly stated he didn't want to come in, and we can put this down to culture. But, in my opinion, the misunderstanding here is not due to culture, but to the fact that A mainly, and B to a lesser extent, are just bad communicators:

A: It looks like some of us are going to have to be here on Sunday to host the client visit.
B: I see.
A: Can you join us on Sunday?
B: Yes, I think so. --> Explicitly saying "yes" when you mean "no" is bad form. If B had led with "well, Sunday is an important day", fine; but he did not - there are no cues here, no subtlety, there is a direct "yes".
A: That would be a great help.
B: Yes, Sunday is an important day.
A: In what way?
B: It is my daughter's birthday.
A: How nice. I hope you all enjoy it. --> What on earth does A mean by this? He is asking B to come in and work - but "I hope you all..." implies that B will be with his family. Bad, ambiguous communication, not cultural misunderstanding!
B: Thank you. I appreciate your understanding. --> How can A fail to pick up the significance of "I appreciate your understanding"? What understanding has he shown? What does he think B is thanking him for? His obliviousness to this is due to bad listening skills, regardless of culture.

Still, it's undoubtedly a fact that some cultures are indeed more explicit than others, and that cultural misunderstandings can occur, even if people are good listeners. That's why Meyer is right in saying we should all be aware of cultural differences: people from low-context cultures should be extra vigilant so as to pick up subtle cues from people from high-context cultures; on the other hand, the latter should not try and find hidden meaning in explicit statements from people from low-context cultures.

Even so, it is unrealistic to expect a Cincinnati-based manager to understand Japanese or Korean culture well enough to pick up on cues based on native speakers' shared worldview and history. So, as Meyer suggests, multinational organisations should train all their employees in using low-context, explicit language, to avoid misunderstandings.

(That said, I can absolutely understand why people from high-context cultures might find this difficult: most Europeans I know often dismiss Americans as unsophisticated and unrefined due to their explicitness and inability to process sarcasm; imagine then how westerners must come across to the even more high-context cultures.)


(Interestingly though, exactly because high-context languages rely on shared culture, people from one high-context culture may totally fail to pick up cues even from a lower-context culture than theirs; I have a Chinese friend who finds the British way too indirect, for instance - even though her culture is supposed to be more high-context.)

Evaluating: direct vs indirect
A French person listening to an American evaluating anything wears a peculiar expression on their face - something between scorn and pity. Either the American is a liar, with their "awesome!"s and "wow!"s, and so they deserve scorn, or they really are that excited by everything - in which case they deserve the French person's pity for being fundamentally uncool and not realising that life is a meaningless abyss of pointlessness.

This is partly because the French have nihilistic philosophy in their DNA and partly because they are more explicit in their feedback than Americans, who are, however, more explicit than the Brits:

[chart: Meyer's Evaluating scale, from direct to indirect negative feedback]

Cultures on the left side of the scale call it like it is: if you do something wrong, they will let you know. Cultures on the right will find a roundabout way of giving negative feedback. For example, Americans will rarely tell you that you suck at something - they will talk about your "opportunity areas", and only after they have identified at least three "strengths". The English have a different strategy: they have developed a special vocabulary for giving negative feedback:

[table: what the British say when giving negative feedback vs what they actually mean]

(As a result, the English are not perceived as uncool by the French: they may be more indirect, but they do it with more finesse than the Americans (or rather, they are perceived as uncool, but for different reasons).)

You can see the problem here: if you have never worked with English people before, you may walk out of a feedback session with your British boss thinking that he is in total agreement with your interesting ideas, and that any flaws in your work were due to his interference - when in fact, from his perspective, he just gave you a pretty severe dressing down. Misunderstandings are exacerbated by the fact that people from some cultures, such as the American, have a reputation for being very explicit in their communication, as the scale in the previous section shows; so, their interlocutors expect them to be the same way when evaluating things, and therefore fail to pick up on the more subtle feedback.

This is a difficult problem to crack. It is just as difficult to learn to interpret another person's feedback as it is for a person to learn to change the way they deliver it. In addition, as Meyer notes, it is easy to accidentally go too far: an American who reads all this may well decide to give being more direct a try, but end up coming across as rude to a French person.

According to Meyer, the solution to preventing misunderstandings here is common sense: remember not to take it for granted that your interlocutor has understood your feedback, and do not try too hard to adapt to the local style of giving feedback unless you understand it perfectly, because it is easy to overdo it.

My own observation is that style is one thing, substance another: the former doesn't matter if the very criticism you are offering is not helpful. For example, one of my managers once justly criticised me for handling a situation badly. I listened, learnt from this, and (I believe) improved in that area. Three months later, my manager gave me the exact same feedback - by referencing the mistake I had made the first time. I thought that this was unfair: if I had not made progress, she should have referenced a new situation in which I showed the same failing. If there were no new mistakes in that area, why was I receiving the same feedback? No matter how subtly or directly she had given me the feedback, I would not have taken it well, given that it was not well thought out.

Persuading: concept- vs applications-first
In my first week at P&G, my boss asked me to do an analysis and present my results to the finance director. Fresh out of university, I applied all the fancy analysis methods I had been taught, and wrote a five-page document outlining my findings, carefully describing my methodology and caveats to my work. I sent this to my manager, who came to my desk ten minutes later, gave the document back to me, and said "this is all wrong. No-one is going to read five pages. Fit it all in one". I was flabbergasted - there was no way I could fit all my findings, along with an explanation of how I got to them, in one page; and if I left out the latter, why would anyone trust my analysis?

Meyer calls what I tried to do "concept-first persuading", and what my boss was asking me to do "applications-first persuading". It is very important to know how different cultures rank on this scale, because getting things wrong will render your arguments totally useless.

[table: concept-first vs applications-first persuasion styles, by culture]

As the table shows, a concept-first audience demands that a person making an argument explains his approach and methodology before presenting conclusions and recommendations. For example, if you were to give a presentation to German managers, you should begin by explaining how you did your analysis; once you have convinced them that your approach is sound, you can present your results. Having accepted your methodology, they are more likely to approve of your conclusions. In contrast, a group of American managers will soon get impatient and accuse you of philosophising if you waste their time with a lecture on your approach.

For me, this section was the most illuminating one in the book - I had not realised there are cultural preferences in this area. As a result, I have got this wrong both ways in the past. There have been occasions when I gave an applications-first presentation to a concept-first audience: I dove straight into my recommendations, only to be cut short minutes after I started speaking with questions of "how did you get that?" and "did you also consider x/y/z in your analysis?" - which totally derailed my planned presentation. There have also been times when I started talking about how I approached a particular analysis, only to be interrupted with "why are we wasting time talking about how you modeled x/y/z? Cut to the chase". I might have been able to avoid such issues, had I known about this cultural divide.

Of course, what complicates things is that many of us work in multi-cultural environments - our audiences may well include both Americans and Russians... what do we do then? Meyer doesn't offer much advice here. My own approach so far has been structuring my presentations in such a way that I can easily change track if the meeting starts getting derailed - e.g. by having an appendix with my methodology at hand, or a section with bullet-point recommendations to which I can easily skip if needed. In the future, I think I might also try adapting my presentations to the culture of my audience's majority (or to the culture of the key decision maker), and see how that goes.

Meyer notes that the scale above does not show Asian cultures. This is because Asians, according to her, take an altogether different approach to persuading, which she calls "holistic thinking". She describes this as a pattern whereby people talk about peripheral information, which they slowly synthesise into one big picture. She cites some interesting studies corroborating her thesis - for example, when American and Japanese subjects were asked to describe pictures or videos of aquatic life, the Americans started by talking about the fish they spotted, whereas the Japanese started by describing the background. Similarly, when asked to take pictures of individuals, Americans took close-up portraits, whereas the Japanese zoomed out to take full-body pictures of the subject in her environment.

I am not 100% clear on how this is different to a concept-first preference - after all, looking at the big picture is basically taking a particularly broad theoretical approach to things. According to Meyer, though, it is worth knowing about, because it has implications for managing people from such cultures. She reports cases of managers used to western norms - where you allocate specific tasks to individuals and expect them to get on with them - who struggled in holistic cultures, where employees want to know how their work fits into the bigger picture; to motivate them, managers should explain how each person's work is relevant to the larger scheme.

I must say that my personal experience does not really support this. What I have seen is that in every culture, good managers understand (and want to know) how their work fits in the big picture, and poor managers focus on their little silo, without really caring how their work affects that of others. For example, one of the things I have worked on in the past is minimising the cost of our products. What I noticed is that many of our chemists or engineers were brilliant at finding technical solutions to technical problems, but not very good at understanding how their work affected the consumers. Suppose you asked them for options to reduce the cost of promotional SKUs. They could easily give a list of different materials you can use, and explain how using a different material would lower the cost, but it would not occur to them to calculate the total cost of promotional SKUs and ask marketing whether these SKUs are really needed - for instance, do we really need to physically bundle two products together, or can we run a buy 1 - get 1 free promotion?

Perhaps some cultures really are more inclined to see the big picture; but I really think that this is more a function of an individual's intelligence, ability to synthesise information, and perhaps most importantly, curiosity, than of a person's cultural background.

Leading: egalitarian vs hierarchical
This dimension refers to how hierarchical a culture is. I don't think anyone will be surprised to hear that the Nordic countries, with their utopian socialist regimes, are the least hierarchical cultures in the world, whereas countries like Saudi Arabia, China and Japan find themselves at the opposite extreme:

[chart: Meyer's Leading scale, from egalitarian to hierarchical]

In egalitarian cultures, it's okay for subordinates to openly disagree with their managers, to take initiative without approval or to e-mail people far higher in the management chain; in contrast, in hierarchical cultures, subordinates are more likely to defer to their managers' opinions, and one simply does not message someone two levels above them directly.

Per Meyer, egalitarian managers leading a team from a hierarchical society may run into big problems in this dimension: they may think that their reportees lack initiative or confidence, because they will not generally do things on their own and will not speak up in meetings; on the flip side, egalitarian managers themselves may be perceived as incompetent and incapable of setting direction by their reportees.

Meyer has a few suggestions for leading teams from hierarchical cultures: a) asking your team members to meet without you to brainstorm, and share the team's ideas back with you - removing yourself from the meeting will make team members more comfortable voicing their views; b) telling your subordinates in advance that you will ask for their input in a meeting, so they have the right expectations and time to prepare; and c) when chairing a meeting, not expecting people to jump in, but explicitly inviting them to share their views.

In addition, Meyer says that symbolism may matter more in hierarchical structures - for instance, she recounts the story of a senior manager working in China who found out his reportees felt slighted because he biked to work: at that time, it was considered low-class to cycle instead of driving or even taking public transport, and the person's subordinates felt that their manager was not signalling his high status, which in turn reflected badly on his team.

My own experience corroborates the scale above; but, perhaps because P&G only hires at the entry level, promotes from within, and has quite multicultural offices - creating a very strong and fairly uniform company culture - I have not witnessed dramatic differences when working with colleagues from different countries. Sure, Chinese managers are more likely to be deferential to their superiors, but it's not like they will not speak up at meetings - and my subordinates do not seem to mind my taking the underground instead of hiring a driver.

Deciding: consensual vs top-down decision making
The fact that a particular culture is hierarchical does not mean that all decisions are made by the boss. In fact, a culture may be very hierarchical in the ways described above - it may vest its leaders with status, expect subordinates' communication to follow the chain and avoid challenging their seniors &c - but have a consensual decision making mechanism.

Meyer recounts the story of a merger between an American and a German company, which quickly ran into difficulties. Amusingly, each side accused the other of being overly hierarchical. An American remembers being told off by the Germans for scheduling lunch with someone beneath them in the hierarchy - thus violating protocol; the Germans complained that though Americans "pretend" to be egalitarian, what with their open-door policies and first-name basis, they made decisions in a far more dictatorial manner: a manager would often make a unilateral decision, and expect his subordinates to follow his lead. Germans, in contrast, would make decisions by consensus. This had further implications: because Germans would spend a lot of time conferring before making a decision, once a decision was made, they would stick to it; Americans would make a snap decision, and expect to change course as new information came in.

And yet, Germans are not that far from Americans in their decision making style:

[chart: Meyer's Deciding scale, from consensual to top-down]

How does Japan, a country considered to have one of the most hierarchical cultures in the world, also have the most consensual decision-making style? Apparently, the Japanese operate on what is called the ringi system: low-level managers discuss an idea among themselves, reach a consensus, and present it to their 1-ups; the 1-ups then have a discussion among themselves, and once the proposal has everyone's stamp of approval, it is sent to the people further up the chain, and so on until it reaches the ultimate decision maker. By that time, everyone in the hierarchy is aligned to the proposal (though I have no idea what happens if the group of more senior managers disagrees with their subordinates' recommendation).

My own experience corroborates this scale. P&G has a standardised system for making decisions, but even within such a system, you can see that managers from different cultures have different preferences. One of my German superiors perfected the art of management by walking around: he would take walks around the office, listen to what his subordinates were working on, and make suggestions for what course of action to take. Even when he dictated a decision, he would invest time in explaining his reasoning to get his subordinates' buy-in. In contrast, an American manager I had would also solicit inputs from his team members, but was more likely to unilaterally decide a course of action, even without his subordinates' full agreement. (For example, on one occasion, he asked me to conduct a very large piece of analysis, which I felt was unnecessary. I explained why I thought this work was not needed, but he told me to do it anyway. To be fair to him, he turned out to be right - that work did yield important insights.)

You can immediately see what people at each end of the spectrum think of working with colleagues from the opposite end: people from top-down cultures find consensual decision making to be too slow, bureaucratic and inflexible; consensual decision makers find top-down cultures to be too dictatorial and indecisive, as decisions are frequently revised (not having undergone lengthy examination from the beginning).

My own view on this and the preceding section is that it is not enough for managers to learn and understand different cultures: true leaders must also be able to shape the culture of their own organisation. When it comes to decision-making, neither extreme is ideal. If you have a culture that insists on a rigid hierarchy where decisions are always made at the top, you may miss out on valuable input from the people further down the chain. Moreover, senior managers who do not interact with those at the bottom of the pyramid risk losing touch with the business environment - consider the example of John Lasseter's clash with Disney's Nine Old Men: Lasseter was fired from Disney for pushing for computer animation; he joined Pixar, which Disney ended up acquiring for $7.4 billion (at which point Lasseter was appointed Chief Creative Officer for both Pixar and Disney Animation).

On the other hand, an overly-egalitarian, consensual culture where everyone's opinion has the same weight regardless of experience or expertise is likely to be very slow and ineffective. For example, a poll Meyer cites found that fewer than 10% of Swedes believed a manager should match his subordinates' technical competence. But it can quickly get extremely frustrating trying to explain a thorny, technical issue to a superior who just doesn't have the necessary knowledge to understand it (ask Gary Cohn).

Beyond this, I think each kind of culture is better suited to particular kinds of problems. Companies operating in industries where speed of innovation is critical, and failure is not catastrophic, require a relatively flat culture, so that everyone can contribute ideas, but with a top-down, flexible decision-making style, so that decisions can be taken quickly and revised frequently. Companies working on capital-intensive, long-term projects (say, building nuclear reactors) would benefit from a hierarchical culture with an experienced leader at the top, and a consensual, slow decision-making style that ensures all relevant facts are considered before taking action. You do not want to start building a factory only to realise you have laid the foundations over a major fault line.

So, I think leaders must be aware of different cultures, but not so as to adapt to them, but so that they know what they need to do to align them to their organisation's mission.

Trust: cognitive vs affective
Meyer posits there are two types of trust: a Swiss person builds trust by being open, transparent and detailed - by being technically competent; a Chinese person, in contrast, builds trust through personal connection - by building guanxi. In other words (Meyer's, and sorry to say somewhat trite ones): you can build trust from the head, and you can build trust from the heart.

The more scientific terms for these two types of trust are cognitive and affective trust respectively. The former refers to the trust you have in a person thanks to their accomplishments and skills; the latter is the trust you have in people to whom you are close.

A Harvard Business School survey cited by Meyer highlights a significant difference between American and Chinese managers: the Americans separate cognitive and affective trust. The Chinese connect the two. Meyer brings up an anecdote illustrating this difference: she interviewed a Chinese manager, Ren, working in America who once formed a friendship with an American he met at a gym. By happenstance, this American was a potential client for Ren's company; Ren was surprised to find out that, in spite of their personal friendship, the American wanted to look into the details of a proposed contract, and negotiate a price as though they were strangers.

Another way to frame this divide is as task-based vs relationship-based trust. Task-based cultures separate cognitive from affective trust, whereas relationship-based cultures have more blurred boundaries between the two:

[chart: Meyer's Trusting scale, from task-based to relationship-based]

Of course, Meyer accepts that Americans too form relationships with colleagues or business partners, but according to her these tend to be more ephemeral and often only exist to serve a business purpose. The fact you have skied or hit the links with someone does not mean they will not launch a hostile takeover bid for your company if they get the chance. In addition, Meyer stresses that one should not mistake friendliness for relationship-building, nor initial coldness for aversion to forming a bond. She points out that Americans are very smiley, friendly and likely to get into personal discussions with virtual strangers - but this does not necessarily mark a willingness to form a long-lasting bond.

I have three comments on this. First, I am not so sure that relationships count for as little in America, and for as much in, say, China, as Meyer suggests. For instance, she mentions that one of the ramifications of this cultural divide is that firing a salesman in China may be very risky, as they may take all their clients with them. Yet is it not also the case that when private bankers in Switzerland change employers, they take a lot of their customers with them? Is it not true that academics frequently move from institution to institution as a group? Don't we have the whole "old boys club" thing going on in places like the UK?

On the other hand, the business environment in countries like China is changing rapidly. It is still the case that people may refuse to do business with you if they do not know you; a friend was telling me how her boss is reluctant to hire people he does not personally know and trust, regardless of qualifications. But if you work in a multinational company, like I do, it's not like your colleagues will ignore you or be difficult until they get to know you. And cognitive trust does play a role in China - the first plant manager I worked with told me on my first day "you have a huge advantage: you are foreign, and have gone to a good university - people will trust you. Use that". In general, actually, university brand names count for far more in China than in the UK - what is this if not a sign that qualifications that signal capability matter?

Second, I think that, regardless of what kind of culture you find yourself in, it does not hurt to build a personal relationship with your colleagues or customers. Go out with your colleagues, play sports with them, go for lunch - it can't hurt.

(As I've mentioned before, one of the biggest cultural shocks I've faced in my career was when I moved from Geneva to London: in Geneva, P&Gers would go for hour-long lunches, complete with espressos on the company's terrace. In London, my colleagues would go for quick, 20-minute lunches, which I found shocking. Not only that, but they preferred to go for lunch in large groups, consisting mainly of their immediate, current-team colleagues - whereas in Geneva, people would have 1-1 lunches with friends from other teams or business units.) I found this lack of a decent lunch culture appalling - how could you get to know people if you never took the time to talk with them 1-1? (The answer, as I found out, was to go on big nights out together and get hammered - which is both physically and psychologically unhealthy in my view. In fact, this approach to socialising is characteristic of the British psyche: you cannot risk opening yourself to another person unless you are drunk, in which case you can blame anything you say or do on alcohol.)

(I do not mean to boast (okay, maybe I do, a little), but building affective trust is particularly easy for me in China, because the Chinese love playing card and dice games, on which I am also very keen. Their card games (like much else in China) are very similar to games we have in the west (e.g. whist), except with a bunch of incredibly convoluted rules added on top. Their favourite dice game has the same rules as Perudo/Liar's dice, except that here it's a drinking game: in the west, when a player loses a round, they have to give up a die; here, they have to take a shot instead.)

Third, I think this is one of those cases where there is a right and a wrong culture. I come from a relationship-based culture, and I have worked in a company where people do build very close relationships (I know plenty of people who met their spouse at P&G, and plenty more who've met their best friends in the company - I for one moved in with my ex-boss, and had my bachelor party organised by two of my former managers). And though such cultures feel much better than cold, task-based environments, they do come with risks. A relationship-based culture, where affective trust fosters cognitive trust, is more likely to lead to corruption: you appoint your friend to a managerial position, not because they are capable, but because you "trust" them; people like Ren from earlier expect their friends to give them contracts without due diligence; and people will not do business with you until they get to know you - hardly the most efficient or meritocratic way of doing things. So, as in the previous section, leaders should be aware of the local culture they find themselves in, but they should take steps to make it more meritocratic if it sits too far towards the affective end of the spectrum.

Disagreeing: confrontational vs conciliatory
If I am ever asked in an interview how well I handle working with people from different cultures, I will produce this chart:

[chart: Meyer's Disagreeing map - confrontational vs avoids-confrontation, emotionally expressive vs unexpressive]

As you can see, Greece is the polar opposite of China: Greeks are confrontational and emotional, whereas the Chinese are reserved and value harmony - so, if I've managed to survive in a Chinese environment, I'd probably do okay everywhere. (Note that the UK is bang in the middle of the two cultures - so, having spent years in England (and being married to an English woman), I found the transition from Greece to China somewhat smoother than it might have been.)

The key difference between confrontational and non-confrontational cultures is that in the former, disagreements are seen as a good thing, and they do not affect people's personal relationships; in the latter, attacking someone's argument may be seen as attacking the person, and so debates are considered inappropriate. People from the latter kind of culture are often shocked when they see people from a confrontational culture interact: at university, one of my closest (Greek) friends (and housemate) and I would argue 80% of the time (the remaining 20% was dedicated to South Park and so-called burger movies); that, in conjunction with the fact that the Greek language sounds very harsh to people who don't speak it, would cause our English friends to ask each of us in hushed voices "are things alright between you two? You were having such a row!". We were just as perplexed that people kept interpreting what seemed to us like normal interaction as vicious fighting.

(Actually, thinking back, Greece fully justifies its position at the extreme corner of the chart above: the range of subjects on which we'd have passionate debates was absurdly wide - from the classic uni-student "capitalism vs socialism" debate to what you should do if your car is running out of gas in the middle of nowhere - drive faster or slower? Astonishingly, this latter debate was the most acrimonious one we've ever had as far as I can remember: three of us were walking home, and, as the debate reached a crescendo of irrationally high temper, my friend walked away from the remaining two of us. My second friend told me, "come on, Aris, talk to him, make up". The best I could come up with was to shout after him, "hey, look, tomorrow you'll be talking to us again anyway, so you may as well come back and start now". It didn't work. (To be fair, that was after a night out, and we'd had our fair share of drinks).)

Meyer suggests a few strategies for managing teams in multicultural environments. First, as in the case of leading, Meyer suggests that senior managers remove themselves from meetings, because their seniority may disincline people from disagreeing openly. In some cultures, even asking for someone's opinion may come across as pointed and confrontational, so it's often better to ask your subordinates to meet without you to discuss a problem, and then report their findings to you.

A second tip is to solicit anonymous feedback. In the US, brainstorming meetings are commonplace: a group of managers get together, toss around ideas and critique each other's suggestions; in other cultures though, people may be unwilling to share half-baked ideas in front of their colleagues. In such cases, you can ask people to write down ideas anonymously.

Another idea is to have pre-meetings. A quick check here: what makes a meeting successful in your eyes?
a) A decision is made;
b) various viewpoints are discussed and debated;
c) a formal stamp is put on a decision that has already been made before the meeting.

Most Americans choose (a); most French choose (b); and most Chinese choose (c). In cultures like China's, where the real discussion happens before the meeting, it's helpful to have informal, 1-1 chats with your colleagues to get everyone on the same page before the actual meeting.

Finally, Meyer says you should adjust your language depending on the culture in which you find yourself. Avoid qualifiers such as "totally" and "completely", and soften your message with "maybe" &c.

I think this advice is good inasmuch as it will steer you away from trouble, but again, in my view, a good leader should not be content with just adapting and staying out of trouble. Most people I know already think they attend waaay too many meetings, and are tired of office politics; recommending pre-meetings and pre-alignments (and in some cases, pre-pre-alignments) may help avoid confrontation, but the downside is that people start spending too much time talking instead of doing.

As in the case of deciding, I think both extremes here are bad: you do not want a culture where people come to blows over questions of mileage optimisation, but you also do not want a culture where no-one feels comfortable challenging a patently idiotic proposal. I think a good leader has a duty to do the following:

First, train their people to feel comfortable expressing their ideas and challenging each other - for example, by being upfront and clear about the fact that disagreement does not equal disrespect. It also helps to design exercises that encourage people to disagree with each other. Stereotypes would have you believe that it's nigh impossible to get a Chinese manager to openly challenge a colleague, but this is not my experience. One of the operations managers I work with hosted an offsite for her organisation where managers were split into teams and asked to debate a business question. Not only did people do this and have fun, but the debate surfaced concerns with the company's strategy that might otherwise have gone unvoiced.

Second, develop a system for resolving conflict. "Agree to disagree" is not acceptable in my view: people should be encouraged to uncover their underlying assumptions, and critically evaluate them. P&G's former CEO developed such a system for making strategic choices: a) all stakeholders write down what would have to be true for them to have confidence in each of the options identified; b) the team then determines which of these conditions are least likely to hold; c) it designs and executes tests for each such condition; and d) it goes with the option that the tests have shown to be most likely to achieve the objective.

(Optional third: go all in, Bridgewater-style).

Scheduling: Swiss precision vs Indian flexibility
Greeks have a number of stereotypes for the British, most of them inaccurate; none more so than the idea that Brits are insanely punctual. The first time I attended an English party, I arrived at the specified time... and found myself alone with my hosts for over an hour. And at least this is inconsequential - do not get me started on English trains.

Of all the cultural divides listed in Meyer's book, I think the one concerning different peoples' attitudes to time is the most accurate and persistent. It is also the most obviously there-is-a-right-and-wrong one.

It is pretty obvious what this divide addresses, so I will not expand on it; let's jump straight into the criticism.

First, once again, a great deal of the cultural misunderstandings stemming from different attitudes to scheduling can be resolved through clear communication instead of "cultural sensitivity". Meyer brings up an anecdote of giving a lecture in Brazil. She was originally scheduled to talk for 45 minutes, but when she met the organiser of the event the day before she was to speak, he told her "feel free to take more time than is scheduled if you like". She asked whether this meant she could take 60 minutes instead, to which the facilitator responded "of course, take the time you need". On the day of the lecture, the facilitator reiterated that Meyer should take as much time as she needed. Meyer gave the lecture, and ended it after 65 minutes - even though people still had questions to ask her. The facilitator approached her and told her that her talk was great, but that it finished too early. Meyer was baffled by this, as, in her mind, she had actually taken longer than the time allotted to her.

Okay, I get that to an American, 60 minutes means 60 minutes. But look, ignoring your host's repeated request to take as much time as you need, and your audience's demand for more of your time, is not a cultural misunderstanding, it is bad listening. After all, why couldn't Meyer have just asked at the 60 minute mark "do we have more time? Is it okay if we go on?" It may be that the Brazilians' flexibility with time, as opposed to Meyer's strict interpretation of allocated time slots, is a cultural issue. But the misunderstanding that arose out of that has nothing to do with culture, and everything to do with communication.

Second, though people at opposite ends of the time scale find each other's culture stressful, the fact is that strict scheduling is simply the better system. I fully understand and sympathise with people who find inflexible, arbitrary deadlines and schedules suffocating. But some things are critical, and for those you need to be able to rely on a precise timeline. If a woman is about to give birth and calls a driver to take her to the hospital, she cannot afford to wait just because the driver has a somewhat fuzzy and liberal interpretation of "get over here, right now"; no-one wants to miss spending Christmas with their family because planes or trains are delayed.

Bottom line: being late to meetings sends a very clear signal: my time, and my priorities, are more valuable than yours. The answer to avoiding stress is not turning up to meetings late, but avoiding setting arbitrary, stressful deadlines, and cutting non-value-adding meetings.

Conclusion
Culture is complicated. A particular people may be emotional, but avoid confrontation; they may be explicit in their communication, but avoid giving direct feedback; they may be hierarchical, but despise top-down decision making. Meyer has done a very good job at breaking down cultural divides in neat categories that allow for methodical analysis.

Still though, as I've tried to show in this post, I think that discussions on culture almost inevitably fall into three pitfalls that Meyer herself does not entirely avoid, though she does at times acknowledge them:

a) a great deal of cultural misunderstanding can be avoided, not through cultural training, but by good communication;

b) we humans have more in common than anecdotes seem to suggest; as a result, cultural differences are often exaggerated. Moreover, there is very wide variation within a culture. This suggests that cultures are not as inflexible and hard to change as books like Meyer's may seem to suggest; and

c) as I've tried to argue in many of the sections above, the relativist view that "there is no right or wrong culture" is wrong. I am not of course talking about moral superiority here, but about a particular culture's efficacy in achieving a given goal. Some cultures are better at fostering innovation; some are better at maintaining stability. Operating in a culture that is not conducive to achieving an organisation's goal is counter-productive.

Finally, even though Meyer's framework is excellent in comparing and contrasting different cultures, it is, at the end of the day, an over-simplified model of a culture's norms and behaviours. (To be fair to her, Meyer never claims her eight cultural dimensions perfectly encapsulate a culture's essence. Still, it's important to reinforce this.) Consider this quadrant that Meyer has drawn:

[chart: Meyer's Communicating vs Evaluating quadrant]

Notice that China is shown as one of the most high-context and indirect-feedback cultures in the world. If you were to take this at face value, you might expect the Chinese to skirt around everything, always communicating through subtle cues and avoiding anything that might give offence. But what gives offence is very different in China and in the west: the Chinese have no qualms referring to their friends as 小胖子 (xiao pangzi, "little fatty"), asking you how old you are or how much money you make, or referring to you as "the foreigner" or "the white". And though it's true the Chinese very rarely say that something is "bad", they will very frequently and directly say that something is "not good" or "not right". So, by all means, do learn how the various cultures map under Meyer's system, but remember that a position on eight scales does not tell the whole story.

Wednesday, 2 August 2017

Behavioural Economics: a review

Most of us have read Kahneman's Thinking, Fast and Slow or Thaler's Misbehaving or Nudge. These books all discuss the birth of behavioural economics, a discipline that marries economics with psychology, and which its adherents claim has supplanted neoclassical economics.

Yet contrary to the strong assertions made in these books, or by some of the discipline's fans, behavioural economics has not definitively dethroned traditional economics. Indeed, in spite of the discipline's popularity, it remains a small part of economics curricula. In this post, I review three main criticisms of the discipline that help explain why this is so: first, the criticisms leveled by behavioural economists against classical economics are often unfair; second, many of the experiments that gave birth to the discipline have failed replication attempts, or cannot be generalised from the lab to society at large; and third, neoclassical economics makes for a better foundation for policy.

A. Behavioural economics vs Neoclassical Economics
Neoclassical economics refers to the attempt to model an economy based on three principles:

a) that people have rational preferences between outcomes (this basically means that any two alternative choices can be compared to each other, and that preferences are transitive, i.e. if a person prefers apples to bananas, and bananas to pears, then he also prefers apples to pears);

b) that individuals maximise utility; and that

c) people act independently on the basis of full information.

Neoclassical economics relies on these assumptions to model the allocation of resources, market behaviour &c, often making use of game theory. This latter field, popularised by the film A Beautiful Mind, is concerned with predicting how agents will behave in a given strategic situation. Briefly, game theory suggests that an interaction among a number of agents will settle into an equilibrium: a state where no agent has an incentive to unilaterally change their behaviour.

The classic game theory example is the prisoner's dilemma: two criminals are arrested, placed in separate cells, and offered a bargain: each prisoner can testify that their partner committed the crime, or they can stay silent. If both prisoners betray each other, they both get two years in prison; if one prisoner betrays his partner, but his partner stays silent, the snitch goes free but their loyal partner gets three years; and if both stay silent, they both get one year in prison (due to some lesser charge the prosecutor can concoct).

This scenario can be visualised in the following table (each cell lists A's sentence first and B's second, in years):

                        B stays silent        B betrays
    A stays silent      A: 1,  B: 1           A: 3,  B: 0
    A betrays           A: 0,  B: 3           A: 2,  B: 2

According to game theory, both prisoners betraying each other is the game's only Nash equilibrium: in any other cell, one or both prisoners has an incentive to change strategy, whereas in the bottom-right cell (mutual betrayal), a prisoner can only be worse off by changing. So what this game tells us is that even though mutual cooperation would leave both players better off, rational decision making leads to mutual betrayal.
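To make the equilibrium reasoning mechanical, here is a minimal sketch of my own (not from any book or paper) that enumerates the cells of the table above and checks the Nash condition - that neither prisoner can reduce their own sentence by unilaterally switching strategy:

```python
from itertools import product

# Sentences (in years; lower is better) from the table above,
# indexed by (A's move, B's move).
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}
MOVES = ("silent", "betray")

def is_nash(a, b):
    """True if neither prisoner can get a shorter sentence by deviating alone."""
    years_a, years_b = YEARS[(a, b)]
    a_can_improve = any(YEARS[(a2, b)][0] < years_a for a2 in MOVES)
    b_can_improve = any(YEARS[(a, b2)][1] < years_b for b2 in MOVES)
    return not (a_can_improve or b_can_improve)

print([cell for cell in product(MOVES, repeat=2) if is_nash(*cell)])
# -> [('betray', 'betray')]: mutual betrayal is the only Nash equilibrium
```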

Behavioural economics challenges the three hypotheses that underpin neoclassical economics. The discipline suggests that not only are people irrational, but they are predictably so - to the point that the same tools used by neoclassical economics (such as game theory) would lead to different conclusions, were the predictably irrational behaviour of humans taken into account.

There are two responses to this challenge. The first is that behavioural economics does not so much supplant neoclassical economics as augment it. Prospect theory, one of the discipline's foundations, proposed by Amos Tversky and Daniel Kahneman, modifies utility theory only slightly: it says that people choose between alternatives based on potential gains and losses relative to a reference point, rather than end-states, and that they use heuristics to make decisions. But at its core, it is not all that different to classical economics.
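For the curious, here is what that modification looks like in practice - a minimal sketch of the Tversky-Kahneman value function, using the parameter estimates from their 1992 paper (alpha = beta = 0.88, lambda = 2.25). Note how a loss hurts roughly 2.25 times as much as an equal gain pleases:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha           # gains: concave (diminishing sensitivity)
    return -lam * (-x) ** beta      # losses: convex, and amplified by lambda

print(round(value(100), 1))   # ~57.5: the subjective value of gaining $100
print(round(value(-100), 1))  # ~-129.5: losing $100 hurts much more
```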

The second is that neoclassical theory is actually pretty good at predicting behaviour; the experimental results from behavioural economics that seem to suggest otherwise misunderstand neoclassical theory. There is a good paper on this by David Levine and Jie Zheng. This paper uses the Ultimatum game as an example: this is a game that many behavioural economics proponents claim undermines neoclassical economics.

In the Ultimatum game, person A is given $10, and can then suggest a division of this money between himself and player B. Player B can then accept A's suggestion, or reject it, in which case neither player gets any money. In various lab experiments, it has been observed that few people, if any, offer less than $2 to player B, with most people offering $5; and, when player A makes an "unfair" offer, player B often rejects it. Some behavioural economists consider this an excellent refutation of neoclassical economics: surely, traditional, neoclassical theory, with its selfish, buck-maximising agents, would predict minimal offers from player A, which would always be accepted by B.

(This way of reasoning is called sub-game perfection: the idea is that you break the game into two stages, and reason backwards: player A thinks, as long as I offer anything to player B, he is better off accepting rather than rejecting my offer; therefore, I can offer anything, no matter how little, and still have him accept it.)

However, neoclassical economics does not have selfishness or lack of altruism as a fundamental axiom; in fact, Adam Smith explicitly stated that people's utility functions most likely have a moral dimension to them. More importantly though, game theory says that, perhaps counter-intuitively, the Ultimatum game has many Nash equilibria. As Levine and Zheng write, the right way of thinking about the problem is to check whether people's losses (as a result of their strategy) are small relative to what they could have gained, had they played optimally.
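As an aside, the "many Nash equilibria" claim is easy to verify by brute force. Here is a sketch of my own (not Levine and Zheng's code) for a $10 game with integer offers, where the responder plays a threshold strategy - accept any offer of at least t:

```python
POT = 10  # dollars to split

def payoffs(offer, threshold):
    """(proposer, responder) payoffs when the responder accepts offers >= threshold."""
    return (POT - offer, offer) if offer >= threshold else (0, 0)

def is_nash(offer, threshold):
    p_now, r_now = payoffs(offer, threshold)
    # Can the proposer do better with a different offer, given the threshold?
    if any(payoffs(o, threshold)[0] > p_now for o in range(POT + 1)):
        return False
    # Can the responder do better with a different threshold, given the offer?
    if any(payoffs(offer, t)[1] > r_now for t in range(POT + 1)):
        return False
    return True

print([(o, t) for o in range(POT + 1) for t in range(POT + 1) if is_nash(o, t)])
# Every offer from $0 to $10 shows up, supported by a matching threshold;
# only (offer=0, threshold=0) is subgame perfect.
```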

To check this, one would look at how much money a player with access to past experimental data could have made, and compare it to how much they actually made. Using this approach, it is found that players in the Ultimatum game lose about $1. Furthermore, only 1/3 of this $1 represents known losses, i.e. money that the players know they will lose (clearly, only player B has known losses in this game: when he rejects A's offer, he knows he is choosing to forego money). The remaining 2/3 are basically due to players in the role of A not having had enough experience to judge what kinds of offers are typically rejected.

In summary then, many argue that behavioural economics is nothing but tinkering with the neoclassical model; any claims that it's a fundamentally new paradigm show a misunderstanding of neoclassical theory.

B. Humans: not that irrational or uniform
Behavioural economists, drawing on work from psychology, make some pretty astonishing claims: if you "prime" people by having them read words that remind them of old people, they will subsequently walk more slowly; if you give them more products to choose from, they are less likely to make a purchase; if you make exam questions harder to read, students will perform better. Some of these claims have been as influential as they are hard to believe - for example, consumer goods companies have reduced the number of products they sell to reduce "choice overload", and leaders such as Obama and Zuckerberg deliberately keep simple wardrobes to avoid ego depletion. It turns out, however, that some of these effects are not as robust as pop books would have us think.

In this section, I will discuss some experiments that behavioural economists use as examples of human irrationality; but first, there is another matter to be addressed. A great deal of the criticism leveled against neoclassical economics is based on lab experiments that purport to show people are far more altruistic, selfless or irrational than standard theory predicts. However, many of these results cannot be generalised to society at large; furthermore, human behaviour varies significantly across the world, and we should be wary of drawing conclusions about humanity from lab experiments performed at Ivy League colleges.

Steven Levitt and John List expand on what lab experiments say about the real world in this paper. They start by suggesting that people's utility function takes the form

U(action, stakes, norms, scrutiny) = Morality(action, stakes, norms, scrutiny) + Wealth(action, stakes)

In other words, the utility a person derives from an action - how happy taking it will make them - depends on the moral cost of the action as well as on its effect on their wealth. Whereas the effect on wealth depends only on the action and the stakes involved, the moral cost also depends on social norms and on the scrutiny the action receives. Levitt and List argue that behaviour in the lab is not a reliable predictor of behaviour in society, because scrutiny in the lab is far higher than in real life, and the stakes are often lower.
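To see the mechanics, here is a toy implementation of this utility function - the functional forms and numbers are entirely my own invention, purely for illustration: raising scrutiny raises the moral cost of the selfish action, so the same agent looks pro-social in the lab and selfish in the field:

```python
def utility(action, stakes, norms, scrutiny):
    # Wealth(action, stakes): keep everything, or split the stakes fairly.
    wealth = stakes if action == "selfish" else stakes / 2
    # Morality(action, stakes, norms, scrutiny): selfishness costs more when
    # norms are strong and someone is watching (illustrative form).
    moral_cost = norms * scrutiny if action == "selfish" else 0
    return wealth - moral_cost

def chosen_action(stakes, norms, scrutiny):
    return max(("selfish", "fair"), key=lambda a: utility(a, stakes, norms, scrutiny))

print(chosen_action(stakes=10, norms=2, scrutiny=5.0))  # lab: 'fair'
print(chosen_action(stakes=10, norms=2, scrutiny=0.1))  # field: 'selfish'
```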

[video: a demonstration of the importance of scrutiny in guiding action]

This is not just a hypothesis, but an observed fact. In one study, List had sellers choose the quality of the products to offer to buyers in response to the buyers' bids. He used experienced sports-card traders as subjects, and found that in the lab, they exhibited strong social preferences: when buyers offered high prices, sellers responded by offering high-quality cards, even though they were not obligated to do so. But he then ran a field test on these same traders, sending confederates to pose as buyers at sports-card shows. It turns out that outside the lab, there was little relationship between the price offered and the quality delivered. Similarly, other experiments have found that people are more likely to behave selfishly if their anonymity is guaranteed.

Also, here's an interesting factoid found in Levitt and List's paper: in another experiment, List and a collaborator examined whether professionals behave the same way as students in trust games. It turns out that CEOs in Costa Rica are considerably more trusting and trustworthy than students. Maybe it's because the people who become CEOs in Costa Rica are particularly nice; it may be because CEOs care more about their reputation and behave extra-trustingly. But either way, this shows that it's hard to generalise from experiments run on students.

Which leads me to the WEIRDest people in the world - members of Western, Educated, Industrialised, Rich and Democratic societies. The authors of this paper make the same argument as the previous paragraph - behavioural and cognitive studies tend to generalise their experimental results to the entire human species, when their effects are in fact local. They back this claim with a number of case studies.

Consider, for example, the Müller-Lyer (aka "the two lines") illusion. Which of the two lines below is longer?

[image: the Müller-Lyer illusion]

You can probably guess the answer, even if you haven't read any books on pop psychology: the two lines have the same length. If you have read pop psychology (or a BuzzFeed article on "27 Illusions that will BLOW your mind (you won't believe number 4!)") you have probably read something like "viewers invariably perceive line b as being longer". But there is nothing invariable about this phenomenon:

[chart: required adjustment to line a before the two lines are perceived as equal, by society, for children and adults]

The chart above shows by how much line a must be lengthened before subjects perceive the two lines as being of equal length, by society. As you can see, in some societies, viewers can tell the two lines are the same length with hardly any manipulation; children and adults also respond quite differently to the illusion.

Whether a society is industrialised or not also affects its members' behaviour in the Ultimatum Game. I mentioned earlier that most people who play the Ultimatum game in a lab setting offer about 50% of their wealth; but this is only the behaviour of American adult subjects; in fact, Americans seem to be far more generous than other societies...

[chart: % of wealth offered in the Ultimatum Game, by country]
... and more willing to reject an offer they deem unfair:
[chart: income-maximising offer, by country]

(The second chart shows the % the proposer should offer to maximise their wealth on average. In the US, the optimal strategy is to offer 50% of one's wealth; otherwise the proposer runs the risk of the receiver rejecting the offer. In other countries, receivers are content with 10% instead.)
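The "income-maximising offer" is just an expected-value calculation. Here is a sketch with made-up rejection probabilities (illustrative only, not the study's data):

```python
# reject_prob[offer_pct] = probability the responder rejects that offer
us_like    = {10: 0.90, 30: 0.50, 50: 0.05}  # low offers get punished
other_like = {10: 0.10, 30: 0.05, 50: 0.00}  # responders content with 10%

def income_maximising_offer(reject_prob, pot=100):
    # Expected proposer income = (what they keep) x (chance of acceptance)
    expected = {offer: (pot - offer) * (1 - p) for offer, p in reject_prob.items()}
    return max(expected, key=expected.get)

print(income_maximising_offer(us_like))     # -> 50
print(income_maximising_offer(other_like))  # -> 10
```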

Even more shockingly, experiments run in Russia, China, Sweden, the Netherlands and Germany show that some subjects even reject so-called hyper-fair offers (>60% of the proposer's wealth). I mean... you can kind of understand this behaviour in communist countries like Russia or China, or in socialists' poster-boy Sweden, but Germany??

And for my favourite example of different behaviour across countries, consider Herrmann et al.'s paper on anti-social punishment. The paper focuses on a so-called public goods game, played by four players over ten rounds. Players are given 20 tokens, and in each round they must decide how many of their tokens to contribute to a common pool. The tokens in the pool are then increased by 40% and divided equally among all four players, regardless of whether they contributed or not. So, as in many real-life situations, players are better off if they all contribute, but each one has an incentive to free-ride on the others' contributions. For example, if all four players contribute 10 tokens, each receives 14 back from the pool (= 4 × 10 × 1.4 / 4) and ends the round with 24 tokens; but if one player contributes nothing, he keeps all 20 of his tokens and still receives 10.5 (= 3 × 10 × 1.4 / 4) from the others, ending with 30.5, while the contributors end with just 20.5. Herrmann et al. ran this experiment in a number of different countries, using university undergraduates as subjects.
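For the arithmetically inclined, here's a minimal sketch of the payoff rule just described (20-token endowment, pool increased by 40% and split four ways):

```python
ENDOWMENT = 20
MULTIPLIER = 1.4   # the 40% increase applied to the common pool
N_PLAYERS = 4

def round_payoffs(contributions):
    """End-of-round tokens for each player, given what each contributed."""
    pool_share = sum(contributions) * MULTIPLIER / N_PLAYERS
    return [ENDOWMENT - c + pool_share for c in contributions]

print(round_payoffs([10, 10, 10, 10]))  # all cooperate: [24.0, 24.0, 24.0, 24.0]
print(round_payoffs([0, 10, 10, 10]))   # one free-rider: [30.5, 20.5, 20.5, 20.5]
```

Note that each token you contribute returns only 1.4 / 4 = 0.35 tokens to you, so whatever the others do, you are individually better off contributing nothing.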

There are a few interesting results from this experiment. First, the level of cooperation, as measured by the average contribution per player, varied significantly across countries. Second, as a pessimist (or a classical economist) would expect, cooperation quickly declined as the game progressed and players realised that others were free-riding:

But that's not the best part yet. The researchers also ran the same experiment with the ability to punish other players. After learning the other players' contribution choices, each player could assign every other player up to ten deduction points. Each deduction point would reduce the punished player's tokens by three, but would cost the punisher one token.
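Extending the earlier sketch with this punishment stage (one token cost per deduction point, three tokens deducted from the target); the specific punishment choices in the example are hypothetical:

```python
def apply_punishment(tokens, deductions):
    """deductions[i][j] = points player i assigns to player j."""
    result = list(tokens)
    for i, row in enumerate(deductions):
        for j, points in enumerate(row):
            result[i] -= points       # each point costs the punisher 1 token
            result[j] -= 3 * points   # ...and removes 3 from the target
    return result

# Say the three contributors each assign 2 points to the free-rider:
tokens = [30.5, 20.5, 20.5, 20.5]
deductions = [[0, 0, 0, 0],
              [2, 0, 0, 0],
              [2, 0, 0, 0],
              [2, 0, 0, 0]]
print(apply_punishment(tokens, deductions))  # [12.5, 18.5, 18.5, 18.5]
```

Even modest punishment turns free-riding into the worst-paying strategy, which presumably explains the next result.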

In this variant of the game, the cooperation level increased, or at least remained stable in most countries:

But this is still not the best part. If you were playing this game, whom would you punish? Odds are, you would choose to punish those players who contributed less than you. That's only fair, right? Well, that's only fair if you come from an Anglo-Germanic country. It turns out people from a number of countries, most notably Oman and Greece, choose to punish overly generous players!
It's anybody's guess why anyone would punish generous players. The researchers suggest it's a form of revenge: though players cannot see who punished them, they probably assume they were punished by the more generous ones. Indeed, this "anti-social punishment" seems to correlate with the amount of punishment a player received in the previous round.

Needless to say, anti-social punishment has an extremely strong negative correlation with mean contribution:
(I grant that this whole section on the public goods game is only tangentially related to the core matter at hand, in that it shows how differently people behave by country, and how irresponsible it is to make universal claims about human behaviour based on American studies; the main reason I am including it here is that it confirms my long-held belief that at the core of Greece's problems lies the classic Hellenic quip: "τι είμαι εγώ, μαλάκας;/σιγά μη γίνω εγώ ο μαλάκας της υπόθεσης" - roughly, "what am I, a sucker? No way I'm going to be the sucker in this affair".)

In short, people do not behave the same way across the world. More importantly, people do not behave the same way outside the lab. Behavioural economics is predicated on the assumption that people behave irrationally in a predictable, uniform way. Evidence seems to suggest otherwise.

Now, I realise that what makes pop economics and psychology books exciting is the factoids they offer - the trivial pieces of knowledge we all like to repeat at parties to seem clever. The rest of this section adopts the same strategy (though admittedly too late: I suspect the readers who have followed me this far would persevere regardless of factoids). I list below a few "classic" experiments referenced by behavioural economists, to show that they are not as robust as some books make them seem.

The paradox of choice
A 2000 study by Iyengar and Lepper found that giving consumers more choice can result in fewer purchases. In their experiment, they set up two tasting booths in an upscale grocery store, on different days. One booth displayed six varieties of jam; the other, 24. They found that although more consumers stopped at the large-assortment booth (60% vs 40% for the small one), only 3% of those exposed to the large assortment made a purchase, vs 30% of those exposed to the small one.
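Put differently, roughly 40% × 30% = 12% of all passers-by ended up buying at the six-jam booth, versus about 60% × 3% ≈ 1.8% at the 24-jam booth.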

They also ran two more experiments as part of the same study. In the second experiment, psychology students were given the option to write an essay for extra credit. Some students were given six topics to choose from, others 30. Not only did more students who were given six topics actually write the essay (74% vs 60%), but their essays were actually better! In the third experiment, participants were asked to choose a chocolate. Again, some participants were given a limited assortment to choose from, and some a larger one. This experiment found that people who were given a larger assortment to choose from took longer to make a choice, felt they were given too many options, did not feel any more confident that they made the right choice, and enjoyed the chocolates they chose less than those given a smaller range to choose from (though they reported enjoying the selection process more). Not only that, but when participants were asked whether they wanted to be paid in cash or in chocolates for their time, 48% of those given a small assortment chose to be paid in chocolates, vs 12% of those given a wider range.

It's hard to overstate the effect of this study - not just in academia, but also in business. I have actually heard people reference the choice paradox in meetings, to argue for reducing the number of products we offer.

Now, as I've said many times before, I totally agree that society does not really need 20 different shampoo variants within one brand. But to make a decision based on one study that you haven't read and understood is pretty irresponsible.

A meta-analysis of all studies that have looked into the choice paradox found the mean choice-overload effect to be virtually zero. Several studies tried to directly replicate the original experiments and failed - for example, Scheibehenne tried to replicate the jam study in Germany, and Greifeneder the chocolate study, both without any meaningful results.

Of course, many of the studies included in the meta-analysis did find evidence of choice overload. A number of factors may explain this variance - some have to do with publication bias, but other, more interesting ones include:
  • Measurement choices: it seems that more choice is better when what is being measured is consumption, instead of binary buy/not buy choices.
  • Strong preferences: people with strong preferences prefer more choice.
  • Ease of comparison: if the products in an assortment are difficult to compare, e.g. by having complementary features, consumers may experience regret after making a choice, hence leading to choice overload.
  • Perception of quality distribution: people may be more likely to prefer small assortments if all products on offer are of high quality. But if average quality is low, with some products being of high quality, then a larger assortment increases the odds of being able to buy a satisfactory product.

Basically, the jury's still out on this one. It's certainly not the case that more choice invariably leads to fewer purchases though.

Priming
You must have heard of this one: subtle cues subconsciously "prime" you in ways that visibly alter your behaviour. In the original study on the matter, volunteers had to create sentences from scrambled words. When these words related to old people, subjects walked more slowly when leaving the lab after the experiment.

Whereas I readily bought into all the other effects discussed here, I must say I always viewed this one with suspicion: apparently, one of the words used to prime subjects was "Florida". This seems very strange to me. While I could grant that some people may associate Florida with old people to the point of altering their behaviour, I find it crazy to assume that subjects associate Florida only with old people. What about Disney World? Alligators? Miami? Spring break? Why would these words not prime people to walk like a princess, run for their lives, swagger about or stumble around drunkenly?

It turns out my suspicion was justified: another group of scientists tried to replicate the study with a few modifications: a) they timed subjects using infrared sensors, rather than the stopwatches of the original experiment; b) they used more volunteers; and c) they used experimenters who did not know what the study was about. They found zero priming effect.

But they went further: they repeated the experiment, only this time they told the experimenters that the subjects had been primed - half were told to expect faster walking, half slower. Subjects were measured as walking more slowly only by the experimenters who expected them to!

Of course, the author of the original paper responded that a) his experimenters were also blind to the study hypotheses (which is true, but the experimenters were the ones who prepared all materials, which they had plenty of time to study; and being smart people, many of them probably guessed the hypothesis); b) subjects in the replication experiment were told to "go straight down the hall when leaving", which draws attention to the process, and arguably implies speed, thus eliminating the effect (but there is no evidence they were told this - plus, if the effects of priming are so weak, what's the point of it?); c) the replication experiment used too many old-related words, which meant subjects may have noticed the connection, cancelling priming (but his own original paper said that more primes would yield stronger results) and d) the experiment would only work if subjects associated old age with infirmity, an association the replication did not test (but then, neither did the original paper).

I am not saying we are not susceptible to subliminal messages; but we would be a pretty ridiculous species indeed if we walked slower every time someone said "Florida".

System 2 Activation
Try answering the following three questions:
  1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
  2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
  3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
The answers are $0.05, 5 minutes and 47 days. Yet many people answer $0.10, 100 minutes and 24 days - not because the questions are difficult, but because it is very easy for our minds to make these mistakes when running on autopilot.
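(To see the first one: if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives 2x = 0.10 and x = $0.05. The intuitive answer, $0.10, would make the total $1.20.)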

In Thinking, Fast and Slow, Kahneman describes how humans reason using two different "systems": system 1 is quick, effortless and intuitive, whereas system 2 is slow, deliberate and analytical. Because engaging system 2 takes a lot of effort, we tend to rely on system 1 - the autopilot that causes many of us to answer one or more of these questions incorrectly.

But, says Kahneman: if you disrupt people's autopilot, they will switch to system 2, and perform better. One way to do this is to explicitly say, "careful, these are trick questions"; but more astonishingly, according to Kahneman, you can disrupt system 1 just by making the questions harder to read - e.g. by using a font that's harder on the eyes, or a pale colour.

Kahneman bases this claim on this paper, in which experimenters asked 40 Princeton students to take the three-question test above. Half the students took the test in a normal font, the other half in a difficult, 10%-gray, italicised font. The normal-font group got only 1.9 questions right on average, whereas the hard-to-read group got 2.45.

But a number of replication attempts have failed to discover any such effect:

I think all we can take out of this series of experiments is that Ivy League students are slightly smarter than non-Ivy League ones.

Ego Depletion
Here's another effect that has had real-life impact. A study put students in a room with freshly baked cookies and radishes. Some were told they could eat only the former; some, only the latter. All students were then given an unsolvable test, and the researchers measured how long the students would keep trying to solve it. It turned out that those who were allowed to eat the cookies persevered far longer (19 minutes) than those who weren't (8 minutes). This was taken to show that humans have a fixed amount of willpower that can get depleted - and, furthermore, that willpower is like a muscle that can be trained. Hundreds of studies have been run since, all apparently confirming this hypothesis.

And people have taken heed - including Obama and Zuckerberg, who have both claimed to opt for dull, standardised wardrobes so as to avoid wasting decision energy on useless tasks.

However, a more recent, massive attempt to reproduce the main effect outlined above, using 2,000 subjects, has found zero effect.

Cracks in the theory had appeared before. Evan Carter, a graduate student at the University of Miami, tried to replicate a previous experiment, only to find that he could not reproduce its results. So he looked into a 2010 meta-analysis and discovered that a) it had only included published studies, increasing the risk of publication bias (unexciting results don't get published all that often), and b) some studies had used bizarre or contradictory measures of willpower - e.g. one study suggested that depleted subjects would be less willing to help a stranger, whereas another said that depleted subjects would give more to charity. Re-evaluating the studies in the meta-analysis after adjusting for such issues, he too found no effect.

Again, I am not disputing that people get tired, and that if they are asked to do too many things, they will have less energy. But the original formulation of the hypothesis, and some of the lessons that people have taken from it, such as that taking an extra minute each morning to decide what tie to wear can deplete one's willpower, seem exaggerated and unfounded.

To conclude this section: I am not claiming that humans are perfectly rational. Indeed, I think Kahneman, Tversky and other economists and psychologists have done a brilliant job demonstrating many of the ways in which humans are irrational. Their work on the heuristics humans use instead of reason, and on how these lead to mistakes such as overconfidence, ignoring base rates and fallacies like the Linda problem, is brilliant. (Some people have suggested these are all framing issues that disappear if questions are asked differently, but I find that criticism pretty weak - see here for Kahneman and Tversky's response.)

But we are not as stupid, easy to manipulate, or homogeneous as behavioural economists often suggest. Nor have behavioural economists conclusively proven that their models are better at predicting human behaviour in real life. And this brings us to...

C. Behavioural economics and policy making
This will be a short section. Behavioural economics has been so influential that the US and British governments have set up whole departments to carry out policy based on the discipline's lessons. David Cameron himself has referred to a behavioural economics insight: "The best way to get someone to cut their electricity bill is to show them their own spending, to show them what their neighbours are spending, and then show what an energy-conscious neighbour is spending".

But as Tim Harford (the Undercover Economist) points out, this is plain wrong. The best way to make people cut their energy consumption is to increase prices. There may be all sorts of reasons to oppose a policy (such as a tax) that makes energy more expensive; indeed, as someone who identifies as more or less a libertarian, I would rather keep taxes to a minimum. But that is neither here nor there: the fact remains that classical economics offers better policy solutions than behavioural economics. Standard tools such as taxes, subsidies and interest rates are far more powerful, and have far stronger impacts, than "nudges".

This is because, again, in aggregate, claims of neoclassical economics' death at the hands of the 2008 crisis are greatly exaggerated (another factoid: Mark Twain never used that exact phrase; what he wrote was "the report of my death was an exaggeration"). Neoclassical economics is still taught at schools and universities not because academics are die-hard traditionalists, but because it still has a lot of valuable things to say about how the world works.