Tuesday, 21 August 2018

Further thoughts on paternity

Time goes by faster as you get older, because when you are, say, ten, one year is 10% of your life; when you are thirty, it's 3.3%, and therefore feels like a smaller interval. But if the passage of time normally accelerates at a steady pace as you get older, it jumps to hyperdrive when you have a child. I cannot fathom how almost two months have gone by so quickly.

But parenthood also exaggerates time's other quirk - that the more crammed your schedule is, the faster time seems to go by as you are experiencing it, but the longer the interval seems in retrospect - a year spent climbing the Himalayas and learning to skydive will seem longer in memory than one spent watching Friends reruns (I am not passing judgement on the merit of each activity, and I personally would much rather do the latter than the former). Similarly, these two months feel like they have gone by incredibly fast and, at the same time, like they have lasted aeons.

In between these time warps, I have had some further thoughts on parenthood, which I jot down below:

What is love?
I touched on the strangeness of missing someone you have known for only a few hours in my first post on paternity, but I have been thinking about it some more. When I think of the love I feel towards other people in my life, such as my parents, my wife or close friends, I notice that this love is either the product of familiarity and affection over a long time, or the evolution of peripheral feelings, such as respect or sexual attraction. In contrast, and in my wife's words, my love towards my son is totally unearned.

Indeed, if I told you that I love to bits a person who completely disregards my own needs and wants, habitually deprives me of sleep, and yells at me whenever I am a minute late in catering to his every whim, and all he gives me in return is the occasional smile, you'd probably refer me to a psychiatrist. (I am not the first to notice this - a book a friend gave my wife classifies motherhood as a particular case of Stockholm syndrome.)

Yet I do of course love my son to bits, inexplicable though this is, and I will gladly be peed, farted and puked on in return for a single smile. It forces one to rethink one's understanding of the concept of love.

Should you have a child?
I wrote on Facebook that you should have a child if you enjoy playing Sims. In addition to fans of the game (and I mean those fans who actually took care of their Sims, not those whose objective in the game was to come up with novel and increasingly convoluted ways to get them killed), there are a few more kinds of person who will love parenthood. You will enjoy the experience if you...

  • had Tamagotchis that did not die;
  • like scatological humour & fart jokes;
  • are in dire need of exercise;
  • never got used to surviving on fewer than ten hours of sleep, despite everyone's insistence that you totally would once you reached your twenties, and want to finally learn;
  • want to master the art of the micro-nap, or to learn to fall asleep within seconds;
  • want to test the strength of your marriage;
  • want to finally get your Greek mum/grandmother/other relative to stop fretting about your health (though you won't be 100% successful here);
  • think having your wife sleepily try to unscrew your head in the middle of the night, thinking it's your child she's trying to pick up, is jolly good fun (it is, but only in retrospect).

How much do I really believe in gender equality?
I don't care whether you have won a flitch of bacon at Dunmow: if you have a child, your spouse and you are going to have arguments. Sleep deprivation does that to people. In one such argument, I told my wife that she was failing to appreciate how much I am doing for our son - way more than most men do. She countered that I do not get credit for merely doing my share. As is the case about 25% of the time, my wife was right: despite thinking that I hold men and women truly equal, I was giving myself kudos for assuming shared responsibility for raising our son. But a man assuming 50% of the responsibility ought not to be praiseworthy; it ought to be the default.

Everyone knows we all have biases of which we are unaware. Everyone also slyly adds (though not out loud) "but I less than everyone else". It is rare that our biases are so clearly called out, and shocking when they are.

How can everyone not see that my son is simply the best?
As I've mentioned before, my son is the first grandchild, great-grandchild, nephew, cousin-once-removed &c on my side of the family, and the first of the new generation amongst family friends. As a result, he has been showered with an obscene amount of affection, attention and gifts; his pictures on FB have received plenty of likes, smiley faces and hearts (not without side-effects - my mum's old nanny believes digital well-wishers are giving Christopher the evil eye).

Yet all this attention scarcely seems enough: whenever my son smiles, or coos, or simply is, I am shocked, shocked, to find that not everyone is watching him on the verge of tears. Do they not realise just how amazing each smile is? How can they glance at their phones, or cook, or read their books instead, when every smile they miss is a smile they will never get back? And at least family and friends are praising him - what about all those people I pass on the street, or on the boat? How can they fail to even steal a glance? Do they not realise they are in the presence of pure awesomeness? How do they just go about their lives?

You think I am exaggerating, and okay, maybe I am taking some poetic licence - but only a little. I pride myself on being very rational, but when it comes to my son, I am genuinely, constantly surprised that other people do not find him exactly as amazing and attention-worthy as I do, even though I fully understand I shouldn't expect them to.

More to come, as they come.



Saturday, 7 July 2018

Paternity - First Thoughts

My son was born on Monday, 2nd of July, at 9:35 in the morning. I am not given to sentimental prose, but when I see him opening his little eyes, and measuring the world around him with a quizzical, questioning expression, my heart melts.

As I did with my move to China (and as I would have done with my joining Google, were it not inappropriate), and seeing as I am among the first of my friends to have a child, I am putting pen to paper (/fingers to keyboard) to record my first thoughts and impressions on paternity.

Feelings accompanying parenthood
One of the most common questions I got in the months preceding the birth was, how does it feel knowing you are about to become a dad? Invariably, my response was, it hasn't really sunk in yet - I expect it will after the baby comes.

It hasn't. Every few hours, my wife and I feel a light bulb turn on in our heads, accompanied by a surge of adrenaline and the thought, "wow. We are parents now". But my whole worldview, my mental image of my daily routine, of what it means to be, and act, like me, has not really changed yet; the fact that I am no longer master of my own life, but that my life, schedule and priorities are all now subordinated to those of a tiny human being, is understood intellectually, not viscerally.

What has happened very suddenly is the formation of the bond between us, the parents, and our child. When doctors had to take our little person (to whom my wife refers as μικρούλι (mikrouli - little one), little beast, or the kraken ("the kraken awakens", she declared, before rousing me to help her feed him)) for some tests, we experienced strong saudade (I had to look up the noun for the feeling of missing someone; I suppose I could have used the more commonplace "longing", but that rings somewhat melodramatic, more fitting to a romantic novel than a 21st century blogpost); it may not sound all that surprising that parents miss their children - but it is curious nevertheless: you would be shocked if I told you that I miss someone I have only known for a couple of hours.

That said, I don't feel jealous at all with regard to holding him; as long as I know that he is in my vicinity, I don't really mind if it's other people hugging him - such as his grandparents. Actually, I am keen to encourage that - I always get a little annoyed with parents whose children are exceedingly shy and uncomfortable around other people; I want my children to learn to socialise from day one. Plus, it's such a joy seeing my parents handle him - especially my dad (whose name my son shares, as per Greek custom), whose face lights up, literally lights up (and I hate incorrect use of the word literally, but it does, his face seems brighter to me) when he sees his grandson.

The last feeling of note is the joy and gratitude I feel towards all the friends and family members who have already come to visit us or sent their wishes. It is so wonderful to know that this little human comes into the world surrounded by people who love him and will support him, and that he will be safe and taken care of even if - God forbid - anything were to happen to Jessi and me. (This child is not only my first, but also the first grandchild and great-grandchild on my side of the family; the first of his generation amongst my parents' best friends; and one of the first amongst my own close friends - I do hope that subsequent children will generate as much excitement!)

Labour
I was in the room with my wife when she gave birth. I know everyone "knows" this, but I don't think anyone truly knows it until they have experienced it or at least witnessed it: labour is painful. In the almost 11 years I have known my wife, I have never seen her acknowledge pain, besides the occasional exclamation (granted, this is partly thanks to her being very careful and rarely injuring herself); while giving birth (she decided to do everything naturally), she cried. This was so unsettling that I started crying. This then made my wife laugh.

Marital Privacy
My wife and I are not squeamish (you can hardly be squeamish when you have been brought up skewering lambs' livers and lungs on metal spikes, and wrapping them in intestine), but we had done a fairly good job of compartmentalising our hygiene routines. While I won't go into details, I will just say that in two days I have seen and discussed more bodily functions than I had in the preceding decade. And that's in spite of having worked on feminine products back at P&G.

Handling a baby
When people say that something has a steep learning curve, they usually mean it's difficult. What it actually means, though, is that learning takes place in a very small time frame, not that acquiring this knowledge is particularly hard. Learning to feed a baby and change its nappies has a steep learning curve in that second (more correct) sense: you learn a lot of easy stuff very quickly.

A few things that strike me as particularly interesting here: first, handling an adult the way you are supposed to handle a baby would be bullying. To wake it up in case it needs to eat but is sleepy, you place it face down on your palm, and quickly rub your knuckles on its back until it starts complaining (at which point you quickly pass it to the mother, who waits for it to open its mouth to voice its discontent, and, as soon as it does, stuffs a breast into its mouth without waiting for it to state its (legitimate, on the face of it) case - rendering the whole feeding process very similar to what goes on in foie gras farms); to wash its behind after changing a nappy, you execute a judo manoeuvre, whereby you grab its little thigh with one hand, its arm with the other, and twist it across your forearm, so that it's resting there with its face down; to put it to sleep in case it's crying (and its crying is not due to its being hungry or in need of a nappy change), you execute that same judo move, then place your index and middle fingers in a scissor position, pinch the thigh that's dangling down from your forearm between them, and move its leg backwards and forwards, like a pump. This last trick is incredibly good at calming down a baby.

Which brings me to the second interesting point: the reason that handling a child has a steep but easy learning curve is that there are tricks for everything. Once your midwife explains them to you (by the way, I have to note here that we had a wonderful midwife whom I would very strongly recommend to anyone planning to give birth in Greece), a lot of things become way easier than you might expect. At least, that's how it seems at the moment...

Third, it's funny how quickly your standards change when you take care of a child. Those who know me know I value my sleep. I lived for four years in Geneva, and almost never went skiing nearby, because I hated to get up early on Saturdays; I ask colleagues to avoid inviting me to meetings before 9:30; I complain when I get less than 8 hours' sleep. Yet I was jubilant last night, when my son slept for four hours in-between feeds (vs his average of two to three) - I considered these four hours of uninterrupted rest rejuvenating and God-sent. And I feel surprisingly awake, and able to write this. I hope this continues.

Fourth, because this is our first child, and Jessi and I have no idea what is normal (how long should it feed? how long between feeds? how heavy is it, compared to other babies its age? how much weight can we expect it to lose and regain in its first days and weeks?), we are being quite methodical about recording its development - we even have an app which allows us to time its feeds, and sync them across our phones. I plan to record its efforts to speak (I already have a model in my head, whereby I will record phrases he says, and then tag them - for language, number of words, number of syllables &c - and track all these across time; incidentally, this will make for an interesting blog post a couple of years down the road, if I manage to maintain the discipline to do it). Again, I wonder whether we will do all this for our future children...
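(For the technically inclined: the model in my head is something like the toy sketch below - Python, with made-up example data and a deliberately naive syllable counter, just to illustrate the kind of structure I have in mind.)

```python
from dataclasses import dataclass, field
from datetime import date

def naive_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    count, prev = 0, False
    for ch in word.lower():
        vowel = ch in "aeiouy"
        if vowel and not prev:
            count += 1
        prev = vowel
    return max(count, 1)

@dataclass
class Utterance:
    """One recorded phrase, tagged for later analysis."""
    said_on: date
    text: str
    language: str                                 # e.g. "el" or "en"
    tags: list[str] = field(default_factory=list)

    @property
    def word_count(self) -> int:
        return len(self.text.split())

    @property
    def syllable_count(self) -> int:
        return sum(naive_syllables(w) for w in self.text.split())

# Hypothetical entry; plotting word/syllable counts over said_on
# would show vocabulary growth across time.
first = Utterance(date(2019, 9, 1), "dada", "en", tags=["first word"])
print(first.word_count, first.syllable_count)  # 1 2
```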

This is all for now. More to come, if deemed interesting enough.

Tuesday, 23 January 2018

Petty Pet Hates

My wife once gave me a pearl of wisdom, lifted from some book or film: the devil causes most of the world's misery not through wars and famines and plagues, but through the tiny, daily frustrations he puts in our way. Small things that cause inordinate annoyance. For me, these small things include...

That scene in the Matrix where Morpheus asks Neo if he believes in fate, and Neo says no because I do not like the idea that I'm not in control of my life and Morpheus leans in and emphatically says I know exactly what you mean as though this is a super-profound insight

Okay, yes, granted, this is very specific and weird, but every now and then this scene pops into my head and it makes me cringe. There may be some people who find comfort in the idea they have no control of their lives; but literally everyone who finds fate an uncomfortable concept does so because they do not like to think they lack agency. So Morpheus's "I totally understand the way you feel" vibe as though he is somehow special for "getting it" is just annoying. Everyone understands the way Neo feels. Move on.

"Better safe than sorry"
This is maddening because "safe" and "sorry" are not opposites, nor mutually exclusive, which makes this statement a false dichotomy. "Sorry" is a bad thing to be: it means something bad happened to you. Everything else being equal, if you are given the choice between sorry and literally anything else, you'd pick that something else - unless that something else is something that will also make you feel sorry, in which case you are not being offered a real choice.

What the statement is trying to convey is "better not take a particular risk, even if the potential benefit is large, because there is a chance things will go wrong" - but this is much more debatable. Indeed, if we never took any risks, if our decision-making heuristic were "avoid even the slightest chance of ever feeling sorry", we would never get anything done.

So what people who say "better safe than sorry" are doing is stacking the deck in their favour - they take a matter which admits of discussion and turn it into a binary, "there is only one obvious choice here" statement. For shame.

People who pronounce "Barcelona" "Barthelona" or "Ibiza" "Ibitha" because "that's how the locals say it"
Okay, and Greece is "Ellatha", China is "ZhongGuo", "eclectic" means "picky" and a "sycophant" is a slanderer instead of a suck-up. Barcelona may well be pronounced differently in Spain, but in English it's pronounced with a c. Mispronouncing the word to show that you have been to Spain does not make you sophisticated, it makes you a massive douchebag.

One exception is Spanish people themselves - in their case, saying "Barthelona" does not mean they are pretentious; it just means they are not speaking English properly.

Deadpool (the film)
What bothers me more than having wasted two hours of my life watching this rubbish is the fact that most people - critics and audiences alike - seem to think it was actually good.

It was not. I am pretty sure no-one will argue Deadpool's strength lies in its plot or its gratuitous violence - instead, people seem to take pleasure in its supposedly funny and original fourth-wall-breaking meta quirk.

Except it was neither novel nor well done. There are literally hundreds of films, series, plays, books, songs and even video games that have done meta way before, or way better, or both, than Deadpool. Think I am exaggerating? Okay, how about:

House of Cards, Family Guy, South Park, the Simpsons, Looney Tunes, Tom & Jerry, Wanted, Trading Places, Young Frankenstein, Blazing Saddles, Scream, Scary Movie, Shriek If You Know What I Did Last Friday the Thirteenth, Not Another Teen Movie, Wolf of Wall Street, House of Cards again, the Expendables, Space Jam, House of Lies, Δύο Ξένοι, Στάβλοι της Εριέττας Ζαϊμη, S1ngles, the Dark Tower, If On A Winter's Night a Traveler, several Milan Kundera novels, Logicomix, Economix, Maus, Asterix, Lucky Luke, Comix Zone, The Devil and Daniel Webster, Monty Python & the Meaning of Life, That Mitchell and Webb Look, Monty Python & the Holy Grail, The Rocky Horror Picture Show (the theatrical version), certain performances of Don Juan, Turandot and many other plays, xkcd, Cyanide & Happiness, SMBC, You're So Vain, Amelie, Arkas, Kiss Kiss Bang Bang, M*A*S*H, Lego Batman, Love Red Nose Day, various DFW essays & short stories, Fleabag, Rick & Morty, 22 Jump Street, Harold & Maude, La Cite de la Peur, Team America, The Cannonball Run, Harold & Kumar, Airplane!... and these are just the ones I came up with by myself - more here.

Here's where I stand on meta: meta references necessarily remind the viewer/reader/listener that the work of fiction they are absorbing is just that - fiction; they therefore break the illusion, reduce empathy for the characters and ultimately detract from any substance the particular piece of art has - unless doing so is necessary to achieve whatever that piece of art is trying to achieve (for example, an investigation into the role of art). This means that serious art would not really rely on meta, as it would risk undermining itself.

Of course, meta can be used for aesthetic purposes. In some cases, meta serves to make the viewer complicit with the artist - a wink-wink, nudge-nudge, "we are in this together" gesture (a la House of Cards). In the vast majority of cases, meta is deployed for comedic effect. But I think that meta is overused for both these purposes, and especially the latter: meta, like many forms of humour, is only funny if it's incongruous and unexpected; otherwise, it's just too lazy, too easy - a cheap source of half-hearted laughs.

And this is where Deadpool fails miserably. Its meta references are unexpected only by idiots. No viewer with an IQ >110 would fail to anticipate a quip about how Reynolds is a bad actor; or one about how the film only had budget for third-tier X-Men; or a post-credits scene mocking viewers' expectation of post-credits scenes in Marvel films. Most jokes are not funny the second time, since you already know the punchline; Deadpool isn't funny the first time either, because the punchline is just so bloody obvious.

When people say "my gut says..."
The reason this drives me crazy is that people use a part of their digestive system to justify a belief they cannot defend rationally. And for some inexplicable reason, a decision-making system that is no better than flipping a coin (and that is, in fact, likely worse, given humans' numerous documented biases) is accepted even in business environments.

To be fair, there are cases where "gut instinct", idiotic nomenclature aside, is a valid heuristic. A person who operates in a repetitive environment with largely unchanging conditions can develop an intuitive understanding of that environment. In such cases, the person's subconscious can detect patterns faster than rational thought. Chess is a good example. When I was in high school, my classmates and I used to play chess against each other and against our teachers. One particular maths teacher was undefeated - none of us ever even came close to beating him. I remember we once asked him how many moves ahead he planned. He responded that he did not really plan ahead more than two moves - instead, what made him so good was that he could just "see" whether a particular position on the board was to his advantage - he had an intuitive understanding of what made for a good arrangement of his pieces.

Similarly, when I was coding at university, there were times when I noticed bugs in my programs, and would immediately know which portion of my code was the root cause, without having to go through an exhaustive debugging process. Both chess and coding are governed by fixed rules. If you do enough of either, your brain starts spotting patterns without having to work through each rule sequentially. This happens because the patterns that develop in both activities are repetitive and give immediate feedback, so your subconscious learns to spot them: "the last three times my pieces were arranged in this position, I lost the game... perhaps I should make a different move".

However, most people use "gut instinct" in totally inappropriate circumstances. I have heard senior managers make strategic decisions following their gut - but the business environment is not characterised by rigid, well-understood and unchanging rules. Quite the opposite - it is rapidly changing; your past experience in dealing with a customer like Tesco is of little relevance when dealing with Amazon; what was true in the 80s for Gen X need not be true for millennials in 2020. And even if business were governed by stable rules, senior managers would still not have enough experience and quality feedback to justify relying on instinct.

(In fact, even in well-regulated games, the intuitions players develop can prove wrong: AlphaGo beat human Go champion Lee Sedol by making moves that "human players would never think of doing"; in other words, intuition stemming from long experience can put blinkers on our creative thinking.)

At the end of the day, even in those environments where gut instinct is a thing, it's nothing more than a shortcut. A serious chess player would be able to rationally explain his instinctive preference for a position, if given time; a coder would be able to deconstruct his reflexive debugging process. So, putting things down to "gut instinct" and leaving it at that is inexcusable - nothing more than charlatanism.

Friday, 12 January 2018

Dropshipping - where capitalism's morality goes to die

Nothing tests your belief in a system more than its most egregious application.

I am a firm believer in laissez-faire capitalism for two reasons: first, no other socio-economic system has been as successful at lifting people out of poverty and increasing material wealth; second, and more important, libertarian capitalism is the only framework that does not require an authoritarian morality adjudicator deciding what is just, good or desirable: it lets everyone make their own choices.

Dropshipping makes me question my faith in it, and my instinctive dislike of most kinds of regulation. The term refers to the practice of setting up an e-commerce store (typically via a platform such as Shopify) that manufactures nothing and holds no inventory. Instead, what it does is advertise products produced by wholesalers (mostly based in China) and sold on AliExpress. When a consumer places an order on the dropshipping site, the dropshipper immediately places an order on AliExpress, providing the consumer's address. The dropshipper makes a profit by charging a higher price than AliExpress. You can watch tutorials on YouTube here and here (I recommend watching the videos at 1.5x or 2x speed, otherwise they are pretty dull). You know those annoying Facebook ads for hoodies, bracelets, fidget spinners &c you occasionally see? Odds are clicking them will direct you to dropshippers. The picture below illustrates the whole affair:



Dropshippers argue that this is not that different to what most retailers do. For example, all supermarkets sell branded and private-label products that they do not manufacture themselves, charging a mark-up vs the price at which they procure them from their suppliers. The difference, however, is that most supermarkets source their products from wholesalers who do not sell directly to consumers (DTC). The wholesalers' decision not to sell directly to consumers makes sense - it's more profitable for them to produce huge batches to sell to the Tescos and Walmarts of the world, even forgoing the mark-up these retailers charge the end consumers, than to set up costly supply chain networks to sell DTC. Though DTC is becoming cheaper and cheaper, for the time being there are often big efficiencies in the traditional model, which ultimately benefit consumers themselves.

Not so with dropshippers. Consumers could just go to AliExpress and buy the products themselves, often at huge discounts to the prices dropshippers charge (in one case, a dropshipper was charging $12.95 for rings that cost $2.85 on AliExpress). So dropshippers are not adding any value whatsoever; all they are doing is ripping off their consumers by peddling junk and tat and bombarding them with crude ads on social media platforms. And economists the world over are wondering why productivity is stalling.
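To put numbers on that ring example, here is a back-of-the-envelope sketch (Python; the prices are the ones quoted above):

```python
# Margin on the ring example: what the dropshipper pockets per unit.
ali_price = 2.85     # dropshipper's cost on AliExpress, in dollars
store_price = 12.95  # price charged to the consumer

gross_profit = store_price - ali_price       # 10.10 dollars per ring
markup_over_cost = gross_profit / ali_price  # ~3.54, i.e. ~354%

print(f"profit: ${gross_profit:.2f}, markup: {markup_over_cost:.0%}")
```

All of that margin is pure arbitrage on the consumer's not knowing (or not bothering) to check AliExpress.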

Still, if consumers are too stupid to resist the urge to buy the worthless nonsense that's marketed to them, and too lazy to bother comparing prices (something that takes less than five minutes in the age of Google), who am I to complain?

But it gets worse when you learn about the marketing tactics dropshippers employ. Rory Ganon, the guy in the first video linked above, advocates the following practices:

  • Claiming that products are "free", and consumers only have to pay for shipping. In his tutorial, he sells a bracelet that retails on AliExpress for $1.99; he charges $0 for the bracelet itself, but adds a $9.99 price tag to shipping. AliExpress's actual shipping cost? Free.
  • Adding a "limited offer" countdown to the product's page to suggest that the $0 price offer will expire in one day. This is a barefaced lie, of course.
  • Installing an app on your Shopify shop that bombards the people browsing it with pop-ups whenever other people have bought something on your site - the idea is that this gives the impression your site has a lot of traffic, and is therefore credible. However, the app can be configured to produce pop-ups for fictitious sales.

What's astonishingly, cartoonishly sleazy is that dropshippers like Rory Ganon are not only not ashamed of their conduct, but actively showcase it (characterising it as "hustling") in their tutorials - it reminds me of the scene in The Big Short where Steve Carell's character asks his associates "I don't get it, why are they [mortgage brokers] confessing [to selling mortgages to people who won't be able to repay them]?" to which they answer "they aren't confessing; they are bragging".

I do not know how big the dropshipping market is, nor what percentage of dropshippers employ the tactics described above. But given the size of e-commerce, now in the hundreds of billions of dollars per year, the popularity of apps enabling the kinds of practices described above, and the sheer number of dropshipping sites, I imagine the answer to both questions is "big". And though I would never advocate banning dropshipping (as I implied above, the role of regulation should not be to protect consumers from their own stupidity), surely regulators should intervene to prevent downright dishonest marketing. And perhaps the mainstream media should spend less time worrying about the power of legitimate companies like Amazon (which are hugely beneficial to consumers), and start investigating the self-proclaimed hustlers who are fleecing the clearly none-too-bright public.

Sunday, 17 September 2017

Deconstructing cultural divides

Talking about cultures is difficult: a culture has many facets, so where to begin analysing it? A good start is to break a culture down into the eight dimensions Erin Meyer identifies in her book The Culture Map.

Meyer (a professor at INSEAD) is a good, methodical writer, and there is much to learn from reading her book - plus, it's a very amusing read, as she recounts many anecdotes from her professional life. In this post, I want to review the eight dimensions Meyer has identified and offer my own perspective on them - especially as they relate to the workplace. These are:

Communicating: low- vs high-context cultures
The first way in which cultures may differ is how explicit their members are in their communication. Americans tend to spell everything out - to the extent that for an American, sarcasm can only be expressed by modulating one's voice (something that non-Americans perceive as very annoying at best or patronising at worst). A Briton, in contrast, delivers sarcasm totally deadpan - and may well be misunderstood by Americans.

In Meyer's terms, British culture is more high-context than American: its members rely on shared memes, behaviours and frames of reference when communicating, and they expect their interlocutors to pick up on subtle cues, so that they do not need to spell everything out. Yet, Britons themselves are very low-context in comparison to other cultures - especially Asian ones:


It is true that some cultures are more explicit than others, but I think most people overplay such differences. The most notable example of this is Malcolm Gladwell's take on Korean Air's 1997 plane crash: as he said in an interview, "the single most important variable in determining whether a plane crashes is not the plane, it's not the maintenance, it's not the weather, it's the culture the pilot comes from."

Gladwell's thesis, as outlined in his book Outliers, is that the high-context nature of Korean culture and language makes Korean pilots more likely to crash planes: Koreans, he writes, are more deferential to authority and tend to rely on suggestion and subtle cues instead of direct communication, especially when talking to superiors. As a result, a first officer or engineer will not directly challenge the captain, even if he notices something's wrong - instead, he will try to communicate his unease indirectly, by making statements such as "it's raining heavily" or "the radar is useful", meaning (according to Gladwell) "you have no visibility, do not attempt to land the plane using your eyes only" and "look at the radar, use that instead" respectively.

The problem with this thesis is that it is totally wrong, as a Korean blogger has shown. It exaggerates cultural differences between Korean and American pilots, and it downright misrepresents what actually went down in the '97 crash (for example, Gladwell suggests that if Korean pilots were forced to communicate in English, the number of crashes would be reduced; but the pilots in the '97 crash did actually use English a lot of the time). As the blogger notes, this inclination to interpret individual humans' actions based on culture is overly simplistic, distracts from fully understanding an issue, and destroys individual agency.

Consider the following real dialogue that Meyer provides as an example of cultural misunderstanding:

A: It looks like some of us are going to have to be here on Sunday to host the client visit.
B: I see.
A: Can you join us on Sunday?
B: Yes, I think so.
A: That would be a great help.
B: Yes, Sunday is an important day.
A: In what way?
B: It is my daughter's birthday.
A: How nice. I hope you all enjoy it.
B: Thank you. I appreciate your understanding.

A walked away from this conversation thinking that B would come in on Sunday; B thought that A had let him off the hook. It's true that B never explicitly stated he doesn't want to come in, and we can put this down to culture. But, in my opinion, the misunderstanding here is not due to culture, but to the fact that A mainly, and B to a lesser extent, are just bad communicators:

A: It looks like some of us are going to have to be here on Sunday to host the client visit.
B: I see.
A: Can you join us on Sunday?
B: Yes, I think so. --> Explicitly saying "yes" when you mean "no" is bad form. If B had led with "well, Sunday is an important day", fine; but he did not - there are no cues here, no subtlety, there is a direct "yes".
A: That would be a great help.
B: Yes, Sunday is an important day.
A: In what way?
B: It is my daughter's birthday.
A: How nice. I hope you all enjoy it. --> What on earth does A mean by this? He is asking B to come in and work - but "I hope you all..." implies that B will be with his family. Bad, ambiguous communication, not cultural misunderstanding!
B: Thank you. I appreciate your understanding. --> How can A fail to pick up the significance of "I appreciate your understanding"? What understanding has he shown? What does he think B is thanking him for? His obliviousness to this is due to bad listening skills, regardless of culture.

Still, it's undoubtedly a fact that some cultures are indeed more explicit than others, and that cultural misunderstandings can occur, even if people are good listeners. That's why Meyer is right in saying we should all be aware of cultural differences: people from low-context cultures should be extra vigilant so as to pick up subtle cues from people from high-context cultures; on the other hand, the latter should not try and find hidden meaning in explicit statements from people from low-context cultures.

Even so, it is unrealistic to expect a Cincinnati-based manager to understand Japanese or Korean culture well enough to pick up on cues based on native speakers' shared worldview and history. So, as Meyer suggests, multinational organisations should train all their employees in using low-context, explicit language, to avoid misunderstandings.

(That said, I can absolutely understand why people from high-context cultures might find this difficult: most Europeans I know often dismiss Americans as unsophisticated and unrefined due to their explicitness and inability to process sarcasm; imagine then how westerners must come across to the even more high-context cultures.)


(Interestingly though, exactly because high-context languages rely on shared culture, people from one high-context culture may totally fail to pick up cues even from a lower-context culture than theirs; I have a Chinese friend who finds the British way too indirect, for instance - even though her culture is supposed to be more high-context.)

Evaluating: direct vs indirect
A French person listening to an American evaluating anything wears a peculiar expression on their face - something between scorn and pity. Either the American is a liar, with their "awesome!"s and "wow!"s, and so they deserve scorn, or they really are that excited by everything - in which case they deserve the French person's pity for being fundamentally uncool and not realising that life is a meaningless abyss of pointlessness.

This is partly because the French have nihilistic philosophy in their DNA and partly because they are more explicit in their feedback than Americans, who are, however, more explicit than the Brits:


Cultures on the left side of the scale call it like it is: you do something wrong, they will let you know. Cultures on the right will find a roundabout way of giving negative feedback. For example, Americans will rarely tell you that you suck at something - they will talk about your "opportunity areas", and only after they have identified at least three "strengths". The English have a different strategy: they have developed a special vocabulary for giving negative feedback:

(As a result, the English are not perceived as uncool by the French: they may be more indirect, but they do it with more finesse than the Americans (or rather, they are perceived as uncool, but for different reasons).)

You can see the problem here: if you have never worked with English people before, you may walk out of a feedback session with your British boss thinking that he is in total agreement with your interesting ideas, and that any flaws in your work were due to his interference - when in fact, from his perspective, he just gave you a pretty severe dressing down. Misunderstandings are exacerbated by the fact that people from some cultures, such as the American, have a reputation for being very explicit in their communication, as the scale in the previous section shows; so, their interlocutors expect them to be the same way when evaluating things, and therefore fail to pick up on the more subtle feedback.

This is a difficult problem to crack. It is just as difficult to learn to interpret another person's feedback as it is for a person to learn to change the way they deliver it. In addition, as Meyer notes, it is easy to accidentally go too far: an American who reads all this may well decide to give being more direct a try, but end up coming across as rude to a French person.

According to Meyer, the solution to preventing misunderstandings here is common sense: remember not to take it for granted that your interlocutor has understood your feedback, and do not try too hard to adapt to the local style of giving feedback unless you understand it perfectly, because it is easy to overdo it.

My own observation is that style is one thing, substance another: the former doesn't matter if the very criticism you are offering is not helpful. For example, one of my managers once justly criticised me for handling a situation badly. I listened, and learnt from this, and (I believe I) improved in that area. Three months later, my manager gave me the exact same feedback - by referencing the mistake I made the first time. I thought that this was unfair: if I had not made progress, she should have referenced a new situation in which I showed the same failing. If there were no new mistakes in that area, why was I receiving the same feedback? No matter how subtly or directly she had given me the feedback, I would not have taken it well, given that it was not well thought out.

Persuading: concept- vs applications-first
In my first week at P&G, my boss asked me to do an analysis and present my results to the finance director. Fresh out of university, I applied all the fancy analysis methods I had been taught, and wrote a five-page document outlining my findings, carefully describing my methodology and the caveats to my work. I sent this to my manager, who came to my desk ten minutes later, gave the document back to me, and said "this is all wrong. No-one is going to read five pages. Fit it all in one". I was flabbergasted - there was no way I could fit all my findings, along with an explanation of how I got to them, in one page; and if I left out the latter, why would anyone trust my analysis?

Meyer calls what I tried to do "concept-first persuading", and what my boss was asking me to do "applications-first persuading". It is very important to know how different cultures rank on this scale, because getting it wrong will render your arguments totally useless.

As the table shows, a concept-first audience demands that a person making an argument explains his approach and methodology before presenting conclusions and recommendations. For example, if you were to give a presentation to German managers, you should begin by explaining how you did your analysis; once you have convinced them that your approach is sound, you can present your results. Having accepted your methodology, they are more likely to approve of your conclusions. In contrast, a group of American managers will soon get impatient and accuse you of philosophising if you waste their time with a lecture on your approach.

For me, this section was the most illuminating one in the book - I had not realised there are cultural preferences in this area. As a result, I have got this wrong both ways in the past. There have been occasions when I gave an applications-first presentation to a concept-first audience: I dove straight into my recommendations, only to be cut short minutes after I started speaking with questions of "how did you get that?" and "did you also consider x/y/z in your analysis?" - which totally derailed my planned presentation. There have also been times when I started talking about how I approached a particular analysis, only to be interrupted with "why are we wasting time talking about how you modeled x/y/z? Cut to the chase". I might have been able to avoid such issues, had I known about this cultural divide.

Of course, what complicates things is that many of us work in multi-cultural environments - our audiences may well include both Americans and Russians... what do we do then? Meyer doesn't offer much advice here. My own approach so far has been structuring my presentations in such a way that I can easily change track if the meeting starts getting derailed - e.g. by having an appendix with my methodology at hand, or a section with bullet-point recommendations to which I can easily skip if needed. In the future, I think I might also try adapting my presentations to the culture of my audience's majority (or to the culture of the key decision maker), and see how that goes.

Meyer notes that the scale above does not show Asian cultures. This is because Asians, according to her, take an altogether different approach to persuading, which she calls "holistic thinking". She describes this as a pattern whereby people talk about peripheral information, which they slowly synthesise into one big picture. She cites some interesting studies corroborating her thesis - for example, when American and Japanese subjects were asked to describe pictures or videos of aquatic life, the Americans started by talking about the fish they spotted, whereas the Japanese started by describing the background. Similarly, when asked to take pictures of individuals, Americans took close-up portraits, whereas the Japanese zoomed out to take full-body pictures of the subject in her environment.

I am not 100% clear on how this is different to a concept-first preference - after all, looking at the big picture is basically taking a particularly broad theoretical approach to things. According to Meyer, though, it's worth knowing about, because it has implications for managing people from such cultures. She reports cases of managers used to western cultures, where they'd allocate specific tasks to individuals in their teams and expect them to be accomplished. But in holistic cultures, employees want to know how their work fits in the bigger picture, so to motivate them, managers should explain how each person's work is relevant in the bigger scheme.

I must say that my personal experience does not really support this. What I have seen is that in every culture, good employees understand (and want to know) how their work fits in the big picture, and poor ones focus on their little silo, without really caring how their work affects that of others. For example, one of the things I have worked on in the past is minimising the cost of our products. What I noticed is that many of our chemists and engineers were brilliant at finding technical solutions to technical problems, but not very good at understanding how their work affected consumers. Suppose you asked them for options to reduce the cost of promotional SKUs. They could easily give a list of different materials you could use, and explain how using a different material would lower the cost, but it would not occur to them to calculate the total cost of promotional SKUs and ask marketing whether these SKUs are really needed - for instance, do we really need to physically bundle two products together, or can we run a buy-one-get-one-free promotion?

Perhaps some cultures really are more inclined to see the big picture; but I really think that this is more a function of an individual's intelligence, ability to synthesise information, and perhaps most importantly, curiosity, than of a person's cultural background.

Leading: egalitarian vs hierarchical
This dimension refers to how hierarchical a culture is. I don't think anyone will be surprised to hear that the Nordic countries, with their utopian socialist regimes, are the least hierarchical cultures in the world, whereas countries like Saudi Arabia, China and Japan find themselves at the opposite extreme:


In egalitarian cultures, it's okay for subordinates to openly disagree with their managers, to take initiative without approval or to e-mail people far higher in the management chain; in contrast, in hierarchical cultures, subordinates are more likely to defer to their managers' opinions, and one simply does not message someone two levels above them directly.

Per Meyer, egalitarian managers leading a team from a hierarchical society may run into big problems in this dimension: they may think that their reportees lack initiative or confidence, because they will not generally do things on their own and will not speak up in meetings; on the flip side, egalitarian managers may themselves be perceived as incompetent and incapable of setting direction by their reportees.

Meyer has a few suggestions for leading teams from hierarchical cultures: a) asking your team members to meet without you to brainstorm, and to share the team's ideas with you afterwards - removing yourself from the meeting will make team members more comfortable voicing their views; b) telling your subordinates in advance that you will ask for their input in a meeting, so that they have the right expectations and time to prepare; and c) when chairing a meeting, inviting people to share their views instead of expecting them to jump in.

In addition, Meyer says that symbolism may matter more in hierarchical structures - for instance, she recounts the story of a senior manager working in China who found out his reportees felt slighted because he biked to work: at that time, it was considered low-class to cycle instead of driving or even taking public transport, and the person's subordinates felt that their manager was not signalling his high status, which in turn reflected badly on his team.

My own experience corroborates the scale above, but, perhaps because P&G only hires at the entry level, promotes from within, and has quite multicultural offices, thus creating a very strong and fairly uniform culture, I have not witnessed dramatic differences when working with colleagues from different countries. Sure, Chinese managers are more likely to be deferential to their superiors, but it's not like they will not speak up at meetings - and my subordinates do not seem to mind my taking the underground instead of hiring a driver.

Deciding: consensual vs top-down decision making
The fact that a particular culture is hierarchical does not mean that all decisions are made by the boss. In fact, a culture may be very hierarchical in the ways described above - it may vest its leaders with status, expect subordinates to follow the chain of communication and to avoid challenging their seniors, &c - but have a consensual decision-making mechanism.

Meyer recounts the story of a merger between an American and a German company, which quickly ran into difficulties. Amusingly, each side accused the other of being overly hierarchical. An American remembers being told off by the Germans for scheduling lunch with someone beneath them in the hierarchy - thus violating protocol; the Germans complained that though Americans "pretend" to be egalitarian, what with their open-door policies and first-name basis, they made decisions in a far more dictatorial manner: a manager would often make a unilateral decision, and expect his subordinates to follow his lead. Germans, in contrast, would make decisions by consensus. This had further implications: because Germans would spend a lot of time conferring before making a decision, once a decision was made, they would stick to it. Americans would make a snap decision, and expect to change course as new information came in.

And yet, Germans are not that far from Americans in their decision making style:

How does Japan, a country considered to have one of the most hierarchical cultures in the world, also have the most consensual decision-making style? Apparently, the Japanese operate on what is called the ringi system: low-level managers discuss an idea among themselves, reach a consensus, and present it to their 1-ups; the 1-ups then have a discussion among themselves, and once the proposal has everyone's stamp of approval, it is sent to the people further up the chain, and so on until it reaches the ultimate decision maker. By that time, everyone in the hierarchy is aligned with the proposal (though I have no idea what happens if a group of more senior managers disagrees with their subordinates' recommendation).

My own experience corroborates this scale. P&G has a standardised system for making decisions, but even within such a system, you can see that managers from different cultures have different preferences. One of my German superiors perfected the system of management by walking around: he would take walks around the office, listen to what his subordinates were working on, and make suggestions for what course of action to take. Even when he dictated a decision, he would invest time in explaining his reasoning to get his subordinates' buy-in. In contrast, an American manager I had would also solicit input from his team members, but was more likely to unilaterally decide a course of action - even if he did not have his subordinates' full agreement (for example, on one occasion, he asked me to conduct a very large piece of analysis, which I felt was unnecessary. I explained why I thought this work was not needed, but he told me to do it anyway. To be fair to him, he turned out to be right - that work did yield important insights).

You can immediately see what people at each end of the spectrum think of working with colleagues from the opposite end: people from top-down cultures find consensual decision making to be too slow, bureaucratic and inflexible; consensual decision makers find top-down cultures to be too dictatorial and indecisive, as decisions are frequently revised (not having undergone lengthy examination from the beginning).

My own view on this and the preceding section is that it is not enough for managers to learn and understand different cultures: true leaders must also be able to shape the culture of their own organisation. When it comes to decision-making, neither extreme is ideal. If you have a culture that insists on a rigid hierarchy where decisions are always made at the top, you may miss out on valuable input from the people further down the chain. Moreover, senior managers who do not interact with those at the bottom of the pyramid risk losing touch with the business environment - consider the example of John Lasseter's clash with Disney's Nine Old Men: Lasseter was fired from Disney for pushing for computer animation; he joined Pixar, which Disney ended up acquiring for $7.4 billion (at which point Lasseter was appointed Chief Creative Officer for both Pixar and Disney Animation).

On the other hand, an overly egalitarian, consensual culture where everyone's opinion has the same weight regardless of experience or expertise is likely to be very slow and ineffective. For example, a poll Meyer cites found that fewer than 10% of Swedes believed a manager should match his subordinates' technical competence. But it can quickly get extremely frustrating trying to explain a thorny technical issue to a superior who just doesn't have the necessary knowledge to understand it (ask Gary Cohn).

Beyond this, I think each kind of culture is better suited to particular kinds of problems. Companies operating in industries where speed of innovation is critical, and failure is not catastrophic, require a relatively flat culture, so that everyone can contribute ideas, but with a top-down, flexible decision-making style, so that decisions can be taken quickly and revised frequently. Companies working on capital-intensive, long-term projects (say, building nuclear reactors) would benefit from a hierarchical culture with an experienced leader at the top, and a consensual, slow decision-making style that ensures all relevant facts are considered before taking action. You do not want to start building a factory only to realise you have laid the foundations over a major fault line.

So, I think leaders must be aware of different cultures - not so that they can adapt to them, but so that they know what they need to do to align them with their organisation's mission.

Trust: cognitive vs affective
Meyer posits there are two types of trust: a Swiss person builds trust by being open, transparent and detailed - technically competent; a Chinese person, in contrast, builds trust through personal connection - by building Guanxi. In Meyer's (somewhat trite, sorry to say) words, you can build trust from the head and you can build trust from the heart.

The more scientific terms for these two types of trust are cognitive and affective trust respectively. The former refers to the trust you have in a person thanks to their accomplishments and skills; the latter is the trust you have in people to whom you are close.

A Harvard Business School survey cited by Meyer highlights a significant difference between American and Chinese managers: the Americans separate cognitive and affective trust; the Chinese connect the two. Meyer brings up an anecdote illustrating this difference: she interviewed a Chinese manager, Ren, working in America, who once formed a friendship with an American he met at a gym. By happenstance, this American was a potential client for Ren's company; Ren was surprised to find that, in spite of their personal friendship, the American wanted to look into the details of a proposed contract, and negotiate a price as though they were strangers.

Another way to frame this divide is as task-based vs relationship-based trust. Task-based cultures separate cognitive from affective trust, whereas relationship-based cultures have more blurred boundaries between the two:


Of course, Meyer accepts that Americans too form relationships with colleagues or business partners, but according to her these tend to be more ephemeral and often exist only to serve a business purpose. The fact you have skied or hit the links with someone does not mean they will not launch a hostile takeover bid for your company if they get the chance. In addition, Meyer stresses that one should not mistake friendliness for relationship-building, nor initial coldness for aversion to forming a bond. She points out that Americans are very smiley, friendly and likely to get into personal discussions with virtual strangers - but this does not necessarily signal a willingness to form a long-lasting bond.

I have three comments on this. First, I am not so sure that relationships count for as little in America, and for as much in, say, China, as Meyer suggests. For instance, she mentions that one of the ramifications of this cultural divide is that firing a salesman in China may be very risky, as they may take all their clients with them. Yet is it not also the case that when private bankers in Switzerland change employers, they take a lot of their customers with them? Is it not true that academics frequently move from institution to institution as a group? Don't we have the whole "old boys' club" thing going on in places like the UK?

On the other hand, the business environment in countries like China is changing rapidly. It is still the case that people may refuse to do business with you if they do not know you; a friend was telling me how her boss is reluctant to hire people he does not personally know and trust, regardless of qualifications. But if you work in a multinational company, like I do, it's not like your colleagues will ignore you or be difficult until they get to know you. And cognitive trust does play a role in China - the first plant manager I worked with told me on my first day "you have a huge advantage: you are foreign, and have gone to a good university - people will trust you. Use that". In general, actually, university brand names count for far more in China than in the UK - what is this if not a sign that qualifications that signal capability matter?

Second, I think that, regardless of what kind of culture you find yourself in, it pays to build personal relationships with your colleagues and customers. Go out with your colleagues, play sports with them, go for lunch - it can't hurt.

(As I've mentioned before, one of the biggest culture shocks I've faced in my career was when I moved from Geneva to London: in Geneva, P&Gers would go for hour-long lunches, complete with espressos on the company's terrace. In London, my colleagues would go for quick, 20-minute lunches - and they preferred to go in large groups, consisting mainly of their immediate, current-team colleagues, whereas in Geneva, people would have 1-1 lunches with friends from other teams or business units.) I found this lack of a decent lunch culture appalling - how could you get to know people if you never took the time to talk with them 1-1? (The answer, as I found out, was to go on big nights out together and get hammered - which is both physically and psychologically unhealthy in my view. In fact, this approach to socialising is characteristic of the British psyche: you cannot risk opening yourself to another person unless you are drunk, in which case you can blame anything you say or do on alcohol.)

(I do not mean to boast (okay, maybe I do, a little), but building affective trust is particularly easy for me in China, because the Chinese love playing card and dice games, on which I am also very keen. Their card games (like much else in China) are very similar to games we have in the west (e.g. whist), except with a bunch of incredibly convoluted rules added on top. Their favourite dice game has the same rules as Perudo/Liar's dice, except that here it's a drinking game: in the west, when a player loses a round, they have to lose a die; here, they have to take a shot instead.)

Third, I think this is one of those cases where there is a right and a wrong culture. I come from a relationship-based culture, and I have worked in a company where people build very close relationships (I know plenty of people who met their spouse at P&G, and plenty more who met their best friends in the company - I for one moved in with my ex-boss and had my bachelor party organised by two of my former managers). And though such cultures feel much better than cold, task-based environments, they do come with risks. A relationship-based culture, where affective trust fosters cognitive trust, is more likely to lead to corruption. You appoint your friend to a managerial position, not because they are capable, but because you "trust" them; people like Ren from earlier expect their friends to give them contracts without due diligence; and people will not do business with you until they get to know you - hardly the most efficient or meritocratic way of doing things. So, as in the previous section, leaders should be aware of the local culture they find themselves in, but if it sits too far toward the affective-trust end of the spectrum, they should take steps to make it more meritocratic.

Disagreeing: confrontational vs conciliatory
If I am ever asked in an interview how well I handle working with people from different cultures, I will produce this chart:


As you can see, Greece is the polar opposite of China: Greeks are confrontational and emotional, whereas the Chinese are reserved and value harmony - so, if I've managed to survive in a Chinese environment, I'd probably do okay everywhere. (Note that the UK is bang in the middle of the two cultures - so, having spent years in England (and being married to an English woman), I found the transition from Greece to China somewhat smoother than it might have been.)

The key difference between confrontational and non-confrontational cultures is that in the former, disagreements are seen as a good thing, and do not affect people's personal relationships; in the latter, attacking a person's argument may be seen as attacking the person, and so debates are seen as inappropriate. People from the latter kind of culture are often shocked when they see people from a confrontational culture interact: at university, one of my closest (Greek) friends (and housemate) and I would argue 80% of the time (the remaining 20% was dedicated to South Park and so-called burger movies); that, in conjunction with the fact that the Greek language sounds very harsh to people who don't speak it, would cause our English friends to ask each of us in hushed voices "are things alright between you two? You were having such a row!". We were just as perplexed that people kept interpreting what seemed to us like normal interaction as vicious fighting.

(Actually, thinking back, Greece fully justifies its position at the extreme corner of the chart above: the range of subjects on which we'd have passionate debates was absurdly wide - from the classic uni-student "capitalism vs socialism" debate to what you should do if your car is running out of gas in the middle of nowhere - drive faster or slower? Astonishingly, this latter debate was, for some inexplicable reason, the most acrimonious one we've ever had as far as I can remember: three of us were walking home, and with the debate reaching a crescendo of irrationally high temper, my friend walked away from the remaining two of us. My second friend told me, "come on, Aris, talk to him, make up". The best I could come up with was to shout after him, "hey, look, tomorrow you'll be talking to us again anyway, so you may as well come back and start now". It didn't work. (To be fair, that was after a night out, and we'd had our fair share of drinks.))

Meyer suggests a few strategies for managing teams in multicultural environments. First, as in the case of leading, she suggests that senior managers remove themselves from meetings, because their seniority may disincline people from disagreeing openly. In some cultures, even asking for someone's opinion may come across as pointed and confrontational, so it's often better to ask your subordinates to meet without you to discuss a problem, and then report their findings to you.

A second tip is to solicit anonymous feedback. In the US, brainstorming meetings are commonplace: a group of managers get together, toss around ideas and critique each other's suggestions; in other cultures though, people may be unwilling to share half-baked ideas in front of their colleagues. In such cases, you can ask people to write down ideas anonymously.

Another idea is to have pre-meetings. A quick check here: what makes a meeting successful in your eyes?
a) A decision is made;
b) various viewpoints are discussed and debated;
c) a formal stamp is put on a decision that has already been made before the meeting.

Most Americans choose (a); most French choose (b); and most Chinese choose (c). In cultures like China's, it's helpful to have informal, 1-1 meetings with your colleagues to get everyone on the same page before the actual meeting.

Finally, Meyer says you should adjust your language depending on the culture in which you find yourself: avoid intensifiers such as "totally" and "completely", and soften your message with "maybe", "perhaps" &c.

I think this advice is good inasmuch as it will steer you away from trouble, but again, in my view, a good leader should not be content with just adapting and staying out of trouble. Most people I know already think they attend waaay too many meetings, and are tired of office politics; recommending pre-meetings and pre-alignments (and in some cases, pre-pre-alignments) may help avoid confrontation, but the downside is that people start spending too much time talking instead of doing.

As in the case of deciding, I think both extremes here are bad: you do not want a culture where people come to blows over questions of mileage optimisation, but you also do not want a culture where no-one feels comfortable challenging a patently idiotic proposal. I think a good leader has a duty to do the following:

First, train their people to feel comfortable expressing their ideas and challenging each other - for example, by being upfront and clear about the fact that disagreement does not equal disrespect. It also helps to design exercises that encourage people to disagree with each other. Stereotypes would have you think that it's nigh impossible to get a Chinese manager to openly challenge a colleague, but this is not my experience. One of the operations managers I work with hosted an offsite for her organisation where managers were split into teams and asked to debate a business question. Not only did people engage and have fun, but the debate highlighted concerns with the company's strategy that might otherwise have gone unvoiced.

Second, develop a system for resolving conflict. "Agree to disagree" is not acceptable in my view: people should be encouraged to uncover their underlying assumptions, and critically evaluate them. P&G's former CEO developed such a system for making strategic choices:
a) all stakeholders write down what would have to be true for them to have confidence in each of the options identified;
b) the team then determines which of those conditions are unlikely to be true;
c) it then designs and executes tests for each condition; and
d) it goes with the option which the tests have determined to be the most likely to achieve the objective.

(Optional third: go all in, Bridgewater-style).

Scheduling: Swiss precision vs Indian flexibility
Greeks have a number of stereotypes for the British, most of them inaccurate; none more so than the idea that Brits are insanely punctual. The first time I attended an English party, I arrived at the specified time... and found myself alone with my hosts for over an hour. And at least this is inconsequential - do not get me started on English trains.

Of all the cultural divides listed in Meyer's book, I think different peoples' attitude to time is the most accurate and persistent. It is also the most obviously there-is-a-right-and-wrong one.

It is pretty obvious what this divide addresses, so I will not expand on it; let's jump straight into criticism.

First, once again, a great deal of the cultural misunderstanding stemming from different attitudes to scheduling can be resolved through clear communication instead of "cultural sensitivity". Meyer brings up an anecdote of giving a lecture in Brazil. She was originally scheduled to talk for 45 minutes, but when she met with the organiser of the event the day before she was to speak, he told her "feel free to take more time than is scheduled if you like". She asked whether this meant she could take 60 minutes instead, to which the facilitator responded "of course, take the time you need". On the day of the lecture, the facilitator reiterated that Meyer should take as much time as she needed. Meyer gave the lecture, and ended it after 65 minutes - even though people still had questions for her. The facilitator approached her and told her that her talk was great, but that it had finished too early. Meyer was baffled by this, as, in her mind, she had actually taken longer than the time allotted to her.

Okay, I get that to an American, 60 minutes means 60 minutes. But look: ignoring your host's repeated request to take as much time as you need, and your audience's demand for more of your time, is not a cultural misunderstanding, it is bad listening. After all, why couldn't Meyer have simply asked at the 60-minute mark, "do we have more time? Is it okay if we go on?" It may be that the Brazilians' flexibility with time, as opposed to Meyer's strict interpretation of allocated time slots, is a cultural issue. But the misunderstanding that arose out of it has nothing to do with culture, and everything to do with communication.

Second, though people at the opposite ends of the time scale find each other's culture stressful, the fact is that strict scheduling is simply more effective. I fully understand and sympathise with people who claim that inflexible, arbitrary deadlines and schedules are suffocating. But some things are critical, and for them you need to rely on a precise timeline. If a woman is about to give birth, and calls a driver to take her to the hospital, she cannot afford to wait just because the driver has a somewhat fuzzy and liberal interpretation of "get over here, right now"; no-one wants to miss spending Christmas with their family because planes or trains are delayed.

Bottom line: being late to meetings with other people sends a very clear signal: my time, and my priorities, are more valuable than yours. The answer to avoiding stress is not turning up to meetings late, but avoiding setting arbitrary and stressful deadlines, and cutting non-value-adding meetings.

Conclusion
Culture is complicated. A particular people may be emotional, but avoid confrontation; they may be explicit in their communication, but avoid giving direct feedback; they may be hierarchical, but despise top-down decision making. Meyer has done a very good job of breaking down cultural divides into neat categories that allow for methodical analysis.

Still though, as I've tried to show in this post, I think that discussions on culture almost inevitably fall into three pitfalls that Meyer herself does not entirely avoid, though she does at times acknowledge them:

a) a great deal of cultural misunderstanding can be avoided, not through cultural training, but by good communication;

b) we humans have more in common than anecdotes suggest; as a result, cultural differences are often exaggerated. Moreover, there is very wide variation within a culture. This suggests that cultures are not as inflexible and hard to change as books like Meyer's may imply; and

c) as I've tried to argue in many of the sections above, the relativist view that "there is no right or wrong culture" is wrong. I am not of course talking about moral superiority here, but about a particular culture's efficacy in achieving a given goal. Some cultures are better at fostering innovation; some are better at maintaining stability. Operating in a culture that is not conducive to achieving an organisation's goal is counter-productive.

Finally, even though Meyer's framework is excellent in comparing and contrasting different cultures, it is, at the end of the day, an over-simplified model of a culture's norms and behaviours. (To be fair to her, Meyer never claims her eight cultural dimensions perfectly encapsulate a culture's essence. Still, it's important to reinforce this.) Consider this quadrant that Meyer has drawn:


Notice that China is one of the most high-context and indirect-feedback cultures in the world. If you were to take this at face value, you might expect the Chinese to skirt around everything, always communicating using subtle cues and avoiding anything that might give offence. But what might give offence is very different in China vs in the west: the Chinese have no qualms referring to their friends as 小胖子 (xiao pangzi) or "little fatty", asking you how old you are or how much money you make, and referring to you as "the foreigner" or "the white". And though it's true the Chinese very rarely say that something is "bad", they will very frequently and directly say that something is "not good" or "not right". So, by all means, do learn how the various cultures map under Meyer's system, but remember that binary classification into eight categories does not tell the whole story.

Wednesday, 2 August 2017

Behavioural Economics: a review

Most of us have read Kahneman's Thinking, Fast and Slow or Thaler's Misbehaving or Nudge. These books all discuss the birth of behavioural economics, a discipline that marries economics with psychology, and which its adherents claim has supplanted neoclassical economics.

Yet contrary to the strong assertions made in these books, or by some of the discipline's fans, behavioural economics has not definitively dethroned traditional economics. Indeed, in spite of the discipline's popularity, it is still a small part of economics curricula. In this post, I review three main criticisms of the discipline that help explain why this is so: first, the criticisms levelled by behavioural economists against classical economics are often unfair; second, many of the experiments that gave birth to the discipline have failed replication attempts, or cannot be generalised from the lab to society at large; and third, neoclassical economics makes for a better foundation for policy.

A. Behavioural economics vs neoclassical economics
Neoclassical economics refers to the attempt to model an economy based on three principles:

a) that people have rational preferences between outcomes (this basically means that any two alternative choices can be compared to each other, and that preferences are transitive, i.e. if a person prefers apples to bananas, and bananas to pears, then he also prefers apples to pears);

b) that individuals maximise utility; and that

c) people act independently on the basis of full information.

Neoclassical economics relies on these assumptions to model the allocation of resources, market behaviour &c, often making use of game theory. This latter field, popularised by the film A Beautiful Mind, is concerned with predicting how interacting agents will behave in a particular situation. Briefly, game theory suggests that an interaction among a number of agents will result in an equilibrium: a state where no agent has an incentive to change their behaviour.

The classic game theory example is the prisoner's dilemma: two criminals are arrested, placed in separate cells, and offered a bargain: each prisoner can testify that their partner committed the crime, or they can stay silent. If both prisoners betray each other, they both get two years in prison; if one prisoner betrays his partner, but his partner stays silent, the snitch goes free but their loyal partner gets three years; and if both stay silent, they both get one year in prison (due to some lesser charge the prosecutor can concoct).

This scenario can be visualised in the following table, where each cell shows A's sentence first and B's second (in years):

                     B stays silent    B betrays
    A stays silent   1, 1              3, 0
    A betrays        0, 3              2, 2

According to game theory, both prisoners betraying each other is the game's only Nash equilibrium: in any other cell, one or both of the prisoners has an incentive to change their strategy, whereas in the bottom-right cell, a prisoner will only be worse off if they change. So what this game tells us is that even though mutual cooperation would leave both players better off, rational decision making leads to mutual betrayal.
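If you prefer to see this computationally, here is a minimal sketch of my own (not from any of the books discussed) that brute-forces the equilibrium check just described: for each cell, it asks whether either prisoner could shorten their own sentence by unilaterally switching strategy.

```python
# A brute-force Nash equilibrium check for the prisoner's dilemma above.
# Payoffs are years in prison, so each prisoner prefers *lower* numbers.

SILENT, BETRAY = 0, 1
STRATEGIES = (SILENT, BETRAY)
NAMES = {SILENT: "stays silent", BETRAY: "betrays"}

# years[(a, b)] = (A's sentence, B's sentence) when A plays a and B plays b
years = {
    (SILENT, SILENT): (1, 1),
    (SILENT, BETRAY): (3, 0),
    (BETRAY, SILENT): (0, 3),
    (BETRAY, BETRAY): (2, 2),
}

def is_nash(a, b):
    """True if neither prisoner can shorten their own sentence
    by unilaterally switching strategy."""
    a_better = any(years[(a2, b)][0] < years[(a, b)][0] for a2 in STRATEGIES)
    b_better = any(years[(a, b2)][1] < years[(a, b)][1] for b2 in STRATEGIES)
    return not (a_better or b_better)

for a in STRATEGIES:
    for b in STRATEGIES:
        if is_nash(a, b):
            print(f"Nash equilibrium: A {NAMES[a]}, B {NAMES[b]}")

# Output: Nash equilibrium: A betrays, B betrays
```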

Behavioural economics challenges the three hypotheses that underpin neoclassical economics. The discipline suggests that not only are people irrational, but they are predictably so, to the point that the same approaches used by neoclassical economics (such as game theory) would lead to different conclusions, were the predictably irrational behaviour of humans taken into account.

There are two responses to this challenge. The first is that behavioural economics does not so much supplant neoclassical economics as augment it. Prospect theory, one of the discipline's foundations, proposed by Amos Tversky and Daniel Kahneman, only slightly modifies utility theory: according to it, people make choices between alternatives based on potential gains and losses relative to a reference point, not end-states; it also suggests that people use heuristics to make decisions. But at its core, it's not all that different from classical economics.
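To make this concrete, here is a sketch of prospect theory's value function, using the parameter estimates Tversky and Kahneman published in 1992 (the functional form and numbers are theirs; the code is my own illustration):

```python
# Prospect theory's value function, with Tversky & Kahneman's 1992
# median parameter estimates. Outcomes are gains/losses relative to a
# reference point, and losses loom larger than gains (lambda > 1).

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

print(round(value(100), 1), round(value(-100), 1))
# Output: 57.5 -129.5 - losing $100 hurts over twice as much
# as winning $100 feels good
```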

The second is that neoclassical theory is actually pretty good at predicting behaviour; the experimental results from behavioural economics that seem to suggest otherwise misunderstand neoclassical theory. There is a good paper on this by David Levine and Jie Zheng. This paper uses the Ultimatum game as an example: this is a game that many behavioural economics proponents claim undermines neoclassical economics.

In the Ultimatum game, player A is given $10, and can then suggest a division of this money between himself and player B. Player B can then accept A's suggestion, or reject it, in which case neither player gets any money. In various lab experiments, it has been observed that few people, if any, offer less than $2 to player B, with most people offering $5; and, when player A makes an "unfair" offer, player B often rejects it. Some behavioural economists consider this an excellent refutation of neoclassical economics: surely, traditional neoclassical theory, with its selfish, buck-maximising agents, would predict minimal offers from player A, which would always be accepted by B.

(This way of reasoning is called sub-game perfection: the idea is that you break the game into two stages, and reason backwards: player A thinks, as long as I offer anything to player B, he is better off accepting rather than rejecting my offer; therefore, I can offer anything, no matter how little, and still have him accept it.)
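Here is what that backward-induction logic looks like in code - a sketch of my own, under the strictly money-maximising assumption the argument relies on:

```python
# Backward induction on the $10 ultimatum game, in whole dollars.
# Stage 2: a money-maximising B accepts any offer that beats rejecting ($0).
# Stage 1: knowing this, A offers the smallest amount B will accept.

POT = 10

def b_accepts(offer):
    return offer > 0  # anything beats getting nothing

best_offer = min(offer for offer in range(POT + 1) if b_accepts(offer))
print(f"A offers ${best_offer} and keeps ${POT - best_offer}")
# Output: A offers $1 and keeps $9
```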

However, neoclassical economics does not have selfishness or lack of altruism as a fundamental axiom; in fact, Adam Smith explicitly stated that people's utility functions most likely have a moral dimension to them. More importantly though, game theory says that, perhaps counter-intuitively, the Ultimatum game has many Nash equilibria. As Levine and Zheng write, the right way of thinking about the problem is to check whether people's losses (as a result of their strategy) are small relative to what they could have gained, had they played optimally.

To do this, one would have to look at how much money a player who had past experimental data could have made and compare it to how much they actually made. Using this approach, it is found that players in the Ultimatum game lose about $1. Furthermore, only 1/3 of this $1 represents known losses, i.e. money that the players know they will lose (clearly, only player B has known losses in this game, when he rejects A's offer, knowing he is choosing to forego money). The remaining 2/3 are basically due to players who assume the role of A not having had enough experience to judge what kind of offers are typically rejected.

In summary then, many argue that behavioural economics is nothing but tinkering with the neoclassical model; any claims that it's a fundamentally new paradigm show a misunderstanding of neoclassical theory.

B. Humans: not that irrational or uniform
Behavioural economists, drawing on work from psychology, make some pretty astonishing claims: if you "prime" people by having them read words that remind them of old people, they will subsequently walk slower; if you give them more products to choose from, they are less likely to make a purchase; if you make exam questions harder to read, they will perform better. Some of these have been as influential as they are hard to believe - for example, consumer goods companies have reduced the number of products they sell to reduce "choice overload", and leaders such as Obama and Zuckerberg have simple wardrobes on purpose to avoid ego depletion. It turns out, however, that some of these effects are not as robust as pop books would have us think.

In this section, I will discuss some experiments that behavioural economists use as examples of human irrationality; but first, there is another matter to be addressed. A great deal of the criticism levelled against neoclassical economics is based on lab experiments that purport to show people are far more altruistic, selfless or irrational than standard theory predicts. However, many of these results cannot be generalised to society at large; furthermore, human behaviour varies significantly across the world, and we should be wary of drawing conclusions about humanity from lab experiments performed at Ivy League colleges.

Steven Levitt and John List expand on what lab experiments say about the real world in this paper. They start by suggesting that people's utility function takes the form

U(action, stakes, norms, scrutiny) = Morality(action, stakes, norms, scrutiny) + Wealth(action, stakes)

In other words, the utility (how happy a person will be as a result of taking an action) depends on the moral cost of this action, as well as on its effect on the person's wealth. Whereas the effect on wealth depends on the action and the stakes involved, the moral cost also depends on social norms and the scrutiny of an individual's action. Levitt and List argue that behaviour in the lab is not a reliable predictor of behaviour in society because scrutiny in the lab is far higher than in real life and the stakes are often lower.
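To illustrate how scrutiny can flip the calculus for the same person facing the same stakes, here is a toy version of the idea. The decomposition (utility = wealth effect + moral cost) is Levitt and List's; the functional forms and every number below are my own hypothetical choices, not theirs:

```python
# A toy version of the Levitt-List utility function. The decomposition is
# theirs; the forms and numbers are hypothetical, chosen only to illustrate.

def utility(selfish, stakes, norms, scrutiny):
    wealth = stakes if selfish else 0.0  # the selfish action captures the stakes
    moral_cost = stakes * norms * scrutiny if selfish else 0.0
    return wealth - moral_cost

# Same person, same stakes, same norms; only the scrutiny differs:
print(utility(True, stakes=10, norms=0.8, scrutiny=2.0))  # lab:   -6.0 (doesn't pay)
print(utility(True, stakes=10, norms=0.8, scrutiny=0.2))  # field:  8.4 (pays)
```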


This is not just a hypothesis, but an observed fact. List ran an experiment in which sellers could choose the quality of the products to offer to buyers in response to the buyers' bids. He used experienced sports-card traders as subjects, and found that in the lab, they exhibited strong social preferences: when buyers offered high prices, sellers responded by offering high-quality cards, even though they were not obligated to do so. But he then ran a field test on these same traders, sending confederates to pose as buyers at sports-card shows. It turns out that outside the lab, there was little relationship between the price offered and quality. Similarly, other experiments have found that people are more likely to behave selfishly if their anonymity is guaranteed.

Also, here's an interesting factoid from Levitt and List's paper: in another experiment, List and a collaborator examined whether professionals behave the same way as students in trust games. It turns out that CEOs in Costa Rica are considerably more trusting and trustworthy than students. Maybe the people who become CEOs in Costa Rica are particularly nice; maybe CEOs care more about their reputation and so behave extra-trustingly. Either way, this shows that it's hard to generalise from experiments run on students.

Which leads me to the WEIRDest people in the world - members of Western, Educated, Industrialised, Rich and Democratic societies. The authors of this paper make the same argument as the previous paragraph: behavioural and cognitive studies tend to generalise their experimental results to the entire human species, when their effects are local. They back this claim with a number of case studies.

Consider, for example, the Müller-Lyer (aka "the two lines") illusion. Which of the two lines below is longer?

You can probably guess the answer, even if you haven't read any books on pop psychology: the two lines have the same length. If you have read pop psychology (or a BuzzFeed article on 27 Illusions that will BLOW your mind (you won't believe number 4!)), you have probably read something like "viewers invariably perceive line b as being longer". But there is nothing invariable about this phenomenon:


The chart above shows by how much line a must be lengthened before subjects perceive the two lines as being of equal length, by country. As you can see, in some societies, viewers can tell the two lines are the same length with hardly any manipulation; children and adults also respond quite differently to the illusion.

Whether a society is industrialised or not also affects its members' behaviour in the Ultimatum game. I mentioned earlier that most people who play the Ultimatum game in a lab setting offer about 50% of their wealth; but this is only the behaviour of American adult subjects; in fact, Americans seem to be far more generous than other societies...

% of wealth offered in Ultimatum Game, by country
... and more willing to reject an offer they deem unfair:
Income-maximising offer, by country

(The second chart shows the % the proposer should offer, to maximise their wealth on average. In the US, the optimal strategy for a proposer is to offer 50% of his wealth, otherwise he runs the risk of the receiver rejecting the offer; in other countries, receivers are content with 10% instead.)

Even more shockingly, experiments run in Russia, China, Sweden, the Netherlands and Germany show that some subjects even reject so-called hyper-fair offers (>60% of the proposer's wealth). I mean... you can kind of understand this behaviour in communist countries like Russia or China, or in socialists' poster-boy Sweden, but Germany??

And for my favourite example of different behaviour across countries, consider Herrmann & al's paper on anti-social punishment. This paper focuses on a so-called public goods game, played by four players over ten rounds. Players are given 20 tokens each, and in each round, they need to decide how many of their tokens to contribute to a common pool. The tokens in this common pool are then increased by 40%, and divided equally among all four players, regardless of whether they contributed or not. So, as in many real-life situations, players are collectively better off if they all contribute, but each one has an incentive to free-ride on the other players' contributions. For example, if all four players contribute 10 tokens, each gets back 14 from the pool (= 4 x 10 x 1.4 / 4) on top of the 10 tokens they kept, ending the round with 24; but if one player contributes nothing, he keeps all 20 of his tokens and still receives 10.5 (= 3 x 10 x 1.4 / 4) from the other players, thus ending up with 30.5. Herrmann & al ran this experiment in a number of different countries, using university undergraduates as subjects.
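Here is the payoff arithmetic as a small sketch (mine, not from Herrmann & al's paper), which you can play with to see why free-riding pays:

```python
# One round of the public goods game described above: contributions are
# pooled, the pool is increased by 40%, and split equally among players.

ENDOWMENT = 20
MULTIPLIER = 1.4

def payoffs(contributions):
    share = sum(contributions) * MULTIPLIER / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([10, 10, 10, 10]))  # all cooperate:  [24.0, 24.0, 24.0, 24.0]
print(payoffs([0, 10, 10, 10]))   # one free-rider: [30.5, 20.5, 20.5, 20.5]
```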

There are a few interesting results from this experiment. First, the level of cooperation, as measured by the average contribution by each player, varied significantly across different countries. Second, as a pessimist (or a classical economist) would expect, cooperation quickly declined as the game progressed (and people realised others started free-riding):

But that's not the best part yet. The researchers also ran the same experiment introducing the ability to punish other players. After learning other players' contribution choices, each player could assign every other player between one and ten deduction points. Each deduction point would reduce the punished player's tokens by three, but would cost the punisher one token.
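Extending the same sketch with the punishment stage (again, my own illustration of the rules just described):

```python
# Adding the punishment stage to the sketch above: each deduction point
# costs the punisher 1 token and the punished player 3 tokens.

def punish(payoffs, deductions):
    """deductions[i][j] = points player i assigns to player j."""
    out = list(payoffs)
    for i, row in enumerate(deductions):
        for j, points in enumerate(row):
            out[i] -= points      # cost to the punisher
            out[j] -= 3 * points  # cost to the punished
    return out

# The three cooperators each give the free-rider two deduction points:
print(punish([30.5, 20.5, 20.5, 20.5],
             [[0, 0, 0, 0],
              [2, 0, 0, 0],
              [2, 0, 0, 0],
              [2, 0, 0, 0]]))
# Output: [12.5, 18.5, 18.5, 18.5] - free-riding no longer pays
```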

In this variant of the game, the cooperation level increased, or at least remained stable in most countries:

But this is still not the best part. If you were playing this game, whom would you punish? Odds are, you would choose to punish those players who contributed less than you. That's only fair, right? Well, that's only fair if you come from an Anglo-Germanic country. It turns out people from a number of countries, most notably Oman and Greece, choose to punish overly generous players!
It's anybody's guess why anyone would punish generous players. The researchers suggest it's a form of revenge: though players cannot see who punished them, they probably assume that they were punished by the more generous ones. Indeed, it seems that this "anti-social punishment" correlates with the amount of punishment a player received in the previous round.

Needless to say, anti-social punishment has an extremely strong negative correlation with mean contribution:
(I grant that this whole section on the public goods game is only tangentially related to the core matter at hand, in that it shows how differently people behave by country, and how irresponsible it is to make universal claims re human behaviour based on American studies; the main reason I am including it here is that it confirms my long-held belief that at the core of Greece's problems lies the classic Hellenic quip - "τι είμαι εγώ, μαλάκας;/σιγά μη γίνω εγώ ο μαλάκας της υπόθεσης" ("what am I, a sucker?/no way I'm going to be the sucker in this affair").)

In short, people do not behave the same way across the world. More importantly, people do not behave the same way outside the lab. Behavioural economics is predicated on the assumption that people behave irrationally in a predictable, uniform way. Evidence seems to suggest otherwise.

Now, I realise that what makes pop economics and psychology books exciting are the factoids they offer - the trivial pieces of knowledge that we all like to repeat at parties to seem clever. The rest of this section adopts this strategy (though admittedly, too late: I suspect that the readers who have followed me this far are those who would persevere regardless of factoids): I list below a few "classic" experiments referenced by behavioural economists, and show that they are not as robust as some books make them seem.

The paradox of choice
A 2000 study by Iyengar and Lepper found that giving consumers more choice results in fewer purchases. In their experiment, they set up two tasting booths in an upscale grocery store, on different days. One of the booths had six varieties of jam displayed. The other had 24. What they found was that though more consumers stopped at the large-sample booth (60% vs 40% for the small-sample one), only 3% of consumers exposed to the large-sample booth made a purchase, vs 30% of those exposed to the small-sample booth.

They also ran two more experiments as part of the same study. In the second experiment, psychology students were given the option to write an essay for extra credit. Some students were given six topics to choose from, others 30. Not only did more students who were given six topics actually write the essay (74% vs 60%), but their essays were actually better! In the third experiment, participants were asked to choose a chocolate. Again, some participants were given a limited assortment to choose from, and some a larger one. This experiment found that people who were given a larger assortment to choose from took longer to make a choice, felt they were given too many options, did not feel any more confident that they made the right choice, and enjoyed the chocolates they chose less than those given a smaller range to choose from (though they reported enjoying the selection process more). Not only that, but when participants were asked whether they wanted to be paid in cash or in chocolates for their time, 48% of those given a small assortment chose to be paid in chocolates, vs 12% of those given a wider range.

It's hard to overstate the effect of this study - not just in academia, but also in business. I have actually heard people reference the choice paradox in meetings, to argue for reducing the number of products we offer.

Now, as I've said many times before, I totally agree that society does not really need 20 different shampoo variants within one brand. But to make a decision based on one study that you haven't read and understood is pretty irresponsible.

A meta-analysis of all studies that have looked into the choice paradox found the mean choice-overload effect to be virtually zero. Several studies tried to directly replicate the original experiments and failed - for example, Scheibehenne tried to replicate the jam study in Germany, and Greifeneder tried to replicate the chocolate study, both without any meaningful results.

Of course, many of the studies analysed by the meta-study did also find evidence of choice overload. There are a number of factors that may explain the variance in these studies - some have to do with publication bias, but some other interesting ones are:
  • Measurement choices: it seems that more choice is better when what is being measured is consumption, instead of binary buy/not buy choices.
  • Strong preferences: people with strong preferences prefer more choice.
  • Ease of comparison: if the products in an assortment are difficult to compare, e.g. by having complementary features, consumers may experience regret after making a choice, hence leading to choice overload.
  • Perception of quality distribution: people may be more likely to prefer small assortments if all products on offer are of high quality. But if average quality is low, with some products being of high quality, then a larger assortment increases the odds of being able to buy a satisfactory product.

Basically, the jury's still out on this one. It's certainly not the case that more choice invariably leads to fewer purchases though.

Priming
You must have heard of this one: subtle cues subconsciously "prime" you in a way that visibly alter your behaviour. In the original study on the matter, volunteers had to create a sentence from scrambled words. When these words related to old people, subjects walked slower when leaving the lab after the experiment.

Whereas I readily bought into all the other effects I discuss here, I must say I always viewed this one with suspicion: apparently, one of the words used to prime subjects was "Florida". This seems very strange to me. Whereas I could grant that some people may associate Florida with old people, to the extent that they then alter their behaviour, I find it crazy that subjects only associate Florida with old people. What about Disney World? Alligators? Miami? Spring break? Why would these words not prime people to walk like a princess, run for their lives, swagger about or stumble drunkenly?

It turns out my suspicion was justified: another group of scientists tried to replicate the study, with a few modifications: a) they timed subjects using infrared sensors, not stopwatches as in the original experiment; b) they used more volunteers; and c) they used experimenters who did not know what the study was about. They found zero priming effect.

But they went further: they repeated the experiment, only this time, they told the experimenters that the subjects had been primed. They told half of them (the experimenters) to expect faster walks, and half of them to expect slower walks. The subjects were found to walk slower only by those experimenters who were expecting that!

Of course, the author of the original paper responded that a) his experimenters were also blind to the study hypotheses (which is true, but the experimenters were the ones who prepared all materials, which they had plenty of time to study; and being smart people, many of them probably guessed the hypothesis); b) subjects in the replication experiment were told to "go straight down the hall when leaving", which draws attention to the process, and arguably implies speed, thus eliminating the effect (but there is no evidence they were told this - plus, if the effects of priming are so weak, what's the point of it?); c) the replication experiment used too many old-related words, which meant subjects may have noticed the connection, cancelling priming (but his own original paper said that more primes would yield stronger results) and d) the experiment would only work if subjects associated old age with infirmity, an association the replication did not test (but then, neither did the original paper).

I am not saying we are not susceptible to subliminal messages; but we would be a pretty ridiculous species indeed if we walked slower every time someone said "Florida".

System 2 Activation
Try answering the following three questions:
  1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
  2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
  3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
The answers are $0.05, 5 minutes and 47 days. Yet many people answer $0.10, 100 minutes and 24 days - not because these questions are difficult, but because it is very easy for our minds to make these mistakes when going on autopilot.
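For the record, here is the quick bit of algebra behind the first answer (the other two yield to the same kind of deliberate reasoning). Let $b$ be the price of the ball:

$$b + (b + 1.00) = 1.10 \implies 2b = 0.10 \implies b = 0.05$$

Had the ball cost $0.10, the bat would have to cost $1.10, and the pair $1.20.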

In Thinking, Fast and Slow, Kahneman talks about how humans reason using two different "systems": system 1 is quick, effortless and intuitive, whereas system 2 is slow, deliberate and analytical. Because using system 2 takes a lot of effort, we tend to rely more on system 1, the autopilot that causes many of us to answer one or more of these questions incorrectly.

But, says Kahneman: if you disrupt people's autopilot, they will switch to system 2, and perform better. One way to do this is to explicitly say, "careful, these are trick questions"; but more astonishingly, according to Kahneman, you can disrupt system 1 just by making the questions harder to read - e.g. by using a font that's harder on the eyes, or a pale colour.

Kahneman bases this claim on this paper, in which experimenters asked 40 Princeton students to take the three-question test above. Half the students took the test in a normal font, the other half in a difficult, 10% gray, italicised font. The normal-font group got 1.9 of the questions right, on average, whereas the hard-to-read-font group got 2.45.

But a number of replication attempts have failed to discover any such effect:

I think all we can take out of this series of experiments is that Ivy League students are slightly smarter than non-Ivy League ones.

Ego Depletion
Here's another effect that has had real-life impact. A study put students in a room with freshly baked cookies and radishes. Some were told they could only eat the former, some only the latter. All students were then given an unsolvable test, and the researchers measured how long the students would keep trying to solve it. It turned out that those who were allowed to eat the cookies persevered for far longer (19 mins) than those who weren't (8 mins). This was taken to show that humans have a fixed amount of willpower that can get depleted; furthermore, that willpower is like a muscle that can be trained. Hundreds of studies have been run since then, all apparently confirming this hypothesis.

And people have taken heed - including Obama and Zuckerberg, who have both claimed to opt for dull, standardised wardrobes so as to avoid wasting decision energy on useless tasks.

However, a more recent, massive attempt to reproduce the main effect outlined above, using 2,000 subjects, has found zero effect.

Cracks in the theory had appeared before. Evan Carter, a graduate student at Miami, tried to replicate a previous experiment, only to find that he could not reproduce its results. So he looked into a 2010 meta-analysis, and discovered that a) the meta-analysis had only included published studies, increasing the risk of publication bias (unexciting results don't get published all that much); and b) some studies had bizarre or contradictory measures of willpower - e.g. one study suggested that depleted subjects would be less willing to help a stranger, whereas another study said that depleted subjects would give more to charity. Re-evaluating the studies in the meta-analysis, adjusting for such errors, he also found no effect.

Again, I am not disputing that people get tired, and that if they are asked to do too many things, they will have less energy. But the original formulation of the hypothesis, and some of the lessons that people have taken from it, such as that taking an extra minute each morning to decide what tie to wear can deplete one's willpower, seem exaggerated and unfounded.

To conclude this section: I am not claiming that humans are perfectly rational. Indeed, I think Kahneman, Tversky and other economists/psychologists have done a brilliant job demonstrating many ways in which humans are irrational. I think their work on the heuristics humans use instead of reason, and on how these lead to mistakes such as overconfidence, ignoring base rates and other fallacies such as the Linda problem, is brilliant. (Some people have suggested these are all framing issues that disappear if questions are asked differently, but I find that criticism pretty weak. See here Kahneman and Tversky's response.)

But we are not as stupid, easy to manipulate, or homogeneous as behavioural economists often suggest. Nor have behavioural economists conclusively proven that their models are better at predicting human behaviour in real life. And this brings us to...

C. Behavioural economics and policy making
This will be a short section. Behavioural economics has been so influential that the US and British governments have set up whole departments to carry out policy based on the discipline's lessons. David Cameron himself referred to a behavioural economics insight: "The best way to get someone to cut their electricity bill is to show them their own spending, to show them what their neighbours are spending, and then show what an energy-conscious neighbour is spending".

But as Tim Harford (the Undercover Economist) points out, this is plain wrong. The best way to make people cut their energy consumption is to increase prices. There may be all sorts of reasons to oppose a policy (such as a tax) that aims to make energy more expensive; indeed, as someone who identifies as more or less a libertarian, I would rather keep taxes at a minimum. But this is neither here nor there: the fact remains that classical economics offers better policy solutions than behavioural economics. Standard tools such as taxes, subsidies and interest rates are way more powerful, and have far stronger impacts, than "nudges".

Again, claims of neoclassical economics' death at the hands of the 2008 crisis are greatly exaggerated (another factoid: Mark Twain never used this exact phrase - what he wrote was "the report of my death was an exaggeration"). Neoclassical economics is still taught at schools and universities, not because academics are die-hard traditionalists, but because it still has lots of valuable things to say about how the world works.