Some hazy metaphorical thoughts on improving science inspired by game theory and emotions.

I brought Robert Frank's Passions Within Reason (another tip from Jason Collins) with me on my trip to Brussels. I was reading it while waiting for the symposium to start (and waiting for Daniel Lakens to show up), and it struck me that some of what he was talking about probably fits the situation.

Let me elaborate. The whole book is about how emotions have a rationality to them (yay, as that is what I believe too), and that they are specifically useful for sustaining cooperation. They serve, in part, as honest signals.

As I was waiting, I was reading a part where he elaborates on how cooperation could evolve when cheaters can take advantage of the honest agents and drive them to extinction – well-trodden territory, of course. The population consists of honest agents and cheaters. Everybody wants to work with an honest agent, as that yields a higher payoff for honest and dishonest partners alike. Honest agents who cooperate with dishonest agents are worse off than working alone. A dishonest agent who hooks up with another cheat does as well as if working alone. The cost is borne by the honest ones.
In the model, he first equips the honest ones with an honest signal – a blush, to take his example. That way, you can find the other honest agents and avoid the dishonest ones. Don't we all wish the world were that easy.
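The payoff structure described above can be written down as a toy table. The numbers here are my own illustrative choices, not Frank's; only the orderings between them come from the text.

```python
# Illustrative payoffs (my numbers, not Frank's): what each agent earns,
# keyed by (own type, partner's type). ALONE is the payoff for working solo.
ALONE = 2
payoff = {
    ("honest", "honest"): 3,  # mutual honest cooperation pays best
    ("honest", "cheat"): 1,   # the honest partner bears the cost of being cheated
    ("cheat", "honest"): 4,   # the cheat profits from exploiting an honest partner
    ("cheat", "cheat"): 2,    # two cheats do no better than working alone
}

# The orderings the model assumes:
assert payoff[("honest", "honest")] > ALONE                      # cooperation beats solo
assert payoff[("cheat", "honest")] > payoff[("honest", "honest")]  # everyone wants an honest partner
assert payoff[("honest", "cheat")] < ALONE                       # cheated honest agents lose
assert payoff[("cheat", "cheat")] == ALONE                       # cheat pairs gain nothing
```

With these orderings, cheats can only profit at honest agents' expense, which is why honest agents need some way to find each other.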

But, as Robert Trivers shows (and Frank, and others), it pays to mimic a good signal, and cheaters who can will flourish. To a point.

But maybe the cheat signal varies in how good it is. Think about animal mimicry – pretty good, but not perfect (for example, Batesian mimicry of coral snakes by king snakes). And the honest signal also varies in how good it is. The distributions differ: only honest agents show the strongest signals, and only cheats completely lack the signal. But there is an overlap, where the signal is ambiguous.

Those with a clear signal have a valuable asset – everybody wants to cooperate with them. But they also have a vulnerability – what if they pick a cheat? Frank adds an inspection cost: someone with a high signal value (because it is really mostly in their interest) can pay some cost (in time, effort) to inspect the prospective partner's signal.

Out of this, he creates a model of iterated cooperation. One consequence of the model: the higher the proportion of honest agents in the population, the more likely an ambiguous signal is to be an honest one. That is, it is less risky to cooperate.

The signal reservation point (the intensity of blush taken as indicating honesty, for example) also goes down (less blushing needed) if there is a high payoff for successful cooperation and a low penalty for being cheated. The gains outweigh the losses, and you can take more risk (simply be more trusting). The equilibrium points shift, depending on these parameters.
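Both consequences – more honest agents in the population means a lower reservation point, and a better gain-to-loss ratio means a lower reservation point – fall out of a simple Bayesian sketch. This is not Frank's actual model; the Gaussian signal distributions, their parameters, and the decision rule are my own illustrative assumptions.

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian density, used as a toy distribution of signal strength."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_honest(signal, frac_honest, mu_honest=0.7, mu_cheat=0.3, sigma=0.15):
    """Posterior probability that a partner showing `signal` is honest.

    Honest signals cluster high, cheats' mimicked signals cluster low,
    but the two distributions overlap - the ambiguous middle zone.
    """
    f_h = norm_pdf(signal, mu_honest, sigma)
    f_c = norm_pdf(signal, mu_cheat, sigma)
    num = frac_honest * f_h
    return num / (num + (1 - frac_honest) * f_c)

def reservation_signal(frac_honest, gain, loss, steps=1000):
    """Lowest signal at which cooperating beats working alone.

    Cooperate when the expected value is positive:
        posterior * gain - (1 - posterior) * loss > 0
    where `gain` is the extra payoff from honest cooperation and
    `loss` is the penalty for being cheated.
    """
    for i in range(steps + 1):
        s = i / steps
        post = p_honest(s, frac_honest)
        if post * gain - (1 - post) * loss > 0:
            return s
    return 1.0

# More honest agents in the population -> a lower (less demanding) reservation point.
assert reservation_signal(0.8, gain=2, loss=2) < reservation_signal(0.2, gain=2, loss=2)
# High cooperation payoff and low cheating penalty -> also a lower reservation point.
assert reservation_signal(0.5, gain=4, loss=1) < reservation_signal(0.5, gain=1, loss=4)
```

The assertions encode exactly the two comparative statics from the text: trust gets cheaper as the honest fraction grows, and as the gains from cooperation swamp the losses from being cheated.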

If you squint, or have a vivid imagination with low association thresholds as I (sometimes) do, you can kind of map this onto the research endeavor. Trust is necessary. You want to believe that other researchers are honest, so you can build on their work. There will always be cheats. The goal may not be to catch them, but to minimize their payoff. Raise the payoff for honesty, lower the cost of being cheated. Make it less attractive for cheats (increase the population of the honest).

Of course, a model like that is rather simplistic (hah – but really). Real life has to deal not only with those who fake data for fame and fortune, but with those who polish their outliers a little too neatly, and prune their data-sets a bit too severely and topiary-like.

The payoffs in the system have not been geared towards good research practices. (Yeah, that is about the billionth or so time someone has stated this.) If you believe Wicherts's and John's research (and most colleagues), people in science want to do science because they want to find out something that is true. Because it is interesting. Because we are curious. Perhaps not for any – well – useful reason, but because following one's curiosity is itself rewarding, and hearing someone talk about what they have found out is rewarding. Now, what use is it to know about planets and stars and quarks? And, of course, some are also interested in doing research because they have practical ideas.

But research is discovery, and inherently uncertain. Demanding productivity in the shape of positive results and papers, preferably in high-impact journals, to keep your job (to pay your mortgage, to feed your kids and yourself) sets things up for gaming. It also sets things up for not taking the right kind of risks – pursuing something that may not pay off in any way at all, but that could potentially lead to discovery. (Let me put in a plug again for Chris Chambers's initiative at Cortex.)

Another issue is how new researchers are trained – that is, which practices they see and copy (and are told are standard). This was brought up in the panel (horror at what some young researchers thought were normal practices), it was clear in both Wicherts's and John's research, and Nosek talked about incorporating procedures into their open science framework to improve these practices without unnecessarily burdening the researchers.

Research is a craft. You learn by doing it under the supervision of a skilled senior researcher. Much of it is tacit, or passed along in informal lab discussions. There are lots of heuristics and rules of thumb. You learn what those who are successful do, and copy it as best you can. It is very Richerson & Boyd: cultural transmission. There is so much to learn, and so much to get your head around, that I actually think it is impossible not to use the heuristics and shortcuts and whatever works – like the rest of life. Plus, you are investigating new things. You simply don't know, even if you are very good at it. Otherwise, why do research?

Altering the practices has a cost in slowed productivity (and a benefit in making for a more robust science). The new practices must be reasonable (I still shudder at the utterly byzantine rule systems around ethics review in the US – which seems more like a power issue than one of making sure participants are protected; and I am very much in favor of having checks). But they should include documented and shareable data, easily shareable instruments, and good protocols covering both data collection and data processing. And you should be expected to share it all. If something is never ever checked, takes up precious time, and doesn't seem to affect near-term productivity, it will fall by the wayside.

Now, if we get the good practices in place, and are able to shift the reward structure, will it be stable? Likely not. It is all so evolutionary, and new ways of gaming the system will arise. I have been listening through Fukuyama's The Origins of Political Order, and it is a story of stability alternating with gaming and instability. I think this is also the point of Peter Turchin's cliodynamics. I think his analysis of Dune is apt, although I don't think we are planning on overthrowing an empire and starting anew.

Oh, speaking of perverse incentives: I'll just throw in this post from Steve Hsu, and the Nature commentary he linked to.


About asehelene

... because if I'm in a room with a second person, I want to be reasonably sure I'm the crazier one.
