That's a remark made by my mentor Bob Dixon when he was my thesis advisor circa 1984. It's one of those pieces of wisdom that we accumulate and carry around in our brains for the rest of our lives. It had reason to percolate to the surface as I read Why Current Publication Practices May Distort Science by N. Young et al., published in PLoS Medicine, the open-access journal of the Public Library of Science. Young and his coauthors discuss the effects of "The Winner's Curse" on how scientific results are (or are not) published.
When an item is auctioned, potential buyers make offers, or bids, on the item. Although the dollar amounts of the bids may not be normally distributed, they are certainly spread out. Some bidders bid low, some bid high, depending on how each perceives the value of the item being auctioned. What is the actual market value of the item? One definition might be the average of all the bids. But who actually gets the item being auctioned? The highest bidder. That means, by this definition of market value, the winner of the auction always pays above market value for the item. That's "The Winner's Curse".
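You can make that concrete with a few lines of Python. This is a minimal sketch, assuming each bidder's estimate is the true value plus unbiased noise; the TRUE_VALUE, BIDDERS, and noise figures are made up purely for illustration. Averaged over many simulated auctions, the winning bid lands well above the mean bid, even though no individual bidder is biased.

```python
import random
import statistics

# Illustrative only: each bidder's estimate of the item's worth is
# the true value plus unbiased Gaussian noise. The auction is won by
# the highest estimate, which on average exceeds the mean of all bids.
random.seed(1)

TRUE_VALUE = 100.0   # assumed "real" worth of the item
BIDDERS = 10         # assumed number of bidders per auction
AUCTIONS = 10_000    # number of simulated auctions

premiums = []
for _ in range(AUCTIONS):
    bids = [random.gauss(TRUE_VALUE, 15.0) for _ in range(BIDDERS)]
    market_value = statistics.mean(bids)  # "market value" per the definition above
    winning_bid = max(bids)
    premiums.append(winning_bid - market_value)

print(f"average premium paid by the winner: {statistics.mean(premiums):.2f}")
```

With these made-up numbers the winner overpays by roughly twenty dollars on average, and the premium grows with the number of bidders: more bids means a more extreme maximum.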
Of course, it's not that simple. The winner may be bidding higher because they believe they have special knowledge or capabilities that will let them leverage the item to greater value than the other bidders. Or they may be desperate. Or just foolish. (I personally have fallen into that category at one art auction, and ended up $250 poorer because of it.) But in general, auctions are good for the seller, maybe not so good for the buyer.
Young et al. apply this to scientific publications. Research with the most spectacular results tends to be what gets published. And, broadly speaking, research with positive results has a much better chance of being published than research with negative results. Researchers whose projects yield positive results win the auction for space in refereed journals.
This gives a false perception of the state of a particular area of research, since projects that yield negative results are not published and hence are not part of the "average" perceived by those who read the scientific journals. In a "publish or perish" academic climate, there is a strong motivation to produce only positive results, to quickly terminate research that yields negative results, or even to phrase results to make them appear positive. Surely negative results are just as valuable, since they can prune the tree of possible avenues of inquiry for other researchers. But such research is seldom published.
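The same kind of back-of-the-envelope simulation shows the distortion. This sketch assumes the true effect in some line of research is zero, that each study measures it with noise, and that only results clearing some hypothetical "positive enough to publish" threshold make it into print; the TRUE_EFFECT, NOISE, and THRESHOLD values are invented for illustration. The point is the selection, not the numbers.

```python
import random
import statistics

# Illustrative only: many studies measure an effect whose true size is
# zero. Only studies whose measured effect clears a "publishable"
# threshold appear in journals, so the published average is inflated.
random.seed(2)

TRUE_EFFECT = 0.0   # assumed true effect size
NOISE = 1.0         # assumed per-study measurement noise
THRESHOLD = 1.0     # assumed bar for a "spectacular" positive result
STUDIES = 100_000   # number of simulated studies

measured = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(STUDIES)]
published = [m for m in measured if m > THRESHOLD]

print(f"true effect:              {TRUE_EFFECT:+.2f}")
print(f"average over all studies: {statistics.mean(measured):+.2f}")
print(f"average over published:   {statistics.mean(published):+.2f}")
print(f"fraction published:       {len(published) / STUDIES:.1%}")
```

Under these assumptions only about one study in six gets published, and the published average comes out around 1.5 when the truth is zero. A reader of the journals, seeing only the winners of the auction, perceives an effect that isn't there.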
Long-time readers are just waiting for me to say this: this is yet another example of measurement dysfunction.
I've been accused by my friend and occasional colleague Rodney Black of having an academic bent. (He sincerely meant it as a compliment, and I don't deny it.) So I appreciate this issue that is perhaps of interest mostly to those who spend most of their time on the R end of the R&D spectrum. (Although since this affects much medical research, including the clinical trials done by pharmaceutical companies, it probably should be of great concern to all of us.)
But there is an equally strong bias at the D end of the spectrum too. Technology efforts that crash and burn seldom get discussed, seldom get written up, and are usually quietly buried, the participants sent to the career equivalent of Guantanamo Bay, or maybe the Gulag Archipelago. But like those research projects yielding negative results, these failed efforts would be valuable as object lessons, as cautionary tales, and as post mortems on what to do differently next time.
This too leads to a false sense of the "average" of the state of the art. It causes artificially inflated expectations, frequently on the part of upper management. All they see in the in-flight magazine are the success stories. "History is written by the victors", a remark often attributed to Winston Churchill.
I'm as guilty of this as anyone else. I've had a few spectacular failures in my career. But you'd have to ply me with more than a few gin and tonics to get me to talk about them. (I encourage you to try.) That may be human nature, but it's wrong.
Two-time Nobel Prize winner Linus Pauling once said, "The best way to have a good idea is to have lots of ideas." Meaning, most of your ideas aren't going to work out. If you have "a track record of success", it's only because you've done a very good job of hiding your failures.
Not only is failure an option, it's downright necessary.
I was having breakfast the other day with two of my oldest friends, Brian and Paul, who are the closest thing to homies I could have when I was in college in the 1970s. We were discussing our career ups and downs. I was a little surprised to hear someone say that the few real disasters I had had in my career were in hindsight the best things that had ever happened to me. I was even more surprised to realize it was me saying it. But it's true. Funny how life works out.
(Update 2012-12-21)
I eventually did write about the most spectacular failure of my career, and how it was simultaneously my biggest stroke of luck, in Dead Man Walking.