There’s been an interesting discussion on both Facebook and Twitter about data-sharing and how to properly give credit when you are using someone else’s data. Candice and Richard Morey wrote a nice blog post on why sharing data should not automatically mean authorship. Talking to other researchers, that seems to be in line with what the Vancouver criteria for authorship (and related guidelines) suggest. The proper way to credit a shared data-set is to include a reference.
Authorship and references are the two traditional ways of assigning credit. For the individual scientist, authorship signals origination, and references signal the use other scientists find in the original work.
But references are a strange measure of the success of an idea or work. When I was going through my Srull & Wyer (1979) trace, I collected all the places in the manuscripts where they had been cited in the first 53 articles that cited them. The reasons for citing them ranged from the peripheral to the profound. Examples of the peripheral were an opening sentence where the authors cited them (along with others) as evidence that social psychologists were now interested in cognitive explanations for social phenomena, and a footnote stating that the current paper was not concerned with the priming phenomenon, but that one should look to Srull & Wyer (1979) if one was interested. At the profound end, they were cited multiple times because the research essentially extended the original work.
This shouldn’t be surprising. We are trained to cite just about everything we have gotten from other researchers, be it trivial, profound or antagonistic, and this is perfectly fine. I like being able to look through the references to pursue ideas that may not be central to the present research. I even find it disconcerting when they don’t exist. I started reading William James’s “Principles of Psychology” and found it distracting that there were no references for statements he had clearly learned from others. But, of course, under our citing practices, cited papers will vary in their degree of centrality.
None of that is evident from a reference list.
It seems we may need to reconsider how we apportion credit, especially when authorship and references are given so much weight in important measures of success. I don’t have a clear thought on how to do this, because there are always downsides, and simply grading the importance of each citation is something I instinctively suspect could become problematic.
Perhaps abandoning the traditional ways of indexing success is the way to go (though I doubt that will happen).
But should we distinguish between peripheral and central contributions from earlier research? Sharing stimuli, sharing data-sets, or using tested paradigms, questionnaires, and analysis schemes – are these “worth” more than the more peripheral citations, or do we run other risks of conflict and credit arbitrage?