Certain scientific leaders believe that funding for science should be allocated not according to the relative ‘trendiness’ or ‘impact’ of projects, but according to the acumen, whether potential or proven, of the practitioners. One champion of this ethos is Prof. Helmut Schwarz, President of the Alexander von Humboldt Foundation, who has promoted this simple idea to the great benefit of the Foundation.
The same strategy is also being adopted elsewhere, such as at the Institute for Basic Science (IBS) in South Korea. The IBS, in much the same way as the Max Planck Society in Germany, receives public funding to support research centres that function more or less autonomously from one another, each with a focus on a certain scientific discipline. Having worked in one such centre for two years, I can tell you that each is more or less built around the vision of an eminent scientist, who receives a large quantity of funding for at least a decade. The growing Korean economy, coupled with a crackdown on the corruption that was rife among the older generation of professors, has made this huge investment possible. Certainly not everyone is pleased about taxpayers’ money being spent in this way, but time will tell what it brings. One thing’s for sure: South Korea is more than a little jealous of the Nobel Prizes won by rival Japanese scientists.
So, what does Schwarz mean when he speaks of funding people, not projects? Well, it’s about judiciously selecting outstanding scientists, giving them generous funding, and not bugging them every month for a progress report. It’s about trusting good people with our time and money, and having confidence that, whether or not the research seems to lead to technological applications, our society will benefit from what is learnt. Such a plan means that researchers are not restricted to a particular project, because, heck, after a little tinkering something else may well appear far more promising. This is, after all, fundamental science. But if you continually hassle talented people for updates and starve them of funding, things don’t work out. Surprised? I should hope not.
Among the many side effects of denying scientists breathing room is that they are forced to rush for quick results, often yielding only incremental advances in our knowledge. I know this from talking to several young academics, and it doesn’t surprise me that certain senior professors in the USA have told me that many new independent researchers possess only very specialized knowledge. Some come across as technicians rather than innovators, yet it is the system, not they, that is to blame for this. You see, one way to get and keep a faculty position these days is to focus on one or two topical materials or analytical techniques and derive from them as many results as possible (often by handing part of the project to a collaborator without any real understanding of what their friend is actually doing). When pressed for time, people stay in their comfort zone, something that boosts one’s h-index (through self-citation) and gives a more predictable stream of publications, essential for any tenure application, than research of a more ‘blue sky’ nature would.
So how does one decide whose papers are good and whose aren’t? Quantity and quality, of course. Because quoting numbers makes everyone feel smart, right? The thing is: no bibliometric data can tell the whole truth. The Thomson Reuters impact factor of a journal takes into account only citations to articles within two years of their publication. This is inherently biased against fundamental research, whose implications and applications may be very significant but might not be realized so soon.
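To make that definition concrete, here is a toy sketch of how a two-year impact factor is computed. The journal and all the numbers are invented purely for illustration; a citation earned in year three or beyond simply never enters this ratio.

```python
# Toy illustration of the two-year impact factor calculation.
# All figures below are invented for demonstration purposes.

# Citations received in 2016 to articles the journal published in 2014-2015:
citations_to_recent_articles = 5000

# Number of citable articles the journal published in 2014 and 2015:
articles_published = 200

# The impact factor is just the average of recent citations per article.
impact_factor = citations_to_recent_articles / articles_published
print(impact_factor)  # 25.0
```

Any paper whose influence unfolds over, say, a decade contributes nothing to the numerator once the two-year window has closed, which is exactly the bias against slow-burning fundamental work described above.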
The numbers themselves aren’t the problem; they are just numbers, after all. You take them with a grain of salt. Case in point: Nature Chemistry has an impact factor of 25, yet three-quarters of its articles have an equivalent impact factor of less than 25. The real problem is that people, including faculty committees and funding agencies, are making ill-informed and shallow judgements based on these numbers. In certain institutions, particularly in Asia, contributions to teaching and to the research community are neglected because decisions regarding tenure rest solely on an equation involving the number of papers and the impact factors of the respective journals. The resulting figure of merit is supposed to equate to scientific contribution, something faculty deans and chancellors can weigh without ever having browsed a research article from someone’s portfolio. Indeed, people believe they are making ‘objective’ and ‘fair’ decisions because they are based on numbers. Because taking some time to exercise critical thinking would be too difficult, right? But the numbers may not mean what they think they mean, and fairness is out of the question. When I assumed my current role in scientific publishing, I was surprised when a senior colleague told me that we have absolutely no targets for achieving a certain impact factor; we simply aim to attract and publish the best articles possible.
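The arithmetic behind that three-quarters statistic is easy to reproduce. The following hypothetical sketch (all citation counts are invented, not real Nature Chemistry data) shows how a few highly cited papers drag the mean, and hence the impact factor, far above what a typical article in the same journal achieves:

```python
# Invented citation counts for a hypothetical journal's 40 articles:
# three 'blockbusters' and many modestly cited papers.
citations = [250, 120, 80] + [10] * 37

# The impact-factor-style figure is a mean over all articles.
mean_citations = sum(citations) / len(citations)

# Count how many articles fall below that mean.
below_mean = sum(c < mean_citations for c in citations)

print(f"mean = {mean_citations:.1f}")  # mean = 20.5
print(f"{below_mean} of {len(citations)} articles are cited below the mean")
```

In this toy distribution, 37 of 40 articles sit below the average, so judging any individual paper (or its author) by the journal-level number misrepresents the vast majority of them.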
We’ve all been to fantastic seminars where pure scientists ask and answer the hard questions in research. Unfortunately, it is the easy question, one that absolutely anyone (including bureaucrats) can ask, that poses the biggest headache: “why are you doing this?”