Spring School 2024 – Failures in One Health Research

Gaining scientific knowledge is a slow and arduous process marked by numerous errors, fresh starts and revisions. The misconception that research moves purposefully from a given assumption to new, permanently valid knowledge often leads to misunderstandings between society, politics and science. At the same time, dealing with failure can be a challenge for early-career researchers, especially when the topic is taboo or expectations are particularly high.

This is why the Spring School 2024 of the One Health Platform, held on May 13, 2024 in Hanover, was dedicated to this complex topic. How relevant the topic is for doctoral students and postdocs was evident from the fact that the event was fully booked within a short time.

The rocky road of a research project

The topic was introduced by Prof. Stephanie Becker (University of Veterinary Medicine Hannover), who used one of her own projects to illustrate how a research project can unfold and how rocky the road to success can be. In the project she presented, it took 13 years to confirm the hypothesis, because she and her team had originally chosen an unsuitable method to test it. In addition, some of the experiments turned out to be more complicated than expected. This is not uncommon, Prof. Becker emphasized: research means entering unknown territory, and many answers are simply not known in advance. It is therefore unrealistic to expect everything in a research project to work out exactly as written in the grant application. But even if the original hypothesis turns out to be wrong in the course of a project, valuable results can still be obtained. Negative results can also be published, albeit usually in journals with a lower impact factor; for a cumulative dissertation, however, such publications count just as much as any other. In the end, negative results are an important part of everyday research into which money and time flow, and they should therefore be published, not least to let the world know: “That's not how it works.”

Errors in the interpretation of data

Successfully carrying out an experiment is one hurdle in scientific work. Interpreting the results meaningfully and, where applicable, transferring them into practice are two more, which Prof. Dr. Ulrich Dirnagl (BIH QUEST Center for Responsible Research) addressed in his contribution. In his presentation, he described experiments that go wrong as genuine mistakes. Mistakes in the laboratory are common, but in science, unlike in other industries, they are often not handled professionally. Errors that remain undetected inevitably produce results of low validity and contribute to the reproducibility crisis. A further problem is inaccuracies in the statistical evaluation of experiments: phenomena such as p-hacking, in which a significant p-value is pursued by whatever means, selective reporting (cherry picking), small sample sizes, as well as “storytelling” and HARKing (hypothesizing after the results are known), distort the significance of results. A major obstacle to the interpretation and translation of experiments and studies is the sometimes limited statistical competence in biomedical research; another is how rarely negative and null results are published. Solving these problems in the long term requires a cultural change in science: negative and null results should be published much more frequently, and the statistical competence of scientists should be strengthened. Registering study protocols in advance, as is already standard for clinical trials, could also help. Fittingly, Prof. Dirnagl ended his presentation with a quote from Jonas Salk: “There is no such thing as a failed experiment, because learning what doesn't work is a necessary step to learning what does.”
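
To make the distortion Prof. Dirnagl criticized more tangible, the following small Python sketch (a hypothetical illustration, not taken from the presentation) simulates studies in which no true effect exists. If only one pre-specified outcome is tested, false positives occur at roughly the nominal 5% rate; if the best of several outcomes is reported, as in p-hacking, the rate rises sharply. The variable names and the choice of eight outcomes per study are assumptions made for this example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n_experiments = 10_000  # simulated studies, none of which has a true effect
n_per_group = 10        # small sample size, as criticized in the talk
n_outcomes = 8          # outcomes measured per study (hypothetical choice)
alpha = 0.05

honest_hits = 0  # the single pre-specified outcome is "significant"
hacked_hits = 0  # at least one of the measured outcomes is "significant"

for _ in range(n_experiments):
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(0.0, 1.0, n_per_group)  # same distribution: the null is true
        _, p = stats.ttest_ind(control, treatment)
        p_values.append(p)
    honest_hits += p_values[0] < alpha
    hacked_hits += min(p_values) < alpha

print(f"False-positive rate, pre-specified outcome: {honest_hits / n_experiments:.3f}")
print(f"False-positive rate, best of {n_outcomes} outcomes: {hacked_hits / n_experiments:.3f}")

In this setup, reporting only the most favorable of eight outcomes flags roughly a third of purely random data sets as significant, which illustrates why small samples and selective reporting together undermine the validity of results.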

The path to productive failure

An overview of the forms, functions and cultural change of failure from a philosophical perspective was given by Dr. Michael Jungert (FAU Competence Center for Interdisciplinary Science Reflection). He pointed out that failure is sometimes dealt with much more openly outside of science, and that in the business world room for failure is sometimes even explicitly granted, in the awareness that not everything can work out at the first attempt and that progress cannot be achieved without the occasional failure. Looking at the historical development of failure in science, a major shift becomes apparent: from the 17th century until well into the 19th century, science was guided by the maxim of avoiding errors and failure, whereas from the 20th century onwards the view that failure is an essential part of science became increasingly established. According to Karl Popper, the foundation of science consists of testing theories and hypotheses with the aim of refuting or falsifying them; failure is therefore part of gaining scientific knowledge. This is reflected in everyday research, in which failure is the order of the day while breakthroughs are comparatively rare. Failure can be caused by errors or mistakes, and it can take various forms (discarding, improving, varying, defending) and have various reasons. Dr. Jungert used the Apollo 13 mission to illustrate that there is also such a thing as a “successful failure”: through its failure, the mission triggered important learning processes and changes and contributed to establishing a culture of failure at NASA from which future missions benefited. In science, too, dealing with failure in the right way could help to avoid public misunderstandings about how science works and make failure more productive. This would require change from within science. Important elements of an “infrastructure of failure” include dealing openly with failure both internally and externally, documenting failed work by publishing negative and null results, and promoting high-risk research. Such a change in culture could also relieve the pressure on young scientists in particular, pressure that repeatedly leads to misconduct.

Justice and inclusion for One Health

Dealing transparently with failure within science is particularly relevant given the close networking between disciplines and sectors in the One Health context. In her presentation, Dr. Lara Urban (Helmholtz AI Institute) discussed how equity and inclusion can be achieved in One Health. Drawing on numerous projects with partners abroad, she showed how transparency and the exchange of research results can be realized in networked projects. This is particularly important in light of the unequal global distribution of resources and opportunities.

The right way to deal with failure

Prof. Martin Pfeffer (University of Leipzig) also emphasized the importance of transparency in networked research projects. It can be achieved through open and honest dealings with colleagues and research partners, but formal safeguards, for example a cooperation agreement, are also an important building block for creating transparency. With regard to failures, he pointed out that what counts as a failure depends heavily on one's own or others' expectations. He also agreed with the previous speakers that failures are an everyday phenomenon in research, and that they are often caused by third parties. There is a variety of problem-solving strategies, as he illustrated with experiences from his own research projects. In his experience, two things are needed to deal well with failures: good planning that considers all possible eventualities and, if necessary, includes a plan B, and removing the taboo around failure, which makes it possible to ask for help when needed.

Research funding with the courage to take risks

The concluding panel discussion on “Research funding – risk vs. error avoidance – how do we achieve scientific progress?” built on the previous presentations and explored how research funding should work to promote innovation, excellence, transparency and quality. How can the balancing act between high-risk pioneering work and conservative, error-avoiding strategies in research projects be managed? How much willingness to take risks can and should one have in a research project? Which evaluation standards and funding formats exist, and which are needed, to promote innovation, excellence, transparency and quality? How should excellence be assessed? And how can a good publication culture make research more efficient and forward-looking?

These questions were discussed by the panelists Prof. Dr. Stephan Ludwig (University of Münster), Dr. Nora Kottmann (Volkswagen Foundation), Dr. Laura de la Cruz (German Aerospace Center) and Leif Rauhöft (Bernhard Nocht Institute for Tropical Medicine) together with the participants of the event. In particular, the situation of young scientists was discussed.

Removing the taboo of failure

The Spring School made it clear that failures are an integral part of research. To maintain the quality and innovative potential of research in the long term, a transparent approach to failures, research funding that is willing to take risks, and the removal of taboos surrounding failure are important steps. For the One Health field in particular, which is characterized by a high degree of networking and cooperation, a transparent approach to results and failures is essential. This can also help to strengthen the credibility of science in society and reduce the pressure on young scientists.