The beauty of it all

Minister for Higher Education and Science Christina Egelund's speech at the EU high-level conference on reforming research assessment, December 3, 2026

Check against delivery. 

Thank you, Per Michael.
Dear guests. 

Today we are talking about a subject that goes to the very heart of research and science. How do we recognise good and solid research? How do we evaluate science? How do we ensure that our systems reward the kind of science our society truly needs?

We meet at a moment of profound change. You could even call it a moment of reckoning.

Show me the incentives

For decades, academic life has been shaped – some would probably say constrained – by the familiar pressures of “publish or perish”. A culture that measures success in productivity and publications.

Over the years, this culture has produced remarkable achievements. But it has also forged incentives that are no longer aligned with our deepest scientific values.

We need to take a close look at the behaviours we are incentivising. 

The late businessman Charlie Munger once said: 

“Show me the incentives and I will show you the outcome.” 

That rings very true for research today. And I am not convinced the current system has the right incentives. 

These misaligned incentives are amplified by the rapid rise of artificial intelligence. 

AI can summarise, write, analyse and generate ideas at an unprecedented scale. These tools open extraordinary possibilities in research, but they also create a dilemma. 

If the volume of scientific output continues to grow exponentially, can our current evaluation systems still identify quality? Can we still separate great ideas from noise? Originality from clickbait?

I want to start with a story.

A young PhD student in Denmark once read a promising article in one of the world’s most prestigious journals. The paper claimed to have found a compound that could effectively immobilise sperm cells. A potential breakthrough in the development of a male contraceptive pill. 

Inspired, he ordered the compound and set to work. But nothing happened. Repeated experiments showed that the compound simply did not work.

When he contacted the original authors and the journal, they shrugged it off. And it took years and another large consortium spending time and resources on the compound to get it firmly disproved. 

This story is not unique. Across multiple fields – most famously social psychology – large-scale replication efforts have revealed that many celebrated scientific results cannot be reproduced. 

From nudging to the Stanford Prison Experiment, findings once treated as scientific truth have all crumbled under closer scrutiny.

This replication crisis has been called an existential crisis for science. In the United States, it is even being used as an argument among some politicians to defund universities and to disparage the scientific community as a whole. 

But the issue is of course nuanced. 

The fact that failed results are exposed shows that the self-regulatory mechanisms in science are working. We do have a system that is self-correcting, even if it is sometimes slow. 

What is truly broken is not the scientific method, but the incentive structures surrounding it and the culture they create. Beyond its impact on scientific quality, the “publish or perish” culture also takes a profound toll on individual researchers. 

Rethinking what we value

The constant race to produce more papers, secure more grants and outperform peers can create a climate of stress and insecurity. 

Young researchers, in particular, face uncertain career prospects, hold short-term contracts and are under pressure to prioritise quantity over the kind of deep, thoughtful work that probably drew them to research in the first place. 

Such an environment invites misconduct and dishonesty, undermines well-being and discourages risk-taking. 

But it also narrows the pipeline of talent. 
The intense pace and uncertain employment conditions disproportionately affect those who already face structural barriers. For instance, researchers with children, first-generation academics, and scholars from underrepresented backgrounds. 

Instead of widening participation, the system risks filtering out precisely the diversity of perspectives that makes research stronger.

A research culture built on fear of falling behind is just not sustainable. If we want creativity, curiosity and diversity to thrive, we need an environment that provides researchers with the time and stability they need to do their best and most original work.

So if we want science to flourish, we need to rethink what we value and how we recognise excellence.

For years, the main currency in science has been publications. Preferably many, and preferably in the right journals. And publications are crucial to the scientific process. 

But there may be an overreliance on publication metrics such as impact factors and citation counts, and a relentless focus on high-prestige journals. These are not necessarily reliable indicators of quality. And they can reward speed over thoroughness and sensational results over careful work.

But publication is not the only output of scientific work. 
Data, code, negative results, replications, teaching, mentoring, interdisciplinary collaboration. All of these are essential contributions to a healthy scientific ecosystem.

The replication crisis should not lead us to despair. It should lead us to rediscover the fundamental values that make science strong: Integrity, transparency, curiosity and collaboration.

A research culture grounded in mutual trust allows ideas to be tested, criticised and corrected. In a way that is respectful and allows for differences. That is the true engine of scientific progress.

This is also why we are updating the Danish Code of Conduct for Research Integrity to better reflect the reality of being a researcher today.

But the Danish code of conduct cannot stand on its own. 

No country can reform research evaluation alone. Science is an international enterprise, and our evaluation systems need to be aligned across Europe. 

If we want to move beyond “publish or perish” towards a new evaluation regime based on quality, we need to work together across the EU. Both among policy-makers and among researchers. This is not just a subject for a philosophy of science class. We face many global problems right now, and we need excellent research and science to solve them. 

Quality over quantity

And to do that we need a sustainable research culture. 

This includes flexible funding mechanisms and more stable career paths. Time for deep thinking, freedom to explore uncertain ideas and space to fail and correct errors. It also includes not letting certain theories or perspectives become dogmas that oppress diversity of opinion, method and approach.

Science has never been easy. Nature guards its secrets very fiercely. Most results are temporary. Many ideas turn out to be wrong. 

That is not a crisis. That is the method. That is the beauty of it all.

Our task is not to eliminate uncertainty, but to build systems that encourage integrity, transparency and reflection. Systems that reward boldness and depth – quality over quantity. Systems that recognise that good science is slow, careful and collaborative.

I would like to leave you with a quote from the scholar Camille Paglia, who said: 

“One brilliant article should outweigh one mediocre book.”

I could not agree more.

Thank you.