One year ago, Maya Bhatia, a biogeochemist at the University of Alberta in Edmonton, Canada, stepped out of a helicopter onto a glacier near Grise Fiord (Aujuittuq) in the Canadian Arctic to collect a sample of meltwater. To the horror of her student and the pilot, she slipped into a stream and was swept into a moulin, a vertical shaft in the glacier. There was no way to save her. Bhatia left behind her husband and two young children.
As a geographer-turned-writer who has visited the Arctic, I know all too well the dangers of doing fieldwork in remote places. While researching environmental change, I’ve had to traverse glaciers, scare off grizzly bears and take measurements from fast-flowing rivers. In my opinion, institutions need to do much more to prepare field researchers for the unexpected. As I’ve experienced, thorough training and clear leadership can and will save lives.
Far from emergency services and without mobile-phone coverage, even experienced researchers can be caught out. For example, in August 2020, glaciologist Konrad Steffen died after falling into a crevasse on the Greenland ice sheet where he’d worked for more than 30 years. Wilderness travel brings dangers. In 2021, a helicopter accident east of Resolute in Nunavut, Canada, killed the crew and a biologist who had just finished surveying polar bears. In 2011, Martin Bergmann, the head of Canada’s Polar Continental Shelf Program, died in a plane crash.
Travel accidents are out of our hands, but much can be done to mitigate fieldwork risks. Safety courses are a good starting point, but not enough. When I was a graduate student studying Arctic glacier hydrology in the 2000s, it wasn’t until I arrived at my field site that I was briefed on how to travel safely across the ice and rescue someone from a crevasse. It made me more aware of these hazards, but I wasn’t confident that I would be able to save someone.
The next year, I pushed my supervisor to invest in a more in-depth course in crevasse rescue with a certified mountain guide. After going over the basics, we practised the techniques, such as setting anchors and using pulleys to drag people out of the snow. I also received training in wilderness first aid and firearms handling, for possible encounters with aggressive polar bears.
When I became a tenure-track professor in 2007 at the University of Lethbridge in Canada, and my fieldwork shifted to the Rocky Mountains, I completed training in all-terrain vehicle handling and avalanche rescue. My university required fieldwork-trip leaders such as me to identify hazards and submit a plan outlining how we would deal with them. This included what to do in case of an accident or emergency, and how to avoid these situations in the first place. Although this is a good idea, it can prepare you for only a limited set of foreseen events.
All of my training was helpful during an emergency that a graduate student and I had in 2008 on the Belcher Glacier (Devon Island ice cap). We were cut off from our camp by rivers of slush, and my student got soaked to the waist while trying to cross one of them. It was an unexpected situation that I’m not sure I handled well. We ended up being evacuated by helicopter back to the research base in Resolute.
It was a reminder that not all things go as planned, and that you can’t control the environment, only your actions. That’s why I think that, in addition to safety training, the best way to avoid mishaps is to ensure that the principal investigator running the team of fieldworkers has completed leadership training for emergency situations.
After Bhatia’s death, the University of Alberta has, rightly, taken a closer look at fieldwork practices and responsibilities. The university now expects researchers to enact a series of scenarios that they might encounter in the field, including evaluating hazards, assessing incidents and responding to emergencies. This is a good first step. One factor in Bhatia’s tragic death was that she wasn’t wearing crampons or a harness tied to a rope, which might have saved her life. Her husband also noted that she was working long days and experiencing research pressures at the time.
The possibility of hiring guides for Arctic scientists is being discussed. But, in my view, this shifts too much responsibility away from the researchers and onto the guides, which might make fieldworkers less careful.
A better option is for universities and other institutions to prepare researchers more intensively for fieldwork — including nominating and training the team’s supervisor to be an effective leader in a crisis.
It’s crucial that expeditions are led by someone who can model appropriate behaviour, enforce safety protocols and make quick, good decisions if things start going sideways. They should have a handle on their colleagues’ well-being and group dynamics, and be able to deal with tricky situations quickly and effectively. Researchers who are too tired or stressed to operate safely should be supported, and trips to collect samples should be deferred or cancelled, rather than putting people in harm’s way.
Fieldwork will never be accident-free. But we need it — to validate remote-sensing research, to collect samples or to see the environment in action, something you can’t do from afar. And, with adequate training, we can make fieldwork much safer. Bhatia was an enthusiastic researcher at the top of her game before the tragedy. She didn’t deserve what happened to her; we must learn from this accident so that it doesn’t happen again.
Nature, Published online: 02 October 2024; doi:10.1038/d41586-024-03029-6
Neuroscientists have reconstructed the first complete wiring map of the fruit-fly brain, including 140,000 neurons and more than 50 million connections. This resource has already begun to revolutionize the field.
The Stacks Journal is upending conventional peer review by introducing collaboration into the process. Credit: FangXiaNuo/Getty
The peer-review system has been stressed and stretched to a near-breaking point. It’s becoming harder to find reviewers, many of whom see reviewing as a burden that is not adequately rewarded. The rise of predatory publishers, many of which falsely claim to provide a peer-review process; paper mills, which are known to fabricate peer reviews; and plagiarism of peer-review reports have harmed trust in the system.
The Stacks Journal is aiming to provide a faster, more transparent and trustworthy peer-review model by organizing committees of researchers to assess manuscripts.
Launched in July as an open-access, digital-only publication, the Stacks Journal is the brainchild of David Green, an ecologist based in Portland, Oregon. The inspiration, says Green, was his own experience with the inefficiencies of academic publishing. In 2020, Green, who had finished a study on the impact of wildfire on carnivores1, wanted to get the results out quickly so that they could inform land-management policy. But his paper languished in the publishing system for almost two years, with no clear explanation as to why. So, he resolved to change the process.
Green spoke to Nature Index about the inspiration for the Stacks, and how he hopes it will fix some of the weaknesses of academic publishing.
What inspired you to launch the Stacks Journal?
I talked to other ecologists at conferences and field sites, and everyone was frustrated with the status quo of scientific publishing — from huge article-processing fees and long peer-review times to the rise of predatory journals. These and other factors undermine people’s ability to publish their research; estimates from clinical-trial data suggest that around 50% of good data never get published2. We’re missing out on a lot of important information.
I started researching peer review and learnt that it hasn’t changed much in the past 40 years. So, I explored what a new system could look like. I did in-depth interviews with dozens of researchers in different fields and surveyed hundreds more to test ideas.
The result is the Stacks Journal’s peer-review process, which was designed to reflect how people discuss ideas in the Internet age: meeting online to collaborate across social-media platforms, for example. Advances in the way we communicate haven’t yet made it to the peer-review process.
How does the Stacks Journal’s peer-review model work?
We are shifting peer review away from an individual gatekeeper model, wherein an editor at a journal decides what should be published. Instead, we use a community-based model, in which we gather input from a group of people to collectively determine whether an article is published. We’ve designed this model to be rewarding to both authors and reviewers, and completely transparent.
What’s key is that the Stacks Journal’s peer-review process happens in collaboration instead of isolation. This is how peer review and publishing used to work. For instance, in the nineteenth century, the Royal Society in London invited groups of scholars with expertise in specific topics to come together, debate new work and determine whether it would be published. Now, most journals have two reviewers who assess a manuscript separately. At the Stacks, we bring together communities of reviewers to collaborate. It’s double-blind, to ensure fairness, and reviewers can see each other’s comments and discuss whether they agree.
All the peer-review reports, underlying data and code are publicly posted, along with the names of the reviewers.
What else sets the Stacks Journal apart?
We’ve created a ‘credibility score’ for each published article, so readers can quickly get a sense of the reviewers’ feedback. The credibility score is calculated as the percentage of reviewers who voted to accept the article for publication. So, for example, if six out of seven reviewers think an article should be published, its score will be 86%.
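The score described above is simple share-of-accept arithmetic. As a minimal sketch (the function name and the rounding to a whole percentage are assumptions, not the journal's published implementation):

```python
def credibility_score(accept_votes: int, total_reviewers: int) -> int:
    """Percentage of reviewers who voted to accept, rounded to a whole number."""
    if total_reviewers == 0:
        raise ValueError("at least one reviewer is required")
    return round(100 * accept_votes / total_reviewers)

# Six of seven reviewers voting to accept yields the 86% in the example.
print(credibility_score(6, 7))  # 86
```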
To recognize the role of the reviewers in contributing to the research, they can opt in to be credited as ‘collaborators’, listed just below the authors on the published article. That way, a reviewer can include their work on their CV.
Our publishing model is also different — we offer an annual membership for US$199 that allows unlimited open-access publishing. In conventional publishing, it can cost thousands of dollars to publish one article. In our research, we found that this limits a lot of researchers from ever sharing or publishing their findings.
How does the journal find and coordinate reviewers?
The Stacks is built on communities of researchers that form around specific topics. Right now, we’re focused on ecology, but soon we’ll add chemistry, computer science and medicine. Any eligible researcher can sign up to be a reviewer on our website for free. To be eligible, you must have published at least one peer-reviewed article in the relevant field of study.
When we receive a submission, we send it to reviewers with expertise in the paper’s topic. Reviewers submit their feedback on our online platform, which they use to discuss among themselves. The reviewers are all blinded to each other’s identities during the process, and no individual carries more weight than another.
It has been easy for us to find reviewers. They find the process rewarding, and they keep coming back.
What challenges have you encountered?
We’ve had to cap the number of reviewers on each article at seven, because that’s what our software can handle. This means we’ve had to turn people away. We want to have unlimited reviewers on every article, so we are building new software to make this happen.
Another challenge is the fact that we are a new journal — we don’t have an impact factor or third-party marker of credibility, so some scientists are not ready to submit their research to us. However, authors who have submitted say that they love how streamlined the publishing process is and how much our review system strengthened their papers, which brings their research a more lasting credibility than that afforded by most journals.
Over the next year, we aim to publish more than 100 articles, including our first special issue, and will continue finding ways to do peer review in a more productive and efficient way.
This interview has been edited for length and clarity.
Nature Index’s news and supplement content is editorially independent of its publisher, Springer Nature. For more information about Nature Index, see the homepage.
The term ‘REF-able’ is now in common usage in UK universities. “Everyone’s constantly thinking of research in terms of ‘REF-able’ outputs, in terms of ‘REF-able’ impact,” says Richard Watermeyer, a sociologist at the University of Bristol, UK. He is referring to the UK Research Excellence Framework (REF), which is meant to happen every seven years and is one of the most intensive systems of academic evaluation in any country. “Its influence is ubiquitous — you can’t escape it,” says Watermeyer. But he and other scholars around the world are concerned about the effects of an extreme audit culture in higher education, one in which researchers’ productivity is continually measured and, in the case of the REF, directly tied to research funding for institutions. Critics say that such systems are having a detrimental effect on staff and, in some cases, are damaging researchers’ mental health and departmental collegiality.
Unlike other research benchmarking systems, the REF results directly affect the distribution of around £2 billion (US$2.6 billion) annually, creating high stakes for institutions. UK universities receive a significant proportion of their government funding in this way (in addition to the research grants awarded to individual academics).
Since its inception, the REF methodology has been through several iterations. The rules about which individuals’ work must be highlighted have changed, but there has always been a focus on peer-review panels to assess outputs. Since 2014, a team in each university department has been tasked with selecting a dossier of research outputs and case studies that must demonstrate societal impact. These submissions can receive anything from a four-star rating (for the most important, world-leading research) to just one star (the least significant work, of only national interest). Most departments aim to include three- or four-star submissions, often described as ‘REF-able’.
But the process is time-consuming and does not come cheap. The most recent REF, in 2021, was estimated to have cost £471 million. Tanita Casci, director of the Research Strategy & Policy Unit at the University of Oxford, UK, acknowledges that it’s resource-intensive, but says that it’s still a very efficient way of distributing funds, compared with the cost of allocating money through individual grant proposals. “I don’t think the alternative is better,” she concludes. The next exercise has been pushed back a year, until 2029, with planned changes to include a larger emphasis on assessment of institutional research culture.
Tanita Casci says the UK REF assessment is an efficient way to distribute funding. Credit: University of Oxford
Many UK academics see the REF as adding to an already highly competitive and stressful environment. A 2021 survey of more than 3,000 researchers (see go.nature.com/47umnjd) found that they generally felt that the burdens of the REF outweighed the benefits. They also thought that it had decreased academics’ ability to follow their own intellectual interests and disincentivized the pursuit of riskier, more-speculative work with unpredictable outcomes.
Some other countries have joined the assessment train — with the notable exception of the United States, where the federal government does not typically award universities general-purpose research funding. But no nation has chosen to copy the REF exactly. Some, such as the Netherlands, have instead developed a model that challenges departments to set their own strategic goals and provide evidence that they have achieved them.
Whatever the system, few assessments loom as large in the academic consciousness as the REF. “You will encounter some institutions where, if you mention the REF, there’s a sort of groan and people talk about how stressed it’s making them,” says Petra Boynton, a research consultant and former health-care researcher at University College London.
Strain on team spirit
Staff collating a department’s REF submission, selecting the research outputs and case studies to illustrate impact, can find themselves in an uncomfortable position, says Watermeyer. He was involved in his own department’s 2014 submission and has published a study of the REF’s emotional toll1. It’s a job that most academics take on “with trepidation”, he says. It can change how they interact with colleagues and how colleagues view and interact with them.
“You’re trying to make robust, dispassionate, critical determinations of the quality of research. Yet at the back of your mind, you are inescapably aware of the implications of the judgements that you’re making in terms of people’s research identities, their careers,” says Watermeyer. In his experience, people can get quite defensive. That scrutiny of close colleagues’ work “can be really disruptive and damaging to relationships”.
Watermeyer often found himself not only adjudicating on work but also acting as a counsellor. “You have to attend to the emotional labour that’s involved; you’re responsible for people’s welfare and well-being,” and no training is provided, he says. A colleague might think that their work has met expectations, only to find that assessors disagree. “I’ve been in situations where there are tears,” Watermeyer recalls. “People break down.”
For university support staff, the REF also looms large. Sometimes, more staff must be hired near the submission deadline to cope with the workload. “It is an unbelievable pressure cooker,” particularly at small institutions, says Julie Bayley, former director of research-impact development at the University of Lincoln, UK. Bayley was responsible for overseeing 50 case studies to demonstrate the impact of Lincoln’s research, and describes this as akin to preparing evidence for a legal case. “You are having to prove, to a good level of scrutiny, that this claim is true,” Bayley says. This usually involves collecting testimonial letters from organizations or individuals who can vouch for the research impact, something she sometimes did on behalf of researchers who feared straining the external relationships they had developed.
Boynton says there can be an upside. “There’s something really exciting about putting together [a case study] that shows you did something amazing,” she says. But she also acknowledges that those whose research is not put forward can feel as if their work doesn’t matter or is not respected, and that can be demoralizing.
The clamour about achieving four stars can skew attitudes about research achievements. Bayley recounts a senior academic tearfully showing her an e-mail from his supervisor that read, “It’s all well and good that you’ve changed national UK policy, but unless you change European policy, it doesn’t count.” She says her own previous research on teenage pregnancy met with similar responses because it involved meeting real needs at the grass-roots level, rather than focusing on national policy. “That’s the bit I find most heartbreaking. Four-star is glory for the university, but four-star is not impact for society,” says Bayley.
The picking and choosing between individual researchers has implications for departments. “That places some people on the ‘star player competition winner’ side and, particularly where resources are limited, that means those people get more support” from their departments, explains Bayley. She has witnessed others being asked to pick up the teaching workload of researchers who are selected to produce impact case studies for a REF submission. Boynton agrees: “It’s not a collegiate, collective thing — it’s divisive.”
Hidden contributions
Research assessment can also affect work that universities often consider ‘non-REF-able’. Simon Hettrick, a research software engineer at the University of Southampton, UK, was in this position in 2021. He collaborates with researchers to produce crucial software for their work. But, he says, universities find it hard to look beyond academic papers as the metric for success even though there are 21 categories of research output that can be considered, including software, patents, conference proceedings and digital and visual media.
In the 2021 REF, publications made up about 98.5% of submissions. Hettrick says that although other submissions are encouraged, universities tend not to select the alternatives, presumably out of habit or for fear they might not be judged as favourably.
Simon Hettrick says evaluations should include more contributions such as software. Credit: Simon Hettrick
The result is that those in roles similar to Hettrick’s feel demotivated. “You’re working really hard, without the recognition for that input you’re making,” he says. To counter this, Hettrick and others launched an initiative called The hidden REF that ran a 2021 competition to spotlight important work unrecognized by the REF, garnering 120 submissions from more than 60 universities. The competition is being run again this year.
In April, Hettrick and his colleagues wrote a manifesto asking universities to ensure that at least 5% of their submissions for the 2029 REF are ‘non-traditional outputs’. “That has been met with some consternation,” he says.
Regarding career advancement, REF submissions should not feed into someone’s prospects, according to Casci, who says that universities make strong efforts to separate REF assessments from decisions about individuals’ career progression. But “it’s a grey area” in Watermeyer’s experience; “it might not be reflected within formal promotional criteria, but I think it’s the accepted unspoken reality”. He thinks that academic researchers lacking ‘REF-able’ three- or four-star outputs are unlikely to be hired by any “serious research institution” — severely limiting their career prospects and mobility.
Watermeyer says the consequences for these individuals will vary. Some institutions try to boost the ratings of early-career academics by putting them on capacity-building programmes, including buddying schemes to foster collaborations with more ‘REF-able’ colleagues. But, for more senior staff, the downside could be a performance review. “People might be ‘encouraged’ to reconsider their research role, if they find themselves unable to satisfy the three-star criteria,” he says.
There’s a similar imperative for a researcher’s work to be used as an impact case study. “If your work is not selected for that competition, you lose the currency for your own progression,” says Bayley.
The REF also exacerbates inequalities that already exist in research, says Emily Yarrow, an organizational-behaviour researcher at Newcastle University Business School, UK. “There are still gendered impacts and gendered effects of the REF, and still a disproportionate negative impact on those who take time out of their careers, for example, for caring responsibilities, maternity leave.” A 2014 analysis she co-authored of REF impact case studies in the fields of business and management showed that women were under-represented: just 25% of studies with an identifiable lead author were led by women2. Boynton also points out that there are clear inequalities in the resources available to institutions to prepare for the REF, causing many researchers to feel that the system is unfair.
Emily Yarrow found that women were under-represented in research-evaluation case studies. Credit: Emily Yarrow
Although not all the problems researchers face can be attributed to the REF, it certainly contributes to what some have called an epidemic of poor mental health among UK higher-education staff. A 2019 report (see go.nature.com/3xsb78x) highlighted the REF as causing administrative overload for some and evoking a heightened, ever-present fear of ‘failure’ for others.
UK research councils have acknowledged the criticisms and have promised changes to the 2029 REF. Steven Hill, chair of the 2021 REF Steering Group at Research England in Bristol, UK, which manages the REF exercise, says these changes will “rebalance the exercise’s definition of research excellence, to focus more on the environment needed for all talented people to thrive”. Hill also says they will implement changes to break “the link between individuals and submissions” because there will no longer be a minimum or maximum number of submissions for each researcher. The steering group aims to provide more support in terms of how REF guidance is applied by institutions, to dispel misconceptions about requirements. “Some institutions frame their performance criteria in REF terms and place greater requirements on staff than are actually required by REF,” Hill says.
Other ways forward
Similar to the REF, the China Discipline Evaluation (CDE) occurs every four to five years. Yiran Zhou, a higher-education researcher at the University of Cambridge, UK, has studied attitudes to the CDE3 and says there are pressures in China to produce the equivalent of ‘REF-able’ research and similar concerns about the impact on academics. China relies much more on conventional quantitative publication metrics, but researchers Zhou interviewed criticized the time wasted in producing CDE impact case studies. Those tasked with organizing this often had to bargain with colleagues to collect the evidence they needed. “Then, they owe personal favours to them, like teaching for one or two hours,” says Zhou.
Increased competition has become a concern among Chinese universities, and Zhou says the government has decided not to publicize the results of the most recent CDE, only informing the individual universities. And, Zhou says, some of those she spoke to favoured dropping the assessment altogether.
In 2022, Australia did just that. Ahead of the country’s 2023 Excellence in Research for Australia (ERA) assessment, the government announced that it would stop the time-consuming process and start a transition to examine other “modern data-driven approaches, informed by expert review”. In October 2023, the Australian Research Council revealed a blueprint for a new assessment system and was investigating methods for smarter harvesting of evaluation data. It also noted that any data used would be “curated”, possibly with the help of artificial intelligence.
Some European countries are moving away from the type of competitive process exemplified by the REF. “For the Netherlands, we hope to move from evaluation to development” of careers and departmental strategies, says Kim Huijpen, programme manager for Recognition and Reward for the Universities of the Netherlands, based in The Hague, and a former chair of the working group of the Strategy Evaluation Protocol (SEP), the research evaluation process for Dutch universities. In the SEP, institutions organize subject-based research-unit evaluations every six years, but the outcome is not linked to government funding.
The SEP is a benchmarking process. Each research group selects indicators and other types of evidence related to its strategy and these, along with a site visit, provide the basis for review by a committee of peers and stakeholders. The protocol for 2021–27 has removed the previous system of grading. “We wanted to get away from this kind of ranking exercise,” explains Huijpen. “There’s a lot of freedom to deepen the conversation on quality, the societal relevance and the impact of the work — and it’s not very strict in how you should do this.”
The Research Council of Norway also runs subject-based assessments every decade, including institutional-level metrics and case studies, to broadly survey a field. “From what I hear from colleagues, the Norwegian assessment is much milder than the REF. Although it’s similar in what is looked at, it doesn’t feel the same,” says Alexander Refsum Jensenius, a music researcher at the University of Oslo. That’s probably because there is no direct link between the assessment and funding.
Refsum Jensenius has been involved in the Norwegian Career Assessment Matrix, a toolbox developed in 2021 by Universities Norway, the cooperative body of 32 accredited universities. It isn’t used to assess departments, but it demonstrates a fresh, broader approach.
What differentiates it from many other assessments is that in addition to providing evidence, there is scope for a researcher to outline the motivations for their research directions and make their own value judgements on achievements. “You cannot only have endless lists of whatever you have been doing, but you also need to reflect on it and perhaps suggest that some of these things have more value to you,” says Refsum Jensenius. For example, researchers might add context to their publication list by highlighting that opportunities to publish their work are limited by its interdisciplinary nature. There is also an element of continuing professional development to identify a researcher’s skills that need strengthening. Refsum Jensenius says this approach has been welcomed in the Norwegian system. “The toolbox is starting to be adopted by many institutions, including the University of Oslo, for hiring and promoting people.”
For many UK researchers, this more nurturing, reflective method of assessment might feel a million miles away from the REF, but that’s not to say that the REF process does not address ways to improve an institution’s research environment. Currently, one of the three pillars of assessment involves ‘people, culture and environment’, which includes open science, research integrity, career development and equity, diversity and inclusion (EDI) concerns. Since 2022, there have been discussions on how to better measure and incentivize good practice in these areas for the next REF.
Bayley thinks the REF can already take some credit for an increased emphasis on EDI issues at UK universities. “I will not pretend for a second it’s sorted, but EDI is now so commonly a standing item on agendas that it’s far more present than it ever was.”
But she is less sure that the REF has improved research culture overall. For example, she says that after the 2014 REF, when the rules changed to require that contributions from all permanent research staff be submitted, she saw indications that some universities were gaming the system in a way that disadvantaged early-career researchers. Junior staff members were left on precarious temporary contracts, and she has seen examples of institutions freezing staff numbers to avoid the need to submit more impact case studies. “I’ve seen that many times across many universities, which means the early-career entry points for research roles are reduced.”
“The REF is a double-edged sword,” concludes Bayley. The administrative burden and pressures it brings are much too high, but it does provide a way to allocate money that gives smaller institutions more of a chance, she says. After the 2021 REF, even though top universities still dominated, many received less of the pot than previously, whereas some newer, less prestigious universities performed strongly. The biggest increase was at Northumbria University in Newcastle, where ‘quality-related’ funding rose from £7 million to £18 million.
For Watermeyer, the whole process is counterproductive, wasting precious resources and creating a competitive, rather than a collaborative, culture that might not tolerate the most creative thinkers. He would like to see it abolished. Hettrick is in two minds, because “the realist in me says it is necessary to explain to the taxpayer what we’re doing with their money”. He says the task now is to do the assessment more cheaply and more effectively.
Other research communities might not agree. As Huijpen points out, “there’s quite a lot of assessments in academic life, there are a lot of moments within a career where you are assessed, when you apply for funding, when you apply for a job”. From her perspective, it’s time to opt for less ranking and more reflection.
As a consultant in orthopaedic surgery at Khoo Teck Puat Hospital, Singapore, I’ve seen first-hand how cultural differences can be overlooked by large language models (LLMs).
Back in 2005, Singapore’s Health Promotion Board introduced categories of body mass index (BMI) tailored specifically for the local population. It highlighted a crucial issue — Asian people face a higher risk of diabetes and cardiovascular diseases at lower BMI scores compared with European and North American populations. Under the board’s guidelines, a BMI of 23 to 27.4 would be classified as ‘overweight’, a lower range than the global standard of 25 to 29.9 set by the World Health Organization (WHO).
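The gap between the two sets of cut-offs is easy to see in code. This is an illustrative sketch using only the overweight ranges quoted above; the labels for the other ranges are my own shorthand, not part of either guideline:

```python
# Overweight cut-offs quoted above: WHO 25.0-29.9 versus the
# Singapore Health Promotion Board's 23.0-27.4. Labels for the
# ranges outside 'overweight' are illustrative only.
WHO_OVERWEIGHT = (25.0, 29.9)
HPB_OVERWEIGHT = (23.0, 27.4)

def classify(bmi: float, overweight: tuple[float, float]) -> str:
    low, high = overweight
    if bmi < low:
        return "below overweight range"
    if bmi <= high:
        return "overweight"
    return "above overweight range"

# A BMI of 24 sits inside conventional limits by the WHO cut-off,
# but counts as overweight under the Singapore guideline.
print(classify(24.0, WHO_OVERWEIGHT))  # below overweight range
print(classify(24.0, HPB_OVERWEIGHT))  # overweight
```

The same number, run through two equally defensible rule sets, produces opposite clinical signals, which is precisely the kind of context an AI system trained mostly on one population will miss.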
I was reviewing recommendations for a person’s health plan generated by an artificial intelligence (AI) system, when I realized that it had categorized the person’s BMI of 24 as being inside conventional limits, disregarding the guidelines we follow in Singapore. It was a stark reminder of how important it is for AI systems to account for diversity.
This is one example of many. Having lived and worked in Malaysia, Singapore, the United Kingdom and the United States, I’ve gained an understanding of how cultural differences can affect the effectiveness of AI-driven systems. Medical terms and other practices that are well understood in one society can be misinterpreted by an AI system if it hasn’t been sufficiently exposed to that culture. Fixing these biases is not just a technical task but a moral responsibility, because it’s essential to develop AI systems that accurately represent the different realities of people around the world.
Identifying blind spots
As the saying goes, you are what you eat, and in the case of generative AI, these programs process vast amounts of data and amplify the patterns present in that information. Language bias occurs because AI models are often trained on data sets dominated by English-language information. This often means that a model will perform better on an English-language task than it will on those in other languages, inadvertently sidelining people whose first language is not English.
Imagine a library filled predominantly with English-language books; a reader seeking information in another language would struggle to find the right material — and so, too, do LLMs. In a 2023 preprint, researchers showed that a popular LLM performed better with English prompts than with prompts in 37 other languages, in which it struggled with accuracy and semantics1.
Artificial-intelligence systems might not reflect important differences between cultures. Credit: Jaap Arriens/NurPhoto/Getty
Gender biases are another particularly pervasive issue in the landscape of LLMs, often reinforcing stereotypes embedded in the underlying data. This can be seen in word embeddings, a technique in which words are represented as vectors, with semantically similar words placed close together. In a 2016 preprint, Tolga Bolukbasi, a computer scientist then at Boston University in Massachusetts, and his colleagues showed how various word embeddings associated the word ‘man’ with ‘computer programmer’ and ‘woman’ with ‘homemaker’, amplifying gender stereotypes through their outputs2,3.
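To make the mechanism concrete, here is a toy sketch with invented three-dimensional vectors — real embeddings are learnt from text and have hundreds of dimensions — showing how cosine similarity can encode exactly this kind of association:

```python
import math

# Toy 'embeddings' invented for illustration. The geometry is the
# point: an occupation vector can sit closer to one gendered word
# than to the other, reproducing a stereotype.
vecs = {
    "man":        [0.9, 0.1, 0.0],
    "woman":      [0.1, 0.9, 0.0],
    "programmer": [0.8, 0.2, 0.1],
    "homemaker":  [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: dot product scaled by both vector lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The biased geometry the 2016 study describes:
assert cosine(vecs["man"], vecs["programmer"]) > cosine(vecs["woman"], vecs["programmer"])
assert cosine(vecs["woman"], vecs["homemaker"]) > cosine(vecs["man"], vecs["homemaker"])
```

Nothing in the arithmetic is biased; the bias lives entirely in where the training data placed the vectors, which is why it resurfaces in any system built on top of them.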
In a 2023 study, researchers prompted four LLMs with a sentence that included a pronoun and two stereotypically gendered occupations. The LLMs were 6.8 times more likely to pick a stereotypically female job when presented with a female pronoun, and 3.4 times more likely to pick a stereotypically male job with a male pronoun4.
Navigating the bias
To ensure that bias doesn’t creep into my work when using LLMs, I adopt several strategies. First, I treat AI outputs as a starting point rather than as the final product. Whenever I use generative AI to assist with research or writing, I always cross-check its results with trusted sources from various perspectives.
In a project from this February that focused on developing AI-generated educational content for the prevention of diabetic neuropathy — a condition in which prolonged high blood-sugar levels cause nerve damage — I consulted peers from various backgrounds to ensure that the material was culturally sensitive and relevant to the diverse population groups in Singapore, including Malay, Chinese and Indian people.
After the AI created an initial draft of the prevention strategies, I shared the content with colleagues from each of these cultural backgrounds. My Malay colleague pointed out that the AI’s recommendations heavily emphasized dietary adjustments common in Western cultures, such as reducing carbohydrate intake, without considering the significance of rice in Malay cuisine. She suggested including alternatives such as reducing portion sizes or incorporating low-glycemic-index rice varieties that align with Malay dietary practices. Meanwhile, a Chinese colleague noted that the AI failed to address the traditional use of herbal medicine and the importance of food therapy in Chinese culture. An Indian colleague highlighted the need to consider vegetarian options and the use of spices such as turmeric, which is commonly thought, in Indian culture, to have anti-inflammatory properties that are beneficial for managing diabetes.
In addition to peer review, I ran a controlled comparison by writing my own set of prevention strategies without AI assistance. This allowed me to directly compare the AI-generated content with my findings to assess whether the AI had accurately captured the cultural intricacies of dietary practices among these groups. The comparison revealed that, although the AI provided general dietary advice, it lacked depth in accommodating cultural preferences from diverse population groups.
By integrating this culturally informed feedback and comparison, I was able to make the AI-generated strategies more inclusive and culturally sensitive. The final result provided practical, culturally relevant advice tailored to the dietary practices of each group, ensuring that the educational material was rigorous, credible and free from the biases that the AI might have introduced.
Despite these challenges, I think that it’s crucial to keep pushing forward. AI, in many ways, mirrors our society — its strengths, biases and limitations. As we develop this technology, society needs to be mindful of its technical capabilities and its impact on people and cultures. Looking ahead, I hope the conversation around AI and bias will continue to grow, incorporating more diverse perspectives and ideas. This is an ongoing journey, full of challenges and opportunities. It requires us to stay committed to making AI more inclusive and representative of the diverse world we live in.
The ‘file-drawer problem’, where findings with null or negative results gather dust and are left unpublished, is well known in science. There has been an overriding perception that studies with positive or significant findings are more important, but this bias can have real-world implications, skewing perceptions of drug efficacies, for example.
Multiple efforts to get negative results published have been put forward or attempted, with some researchers saying that the incentive structures in academia, and the ‘publish or perish’ culture, need to be overturned in order to end this bias.
Studies that try to replicate the findings of published research are hard to come by: it can be difficult to find funders to support them and journals to publish them. And when these papers do get published, it’s not easy to locate them, because they are rarely linked to the original studies.
A database described in a preprint posted in April1 aims to address these issues by hosting replication studies from the social sciences and making them more traceable and discoverable. It was launched as part of the Framework for Open and Reproducible Research Training (FORTT), a community-driven initiative that teaches principles of open science and reproducibility to researchers.
The initiative follows other efforts to improve the accessibility of replication work in science, such as the Institute for Replication, which hosts a database listing studies published in selected economics and politics journals that academics can choose to replicate.
The team behind the FORTT database hopes that it will draw more attention to replication studies, which it argues are a fundamental part of science. The database can be accessed through a Shiny web application, and will soon be available on the FORTT website.
Nature Index spoke to one of the project’s leaders, Lukas Röseler, a metascience researcher and director of the University of Münster’s Center for Open Science in Germany.
Why did you create this database?
We’re trying to make it easier for researchers to make their replication attempts public, because it’s often difficult to publish them, regardless of their outcome.
We also wanted to make it easier to track replication studies. If you’re building on previous research and want to check whether replication studies have already been done, it’s often difficult to find them, partly because journals tend to not link them to the original work.
We started out with psychology, which has been hit hard by the replication crisis, and have branched out to studies in judgement and decision-making, marketing and medicine. We are now looking into other fields to understand how their researchers conduct replication studies and what replication means in those contexts.
Who might want to use the database?
A mentor of mine wrote a textbook on social psychology and said that he had no easy way of screening his 50 pages of references for replication attempts. Now, he can enter his references into our database and check which studies have been replicated.
The database can also be used to determine the effectiveness of certain procedures by tracking the replication history of studies. Nowadays, for instance, academics are expected to pre-register their studies — publishing their research design, hypotheses and analysis plans before conducting the study — and make their data freely available online. We would like to empirically see whether interventions such as these affect how likely a study is to be replicable.
How is the database updated?
It is currently an online spreadsheet, which we created by manually adding the original findings, their replication studies and their outcomes. So far, we have more than 3,300 entries — or replication findings — of just under 1,100 original studies. There are often multiple findings in one study; a replication study might include attempts to replicate four different findings, constituting four entries.
There are hundreds of volunteers who are collecting replications and logging studies on the spreadsheet. You can either just enter a study so that it’s findable, or include both the original study and the replication findings.
We are in contact with teams that conduct a lot of replication research, and we regularly issue calls for people to add their studies. This is a crowdsourced effort and a large proportion of it is based on the FORTT replications and reverses project, which is also crowdsourced. It aims to collate replications and ‘reversal effects’ in social science, in which replication attempts have results in the opposite direction compared with the original.
Do you plan to automate this process?
We are absolutely looking into ways to automate this. For instance, we are working on a machine-readable manuscript template, in which people can enter their manuscript and have it automatically read into the database.
We have code that automatically recognizes DOIs and cross-checks them against all the original studies in the database to look for a match. We are working on turning this into a search engine, but it’s beyond our capabilities and resources at the moment.
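Röseler doesn’t describe the implementation, but DOI recognition of this kind can be sketched with a standard Crossref-style pattern; the DOIs and database entries below are hypothetical:

```python
import re

# A Crossref-style pattern for modern DOIs: '10.' plus a 4-9 digit
# registrant code, a slash, then a suffix. This is a sketch of the
# cross-checking step the interview describes, not the team's code.
DOI_RE = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

def find_matches(text, known_dois):
    """Extract DOI-like strings from text and split them into those
    already in the database and those not yet logged.
    DOIs are case-insensitive, so compare in lower case."""
    found = {m.group(0).lower() for m in DOI_RE.finditer(text)}
    known = {d.lower() for d in known_dois}
    return sorted(found & known), sorted(found - known)

db = ["10.1037/example.123"]  # hypothetical database entries
text = "Original study: doi:10.1037/example.123; see also 10.5555/other.456"
matched, unmatched = find_matches(text, db)
# matched  -> ['10.1037/example.123']
# unmatched -> ['10.5555/other.456']
```

A researcher could run a reference list through such a function to flag which cited studies already have logged replication attempts — the textbook-screening use case described below.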
Does your database provide any data on the replications it hosts?
If you go to our website, there is a replication tracker, where you can see the percentage of studies that were able to replicate original findings, and those that failed to do so.
In a version of the database that we will launch in the coming months, users will be able to choose the criteria by which they judge whether a study successfully replicated the original findings. Right now, it’s all based on how strong the effect sizes — a measure of the relationship between two variables — were in both the original study and the replication attempts, but there are many other criteria and metrics of replication success that we are considering.
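The tracker’s exact rule isn’t published, so treat this as a sketch of one common effect-size convention — same direction, and at least a chosen fraction of the original magnitude — rather than the database’s actual logic:

```python
# One simple effect-size criterion, used here as an assumption: a
# replication 'succeeds' if its effect points the same way as the
# original's and is at least min_fraction of its magnitude. An
# opposite-signed effect is the 'reversal' case mentioned above.
def replicated(original_es: float, replication_es: float,
               min_fraction: float = 0.5) -> bool:
    same_direction = original_es * replication_es > 0
    strong_enough = abs(replication_es) >= min_fraction * abs(original_es)
    return same_direction and strong_enough

print(replicated(0.40, 0.35))   # True: same direction, comparable size
print(replicated(0.40, -0.10))  # False: opposite direction (a reversal)
print(replicated(0.40, 0.10))   # False: same direction but much weaker
```

Making the criterion a user-chosen parameter, as the planned version does, matters because different thresholds or metrics can flip a study between ‘replicated’ and ‘failed to replicate’.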
We’re also planning to launch a peer-reviewed, open-access journal at FORTT to publish replication studies from various disciplines.
This interview has been edited for length and clarity.
Nature Index’s news and supplement content is editorially independent of its publisher, Springer Nature. For more information about Nature Index, see the homepage.
Too often, the potential of a biomedical discovery to tackle disease is not fully realized. Perhaps the economic return is not high enough, or the mindset to look for alternative applications for an idea is missing. The discovery can end up buried in an academic publication or never see the light of day.
So, many biomedical researchers partner with non-governmental organizations (NGOs) to transform their scientific ideas into products, working outside the typical biotechnology or pharmaceutical drug-development process. For these NGOs, collaborating with academics means that the products and technologies can reach the people who need them most. In return, academics get to develop their idea without having to spin off a company or sell it to a big pharmaceutical firm.
Academics with experience of partnering with NGOs make a strong case for these collaborations, and say that they should be more frequent. Nature asked four people who have formed such partnerships for their tips on how to collaborate successfully.
Annette von Delft (third from left) with other core members of the COVID Moonshot collaboration to produce low-cost antiviral drugs. Credit: Matteo Ferla
ANNETTE VON DELFT: Cooperation to battle an invisible enemy
Head of anti-infectives at the Centre for Medicines Discovery at the University of Oxford, UK.
At the height of the COVID-19 pandemic, we all felt the duty to do our part. When the first molecular structure of a protein found on the SARS-CoV-2 virus was released at the beginning of February 2020, some colleagues and I started a worldwide X (formerly Twitter) collaboration to identify molecules that could block infection by the virus. What was just an exchange of ideas soon grew to an open-science consortium of scientists, pharmaceutical research teams and students from around the world, called the COVID Moonshot.
The dream was to create a pill that everyone around the globe could get quickly and affordably. In no time, we had a handful of potential candidate targets for a drug. I had been working on the discovery and development of affordable drugs against viruses, bacteria and fungi long before COVID-19 struck, and I had always encountered a big dilemma: how do you convince venture capitalists that it is worth investing money in something that won’t deliver an economic return? But, with the world battling a viral enemy, we could rely on an inexhaustible wellspring of goodwill, from single researchers to private biotech firms.
A project such as this, born in an open-science collaborative way, could never lead to profit. We couldn’t waste time dealing with intellectual-property rights, so everyone who wanted to get involved was asked to give up any potential royalties: you can’t patent something created from crowdsourcing. Thanks to this global teamwork, we found a series of molecules that could theoretically bind to the main SARS-CoV-2 enzyme, Mpro, and block the virus’s replication machinery.
One candidate then passed further testing and showed antiviral drug potential. During this process, we spent a long time thinking about how to bring a non-patentable compound to the market. The international NGO Drugs for Neglected Diseases initiative (DNDi) was founded in 2003 by seven partners — including the international aid organization Médecins Sans Frontières (also known as Doctors Without Borders), which provided initial funding — in response to frustration over medicines that were ineffective, unsafe, unavailable or unaffordable.
DNDi had the preclinical testing and affordable-medicine expertise that COVID Moonshot wanted. We needed to access large-scale manufacturing and regulatory approvals, and DNDi provided this for us, either directly or through negotiated partnerships with other non-profit organizations or biotechnology firms. For us, it worked as a bridge between researchers and the general public without the need to invest all the money typically associated with drug development. Of course, the costs of producing the drug and running preclinical tests remain. However, NGOs such as DNDi have the expertise and connections to outsource the early discovery and preclinical costs to philanthropic or government funders.
For scientists working on neglected diseases or antimicrobial resistance, areas in which pharmaceutical companies do not necessarily invest because they can’t get their money back, NGOs have picked up the torch. We brought the discovery, and they brought translational expertise to the project. We are now about to start the first human trial with our lead candidate. If all goes well, we could soon have an affordable antiviral for COVID-19 and be better prepared for future pandemics.
When starting collaborations with NGOs, you need to consider that every phase takes much longer, because you need to find alternative funding pathways. For academics who want to partner with an NGO, I would suggest looking for several potential partners. With their research project in mind, scientists should explore how big the organizations are, how quickly the process moves and how much access they have to alternative funding systems. But in my experience, linking up with NGOs is always worthwhile: their ability to reach more people is more comprehensive than that of drug companies, and their charitable values are worthy. At the end of the day, we do science to help people, and NGOs connect scientists with people most in need.
Medical director of No Leprosy Remains, Wim van Brakel (left) administers medicine to reduce leprosy progression in India. Credit: Danish Suhail
WIM VAN BRAKEL: Build long-term relationships to tackle niche diseases
Medical director at No Leprosy Remains in Amsterdam, the Netherlands.
There are still countries and areas where leprosy and other neglected diseases are a heavy burden. If a government adopts new treatments or prevention tactics, everyone wins. This is why disease-focused NGOs, such as No Leprosy Remains (NLR), strive to facilitate close collaborative work between academics and policymakers. Thanks to this cross-collaborative approach, NLR has introduced life-changing interventions worldwide. It has teamed up with academics and policymakers to draft the World Health Organization’s technical guidelines to stop leprosy transmission. Thanks to these, in 2023, the Maldives was able to achieve the goal of no local transmission — taking a step closer to becoming a leprosy-free country. NLR is also working on stronger preventions in India, Brazil, Bangladesh and Nepal.
As a disease specialist, NLR is part of a big network of sister organizations and has offices in several countries and good connections in local governments. NLR offers academics the field knowledge and local liaisons that they might not be able to find in their research institute. For academics in low- and middle-income countries, NLR offers training on research methods and scientific writing, connections to Western research institutions and mentorship for career advancement.
In the academic world, NGOs are a tiny cog in the machine. For small entities like us, it is essential to establish long-term relationships with academic partners. Small NGOs usually work well with the same university partners and set of researchers. In this way, you can develop mutual respect, and know what to expect from each other.
In the future, I think it would benefit NGOs to have more staff with hybrid positions — those that cross over into the academic world. I worked as a physician on the front lines for 17 years, but now I have moved to do more administrative work for NLR. This includes networking with our academic collaborators, which could help us to connect better with the research environment.
Sunday Isiyaku, who moved from academic to NGO-based research, speaks at an event celebrating 70 years of his employer Sightsavers working in Nigeria. Credit: Joy Tarbo/Sightsavers
SUNDAY ISIYAKU: Embrace the researcher-in-NGO career path
Country director for Nigeria and Ghana at Sightsavers in Kaduna, Nigeria.
When people think about academia and NGOs, they see two different career paths, but sometimes they blend. I started my career as a biomedical scientist at the Nigerian Institute of Trypanosomiasis Research in Kaduna. While climbing the academic ladder, I had a chance to collaborate on a few projects funded by the World Health Organization. I got involved with Sightsavers, an NGO focused on preventing avoidable eye conditions and blindness, and fighting the stigma around disabilities in low- and middle-income countries.
Sightsavers is one of the few international NGOs to hold independent research organization status in the United Kingdom, meaning that, similar to universities, it can apply for specific funding to sustain its research. I felt that I could make a difference by using my research background to improve people’s lives. So, I jumped at the opportunity and embraced the researcher-in-NGO career path. What makes Sightsavers different from some NGOs is that research and evidence are at the core of its interventions.
Sightsavers has around 20 research staff, half of whom are employed in institutions in Africa, which is home to the majority of people affected by the eye conditions that Sightsavers addresses. These conditions are mostly a consequence of neglected diseases or treatable infections, but in low- and middle-income countries they are not tackled efficiently or quickly, with devastating consequences for people’s lives. This is when Sightsavers’s intervention is most needed. For example, river blindness is caused by a parasitic worm transmitted through the bite of an infected black fly (Simulium damnosum). In West Africa, it is responsible for a huge number of cases of infection-related blindness. Scientists proved that the drug ivermectin was effective at blocking parasite transmission, but it had never been tested for river blindness in people. Sightsavers partnered with researchers, local communities and organizations in West Africa to test it in people — and it worked. Thanks to Sightsavers’s data, ivermectin mass treatment is now a standard procedure to eliminate this disease from affected countries.
Sightsavers also establishes collaborations with local governments and communities to ensure that scientific discoveries are relevant and implemented in the context of the country or the community. The organization has helped academic institutions to communicate better with communities to encourage people to take preventive treatments for diseases such as river blindness, schistosomiasis and lymphatic filariasis.
Academic institutions and NGOs each have their own strengths and weaknesses. Sightsavers’s strength is understanding how to use research to deliver products that affect human lives. Its strict focus, however, can lead to differences in perspective and goals between the NGO and the academics. Often, researchers are more interested in gathering further knowledge on a disease, and they can lose focus on finding a prompt solution to a problem. An NGO’s goal might be to change a country’s policy regarding disease treatment, because it sees that current policy doesn’t work. In this case, the NGO needs research partners who can provide evidence that would influence such a policy change.
As an academic now working in an NGO, I know very well that for successful programmes, NGOs need to engage with academics in institutions, because their knowledge can help our programme to be the best. We need evidence to ensure we are doing the right thing. Academics ask the tough questions, which are essential to obtaining solid evidence. Joining forces with research staff employed by institutions, as I was when I started with Sightsavers, can help to shorten the distance between these two worlds. Hybrid positions help researchers to think about the applications of their work right from the start and help NGOs to access scientific discoveries early on. Bringing different people with different backgrounds and mindsets to work together is the only way to change people’s lives.
Daniel Fletcher’s research group at the University of California, Berkeley, tested the use of the CellScope diagnostic device with the help of several NGOs. Credit: Adam Lau/Berkeley Engineering
DANIEL FLETCHER: Make it easier to discover NGO funding opportunities
Bioengineer at the University of California, Berkeley, and inventor of the CellScope.
My laboratory focuses on discovering how single molecules come together to build a biological system. And my team does it by developing custom experimental methods to understand how molecules shape the structures and dynamics of a cell.
Once, a group of lab students and postdocs asked, “How is it possible to study pathology in remote places?” Intrigued, we started to make hypotheses and test them. This is how the CellScope was born in 2011. It’s a miniaturized microscope that uses mobile-phone and tablet cameras to collect images. In the beginning, we didn’t have a good sense of how to use it, and I didn’t have the funds to test out the device, so I started looking around for potential collaborators.
Initially, we partnered with local non-profit organizations that were working in ocean conservation to test our device during ocean sampling and educational projects. That gave us great feedback to improve the device’s features, such as customizing software and optics. It also made me think about potential applications.
Next, we worked with the California Academy of Sciences in San Francisco and expanded the device’s use in education. Those NGO partnerships helped a lot in fully developing CellScope as a tool, because the organizations were prepared to invest in the scientific novelty, even if the commercial drive still needed clarification.
Once the lab team understood the full potential, I decided to continue working with NGOs to test the technology for biomedical applications in the populations that would benefit most. I had minimal experience in global health at the time, so I contacted the Bill & Melinda Gates Foundation in Seattle, Washington, an NGO leader in that area. It helped us to connect with researchers in other countries and to do field testing for health and disease-control applications, such as on-site diagnosis of tuberculosis or malaria, and analyses of eye conditions associated with infections in villages far away from main hospitals. The timing between diagnoses and interventions in such cases can really make the difference between life and death.
Through CellScope, I have partnered with several non-profit organizations, both small and large, and I learnt to navigate those partnerships’ positives and the challenges. As a researcher leading a project, you need to think about various ways of supporting it. NGOs are one way, but they should be part of a range of sources of support. NGOs can be great accelerators for projects when your idea matches their interests. But, because NGOs have limited funding, they need to maximize their priorities. So, if your project is not spot on, they will pursue others that make more sense for them.
I wish there were some easier way of learning about the funding opportunities from various NGOs. Most of my NGO partnerships came through word of mouth, rather than through a conventional open-call application. Having some sort of centralized database in which you can find NGOs’ priority areas, funding capabilities and possibly calls for grants would help academics to engage more with those realities.
Despite the challenges, these partnerships have been wonderfully productive for everybody involved. My team has learnt a lot about how a scientific idea can have a much broader range of applications than its original purpose. We learnt how out-of-the-box thinking can really change people’s lives. Communities now receive prompt diagnoses, thanks to what was originally just a curiosity project.
At the Center for Quantum Nanoscience (QNS), nestled in the hilly campus of Seoul’s Ewha Womans University, director of operations, Michelle Randall, shows off the facilities. “This is where we isolate our scanning tunnelling microscopes (STMs) from any vibrations,” she says, pointing to an 80-tonne concrete damper, a mechanism that reduces interfering movements to near zero. Researchers at QNS are using STMs to image and manipulate individual atoms and molecules, chasing breakthroughs akin to last year’s assembly of a device made from single atoms that allows multiple qubits — the fundamental units of quantum information — to be controlled simultaneously (Y. Wang et al. Science 382, 87–92; 2023). The work, done by QNS in collaboration with colleagues in Japan, Spain and the United States, could have applications in quantum computing, sensing and communication.
What gives QNS its edge, says Randall, is the diversity of teams that populate its labs. “Our composition is 50:50, South Korean and international, and we are an English-speaking workplace as a result,” she says. “We invest heavily in building relationships with our domestic scientific community and worldwide,” she adds, pointing to one room with four women — two South Korean, one French and one Iranian — exemplifying the collaborative spirit.
The diversity of the QNS team offers a glimpse of what research looks like in a country that is betting big on international collaboration. For 2024, South Korea has more than tripled its budget for global research and development (R&D) collaboration, committing to 1.8 trillion won (US$1.3 billion), up from 2023’s 500 billion won. The investment, which represents an increase from 1.6% to 6.8% of the government’s overall R&D budget, could see a shift away from using metrics such as university rankings, quantified research outputs and international student and faculty recruitment in favour of boosting ties with leading overseas research institutions in strategic areas. “There’s a huge amount of money that has suddenly been assigned to international research. With this comes many opportunities,” says Meeyoung Cha, scientific director of the Max Planck Institute for Security and Privacy, in Bochum, Germany, who holds joint positions at the Korea Advanced Institute of Science and Technology (KAIST) and the Korean Institute for Basic Science, in Daejeon.
The budget increase is part of the Korean Ministry of Science and ICT’s (MSIT) wider R&D Innovation Plan, announced in November 2023. It includes a new Global R&D Strategy Map, which will guide tailored collaboration strategies with specific countries based on their strengths in 12 critical and emerging technologies, such as semiconductors, artificial intelligence (AI) and quantum science. Industry strengths in 17 technologies related to achieving carbon neutrality and mitigating climate change will also be considered. In addition, MSIT has amended laws to allow overseas research institutions to directly participate in state R&D projects and aims to develop Global R&D Flagship Projects in key areas that will receive prioritized allocation of government funds.
Such moves are designed to refocus South Korea’s R&D, which has become stagnant over the past decade, according to MSIT, despite the country being the world’s second-highest spender on R&D as a percentage of GDP, after Israel. In 2023, South Korea’s legislative national assembly approved a 14.7% cut to the overall 2024 R&D budget, from 31.1 trillion won in 2023. The cuts include shifting some more general funds for universities to a separate budget.
Foreign students line up to submit their applications at a job fair in Busan, South Korea. Credit: YONHAP/EPA-EFE/Shutterstock
“It seems that the term ‘budget cut’ really means redistributing money to more applied projects and international research initiatives,” says Martin Steinegger, a computational biologist at Seoul National University. Steinegger experienced a 15–25% reduction in existing grants, paid annually from the National Research Foundation of Korea, the country’s main funding agency. This forced him to reduce conference travel for his students and use older hardware for research. “I have effectively less money than I did last year, but I can apply to many new things, it seems,” says Steinegger.
Off the back of such policy shifts, becoming the first Asian country to join the European Union’s Horizon Europe programme, the world’s largest research-funding scheme, is a major win for South Korea. Announced in March, the new partnership will drive collaborations between South Korean and European researchers in areas such as quantum technologies, semiconductors and next-generation wireless networks. South Korea is also forging bilateral cooperation agreements across Europe, such as with Denmark on clean-energy technologies and Germany on basic sciences, including the launch of a joint centre with the Max Planck Society, Germany’s flagship basic-research organization, at Yonsei University in Seoul.
Taking on more joint projects with Europe could help to diversify South Korea’s internationally collaborative outputs in the Nature Index. The United States, which has deep historic ties with South Korea dating back to the Korean War in the 1950s, is the country’s most important research partner in natural-sciences output, with a collaborative Share — a measure of joint contribution to research tracked by the Index — of 639.94 in 2023. China forms South Korea’s second-strongest partnership, with a collaborative Share of 300.81, followed by Japan, at 114.88 (see ‘Research ties’).
The number of natural-sciences articles in the Nature Index that have been co-authored by China- and South Korea-based researchers has grown considerably in recent years, up 222% between 2015 and 2023, compared with US–South Korean output, which dropped by 4% over the same period. But South Korean researchers report that collaborations with China are becoming more difficult, particularly in technology areas. According to data from South Korea’s national police agency, of the 78 cases of industrial technology leaks recorded between 2018 and mid-2023, 51 involved leaks to places or people in China. There is now also more oversight of collaborations with China than with other major research partners. “Researchers occasionally receive requests from their institutions or the government asking who is collaborating with China,” says Cha. “They are aware that any collaboration may be monitored, creating a sense of censorship.”
In order to minimize its exposure to any supply-chain disruptions or political risks associated with ongoing US–China tensions, South Korea must look farther afield when establishing research links, says Lee Myung-hwa, who studies policy and innovation at the Science and Technology Policy Institute think tank, in Sejong. “The key is building trust with collaboration partners, which needs to be long-term, stable and maintained without being swayed by policy directions,” she says.
Cha highlights southeast Asia, a region that has long been of strategic and diplomatic interest to South Korea, as a place with untapped potential for joint innovation projects. “For instance, in Indonesia, there’s no governmental institution in charge of AI,” she says, which could open up the possibility of future collaborations around ethical and strategic development of AI technologies.
In 2023, the South Korean government committed to boosting cooperation with southeast Asia in areas including cybersecurity and communications technologies, and with individual nations, such as Vietnam, to help advance digital transition and clean-energy sectors. “Huge collaboration could happen if we work together,” says Cha.
Domestic challenges
With more than 10 million visitors moving between southeast Asian nations and South Korea each year, the region could also be important to South Korea in dealing with its dual demographic challenge: attracting overseas scientists in a country that is traditionally conservative towards immigration, and retaining homegrown talent. Solving these problems is paramount, as South Korea contends with the world’s lowest birthrate, driven by factors such as the rising costs of housing, education and childcare, a highly competitive and demanding work culture, and gender inequality issues, including the biggest gender pay gap among Organisation for Economic Co-operation and Development members. Student numbers are also in steep decline, which is putting some universities at risk of closure. An analysis of 195 Korean universities published by Seoul-based institute Jongro Academy in March showed that 51 had failed to fill their enrolment quotas for 2024. Of those, 43 were located outside the Seoul metropolitan area, accounting for 98% of the total unfilled seats.
To boost numbers, the South Korean Ministry of Education has announced new initiatives, including annual financial support for master’s, doctoral and postdoctoral researchers. These measures, which are part of the overall R&D budget, aim to incentivize mostly local students to continue their careers in research. For foreign students, the ministry wants to attract 300,000 of them by 2027 through its ‘Study Korea 300K Project’. Students will be targeted at events and language centres abroad and science graduates may be offered an easier pathway to permanent residency and South Korean citizenship. Language proficiency requirements for admission will also be reduced. Scholarship programmes are being expanded, including the government-funded Global Korea Scholarship invitation programme, which will increase recipient numbers from 4,543 in 2022 to 6,000 by 2027. The ministry has identified India and Pakistan in particular as important sources of science and engineering talent.
It’s unclear whether efforts to attract international students will bring more of a spotlight to the challenges faced by those who are already in the country. Lewis Nkenyereye, who studies computer and information security at Sejong University in Seoul, expresses concern for the many foreign students who work part-time to satisfy the minimum bank balance requirements of their enrolments. Language barriers and administrative hurdles have led to some of them being deported for not having adequate permits, says Nkenyereye, who is originally from Burundi. “The government is aware that most foreign students have part-time jobs and should adapt its policies to better accommodate their needs,” he says.
Religious and cultural differences also pose difficulties. Muaz Razaq, a student who left Pakistan to pursue his PhD in computer science at Kyungpook National University in Daegu, is involved in a small mosque-reconstruction project next to his university that has ignited strong opposition from segments of the local community. Razaq says he’s heard many stories from other Muslim students across South Korea who describe being taunted by their peers over food choices and who lack designated spaces for practices such as ablution before prayers.
Challenging conditions for foreign students might be contributing to South Korea’s low levels of retention after graduation. According to a 2022 report by the Korea Research Institute for Vocational Education and Training in Seoul, the number of foreign students earning doctorates in South Korea quadrupled between 2012 and 2021. But the proportion of foreign students who returned to their home country after graduation has consistently increased, from 40.9% in 2016 to 62.0% in 2021.
It is hoped that government-funded initiatives such as the Brain Pool programme, which gives doctoral researchers access to up to 300 million won annually for three years, and Brain Pool Plus, which offers outstanding researchers with expertise in core technology fields up to 600 million won annually for up to ten years, can help to attract and retain foreign talent. MSIT also plans to introduce support programmes to help new arrivals settle in and build networks.
Recent updates to visa rules for foreign researchers and students could make it easier for universities to attract overseas talent. In July, the Korean Ministry of Justice, which oversees immigration, greatly expanded the number of universities that are eligible to recruit foreign postgraduate and undergraduate students on D-2-5 research study visas and waived the three-year work-experience requirement for international master’s and PhD holders to obtain E-3 research visas.
New opportunities
The relatively low level of English used at South Korean universities and research institutions is a major hurdle in the country’s drive towards internationalization. The number of university courses taught in English has increased in recent years, but Korean remains the primary language of instruction at many institutions. This affects foreign researchers at all career stages because they often require help from others or full-time assistance to navigate the environment, particularly in administrative matters, says Steinegger, who can manage daily life in Korean, but needs staff to help him with paperwork.
Seoul Robotics, a company that develops AI-powered software for autonomous driving and traffic management, has mandated an English-speaking work environment to attract international talent. Such a culture is unusual in South Korea; although many companies have English-speaking requirements, these are often not enforced, says Evan Thomas, business development manager at Seoul Robotics. “The ability to communicate in English without constant translation and cultural interpretation has been a significant advantage compared to more traditional South Korean companies,” he says.
Cultural attitudes towards foreigners can also hinder long-term retention, says Thomas. “Many South Koreans view foreigners as temporary visitors rather than potential long-term residents, discouraging them from settling in,” he says. A 2023 survey by the Korea Institute of Public Administration, a government-sponsored research institute in Seoul, seems to back this up: fewer than half of the respondents said they accept foreign nationals as members of South Korean society.
Given the shortages of local staff that are being recorded in strategic industries such as semiconductors and AI, it’s a problem that South Korea needs to address. Another report, by the University of Science and Technology in Daejeon and the Korea Industrial Technology Association in Seoul, found that just 24% of 300 South Korean companies surveyed had foreign staff. Many cited a lack of information about foreign students as the reason, suggesting that there is a disconnect between academia and industry regarding graduate careers.
Hong Bui, a student from Vietnam, accepted a postdoctoral position at the Swiss Federal Institute of Technology Zurich in April, after completing her PhD at QNS. Bui cites the limited permanent career opportunities that are available to international researchers in Seoul as one of her reasons for wanting to leave, despite having a positive experience in QNS’s internationally focused environment. “South Korean companies often value overseas experience more than domestic experience, and many workplaces require Korean language proficiency,” she says.
As South Korea devotes record levels of resources to building ties with overseas institutions and attracting foreign researchers and students, its leaders hope that stronger research performance and innovation prowess will follow. But the success of such efforts hinges on the country’s ability to foster a more diverse research ecosystem, with fewer cultural challenges for foreigners to contend with.
“If the barriers are lowered and support is provided for overseas researchers to utilize South Korea’s leading research facilities and equipment, I think South Korea will become an attractive country for conducting research activities,” says Lee.
South Korea’s Share in natural-science journals in the Nature Index is shown alongside its closest competitors in the database. Among these countries, India is the only one to have increased its natural-science output between 2019 and 2023, with a 14.5% jump in adjusted Share between 2022 and 2023 alone.
People power
South Korea has the most researchers in science, technology and innovation roles (full-time equivalent) per million inhabitants — and by a large margin. It is notably the only non-European country in the top 10 by this measure in 2021. Japan, not shown here, is ranked 11th.
Subject strengths
A breakdown of subject contributions to countries’ overall 2023 output in journals tracked by the Nature Index is shown for South Korea and some of its closest competitors. With almost 55% of its output attributed to the physical sciences, South Korea joins India in this group as having a dominant subject that is worth more than half of its total output. France and Switzerland, by comparison, have a more balanced output.
On the up
The fastest rising South Korean institutions for the period 2019 to 2023 have recorded very modest gains in natural-science output in the Index, which could speak to the country’s relatively stable performance in recent years. Samsung Group, the only corporate institution in this list, had the largest percentage increase over the period, at 57.98%. This was from a much smaller adjusted Share of 10.77 in 2019, however, compared with Seoul National University at 174.97.
Most improved
The fastest-rising institutions in four natural-science subjects, and in the natural sciences overall, are shown for the period 2019 to 2023. Institutions are ranked according to change in adjusted Share, which for the Pohang University of Science and Technology was larger in the physical sciences than it was in the natural sciences overall.
Source: Nature Index
Big spender
Although South Korea spends more on its research and development as a proportion of its gross domestic product than most other countries in the world, this does not translate to higher Share in the Nature Index. Its output does seem relatively stable, however; among the selected countries shown below, many recorded a decline in Share (per million people) in natural-science journals over the past five years, but South Korea’s drop was smaller than most.