Tag: Authorship

  • Understanding the horror of genocide

    Thousands of people gather at the Kicukiro College of Technology football pitch to commemorate the 2,000 people who were abandoned by United Nations troops during the 1994 genocide April 5, 2014 in Kigali, Rwanda.

    Much of the research on the genocide against Tutsi communities has neglected the testimonies of survivors. Credit: Chip Somodevilla/Getty

    This month marks 30 years since the start of the 1994 genocide against Rwanda’s Tutsi communities. Around 800,000 Tutsi were killed by armed Hutu militia and citizens over 100 days. Members of the Hutu and Twa communities also died, in what some scholars call the worst atrocity of the late twentieth century.

    This 30th anniversary is a poignant reminder of many things, but perhaps first and foremost of the international community’s failure to intervene and stop the killings. Massacres of Tutsi people had been happening for decades before 1994, but calls for help from inside Rwanda were ignored, with horrific consequences.

    This week, in a News Feature commemorating the anniversary of the atrocity, Nature has spoken to researchers about what has been learnt about the genocide, the consequences for its survivors and its aftermath. Lessons from studying one genocide can apply to many other events involving conflict.

    The 1948 Convention on the Prevention and Punishment of the Crime of Genocide, adopted after the Second World War, defines genocide as “an act committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group”. It is, the convention states, an “odious scourge” that “at all periods of history … has inflicted great losses on humanity”.

    Genocide is incredibly difficult to study. The hardest question of all concerns a genocide’s origins: how wars and violence can escalate to genocidal acts. At the same time, genocide studies is not one discipline. It spans the political and social sciences, anthropology, biology, economics, history, law, medicine, sociology and more. Researchers bring individual disciplinary insights, but must also collaborate. Nature heard from researchers studying peace-building between communities affected by the genocide, and learnt about mental-health approaches that have helped survivors. We also spoke to scientists who have studied how the trauma from the event has marked the DNA of survivors and their children. Intergenerational trauma — trauma relating to the genocide that affects younger generations who did not directly experience it — remains a challenge for mental-health services in Rwanda. But this is a legacy of all atrocities, and one that societies should be prepared for.

    In Rwanda’s case, the genocide nearly wiped out the country’s academic community; until recently, the study of the atrocity had largely been done by researchers from other countries. Rwanda’s scholars have re-established themselves and must be supported so they can lead the study of genocide, political violence and beyond. The country already hosts some of Africa’s notable research institutions, including a centre of the African Institute for Mathematical Sciences in Kigali and the African Medicines Agency, soon to be established in the capital.

    Researchers in African countries face many barriers. They consistently report that international journals are too quick to reject their submissions. Some told Nature that this might be because of a perception that research from low-income nations or countries with limited academic autonomy is of low quality. One exceptional effort that is helping to overcome these barriers is the Research, Policy and Higher Education programme, focused on Rwanda. Launched a decade ago by the Aegis Trust, a charity based in Nottingham, UK, the programme invites Rwandan scholars to submit research proposals; external researchers support them with advice and expertise to get the work published in international venues, such as peer-reviewed journals. The resulting works are collected in a resource called the Genocide Research Hub.

    So far, more than 40 scholars have published dozens of journal articles, book chapters and working papers. Some studies have already influenced Rwandan policy relating to the genocide. For example, Rwandan scholar Munyurangabo Benda, a philosopher of religion at the Queen’s Foundation, an ecumenical college in Birmingham, UK, investigated feelings of guilt among children of Hutu perpetrators born after the genocide. A peace-building project that involved this generation of children grew into a nationwide programme on reconciliation. Benda’s academic research played a part in broadening the programme’s offerings.

    In the immediate aftermath of atrocities, focus is often put on perpetrators, as legal organizations seek convictions and justice. But, in the study of genocide, it is imperative to listen to survivors, to establish their needs and how they can be supported, and also to ensure that their testimonies and experiences are not lost.

    Much of the research on the genocide against the Tutsi has neglected the testimonies of survivors, particularly women, says Noam Schimmel, a scholar of international studies and human rights at the University of California, Berkeley. Survivors need to be given opportunities to share and write about their own perspectives and experiences — whether in literature, as part of research or in journalism — which can help to overcome isolation and marginalization, and to improve their well-being and welfare.

    As atrocities continue to unfold around the world, researchers can learn from Rwanda. Those in positions of responsibility must allow researchers from affected countries to lead where they can, and must elevate the voices of survivors. In doing so, they will bring a deeper level of experience that might allow us to better study and understand these heinous acts. We might still be far from answers — but greater knowledge can only help to shine more light on this darkest of places.

  • Three ways ChatGPT helps me in my academic writing

    Jon Gruda

    For Dritjon Gruda, artificial-intelligence chatbots have been a huge help in scientific writing and peer review. Credit: Vladimira Stavreva-Gruda

    Confession time: I use generative artificial intelligence (AI). Despite the debate over whether chatbots are positive or negative forces in academia, I use these tools almost daily to refine the phrasing in papers that I’ve written, and to seek an alternative assessment of work I’ve been asked to evaluate, as either a reviewer or an editor. AI even helped me to refine this article.

    I study personality and leadership at Católica Porto Business School in Portugal and am an associate editor at Personality and Individual Differences and Psychology of Leaders and Leadership. The value that I derive from generative AI is not from the technology itself blindly churning out text, but from engaging with the tool and using my own expertise to refine what it produces. The dialogue between me and the chatbot both enhances the coherence of my work and, over time, teaches me how to describe complex topics in a simpler way.

    Whether you’re using AI in writing, editing or peer review, here’s how it can do the same for you.

    Polishing academic writing

    Ever heard the property mantra, ‘location, location, location’? In the world of generative AI, it’s ‘context, context, context’.

    Context is king. You can’t expect generative AI — or anything or anyone, for that matter — to provide a meaningful response to a question without it. When you’re using a chatbot to refine a section of your paper for clarity, start by outlining the context. What is your paper about, and what is your main argument? Jot down your ideas in any format — even bullet points will work. Then, present this information to the generative AI of your choice. I typically use ChatGPT, made by OpenAI in San Francisco, California, but for tasks that demand a deep understanding of language nuances, such as analysing search queries or text, I find Gemini, developed by researchers at Google, to be particularly effective. The open-source large language models made by Mistral AI, based in Paris, are ideal when you’re working offline but still need assistance from a chatbot.

    Regardless of which generative-AI tool you choose, the key to success lies in providing precise instructions. The clearer you are, the better. For example, you might write: “I’m writing a paper on [topic] for a leading [discipline] academic journal. What I tried to say in the following section is [specific point]. Please rephrase it for clarity, coherence and conciseness, ensuring each paragraph flows into the next. Remove jargon. Use a professional tone.” You can use the same technique again later on, to clarify your responses to reviewer comments.
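
    If you prefer to script this step rather than paste text into a chat window, the same template can be sent through a chatbot’s API. Below is a minimal, illustrative Python sketch using OpenAI’s chat-completions client; the model name, topic and draft text are placeholder assumptions, not details from this article.

    # Illustrative sketch only: send the rephrasing template above to a
    # chatbot through the OpenAI Python client (openai >= 1.0). The model
    # name, topic and draft text are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    draft_text = "Our findings suggests that..."  # the section to polish

    prompt = (
        "I'm writing a paper on coral bleaching for a leading ecology "
        "academic journal. What I tried to say in the following section "
        "is that ocean warming is the main driver. Please rephrase it "
        "for clarity, coherence and conciseness, ensuring each paragraph "
        "flows into the next. Remove jargon. Use a professional tone.\n\n"
        + draft_text
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use any chat model you can access
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)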

    Remember, the chatbot’s first reply might not be perfect — it’s a collaborative and iterative process. You might need to refine your instructions or add more information, much as you would when discussing a concept with a colleague. It’s the interaction that improves the results. If something doesn’t quite hit the mark, don’t hesitate to say, “This isn’t quite what I meant. Let’s adjust this part.” Or you can commend its improvements: “This is much clearer, but let’s tweak the ending for a stronger transition to the next section.”

    This approach can transform a challenging task into a manageable one, filling the page with insights you might not have fully gleaned on your own. It’s like having a conversation that opens new perspectives, making generative AI a collaborative partner in the creative process of developing and refining ideas. But importantly, you are using the AI as a sounding board: it is not writing your document for you; nor is it reviewing manuscripts.

    Elevating peer review

    Generative AI can be a valuable tool in the peer-review process. After thoroughly reading a manuscript, summarize key points and areas for review. Then, use the AI to help organize and articulate your feedback (without directly inputting or uploading the manuscript’s text, thus avoiding privacy concerns). For example, you might instruct the AI: “Assume you’re an expert and seasoned scholar with 20+ years of academic experience in [field]. On the basis of my summary of a paper in [field], where the main focus is on [general topic], provide a detailed review of this paper, in the following order: 1) briefly discuss its core content; 2) identify its limitations; and 3) explain the significance of each limitation in order of importance. Maintain a concise and professional tone throughout.”
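
    As one hypothetical way to script that instruction (the article itself shows only the prompt), the reviewer persona maps naturally onto a ‘system’ message, with your own summary of the manuscript in the ‘user’ message; the field, topic and summary below are placeholder assumptions.

    # Illustrative sketch: the persona goes in the system message and your
    # own summary in the user message, so no manuscript text is uploaded.
    # Field, topic and summary are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI()

    my_summary = "The paper tests X in 40 managers; there is no control group..."

    persona = (
        "Assume you're an expert and seasoned scholar with 20+ years of "
        "academic experience in organizational psychology."
    )
    request = (
        "On the basis of my summary of a paper in organizational "
        "psychology, where the main focus is on leadership, provide a "
        "detailed review of this paper, in the following order: 1) briefly "
        "discuss its core content; 2) identify its limitations; and 3) "
        "explain the significance of each limitation in order of "
        "importance. Maintain a concise and professional tone throughout."
        "\n\n" + my_summary
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": request},
        ],
    )
    print(response.choices[0].message.content)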

    I’ve found that AI partnerships can be incredibly enriching; the tools often offer perspectives I hadn’t considered. For instance, ChatGPT excels at explaining and justifying the reasons behind specific limitations that I had identified in my review, which helps me to grasp the broader implications of the study’s contribution. If I identify methodological limitations, ChatGPT can elaborate on these in detail and suggest ways to overcome them in a revision. This feedback often helps me to connect the dots between the limitations and their collective impact on the paper’s overall contribution. Occasionally, however, its suggestions are off-base, far-fetched, irrelevant or simply wrong. And that is why the final responsibility for the review always remains with you. A reviewer must be able to distinguish between what is factual and what is not, and no chatbot can reliably do that.

    Optimizing editorial feedback

    The final area in which I benefit from using chatbots is in my role as a journal editor. Providing constructive editorial feedback to authors can be challenging, especially when you oversee several manuscripts every week. Having personally received countless pieces of unhelpful, non-specific feedback — such as, “After careful consideration, we have decided not to proceed with your manuscript” — I recognize the importance of clear and constructive communication. ChatGPT has become indispensable in this process, helping me to craft precise, empathetic and actionable feedback without replacing human editorial decisions.

    For instance, after evaluating a paper and noting its pros and cons, I might feed these into ChatGPT and get it to draft a suitable letter: “On the basis of these notes, draft a letter to the author. Highlight the manuscript’s key issues and clearly explain why the manuscript, despite its interesting topic, might not provide a substantial enough advancement to merit publication. Avoid jargon. Be direct. Maintain a professional and respectful tone throughout.” Again, it might take a few iterations to get the tone and content just right.

    I’ve found that this approach both enhances the quality of my feedback and helps to guarantee that I convey my thoughts supportively. The result is a more positive and productive dialogue between editors and authors.

    There is no doubt that generative AI presents challenges to the scientific community. But it can also enhance the quality of our work. These tools can bolster our capabilities in writing, reviewing and editing. They preserve the essence of scientific inquiry — curiosity, critical thinking and innovation — while improving how we communicate our research.

    Considering the benefits, what are you waiting for?

  • How generative AI aids in accessibility

    Close-up of a smartphone screen with a thumb hovering over the ChatGPT app icon

    Tools such as ChatGPT can level the field for scientists who are English-language learners. Credit: Alamy

    In 2015, Hana Kang experienced a traumatic injury that damaged the left hemisphere of her brain, disrupting her facility for language and ability to process abstract thoughts. She spent the next six years rebuilding her memory, recovering basic mathematics skills and relearning Korean, Japanese and English. In 2022, she returned to finish her bachelor’s degree in chemical biology at the University of California, Berkeley.

    Today, Kang works as a junior specialist at the university’s Center for Genetically Encoded Materials. She uses mobility aids and an oxygen concentrator to manage her chronic pain — physical tools that are essential to her well-being. But no less meaningful are the generative artificial intelligence (GAI) programs she turns to each day to manage her time, interact with peers and conduct research. Kang struggles to read social cues and uses chatbots to play out hypothetical conversations. These tools also help her on days when fatigue clouds her thinking — by transcribing and summarizing recordings of lectures she attends, gauging tone and grammar, and polishing her code. “Without these tools, I’d be very lost, and I don’t think I could have done what I’ve managed to do,” she says.

    Artificial intelligence (AI) tools — including chatbots such as ChatGPT, image generators such as Midjourney and DALL-E, and coding assistants such as Copilot — have arrived in force, injecting AI into everything from drafting the simplest grocery list to writing complex computer code. Academics remain divided over whether such tools can be used ethically, however, and in a rush to control them, some institutions have curtailed or completely banned the use of GAI. But for scientists who identify as disabled or neurodivergent, or for whom English is a second language, these tools can help to overcome professional hurdles that disproportionately affect marginalized members of the academic community.

    “Everybody’s talking about how to regulate AI, and there’s a concern that the people deciding these guidelines aren’t thinking about under-represented individuals,” says Chrystal Starbird, a structural biologist at the University of North Carolina at Chapel Hill. She recently turned her attention to how GAI can support diversity, equity and inclusion. “We have to make sure we’re not acting from a place of fear, and that we’re considering how the whole community might use and benefit from these tools.”

    Friend or foe?

    Shortly after OpenAI in San Francisco, California, released ChatGPT in late 2022, primary and secondary schools around the United States started banning chatbots amid fears of plagiarism and cheating. Universities worldwide soon followed suit, including institutions in France, Australia, India, China, the United Kingdom and the United States. Ayesha Pusey, a mental-health and neurodivergence specialist at a UK disability-services organization, learnt that some of her students were facing disciplinary action for using GAI. Pusey, who identifies as autistic, dyslexic and otherwise neurodivergent, uses these programs herself and says that although they can be used to cheat, they’re also invaluable for structuring her life. “I’ve had a lot of success just budgeting my time, down to the recipes I cook for myself.”

    Indeed, using chatbots as a kind of digital assistant has been game-changing for many scientists with chronic illnesses or disabilities or who identify as neurodivergent. Collectively, members of these groups have long shared experiences of being ignored (see Nature Rev. Chem. 7, 815–816; 2023) by an academic system that prioritizes efficiency — stories that are now backed by data (see go.nature.com/3vuch31).

    For those who struggle with racing thoughts, it can be challenging to settle the mind when working. Tigist Tamir, a postdoctoral researcher at the Massachusetts Institute of Technology in Cambridge, has attention-deficit hyperactivity disorder, and uses chatbots — including a program called GoblinTools, developed for people who are neurodivergent — to turn that inner chatter into actionable tasks and cohesive narratives. “Whether I’m reading, writing or just making to-do lists, it’s very difficult for me to figure out what I want to say. One thing that helps is to just do a brain dump and use AI to create a boiled-down version,” she says, adding: “I feel fortunate that I’m in this era where these tools exist.”

    By contrast, people including Pusey and Kang are more likely to struggle when faced with a blank page, and find chatbots useful for creating outlines for their writing tasks. Both say they sometimes feel that their writing is stilted or their narrative thread is muddled, and value the peace of mind that AI gives them by checking their work for tone and flow.

    Four different AI-generated images based on the same quote from a book describing a scene of a house with a dirt yard in the clearing of a wood

    An AI-generated visualization of a woodland clearing described in the novel I Am Charlotte Simmons by Tom Wolfe. Credit: Kate Glazko, generated using Midjourney

    The usefulness of these tools extends beyond writing. Image generators such as OpenAI’s DALL-E allow Kate Glazko, a doctoral student in computer science at the University of Washington in Seattle, to navigate her aphantasia — the inability to visualize. When Glazko encounters a description in a book, she can enter the text into a program to create a representative image. (In February, OpenAI also announced Sora, which creates videos from text.) “Being able to read a book and see a visual output has made reading a transformative experience,” she says, adding that these programs also help people who cannot use a pencil or mouse to produce images. “It just creates a way to quickly participate in the design process.”
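
    For readers curious what that text-to-image step can look like in code, here is a hypothetical sketch using OpenAI’s image-generation endpoint; the passage, model name and image size are placeholder assumptions, and the article itself describes no code.

    # Illustrative sketch: turn a book passage into a representative image
    # with OpenAI's image-generation endpoint. The passage, model name and
    # size are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI()

    passage = "A small house with a dirt yard stood in a clearing of the wood."

    result = client.images.generate(
        model="dall-e-3",  # placeholder; any image model would do
        prompt=f"A faithful illustration of this scene: {passage}",
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # URL of the generated image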

    Levelling the field

    Academia can also be a hostile place for scientists who are English-language learners. They often spend more time reading, writing and preparing English-language presentations than do those for whom English is their first language1, and they might be less inclined to attend or speak at conferences conducted in English. They are also less likely than fluent English speakers to be perceived as knowledgeable2 by colleagues, and journals are more likely to reject their papers (see Nature 620, 931; 2023).

    Daishi Fujita, a chemist at Kyoto University in Japan, was educated in Japanese. Before GAI, Fujita says, “My colleagues and I would often say how we wished we could read papers in our mother tongue.” Now, they can use ChatPDF — a chatbot that answers users’ questions about the contents of a PDF file — alongside speech recognition and translation tools such as Whisper and DeepL to smooth the reading process. Particularly for literature searches or when researching unfamiliar topics, Fujita uses GAI programs to define words in unfamiliar fields and to quickly gauge whether a paper might be helpful, saving hours of work.
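
    As a hypothetical sketch of such a pipeline (the article names the tools but shows no code), the open-source whisper package can transcribe an English-language talk and the deepl package can translate the transcript; the file name, model size and target language below are placeholder assumptions.

    # Illustrative sketch: transcribe an English-language talk with the
    # open-source whisper package, then translate the transcript with
    # DeepL. File name, model size and target language are placeholder
    # assumptions.
    import os

    import deepl
    import whisper

    model = whisper.load_model("base")        # small, CPU-friendly model
    result = model.transcribe("seminar.mp3")  # placeholder audio file
    english_text = result["text"]

    translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])
    japanese = translator.translate_text(english_text, target_lang="JA")
    print(japanese.text)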

    Generative AI can also be useful for structuring professional communications, allowing English-language learners to worry less over how their words might be perceived. María Mercedes Hincapié-Otero, a research assistant at the University of Helsinki who grew up speaking Spanish in Colombia, relies on GAI not just to structure and proof research papers, but also to draft e-mails and job applications. Passing her text through ChatGPT to check grammar and tone “helps make things a little more fair, as people like me often need to put more time and energy into producing writing at the required level”, Hincapié-Otero says. “I might ask someone to check, but if there’s no one available at the time, this becomes a great alternative.”

    Similarly, Fujita has started using chatbots to help to structure and proofread his peer-review comments. Peer review is already more laborious for scientists who are English-language learners, Fujita says, but because of the small size of his field, there’s also the risk that he could be identified by his writing style. “As a native speaker, you can feel when a comment is written by a non-native speaker,” he explains.

    Towards a better world

    As much as GAI has been a boon for accessibility, it can also perpetuate existing biases. Most chatbots are trained on text from the Internet, which is predominantly written by white, neurotypical men, and chatbot outputs mirror that language. Kieran Rose, an autism advocate based in the United Kingdom, says that for this reason, he never uses AI to change his style of writing. “I absolutely see the usefulness of AI,” he says, but “I don’t apologize for how I communicate”.

    Jennifer Mankoff, a computer scientist at the University of Washington, together with Glazko and other researchers, investigated the potential risks in a 2023 study3 in which scientists with disabilities or chronic illnesses tested GAI tools. Mankoff, who has Lyme disease and often experiences fatigue and brain fog, says that chatbots have proved helpful for tackling tedious tasks, such as collating a bibliography. But she and her co-authors also flagged instances in which chatbots returned ableist tropes, such as ChatGPT misrepresenting the findings of a paper to suggest that researchers speak only to caregivers and not to those receiving care. One co-author struggled to generate accurate images of people with disabilities: the results included disembodied hands and prosthetic legs. And although GAI programs can parrot rules for creating accessible imagery — such as providing the best colours for graphics that can be read by people with visual impairments — they often cannot apply them when creating content.

    Claire Malone sitting at her home computer

    Claire Malone uses AI for dictation. Credit: Claire Malone

    That said, GAI can also bring joy to people’s lives. Speaking to Nature, scientists shared stories of using the software to create knitting patterns, recipes, poetry and art. That might seem irrelevant to academic research, but creativity is a crucial part of innovation, Mankoff says. “Particularly for creative tasks — ideation, exploration, creating throwaway things as part of the creative process — accessibility tools don’t have all of the capabilities we would want,” she says. “But GAI really opens the door for people with disabilities to engage in this space where interesting advancements happen.”

    Claire Malone, a physicist turned science communicator based in London, is working on a science-fiction novel and uses AI to transcribe her thoughts through dictation — something she couldn’t do even a year ago. Malone has mobility, dexterity and speech conditions because of cerebral palsy, but in 2022, she discovered an AI tool called Voiceitt that transcribes atypical speech and integrates with ChatGPT. Whereas before she could type at six words per minute, “if I dictate, I can write at the pace that I speak”, she says, adding that the tool has been “transformative” in her work and personal life. In a LinkedIn post (see go.nature.com/3ixrynv), Malone shared how she can now get away from her desk and dictate text whenever inspiration strikes.

    As for Kang, she’s started using GAI to re-engage with her creative and social outlets. Before her accident, Kang often wrote fiction and graphic novels, and she has started to do so again using ChatGPT and image generators. She’s also rebuilding her social life by hosting house parties and using ChatGPT to generate conversation topics and even jokes. Using chatbots to inject humour back into her relationships has helped her to reconnect with friends and break the ice with strangers, she says. “Humour feels like such an unimportant thing when you’re trying to rebuild a life, but if you can afford to be funny, it feels like you’ve succeeded.”

  • Nature is committed to diversifying its journalistic sources

    Italian astronaut Samantha Cristoforetti at a press conference for the World Day for the Elimination of Violence against Women.

    Italian astronaut Samantha Cristoforetti was interviewed by Nature’s Careers team in 2023. Credit: Massimo Di Vita/Mondadori Portfolio/Getty

    How can Nature’s journalists reach out to the broadest possible set of scientists and research-associated professionals in our journalism? That’s the question at the heart of our three-year effort to track the diversity of the sources interviewed in the journal’s News, Features and Careers articles, and in audio and video content.

    Journalism is a mirror of the community in which it exists — as communities and societies change, journalistic practice and content have to keep up, both to stay relevant and to reflect the needs and priorities of audiences accurately. That’s why, in April 2021, Nature’s journalism teams began recording three characteristics of diversity for their written, audio and video content: the pronouns of the people interviewed, their geographical location and their career stage.

    Men dominate the senior rungs of science and, historically, scientists and institutions in North America and Europe have dominated scientific publishing. Both trends are starting to change, albeit at different speeds.

    We published an initial set of statistics last February, covering the period from 1 April 2021 to 31 January 2023. Here, we provide an update for 1 February 2023 to 31 January 2024 (see ‘Diversity in Nature’s journalism’).

    DIVERSITY IN NATURE’S JOURNALISM. Graphic breakdown of 5,492 people quoted or paraphrased in Nature’s journalistic content.

    Our previous analysis of 1,241 written articles, podcasts and video content revealed that 59.6% of sources quoted or paraphrased used he/him pronouns; 76.6% were from North America or Europe; and 67.9% were established in their careers.

    For the 862 journalistic pieces in the current analysis, Nature’s staff journalists and freelance writers interviewed 3,679 sources. Of these, 3,569 (97%) provided their pronouns. These broke down into 2,147 sources (60.2%) who used he/him pronouns, 1,401 (39.3%) who used she/her and 21 (0.6%) who had they/them or other pronouns. These ratios are broadly unchanged from our earlier data.

    In total, 3,635 sources gave their geographical location. Of those, 2,865 (78.8%) were based in either North America or Europe, and 770 (21.2%) in the rest of the world. That represents a decrease in regional diversity compared with our previous analysis, which showed that 23.4% of sources were outside North America and Europe.

    Finally, when it comes to career stage, 3,478 sources provided data. Of these, 2,158 (62%) identified as established in their careers; this group includes academics such as professors and researchers with tenure, as well as non-academics in senior roles. Some 18.8% of sources fell into the ‘early career’ category, including graduate students, postdocs and non-tenured faculty members, compared with 19.6% previously. Around 19.1% fell into the ‘other’ category, which includes people in non-academic environments, such as industry, campaign organizations and policy. This group’s share in Nature’s journalistic content has increased from 12.5%.

    There are some caveats to our analysis. These data were gathered by Nature’s journalism teams in North America, Europe and the Asia–Pacific region. They do not include journalistic content commissioned by our other offices. Nor do they include content written by external authors, such as World Views and Careers columns. Furthermore, the results have not been tested for statistical significance.

    Still, the results provide a good overview for a large proportion of Nature’s journalism. We realize that reporting our findings is only the first step towards improving the diversity of our sources. Nature’s journalism teams are currently expanding their networks and are also looking at best practice in media and publishing industries.

    Diverse sources produce stronger journalism — and better represent today’s global scientific community. The shape and priorities of world science are changing, and we must adapt to reflect those changing realities.

  • Peer-replication model aims to address science’s ‘reproducibility crisis’

    A group of three female technicians discuss work in a laboratory while wearing white lab coats.

    An independent team could replicate select experiments in a paper before publication, to help catch errors and poor methodology. Credit: SolStock/Getty

    Could the replication crisis in scientific literature be addressed by having scientists independently attempt to reproduce their peers’ key experiments during the publication process? And would teams be incentivized to do so by having the opportunity to report their findings in a citable paper, to be published alongside the original study?

    These are questions being asked by two researchers who say that a formal peer-replication model could greatly benefit the scientific community.

    Anders Rehfeld, a researcher in human sperm physiology at Copenhagen University Hospital, began considering alternatives to standard peer review after encountering a published study that could not be replicated in his laboratory. Rehfeld’s experiments1 revealed that the original paper was flawed, but he found it very difficult to publish the findings and correct the scientific record.

    “I sent my data to the original journal, and they didn’t care at all,” Rehfeld says. “It was very hard to get it published somewhere where you thought the reader of the original paper would find it.”

    The issues that Rehfeld encountered could have been avoided if the original work had been replicated by others before publication, he argues. “If a reviewer had tried one simple experiment in their own lab, they could have seen that the core hypothesis of the paper was wrong.”

    Rehfeld collaborated with Samuel Lord, a fluorescence-microscopy specialist at the University of California, San Francisco, to devise a new peer-replication model.

    In a white paper detailing the process2, Rehfeld, Lord and their colleagues describe how journal editors could invite peers to attempt to replicate select experiments of submitted or accepted papers by authors who have opted in. In the field of cell biology, for example, that might involve replicating a western blot, a technique used to detect proteins, or an RNA-interference experiment that tests the function of a certain gene. “Things that would take days or weeks, but not months, to do” would be replicated, Lord says.

    The model is designed to incentivize all parties to participate. Peer replicators — unlike peer reviewers — would gain a citable publication, and the authors of the original paper would benefit from having their findings confirmed. Early-career faculty members at mainly undergraduate universities could be a good source of replicators: in addition to gaining citable replication reports to list on their CVs, they would get experience in performing new techniques in consultation with the original research team.

    Rehfeld and Lord are discussing their idea with potential funders and journal editors, with the goal of running a pilot programme this year.

    “I think most scientists would agree that some sort of certification process to indicate that a paper’s results are reproducible would benefit the scientific literature,” says Eric Sawey, executive editor of the journal Life Science Alliance, who plans to bring the idea to the publisher of his journal. “I think it would be a good look for any journal that would participate.”

    Who pays?

    Sawey says there are two key questions about the peer-replication model: who will pay for it, and who will find the labs to do the reproducibility tests? “It’s hard enough to find referees for peer review, so I can’t imagine cold e-mailing people, asking them to repeat the paper,” he says. Independent peer-review organizations, such as ASAPbio and Review Commons, might curate a list of interested labs, and could even decide which experiments will be replicated.

    Lord says that having a third party organize the replication efforts would be great, and adds that funding “is a huge challenge”. According to the model, funding agencies and research foundations would ideally establish a new category of small grants devoted to peer replication. “It could also be covered by scientific societies, or publication fees,” Rehfeld says.

    It’s also important for journals to consider what happens when findings can’t be replicated. “If authors opt in, you’d like to think they’re quite confident that the work is reproducible,” says Sawey. “Ideally, what would come out of the process is an improved methods or protocols section, which ultimately allows the replicating lab to reproduce the work.”

    Most important, says Rehfeld, is ensuring that the peer-replication reports are published, irrespective of the outcome. If replication fails, then the journal and original authors would choose what to do with the paper. If an editor were to decide that the original manuscript was seriously undermined, for example, they could stop it from being published, or retract it. Alternatively, they could publish the two reports together, and leave the readers to judge. “I could imagine peer replication not necessarily as an additional ‘gatekeeper’ used to reject manuscripts, but as additional context for readers alongside the original paper,” says Lord.

    A difficult but worthwhile pursuit

    Attempting to replicate others’ work can be a challenging, contentious undertaking, says Rick Danheiser, editor-in-chief of Organic Syntheses, an open-access chemistry journal in which all papers are checked for replicability by a member of the editorial board before publication. Even for research from a well-resourced, highly esteemed lab, serious problems can be uncovered during reproducibility checks, Danheiser says.

    Replicability in a field such as synthetic organic chemistry — in which the identity and purity of every component in a reaction flask should already be known — is already challenging enough, so the variables at play in some areas of biology and other fields could pose a whole new level of difficulty, says Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York, and co-founder of the bioRxiv and medRxiv preprint servers. “But just because it’s hard, doesn’t mean there might not be cases where peer replication would be helpful.”

    The growing use of preprints, which decouple research dissemination from evaluation, allows some freedom to rethink peer evaluation, Sever adds. “I don’t think it could be universal, but the idea of replication being a formal part of evaluating at least some work seems like a good idea to me.”

    An experiment to test a different peer-replication model in the social sciences is currently under way, says Anna Dreber Almenberg, who studies behavioural and experimental economics at the Stockholm School of Economics. Dreber is a board member of the Institute for Replication (I4R), an organization led by Abel Brodeur at the University of Ottawa, which works to systematically reproduce and replicate research findings published in leading journals. In January, I4R began an ongoing partnership with Nature Human Behaviour to attempt computational reproduction of the data and findings of as many studies published from 2023 onwards as possible. Replication attempts from the first 18 months of the project will be gathered into a ‘meta-paper’ that will go through peer review and be considered for publication in the journal.

    “It’s exciting to see how people from completely different research fields are working on related things, testing different policies to find out what works,” says Dreber. “That’s how I think we will solve this problem.”

  • Numbers highlight US dominance in clinical research

    The United States leads all countries in health-sciences output in the Nature Index, with a Share of almost 8,500, higher than that of the next 10 countries combined. As a result, US institutions feature prominently among the leading research organizations for the subject, with 30 of the top 50 based there.

    The country’s dominance means that it comes top for Share in all but seven of the journals tracked by the Nature Index in the subject. This includes large general journals such as Nature Communications and specialist medical publications such as The New England Journal of Medicine. PLOS Medicine and Gut are two examples where authors based elsewhere (the United Kingdom and China) made the largest contribution.

    Proportion bar showing the leading five countries' Share and percentage of their contribution to health-sciences articles in 6 journals

    Source: Nature Index. Data analysis by Aayush Kagathra. Infographic by Simon Baker, Bec Crew and Tanner Maxwell.

    The United States is the clear frontrunner among the leading five countries for health-sciences research, with a Share almost four times that of second-placed China. The United Kingdom is third, with a Share of almost 1,500, a higher placing than its fourth position overall in the Nature Index.

    Bar graph showing the leading countries in health-sciences output by Share in 2022-23 in the Nature Index

    Out of the top 25 countries for health-sciences articles in the Nature Index, five nations have a Share that makes up at least 29% of their overall footprint in the database across all subjects. Denmark, whose research is boosted by the success of companies such as Novo Nordisk, has the highest ratio in this regard at almost 40%.

    Bar graph showing five of 25 countries with the highest proportion of health-sciences output in the Nature Index

    As Harvard University, in Cambridge, Massachusetts, is the leading institution for high-quality health-sciences research, its involvement in the top institutional partnership in the field is no surprise. But its dominance does not extend to all the other leading collaborations, some of which involve institutions outside the United States.

    Bar graph showing the leading global institutional collaborations in health sciences in the Nature Index for 2022-23

    Harvard University in Cambridge, Massachusetts, the leading academic institution for health-sciences output in the Nature Index, has a Share more than 600 higher than that of any other top institution. Compared with Harvard, most of the leading institutions also have a lower proportion of their overall Nature Index output in health sciences.

    The University of Toronto in Canada and Johns Hopkins University in Baltimore, Maryland, are the only other academic institutions with a health-sciences Share of over 200. They also have a relatively strong focus on health sciences, with over 35% of their overall Nature Index research output in the subject area.

    Scatter plot showing selected institutions' Share in health sciences vs their health-science article contribution to overall Share in the Nature Index for 2022-23

  • China has a list of suspect journals and it’s just been updated

    A deputy to the 13th National People's Congress reads at the library of University of Science and Technology Liaoning in Anshan.

    The National Science Library of the Chinese Academy of Sciences in Beijing. Credit: Yang Qing/Imago via Alamy

    China has updated its list of journals that are deemed to be untrustworthy, predatory or not serving the Chinese research community’s interests. Called the Early Warning Journal List, the latest edition, published last month, includes 24 journals from about a dozen publishers. For the first time, it flags journals that exhibit misconduct called citation manipulation, in which authors try to inflate their citation counts.

    Yang Liying studies scholarly literature at the National Science Library, Chinese Academy of Sciences, in Beijing. She leads a team of about 20 researchers who produce the annual list, which was launched in 2020 and relies on insights from the global research community and analysis of bibliometric data.

    The list is becoming increasingly influential. It is referenced in notices sent out by Chinese ministries to address academic misconduct, and is widely shared on institutional websites across the country. Journals included in the list typically see submissions from Chinese authors drop. This is the first year the team has revised its method for developing the list; Yang speaks to Nature about the process, and what has changed.

    How do you go about creating the list every year?

    We start by collecting feedback from Chinese researchers and administrators, and we follow global discussions on new forms of misconduct to determine the problems to focus on. In January, we analyse raw data from the science-citation database Web of Science, provided by the publishing-analytics firm Clarivate, based in London, and prepare a preliminary list of journals. We share this with relevant publishers, and explain why their journals could end up on the list.

    Sometimes publishers give us feedback and make a case against including their journal. If their response is reasonable, we will remove it. We appreciate suggestions to improve our work. We never see the journal list as a perfect one. This year, discussions with publishers cut the list from around 50 journals down to 24.

    Portrait of Liying Yang.

    Yang Liying studies scholarly literature at the National Science Library and manages a team of 20 to put together the Early Warning Journal List. Credit: Yang Liying

    What changes did you make this year?

    In previous years, journals were categorized as being high, medium or low risk. This year, we didn’t report risk levels because we removed the low risk category, and we also realized that Chinese researchers ignore the risk categories and simply avoid journals on the list altogether. Instead, we provided an explanation of why the journal is on the list.

    In previous years, we included journals with publication numbers that increased very rapidly. For example, if a journal published 1,000 articles one year and then 5,000 the next year, our initial logic was that it would be hard for these journals to maintain their quality-control procedures. We have removed this criterion this year. The shift towards open access has meant that it is possible for journals to receive a large number of manuscripts, and therefore rapidly increase their article numbers. We don’t want to disturb this natural process decided by the market.

    You also introduced journals with abnormal patterns of citation. Why?

    We noticed that there has been a lot of discussion on the subject among researchers around the world. It’s hard for us to say whether the problem comes from the journals or from the authors themselves. Sometimes groups of authors agree to this citation manipulation mutually, or they use paper mills, which produce fake research papers. We identify these journals by looking for trends in citation data provided by Clarivate — for example, journals in which manuscript references are highly skewed to one journal issue or articles authored by a few researchers. Next year, we plan to investigate new forms of citation manipulation.

    Our work seems to have an impact on publishers. Many publishers have thanked us for alerting them to the issues in their journals, and some have initiated their own investigations. One example from this year is the open-access publisher MDPI, based in Basel, Switzerland, which we informed that four of its journals would be included in our list because of citation manipulation. Perhaps it is unrelated, but on 13 February, MDPI sent out a notice that it was looking into potential reviewer misconduct involving unethical citation practices in 23 of its journals.

    You also flag journals that publish a high proportion of papers from Chinese researchers. Why is this a concern?

    This is not a criterion we use on its own. These journals publish — sometimes almost exclusively — articles by Chinese researchers, charge unreasonably high article processing fees and have a low citation impact. From a Chinese perspective, this is a concern because we are a developing country and want to make good use of our research funding to publish our work in truly international journals to contribute to global science. If scientists publish in journals where almost all the manuscripts come from Chinese researchers, our administrators will suggest that instead the work should be submitted to a local journal. That way, Chinese researchers can read it and learn from it quickly and don’t need to pay so much to publish it. This is a challenge that the Chinese research community has been confronting in recent years.

    How do you determine whether a journal has a paper-mill problem?

    My team collects information posted on social media as well as websites such as PubPeer, where users discuss published articles, and the research-integrity blog For Better Science. We currently don’t do the image or text checks ourselves, but we might start to do so later.

    My team has also created an online database of questionable articles called Amend, which researchers can access. We collect information on article retractions, notices of concern, corrections and articles that have been flagged on social media.

    Marked down: Chart showing drop in articles published in medium- and high-risk journals the year after the Early Warning Journal List is released.

    Source: Early Warning Journal List

    What impact has the list had on research in China?

    This list has benefited the Chinese research community. Most Chinese research institutes and universities reference our list, but they can also develop their own versions. Every year, we receive criticisms from some researchers for including journals that they publish in. But we also receive a lot of support from those who agree that the journals included on the list are of low quality, which hurts the Chinese research ecosystem.

    There have been a lot of retractions from China in journals on our list. And once a journal makes it on to the list, submissions from Chinese researchers typically drop (see ‘Marked down’). This explains why many journals on our list are excluded the following year — this is not a cumulative list.

    This interview has been edited for length and clarity.

  • Nature publishes too few papers from women researchers — that must change

    Shot of a young female scientist writing notes while working in a lab.

    Women and early-career researchers: Nature wants to publish your research. Credit: Getty

    Researchers submitting original research to Nature over the past year will have noticed an extra question, asking them to self-report their gender. Today, as part of our commitment to helping to make science more equitable, we are publishing in this editorial a preliminary analysis of the resulting data, from almost 5,000 papers submitted to this journal over a five-month period. As well as showing the gender split in submissions, we also reveal, for the first time, possible interactions between the gender of the corresponding author and a paper’s chance of publication.

    The data make for sobering reading. One stark finding is how few women are submitting research to Nature as corresponding authors. Corresponding authors are the researchers who take responsibility for a manuscript during the publication process. In many fields, this role is undertaken by some of the most experienced members of the team.

    During the period analysed, some 10% of corresponding authors preferred not to disclose their gender. Of the remainder, just 17% identified as women — barely an increase on the 16% we found in 2018, albeit using a less precise methodology. By comparison, women made up 31.7% of all researchers globally in 2021, according to figures from the United Nations science, education and cultural organization UNESCO (see go.nature.com/3wgdasb).

    Large geographical differences were also laid bare. Women made up just 4% of corresponding authors of known gender from Japanese institutions. Of researchers from the two countries submitting the most papers, China and the United States, women made up 11% and 22%, respectively. These figures reflect the fact that women’s representation in research drops at the most senior levels. They also mirror available data from other journals1, although it is hard to find direct comparisons for a multidisciplinary journal such as Nature.

    At Cell, which has a life-sciences focus, women submitted 17% of manuscripts between 2017 and 2021, according to an analysis of almost 13,000 submissions2. The most recent data on gender from the American Association for the Advancement of Science (AAAS), which publishes the six journals in the Science family, are collected and reported differently. Some 27% of its authors of primary and commissioned content, and of its reviewers, are women, according to the AAAS Inclusive Excellence Report (see go.nature.com/3t6yyr8). Nonetheless, all of these figures are just too low.

    Another area of concern is acceptance rates. Of the submissions included in the current Nature analysis, those with women as the corresponding author were accepted for publication at a slightly lower rate than were those authored by men. Some 8% of women’s papers were accepted (58 out of 726 submissions) compared with 9% of men’s papers (320 out of 3,522 submissions). The acceptance rate for people self-reporting as non-binary or gender diverse seemed to be lower, at 3%, although this is a preliminary figure and we have reason to suspect that the real figure could be higher, as described below. Once we have a larger sample, we plan to test whether the differences are statistically significant.
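
    To illustrate what such a test could look like (Nature has not said which test it will use), here is a hypothetical sketch of a standard two-proportion z-test on the acceptance counts reported above, using the statsmodels Python library.

    # Illustrative sketch: a two-proportion z-test comparing acceptance
    # rates for women (58 of 726) and men (320 of 3,522). This is one
    # standard choice, not necessarily the test Nature will apply.
    from statsmodels.stats.proportion import proportions_ztest

    accepted = [58, 320]     # accepted papers: women, men
    submitted = [726, 3522]  # submitted papers: women, men

    z_stat, p_value = proportions_ztest(count=accepted, nobs=submitted)
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")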

    Sources of imbalance

    So, at what stage in the publishing process is this imbalance introduced? Men and women seem to be treated equally when papers are selected for review. The journal’s editors — a group containing slightly more women than men — were just as likely to send papers out for peer review for women corresponding authors as they were for men. For both groups, 17% of submitted papers went for peer review.

    A difference arose after that. Of those papers sent for review, 46% of papers with women as corresponding authors were accepted for publication (58 of 125) compared with 55% (320 of 586) of papers authored by men. The acceptance rate for non-binary and gender-diverse authors was higher at 67%. However, this is from a total of only three reviewed papers, a figure that is too small to be meaningful.

    This difference in acceptance rates during review tallies with the findings of a much larger 2018 study of 25 Nature-family journals, which used a name-matching algorithm, rather than self-reported data3. Looking at 17,167 papers sent for review over a 2-year period, the authors found a smaller but significant difference in acceptance rates, with 43% for papers with a woman as corresponding author, compared with 45% for a man. However, they were unable to say whether the difference was attributable to reviewer bias or variations in manuscript quality.

    Peering into peer review

    How much bias exists in the peer-review process is difficult to study and has long been the subject of debate. A 2021 study in Science Advances that looked at 1.7 million authors across 145 journals between 2010 and 2016 found that, overall, the peer-review and editorial processes did not penalize manuscripts by women4. But that study analysed journals with lower citation rates than Nature, and its results contrast with those of previous work5, which found gender-based skews.

    Moreover, other studies have shown that people rate men’s competence more highly than women’s when assessing identical job applications6; that there is a gender bias against women in citations; and that women are given less credit for their work than are men7. Taken together, this means we cannot assume that peer review is a gender-blind process. Most papers in our current study were not anonymized. We did not share how the authors self-reported, but editors or reviewers might have inferred gender from a corresponding author’s name. Nature has offered double-anonymized peer review for both authors and reviewers since 2015. Too few take it up for us to have been able to examine its impact in this analysis, but the larger study in 2018 looked at this in detail3.

    Data limitations

    There are important limitations to Nature’s data: we must emphasize again that they are preliminary. Moreover, they provide the gender of only one corresponding author per paper, not the gender distribution of a paper’s full author list. Furthermore, they don’t describe any other differences between authors.

    There are also aspects of the data that need to be investigated further. For example, we need to look into the possibility that the option of reporting as non-binary or gender diverse is being misinterpreted by some authors with English as a second language. We think that ironing out such misunderstandings could result in a higher acceptance rate for non-binary authors.

    Most importantly, these data give no insight into author experiences in relation to race, ethnicity and socio-economic status. Although men often have advantages compared with women, other protected characteristics also have a significant impact on scientists’ careers. Nature is participating in an effort by a raft of journal publishers to document and reduce bias in scholarly publishing by tracking a range of characteristics. This is a work in progress and sits alongside Springer Nature’s wider commitment to tackling inequity in research publishing.

    So what can Nature do to ensure that more women and minority-gender scientists find a home for their research in our pages?

    First, we want to encourage a more diverse pool of corresponding authors to submit. The fact that only 17% of submissions come from corresponding authors who identify as women might reflect existing imbalances in science (for example, it roughly tracks with the 18% of professor-level scientists in the European Union who are women, as reported by the European Commission8).

    But there remains much scope for improvement. We know that the workplace climate in academia can push women out or see them overlooked for senior positions9. A 2023 study published in eLife found that women tend to be more self-critical of their own work than men are and that they are more frequently advised not to submit to the most prestigious journals10.

    Second, just as prestigious universities should not simply lament their low application numbers from under-represented groups, we should not sit back and wait for change to come to us. To this end, our editors will actively seek out authors from these communities at conferences and on laboratory visits. We will be more proactive in reaching out to women and early-career researchers to make sure they know that Nature wants to publish their research. We encourage authors with excellent research, at any level of seniority and at any institution, to submit their manuscripts.

    Third, in an effort to make peer review fairer, Nature’s editors have been actively working to recruit a more diverse group of referees; data from 2017 showed that women made up just 16% of our reviewers. We need to double down on our efforts to improve this situation and update readers on our progress. In the future, we also plan to analyse whether corresponding authors’ gender affects the number of review cycles they face, and whether gender differences vary by discipline and by the prestige of an author’s institution. We need to improve our understanding of the sources of inequity before we can work on ways to address them. Nature’s editors will also strive to minimize our own biases through ongoing unconscious-bias training.

    Last but not least, we will keep publishing our data on authorship and peer review, alongside complementary statistics on the gender of contributors to articles outside original research. Although today’s data present just a snapshot, Nature remains committed to tracking the gender of authors, to regularly updating the community on our efforts, and to exploring ways to make the publication process more equitable.


  • COVID’s preprint bump set to have lasting effect on research publishing



    Researchers in Nantes, France, working on a COVID-19 vaccine in 2021. The use of preprints to disseminate research findings saw a major uptick during the pandemic. Credit: Loic Venance/AFP/Getty

    The COVID-19 pandemic saw an explosion in publication of preprint articles, many by authors who had never produced one before. Now it seems a high proportion of these scientists are likely to continue the practice.

    A survey published in PeerJ1 questioned researchers who had posted preprints relating to COVID-19 or the virus SARS-CoV-2 in 2020, across four preprint servers: arXiv, bioRxiv, medRxiv and ChemRxiv. Of the 673 people who completed the survey, just under 58% had posted their preprints on the biomedical server medRxiv; around 18% on arXiv, which focuses on mathematics and physical sciences; 14% on the life-sciences server bioRxiv; and 7% on ChemRxiv, a chemistry repository.

    For two-thirds of respondents, this was the first time they had published a preprint. Almost 80% of these said they intended to post preprints of at least some of their papers going forward.

    One of the most intriguing findings is the number of respondents who received feedback on their preprints, says study co-author Narmin Rzayeva, a scientometrics researcher at Leiden University in the Netherlands. Fifty-three per cent received comments from peers, more than half of which were delivered privately through closed channels, such as e-mail or meetings. Around 20% of respondents received comments on the preprint platforms, which are publicly accessible.

    “We expected much lower numbers,” Rzayeva says, because preprint papers don’t typically receive much feedback.

    Previous work2 found that by the end of December 2021, just 8% of preprints posted on medRxiv since it launched in mid-2019 had received comments online. But that study considered only publicly posted comments.

    The impact of feedback

    Preprint feedback is having an effect, albeit unevenly. Of all survey respondents, just 1.9% reported making major changes to the results section of their preprints as a result of feedback. By contrast, 10.1% made such changes in response to peer review conducted as part of conventional journal publication. Rzayeva suspects that this is partly because authors feel obliged to make changes after receiving feedback from journal peer reviewers.

    Of the survey respondents who reported receiving feedback on their preprints, 21.2% said they had made substantial changes to their discussion and conclusions sections. “I find it pretty exciting and encouraging that authors are making the amount of changes to their preprints that they do in response to preprint commentary,” says Jessica Polka, executive director of ASAPbio, a non-profit organization in San Francisco, California, that promotes innovation in the life sciences.

    Polka notes that preprint feedback tends not to be as thorough as a review commissioned by a journal. An analysis of comments left on bioRxiv preprints posted between May 2015 and September 2019 found that only around 12% of non-author comments resembled those from conventional peer review3.

    Polka encourages researchers to strike up discussions over preprints. “By conducting peer review in the open, you integrate many more perspectives than you would by doing it behind closed doors,” she says.

    The preprint experience seems to have been positive for the survey respondents, 87% of whom said they had later submitted their paper to a peer-reviewed journal. Preprints shouldn’t replace journal articles, Rzayeva says, but should complement them and become an integral part of the publishing system.

    Taking AI into account

    Rzayeva acknowledges that the survey covered only four servers, which accounted for around 55% of all COVID-19 preprints published in 2020. As with most surveys, there was also a self-selection bias, meaning that the proportion of individuals with certain views could be overestimated.

    Anita Bandrowski, an information scientist at the University of California, San Diego, says the survey is important, but notes that it did not consider artificial intelligence (AI) tools that are giving automated feedback on preprints. Bandrowski was part of a group of biologists and software specialists who developed a set of automated tools that measure the rigour and reproducibility of COVID-19 preprints and post the results on the social-media platform X.

    Similar tools could become common as researchers consider ways to assess the rapidly growing number of preprints, and it will be important to find ways to track the results, says Bandrowski. She predicts that there will be “much more adoption of preprints in the future among biologists” as a result of researchers dipping their toes in during the pandemic.

    Polka agrees. “The pandemic gave us a window into what is possible with preprints. It’s just a matter of tweaking policies in order to make use of that potential.”


  • Innovative funding systems are key to fighting inequities in African science


    With African investment in research and development (R&D) still well below the global average, African higher-education and research institutions rely on grants from outside the continent. This is not ideal, but it will be inevitable until African countries follow through on their promises to spend more on research.

    Most research grants are merit-based — intended to support the best ideas with the greatest potential for success — but this risks funnelling most of the funds to a few researchers in rich countries and at institutions that have already built prestige and reputations. Researchers in poorer countries and at newly established institutions struggle to compete. A more innovative and equitable funding system is needed to ensure that they don’t get left behind.

    Grant makers — foundations, corporations and government agencies that fund research grants — are exploring new funding models to meet the needs of African researchers. A promising example is the hub-and-spoke model, which aims to distribute resources and knowledge in ways that balance merit with equity. The system features a centralized hub that receives funding and allocates it among its spokes, which radiate outwards from it like the spokes of a wheel.

    In practice, the central hub is usually an African research centre or university that receives funding from grant makers and manages all of the procedures surrounding the award. Auxiliary institutions receive sub-grants from the hub to conduct defined research projects on behalf of the group. These spokes can be anywhere in the world, but the Developing Excellence in Leadership, Training, and Science in Africa (DELTAS Africa) initiative, which uses the hub-and-spoke model, has guidelines recommending that at least 60% of the spokes be African institutions.

    Hubs generally select spokes whose ideas align with their own and that have good potential to deliver quality research. Spokes are judged on aspects such as their methodologies, training programmes, facilities and track record in grant management.

    Hubs are expected to assess the performance of their spokes, while maintaining communications and conflict-resolution protocols to ensure effective collaboration.

    Achieving wider reach

    Along with a focus on merit, diversity guidelines are integral to the successful running of a hub-and-spoke model. These guidelines aim to open up opportunities to more individuals and institutions than before, to ensure that there is a varied application pool to begin with.

    As a result, the model also encourages the forging of mutually beneficial collaborations between rich countries in the global north and less well-off ones in the global south, as well as south–south collaborations. The total budget that can be allocated to spokes in countries outside Africa is capped, to ensure that African institutions receive the lion’s share of funding. This keeps African research priorities at the centre and maintains a balance of power between global-north and global-south participants.
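    To make these rules concrete, here is a minimal sketch, in Python, of how a hub might sanity-check a portfolio of spokes. It assumes the DELTAS Africa guideline mentioned above (at least 60% of spokes being African institutions); the 30% cap on the non-African budget share, the institution names and the figures are hypothetical placeholders, because the article does not state the actual cap.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Spoke:
        name: str
        in_africa: bool
        budget_request: float  # sub-grant requested from the hub

    def check_portfolio(spokes, total_budget,
                        min_african_spoke_share=0.60,   # DELTAS Africa guideline
                        non_african_budget_cap=0.30):   # hypothetical cap
        """Check a hub's portfolio against the two rules described above:
        a minimum share of African spokes, and a cap on the budget that
        can flow to spokes outside Africa."""
        african = [s for s in spokes if s.in_africa]
        spoke_share = len(african) / len(spokes)

        non_african_budget = sum(s.budget_request for s in spokes
                                 if not s.in_africa)
        budget_share = non_african_budget / total_budget

        return {
            "african_spoke_share_ok": spoke_share >= min_african_spoke_share,
            "non_african_budget_ok": budget_share <= non_african_budget_cap,
        }

    # Hypothetical portfolio: three African spokes and one in the global north
    spokes = [
        Spoke("Institute A (Kigali)", True, 400_000),
        Spoke("Institute B (Nairobi)", True, 350_000),
        Spoke("Institute C (Accra)", True, 300_000),
        Spoke("Institute D (London)", False, 200_000),
    ]
    print(check_portfolio(spokes, total_budget=1_250_000))
    # {'african_spoke_share_ok': True, 'non_african_budget_ok': True}
    ```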

    DELTAS Africa’s hub-and-spoke model is being implemented by the Science for Africa Foundation, a non-profit organization based in Nairobi, with support from the London-based biomedical funder Wellcome and the UK Foreign, Commonwealth and Development Office. DELTAS Africa has established the model in all geographical regions of the African Union and has covered more areas of research than any other programme of its type. It has attained near 50:50 gender parity at all levels of the organization, including in directorship, authorship of publications, fellowships and research-management support. This improves on the rates across the African continent, where women form about 20–30% of the scientific workforce.


    Susan Gichoga is a grants officer at the Science for Africa Foundation, based in Nairobi, Kenya. Credit: SFA Foundation

    No funding model is without shortcomings or challenges. The scope and complexity of the various programmes, and the potential for cultural differences, for example, mean that set strategies are needed to ensure that groups are managed effectively.

    Yet, the hub-and-spoke model offers distinct advantages for grant makers by increasing the quality of proposals during the application stage and ensuring richer intellectual capital during the implementation stage. Funders can be assured that their R&D resources are having a wide reach, and are furthering the equity, impact and research output of the programmes.

    As outlined by the African Union’s Science, Technology and Innovation Strategy for Africa — a framework established in 2015 to accelerate the transition to an innovation-led, knowledge-based economy — investing in R&D is crucial for addressing the continent’s unique public-health challenges and the looming effects of climate change. As funders accelerate R&D investments, they must ensure that diversity, equity and historical structural realities are factored into their grant-making approaches.
