Tag: Authorship

  • How South Korea can build better gender diversity into research

    Integrating sex and gender as variables when designing a research study, such as by including both female and male participants and accounting for transgender people and those who do not fall into binary categorizations, is key to ensuring robust and reproducible results. But this is not done nearly often enough. In medical research, for example, centuries of female exclusion have led to inadequate knowledge and funding of diseases that affect women. In the development of generative artificial intelligence (AI), a lack of sex and gender considerations has perpetuated biases and stereotypes in areas such as image creation and language translation. Such oversights not only skew research findings but also undermine opportunities for discovery. Applying sex and gender analysis (SGA) in research has driven significant advances in fields such as cancer immunotherapy, cardiovascular disease and osteoporosis, and it has revealed important differences in how men and women metabolize drugs, leading to safer and more effective doses.

    Heisook Lee. Credit: GISTeR

    Despite the clear need for SGA to become the norm in experimental design, there is much work to be done before the practice is standardized in research globally. In South Korea, SGA integration is encouraged and promoted through government initiatives, but more policy development and capacity building are needed to drive uptake. At the Korea Center for Gendered Innovations for Science and Technology Research (GISTeR) in Seoul, we are investigating the use of SGA in South Korean research. One analysis showed that between 2017 and 2021, just 5.65% of South Korean biomedical articles, on average, included SGA in the experimental or study design. This figure, which relies heavily on individual researchers choosing to engage with the practice, is lower than in countries where the integration of SGA is mandatory for research funding.

    The increasing complexity of study designs makes SGA integration a challenge for scientists in South Korea, especially early-career researchers, who are not typically taught the practice. The limited availability of sex-disaggregated resources — data, animals, cells and other materials that have been collected and analysed separately for male, female and non-binary participants — further complicates matters and emphasizes the need for training to encourage more researchers to consider SGA in their work. As the South Korean government ramps up funding and support for international collaboration, its researchers will need to get up to speed on SGA integration. For example, Horizon Europe, the European Union’s flagship research-funding programme, which South Korea joined in March, mandates SGA integration in the research it funds.

    Heajin Kim. Credit: GISTeR

    Recent policy changes from the South Korean government have been encouraging, but they have not moved the needle much in terms of researcher and institution uptake of SGA. In 2020, amendments were made to the Korean Framework Act on Science and Technology to emphasize the importance of sex and gender characteristics. Two years later, Korea’s Fifth Science and Technology Master Plan, which outlines the country’s medium-to-long-term goals and priorities for 2023 to 2027, emphasized the importance of SGA integration.

    We need buy-in from funding agencies, publishers and institutions to ensure that researchers are equipped and incentivized to implement the practice. We propose the following strategies. First, funding agencies in South Korea should consider mandating SGA integration in the research they fund, and more academic journals need to strengthen their editorial policies by requiring SGA integration in manuscript submissions.

    Second, the research community needs to ensure the management and standardization of resources, such as cells and biological models, and data that are sex or gender specific, so they can be used throughout the entire research process, from the initial design to the final analysis. At GISTeR, we are running training and outreach programmes in an effort to help researchers understand how to achieve this.

    Line chart showing the proportion of biomedicine research papers that integrated sex and gender analysis into their studies for selected countries for the period 2000 to 2021

    Source: Gendered Innovation for Science and Technology Research Center

    Finally, indicators of SGA integration in research outputs should be developed at a global level, mirroring established metrics of quantity and quality. This approach would highlight where SGA is needed and encourage its use.

    It is crucial for South Korean science that improvements are made to SGA integration rates. This will not only elevate the quality of its outputs, but could help to solidify South Korea’s role in developing equitable and impactful solutions to the world’s most urgent societal challenges.

    Competing Interests

    The authors declare no competing interests.

  • Schemes selling fake references alarm scientists

    Citations for cash: researchers have identified services where scholars can buy citations to their papers in bulk. Credit: Vergani_Fotografia/Getty

    Research-integrity watchers are concerned about the growing ways in which scientists can fake or manipulate the citation counts of their studies. In recent months, increasingly bold practices have surfaced. One approach was revealed through a sting operation in which a group of researchers bought 50 citations to pad the Google Scholar profile of a fake scientist they had created.

    The scientists bought the citations for US$300 from a firm that seems to sell bogus citations in bulk. This confirms the existence of a black market for faked references that research-integrity sleuths have long speculated about, says the team.

    “We started to notice several Google Scholar profiles with questionable citation trends,” says Yasir Zaki, a computer scientist at New York University (NYU) Abu Dhabi, whose team described its sting operation in a February preprint1. “When a manuscript acquires hundreds of citations within days of publication, or when a scientist has an abrupt and large rise in citations, you know something is wrong.”

    These practices are troublesome because many aspects of a researcher’s career depend on how many citations their papers garner. Many institutions use citation counts to evaluate scientists, and citation numbers inform metrics such as the h-index, which aims to measure scholars’ productivity and the impact of their studies.
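
    The h-index itself is straightforward to compute: it is the largest number h such that a scholar has h papers with at least h citations each. A minimal Python sketch (the function and the example citation counts are illustrative, not tied to any particular database):

    ```python
    def h_index(citation_counts):
        """Return the largest h such that there are h papers
        with at least h citations each."""
        h = 0
        for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these citation counts give an h-index of 3.
    print(h_index([10, 8, 5, 3, 0]))  # -> 3
    ```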

    Citation manipulation can have real consequences. In June, Spanish newspaper El País reported that the country’s Research Ethics Committee has urged the University of Salamanca to investigate the work of its newly appointed rector, Juan Manuel Corchado, a computer scientist accused of artificially boosting his Google Scholar metrics. (Corchado did not respond to Nature’s request for comment.)

    References for sale

    Research-integrity watchers had already suspected that citations are for sale at paper mills, services that churn out low-quality studies and sell authorship slots on already-accepted papers, says Cyril Labbé, a computer scientist at Grenoble Alpes University in France. “Paper mills have the ability to insert citations into papers that they are selling,” he says.

    In November 2023, analytics firm Clarivate in Philadelphia, Pennsylvania, excluded more than 1,000 researchers from its annual list of highly cited researchers because of fears of citation gaming and ‘hyper-publishing’.

    In their sting operation, Zaki and his colleagues created a Google Scholar profile for a fictional scientist and uploaded 20 made-up studies that were created using artificial intelligence.

    The team then approached a company that seemed to be selling citations to Google Scholar profiles, which they had found while analysing suspicious citations linked to one of the authors in their data set. The study authors contacted the firm by e-mail and later communicated through WhatsApp. The company offered 50 citations for $300 or 100 citations for $500. The authors chose the first option, and 40 days later 50 citations from studies in 22 journals — 14 of which are indexed by the scholarly database Scopus — were added to the fictional researcher’s Google Scholar profile.

    The team didn’t share the company’s name with Nature, citing concerns that revealing it could draw attention to its website or to the fake Google Scholar profile they created, because this might reveal the identities of the authors of the studies that planted the fake citations. Asked by Nature whether Google Scholar is aware that faked profiles can be created on its site, Anurag Acharya, a distinguished engineer at the company, said: “While academic misbehaviour is possible, it’s rare because all aspects are visible — articles indexed, articles included by an author on their profile, articles citing an author, where the citing articles are hosted and so on. Anyone in the world can call you on it.”

    In another demonstration of citation manipulation, last month researchers created a fake Google Scholar profile for a cat called Larry listing a dozen fake papers with Larry as the sole author. The researchers posted a dozen more nonsensical studies on the academic social-networking site ResearchGate that cited Larry’s papers. A week or so after Larry’s identity was revealed, Google Scholar removed the cat’s studies, those citing Larry, and the accumulated citations. ResearchGate has also removed the bogus studies citing Larry.

    Fake preprints

    Zaki and colleagues’ sting operation was born out of a broader effort to assess the scale of the fake-citation problem. They used software to examine about 1.6 million Google Scholar profiles that had at least 10 publications. They searched for profiles with more than 200 citations in which researchers’ citations increased tenfold or more from one year to the next, or in which a single year’s rise amounted to at least 25% of the profile’s total citation count. The team found 1,016 such profiles.
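
    In code, a screening rule along these lines might look like the following Python sketch. The thresholds mirror those reported in the preprint, but the profile data structure and field name are hypothetical; this is not the authors’ actual software:

    ```python
    def is_suspicious(profile, min_total=200, growth_factor=10, jump_share=0.25):
        """Flag a profile with more than `min_total` citations whose yearly
        citations grow tenfold year on year, or rise in a single year by at
        least 25% of the profile's total citation count."""
        yearly = profile["citations_per_year"]  # hypothetical field, e.g. {2019: 12, 2020: 150}
        total = sum(yearly.values())
        if total <= min_total:
            return False
        years = sorted(yearly)
        for prev, curr in zip(years, years[1:]):
            rise = yearly[curr] - yearly[prev]
            if yearly[prev] > 0 and yearly[curr] >= growth_factor * yearly[prev]:
                return True
            if rise >= jump_share * total:
                return True
        return False
    ```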

    Zaki says that many citations to the papers on those profiles are from preprint articles that haven’t been peer reviewed and that they are typically listed in the bibliographies of papers but not cited in the main body of the manuscripts.

    “Citations can easily be manipulated by creating fake preprints and through paid services,” says co-author Talal Rahwan, a computer scientist at NYU Abu Dhabi.

    The authors also surveyed 574 researchers working at the 10 highest-ranked universities in the world. They found that of those universities that consider citation counts when evaluating scientists, more than 60% obtain these data from Google Scholar.

    Fishy patterns

    Labbé isn’t convinced by the survey’s claim that Google Scholar is widely used to obtain researchers’ citation metrics. Allegations of citation manipulation on Google Scholar have surfaced in the past, he says, and academics have long suspected that there are vendors offering this sort of service. But the sting operation to reveal a citation seller is the first of its kind, he says.

    Guillaume Cabanac, a computer scientist at the University of Toulouse in France who has created a tool that flags fabricated papers that contain odd turns of phrase added to circumvent plagiarism-detection software, says that many studies are cropping up with citations to work that has nothing to do with the topic of the study.

    Labbé’s team is building a tool that automatically flags fishy citation patterns that might point to manipulation.

    To help with that, Zaki’s team proposes a metric called the citation-concentration index, designed to detect cases in which a scientist receives many citations from few sources. Such activity is often a sign of a ‘citation ring’, in which scientists agree to cite one another to inflate each other’s metrics. “Suspicious ones tend to have massive citations stemming from just a few sources,” Rahwan says.
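
    The preprint’s exact formula is not reproduced here, but the intuition behind the metric, flagging profiles whose citations come overwhelmingly from a handful of sources, can be illustrated with a simple top-k share calculation (a sketch of the general idea only, not the authors’ definition):

    ```python
    from collections import Counter

    def top_source_share(citing_sources, k=3):
        """Fraction of all citations contributed by the k most frequent
        citing sources; values close to 1.0 suggest citations concentrated
        in very few places, one possible sign of a citation ring."""
        counts = Counter(citing_sources)
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return sum(n for _, n in counts.most_common(k)) / total

    # Example: 10 citations, 8 of which come from just two journals.
    sources = ["Journal A"] * 5 + ["Journal B"] * 3 + ["Journal C", "Journal D"]
    print(top_source_share(sources, k=2))  # -> 0.8
    ```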

    One fear among integrity sleuths is that fraudsters will conceive subtler practices to avoid being found out. For instance, one way to avoid being detected by the citation-concentration index, Labbé notes, is to buy a few citations at a time and not in bulk.

    For Labbé, the way to address citation gaming is to change the incentives in academia so that scientists are not under pressure to accumulate as many citations as possible to progress their careers. “The pressure for publication and citation is detrimental to the behaviour of scientists,” he says.

  • A publishing platform that places code front and centre

    An open book with a laser beam shining across the pages. Credit: Getty

    Last month, the Microscopy Society of America (MSA) quietly launched a journal that its creators hope will lead academic publishing in the future. Elemental Microscopy, a publication focused on reviews and methods tutorials, leverages an authoring and publishing platform called Curvenote to create rich, interactive, digital papers with coding and data analysis at their heart. By allowing readers to explore data and recreate results in the publication, the firm behind the platform, also called Curvenote, and its growing list of clients seek to ease science’s long-standing reproducibility crisis and modernize the scientific literature.

    “Science has changed a lot — both through the explosion of big data and the digitization of science — but fundamentally, we still write papers like we did 100 years ago, in a way that doesn’t prioritize methods and data analysis,” says Colin Ophus, an incoming computational microscopist at Stanford University in California and Elemental Microscopy’s first editor-in-chief. “Through this new partnership, we want to show the whole pipeline, from the experiment to the raw data to the final figure, and all the steps that go into that.” The partnership was announced at the MSA’s annual meeting in Cleveland, Ohio, in July.

    Based in Calgary, Canada, Curvenote is a web-first platform built atop a slate of open-source tools that are already widely used by data scientists. Researchers often host their data in GitHub repositories and run analyses in computational notebook environments such as R Markdown or Jupyter Notebook. But to publish a paper, they must distil hundreds of lines of code into a paragraph or two, sacrificing important details and relegating the code itself to supplemental materials. When the paper is converted to a Word document or PDF, it separates text and figures from the code and data used to create them and flattens otherwise dynamic data to static representations. By combining computational notebooks with a user-friendly formatting language called MyST Markdown, Curvenote makes it possible to draft and manage an interactive manuscript from start to finish, including peer review, without the cumbersome translations. And because Curvenote outputs to an XML format called Journal Article Tag Suite, which many publishers use, the tool should allow any publisher to work with the documents directly, says Rowan Cockett, a geophysicist who co-founded the platform in 2019.

    Dual benefits

    “I think Curvenote is very advanced, and I’m very happy with what they’ve done,” says Alberto Pepe, vice-president of technology at the biotechnology research non-profit organization Sage Bionetworks in Seattle, Washington. Pepe developed a collaborative publishing platform called Authorea before selling it to scientific publisher Wiley in 2018. “It really is a step above.”

    The publishing platform Curvenote is used to create interactive articles, allowing readers to view data and figures easily. Credit: Curvenote (CC-BY-4.0); Moore et al./SciPy Proceedings 2023; Section 3.2.3 (CC-BY-3.0)

    According to Cockett, leveraging open-source tools to develop Curvenote has dual benefits: not only do these choices support the company’s mission of advancing open science, but they also give the platform staying power. “It’s very important that the tools we use are being stewarded by the open-science and open-source communities, such that they have deep integration,” he says, noting that past attempts to create computationally reproducible articles using custom tools have often struggled to keep up. “Anytime there’s an update, things break and initiatives fail.”

    Curvenote stems from Cockett’s doctoral studies at the University of British Columbia in Vancouver, Canada, when he saw how challenging it was for members of his laboratory to explain their code to one another, or for others to replicate it after people had moved on. Although the platform began as a way for individual scientists to share their work, Cockett says that the team has also worked with secondary-school students, lab groups, institutions and, over the past two years, publishers. In addition to the MSA — which plans to announce a call for papers for Elemental Microscopy later this year — Curvenote has partnered with the American Geophysical Union (AGU) and the publications SciPy Proceedings and Physiome.

    Alan Lujan, a computational economist at the open-source modelling project Econ-ARK, based at Johns Hopkins University in Baltimore, Maryland, who used Curvenote to create an article under review for SciPy Proceedings, praises the platform for its utility, including its seamless integration with scientific visualization libraries, such as Plotly, Leaflet, Altair and Bokeh.

    By launching a virtual environment in the paper, readers can view and execute the underlying code, and edit that code to experiment with the data. Data are hosted on the Google Cloud Platform and code is executed using Binder, which makes computation free to access but also limits the computing resources available. The interface supports interactive citations, as well as the ability to cross-reference and automatically number figures, equations and tables. When users hover over these interactive references, pop-ups provide information without requiring the reader to navigate away from the text.

    Researchers can author MyST Markdown documents in any plain-text editor, including Jupyter or the Quarto document system. Curvenote also hosts its own authoring and content-management platform that allows collaborators to edit MyST Markdown documents and append comments in a Google Doc-like format, even if they have limited coding experience. “I’ve been working a lot on publishing code and data together, and I find that people can be very hesitant to work with these computational notebooks because it seems very complicated to collaborate,” says Sarah Pederzani, an archaeologist and palaeoclimate scientist at the University of Utah in Salt Lake City. “Anything that can help with the uptake of these products is great.”

    At its annual meeting in December 2023, the AGU announced a pilot project called Notebooks Now! in partnership with Curvenote, and released a handful of publications to showcase what it can do. In an accompanying editorial1, the AGU’s leadership said that the initiative would “revolutionize the way data- and computation-rich scientific research is performed and published”. Instead of PDFs, Notebooks Now! would make dynamic computational documents the “version of record”, which could then be exported to PDF and other formats.

    According to an AGU spokesperson, the interest from its membership in Notebooks Now! has been “very positive”, and the AGU is now seeking funding to move the project forward “in the next few months”. At first, scientists will be able to submit Curvenote-formatted manuscripts to the journal Earth and Space Science — probably by the end of the year. The goal is to eventually offer the option for all AGU journals.

    Kayla Iacovino, an experimental petrologist at the NASA Johnson Space Center in Houston, Texas, who reformatted one of her publications for Notebooks Now!, says that the process was time-consuming but ultimately created a better manuscript. One scientist reached out afterwards, she says, to share that Curvenote had made it possible to ‘experience’ Iacovino’s publication, rather than simply read it. “It really took it to the next level,” she says.

  • Elite researchers in China say they had ‘no choice’ but to commit misconduct

    “I had no choice but to commit [research] misconduct,” admits a researcher at an elite Chinese university. The shocking revelation is documented in a collection of several dozen anonymous, in-depth interviews offering rare, first-hand accounts of researchers who engaged in unethical behaviour — and describing what tipped them over the edge. An article based on the interviews was published in April in the journal Research Ethics1.

    The interviewer, sociologist Zhang Xinqu, and his colleague criminologist Wang Peng, both at the University of Hong Kong, suggest that researchers felt compelled, and even encouraged, to engage in misconduct to protect their jobs. This pressure, they conclude, ultimately came from a Chinese programme to create globally recognized universities. The programme prompted some Chinese institutions to set ambitious publishing targets, they say.

    The article offers “a glimpse of the pain and guilt that researchers felt” when they engaged in unethical behaviour, says Elisabeth Bik, a scientific-image sleuth and consultant in San Francisco, California.

    But other researchers say the findings paint an overly negative picture of the Chinese programme. Zheng Wenwen, who is responsible for research integrity at the Institute of Scientific and Technical Information of China, under the Ministry of Science and Technology, in Beijing, says that the sample size is too small to draw reliable conclusions. The study is based on interviews with staff at just three elite institutes — even though more than 140 institutions are now part of the programme to create internationally competitive universities and research disciplines.

    Rankings a game

    In 2015, the Chinese government introduced the Double First-Class Initiative to establish “world-class” universities and disciplines. Universities selected for inclusion in the programme receive extra funding, whereas those that perform poorly risk being delisted, says Wang.

    Between May 2021 and April 2022, Zhang conducted anonymous virtual interviews with 30 faculty members and 5 students in the natural sciences at three of these elite universities. The interviewees included a president, deans and department heads. The researchers also analysed internal university documents.

    The university decision-makers who were interviewed at all three institutes said they understood it to be their responsibility to interpret the goals of the Double First-Class scheme. They determined that, to remain on the programme, their universities needed to increase their standing in international rankings — and that, for that to happen, their researchers needed to publish more articles in international journals indexed in databases such as the Science Citation Index.

    Some universities treated world university rankings as a “game” to win, says Wang.

    As the directive moved down the institutional hierarchy, pressure to perform at those institutes increased. University departments set specific and hard-to-reach publishing criteria for academics to gain promotion and tenure.

    Some researchers admitted to engaging in unethical research practices for fear of losing their jobs. In one interview, a faculty head said: “If anyone cannot meet the criteria [concerning publications], I suggest that they leave as soon as possible.”

    Zhang and Wang describe researchers using services to write their papers for them, falsifying data, plagiarizing, exploiting students without offering authorship and bribing journal editors.

    One interviewee admitted to paying for access to a data set. “I bought access to an official archive and altered the data to support my hypotheses.”

    An associate dean emphasized the primacy of the publishing goal. “We should not be overly stringent in identifying and punishing research misconduct, as it hinders our scholars’ research efficiency.”

    Not the whole picture

    The authors “hit the nail on the head” in describing the relationship between institutional pressure and research misconduct, says Wang Fei, who studies research-integrity policy at Dalian University of Technology.

    But she says it’s not the whole picture. Incentives to publish high-quality research are part of broader reforms to the higher-education system that “have been largely positive”. “The article focuses almost exclusively on the negative aspects, potentially misleading readers into thinking that Chinese higher education reforms are severely flawed and accelerating research misconduct.”

    Tang Li, a science- and innovation-policy researcher at Fudan University in Shanghai, agrees. The first-hand accounts are valuable, but the findings could be biased, she says, because those who accepted the interview might have strong feelings and might not represent the opinions of those who declined to be interviewed.

    Zheng disagrees with the study’s conclusions. In 2020, the government issued a directive for Double First-Class institutes. This states specifically that evaluations should be comprehensive, and not just focus on numbers of papers, she says. Research misconduct is a result not of the Double First-Class initiative, but of an “insufficient emphasis on research integrity education”, says Zheng.

    Punishing misconduct

    The larger problem, says Xiaotian Chen, a library and information scientist at Bradley University in Peoria, Illinois, is a lack of transparency and of systems to detect and deter misconduct in China. Most people do the right thing, despite the pressure to publish, says Chen, who has studied research misconduct in China. The pressure described in the paper could just be “an excuse to cheat”.

    The Chinese government has introduced several measures to crack down on misconduct, including defining what constitutes violations and specifying appropriate penalties. It has also banned cash rewards for publishing in high-impact journals.

    Wang Peng says that government policies need to be more specific about how they define and punish different types of misconduct.

    But Zheng says that, compared with those that apply in other countries, “the measures currently taken by the Chinese government to punish research misconduct are already very stringent”.

    The authors also ignore recent government guidance for elite Chinese institutions to break with the tendency of evaluating faculty members solely on the basis of their publications and academic titles, says Zheng.

    Tang points out that the road to achieving integrity in research is long. “Cultivating research integrity takes time and requires orchestrated efforts from all stakeholders,” she says.

    And the pressure to publish more papers to drive up university rankings “is not unique to China”, says Bik. “Whenever and wherever incentives and requirements are set up to make people produce more, there will be people ‘gaming the metrics’.”

  • Chinese research collaborations shift to the Belt and Road

    Although China is expanding its international research collaborations rapidly, the composition of these interactions is shifting, according to data from the Nature Index. Specifically, China’s researchers are increasingly working with scientists in countries taking part in the Beijing government’s Belt and Road Initiative (BRI).

    The BRI is often described as a modern-day reboot of the Silk Road, an ancient system of trade routes that connected China’s heartland with the eastern edge of Europe. Officially, the BRI is a bid to strengthen the resilience of China’s trade networks, both overland — across Asia into the Middle East and Africa — and by sea — upgrading ports and building maritime fuelling stations throughout the Asian continent.

    In reality it is about far more than just infrastructure. Sometimes referred to as Chinese President Xi Jinping’s signature policy, it’s an attempt to boost China’s economic and political influence by strengthening its ties with neighbours and other strategic partners around the world. The Green Finance and Development Center at Fudan University in Shanghai has been keeping track of the BRI’s progress. It estimates that China has spent more than US$1 trillion on the initiative since 2013 and that 151 countries have so far signed up to the project and the funding that comes with it.

    In science, the BRI has spearheaded a range of initiatives, from Chinese researchers helping to design key pieces of infrastructure in Africa, to countries in central Asia working with China on lunar-exploration plans. Data trends in the Nature Index seem to reflect this. The number of natural-sciences research papers involving China and at least one BRI country rose by 132% between 2015 and 2023 (data for 2023 are approximated by a 12-month window from August 2022 to July 2023). Such articles accounted for 28% of all of China’s international collaboration in the index in 2023, up from 22% in 2015. At the same time, the overall number of internationally collaborative papers involving China increased at a slower rate, growing by 83% over the same period. Collaborative research output with the United States in the natural sciences, measured by bilateral collaboration score (CS), decreased by 15% between 2020 and 2022, and has stagnated since then. The data suggest that researchers in China are starting to favour working with countries that are closer to home or are deemed strategically important by the central government, over others, particularly in the West.

    Proportional circles showing the change in bilateral collaboration score for Nature Index research conducted between China and 15 Belt and Road countries

    Source: Nature Index

    “I’m not at all surprised,” says Caroline Wagner, a researcher at Ohio State University in Columbus who specializes in public policy that relates to science and innovation. “I did a study for the US state department, looking at all of the diplomatic agreements that China has made on science and tech with different countries, and we could see a tremendous rise [in BRI collaborations].”

    Of the collaborations between China and BRI countries in the Nature Index, Singapore and South Korea come out on top. Singapore is China’s fifth largest research partner on papers in the database overall, including health sciences, with its bilateral CS rising by 8.4% between 2022 and 2023. These changes are likely to have been driven from the bottom up, says Jenny Lee, a science-policy researcher at the University of Arizona in Tucson. “It’s not like the Chinese Communist Party is saying to Chinese researchers that they must collaborate with these countries,” she says. Part of what the data are showing could also stem from China’s COVID-19 policy, which involved strict lockdowns and closed borders. “People didn’t go abroad to make connections at conferences during that time and it could just be that people in China are only just starting to do that again,” says Lee. Chinese scientists might still be wary of travelling farther afield to the United States and Europe, she says, and they might prefer to stay closer to home.

    Part of the growth in China–BRI research collaborations could be explained by quirks in how academics identify themselves on research papers, says Robert Tijssen, a science and innovation studies researcher at Leiden University in the Netherlands. “A growing number of ‘cosmopolitan’ academic researchers have multiple institutional affiliations in different countries, especially countries that share a language or a common research culture. This may apply to China and Singapore,” he says. On papers, that could look like international collaboration when it’s more like Chinese researchers working with Chinese researchers.

    Line chart showing the change in research publications from China from 2015 to 2023* by collaboration type

    Source: Nature Index

    China’s domestic research collaborations are also skyrocketing: the number of natural-science papers in the Nature Index authored solely by China-based researchers grew by 194% between 2015 and 2023. The implication, says Lee, is that US hegemony as the ‘go-to place’ for researchers around the world is in peril. “It’s yet another demonstration of how global science is shifting away from the West. International collaboration will continue to grow,” she says, but it “may be shifting to a more regionalized model”.

    Part of this is driven by geopolitics, she adds. Several countries, including the United Kingdom and United States, have banned Chinese firms such as Huawei from engaging in projects that involve key technology or infrastructure, such as telecommunications and electrical grids. The European Union is considering similar policies. That has a knock-on effect; researchers in China who are interested in working in these areas don’t change fields — they look for collaborators in other countries.

    “You can’t collaborate with Chinese nationals if you have NASA funding” in the United States, says Lee. “When we look at sensitive areas of research, or anything to do with national security, the United States is closing access to data and resources. I suspect that this is probably where this trend away from the West is happening. These are areas and fields that are growing in Singapore and South Korea, so it makes sense.”

    This article is part of Nature Index 2024 China, an editorially independent supplement. Advertisers have no influence over the content. For more information about Nature Index, see the homepage.

  • Japan’s push to make all research open access is taking shape

    Japan plans to make all publicly funded research available to read in institutional repositories. Credit: Toru Yamanaka/AFP via Getty

    The Japanese government is pushing ahead with a plan to make Japan’s publicly funded research output free to read. This month, the science ministry will assign funding to universities to build the infrastructure needed to make research papers free to read on a national scale. The move follows the ministry’s announcement in February that researchers who receive government funding will be required to make their papers freely available to read in institutional repositories from January 2025.

    The Japanese plan “is expected to enhance the long-term traceability of research information, facilitate secondary research and promote collaboration”, says Kazuki Ide, a health-sciences and public-policy scholar at Osaka University in Suita, Japan, who has written about open access in Japan.

    The nation is one of the first Asian countries to make notable advances towards making more research open access (OA), and among the first in the world to forge a nationwide plan for OA.

    The plan follows in the footsteps of the influential Plan S, introduced six years ago by a group of research funders in the United States and Europe known as cOAlition S, to accelerate the move to OA publishing. The United States also implemented an OA mandate in 2022 that requires all research funded by US taxpayers to be freely available from 2026.

    Institutional repositories

    When the Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced Japan’s pivot to OA in February, it also said that it would invest 10 billion yen (around US$63 million) to standardize institutional repositories — websites dedicated to hosting scientific papers, their underlying data and other materials — ensuring that there will be a mechanism for making research in Japan open.

    Among the roughly 800 universities in Japan, more than 750 already have an institutional repository, says Shimasaki Seiichi, director of the Office for Nuclear Fuel Cycles and Decommissioning at MEXT in Tokyo, who was involved with drawing up the plan. Each university will host the research produced by its academics, but the underlying software will be the same.

    In 2022, Japan also launched its own national preprint server, Jxiv, but its use remains limited, with only a few hundred preprint articles posted on the platform to date. Ide says that publishing preprints is not yet habitual among many researchers in Japan, noting that only around one in five respondents to his 2023 survey1 on Jxiv were even aware that it existed.

    Green OA

    Japan’s move to greater access to its research focuses on ‘green OA’ — in which authors make the author-accepted, but unfinalized, versions of papers available in digital repositories, says Seiichi.

    Seiichi says that gold OA — in which the final copyedited and polished version of a paper is made freely available on the journal site — is not feasible on a wide scale. That’s because the cost to make every paper free to read would be too high for universities. Publishers levy an article-processing charge (APC) if the paper is made free to read, rather than being paywalled, a fee that covers a publisher’s costs.

    APCs are increasing at an average rate of 4.3% per year, notes Johan Rooryck, a scholar of French linguistics at Leiden University in the Netherlands, and executive director of cOAlition S.

    Rooryck says that Japan’s green OA strategy should be applauded. “It’s definitely something that one should do,” he says. “Especially for all the content that is still behind the paywall.”

    Kathleen Shearer, executive director of the Confederation of Open Access Repositories in Montreal, Canada, says that the Japanese plan is “equitable”.

    “It doesn’t matter where you publish, whether you have APCs or not, you are still able to comply with an open-access policy,” she says.

    She adds that the policy will mean that Japan has a unified record of all research produced by its academics because all institutional repositories are hosted on the same national server. “Japan is way ahead of the rest of us,” Shearer says. “More countries are moving in this direction but Japan really was one of the first.”

    Focusing on institutional repositories will have another benefit: it will not discriminate against research published in Japanese, Shearer says. “A big part of their scholarly ecosystem is represented in Japanese.”

    The plan to move to OA and support Japanese universities’ repositories comes as Japan grapples with its declining standing in international research.

    In a report released last October, MEXT found that Japan’s world-class research status is declining. For instance, Japan’s share in the top 10% of most-cited papers has dipped from 6% to 2%, placing it 13th on the list of nations, despite Japan having the fifth-highest research output.

    In March, Japan also vowed to triple its number of doctorate holders by 2040, after another report found that the country’s number of PhD graduates is also declining, making it an outlier among the major economies.

  • Fresh incentives for reporting negative results

    The editor-in-chief of the Journal of Trial & Error, Sarahanne Field, wants to publish the messy, null and negative results sitting in researchers’ file drawers. Credit: Sander Martens

    Editor-in-chief Sarahanne Field describes herself and her team at the Journal of Trial & Error as wanting to highlight the “ugly side of science — the parts of the process that have gone wrong”.

    She clarifies that the editorial board of the journal, which launched in 2020, isn’t interested in papers in which “you did a shitty study and you found nothing. We’re interested in stuff that was done methodologically soundly, but still yielded a result that was unexpected.” These types of result — which do not prove a hypothesis or could yield unexplained outcomes — often simply go unpublished, explains Field, who is also an open-science researcher at the University of Groningen in the Netherlands. Along with Stefan Gaillard, one of the journal’s founders, she hopes to change that.

    Calls for researchers to publish failed studies are not new. The ‘file-drawer problem’ — the stacks of unpublished, negative results that most researchers accumulate — was first described in 1979 by psychologist Robert Rosenthal. He argued that this leads to publication bias in the scientific record: the gap of missing unsuccessful results leads to overemphasis on the positive results that do get published.

    Over the past 30 years, the proportion of negative results being published has decreased further. A 2012 study showed that, from 1990 to 2007, there was a 22% increase in positive conclusions in papers; by 2007, 85% of papers published had positive results1. “People fail to report [negative] results, because they know they won’t get published — and when people do attempt to publish them, they get rejected,” says Field. A 2022 survey of researchers in France in chemistry, physics, engineering and environmental sciences showed that, although 81% had produced relevant negative results and 75% were willing to publish them, only 12.5% had the opportunity to do so2.

    One factor that is leading some researchers to revisit the problem is the growing use of predictive modelling using machine-learning tools in many fields. These tools are trained on large data sets that are often derived from published work, and scientists have found that the absence of negative data in the literature is hampering the process. Without a concerted effort to publish more negative results that artificial intelligence (AI) can be trained on, the promise of the technology could be stifled.

    “Machine learning is changing how we think about data,” says chemist Keisuke Takahashi at Hokkaido University in Japan, who has brought the issue to the attention of the catalysis-research community. Scientists in the field have typically relied on a mixture of trial and error and serendipity in their experiments, but there is hope that AI could provide a new route for catalyst discovery. Takahashi and his colleagues mined data from 1,866 previous studies and patents to train a machine-learning model to predict the best catalyst for the reaction between methane and oxygen to form ethane and ethylene, both of which are important chemicals used in industry3. But, he says, “over the years, people have only collected the good data — if they fail, they don’t report it”. This led to a skewed model that, in some cases, enhanced the predicted performance of a material, rather than realistically assessing its properties.

    Synthetic organic chemist Felix Strieth-Kalthoff found that published data were too heavily biased toward positive results to effectively train an AI model to optimize chemical reaction yields. Credit: Cindy Huang

    Alongside the flawed training of AI models, the huge gap of negative results in the scientific record continues to be a problem across all disciplines. In areas such as psychology and medicine, publication bias is one factor exacerbating the ongoing reproducibility crisis — in which many published studies are impossible to replicate. Without sharing negative studies and data, researchers could be doomed to repeat work that led nowhere. Many scientists are calling for changes in academic culture and practice — be it the creation of repositories that include positive and negative data, new publication formats or conferences aimed at discussing failure. The solutions are varied, but the message is the same: “To convey an accurate picture of the scientific process, then at least one of the components should be communicating all the results, [including] some negative results,” says Gaillard, “and even where you don’t end up with results, where it just goes wrong.”

    Science’s messy side

    Synthetic organic chemist Felix Strieth-Kalthoff, who is now setting up his own laboratory at the University of Wuppertal, Germany, has encountered positive-result bias when using data-driven approaches to optimize the yields of certain medicinal-chemistry reactions. His PhD work with chemist Frank Glorius at the University of Münster, Germany, involved creating models that could predict which reactants and conditions would maximize yields. Initially, he relied on data sets that he had generated from high-throughput experiments in the lab, which included results from both high- and low-yield reactions, to train his AI model. “Our next logical step was to do that based on the literature,” says Strieth-Kalthoff. This would allow him to curate a much larger data set to be used for training.

    But when he incorporated real data from the reactions database Reaxys into the training process, he says, “[it] turned out they don’t really work at all”. Strieth-Kalthoff concluded that the errors were due to the lack of low-yield reactions4: “All of the data that we see in the literature have average yields of 60–80%.” Without learning from the messy ‘failed’ experiments with low yields that were present in the initial real-life data, the AI could not model realistic reaction outcomes.

    Although AI has the potential to spot relationships in complex data that a researcher might not see, encountering negative results can give experimentalists a gut feeling, says molecular modeller Berend Smit at the Swiss Federal Institute of Technology Lausanne. The usual failures that every chemist experiences at the bench give them a ‘chemical intuition’ that AI models trained only on successful data lack.

    Smit and his team attempted to embed something similar to this human intuition into a model tasked with designing a metal-organic framework (MOF) with the largest known surface area for this type of material. A large surface area allows these porous materials to be used as reaction supports or molecular storage reservoirs. “If the binding [between components] is too strong, it becomes amorphous; if the binding is too weak, it becomes unstable, so you need to find the sweet spot,” Smit says. He showed that training the machine-learning model on both successful and unsuccessful reaction conditions created better predictions and ultimately led to one that successfully optimized the MOF5. “When we saw the results, we thought, ‘Wow, this is the chemical intuition we’re talking about!’” he says.

    According to Strieth-Kalthoff, AI models are currently limited because “the data that are out there just do not reflect all of our knowledge”. Some researchers have sought statistical solutions to fill the negative-data gap. Techniques include oversampling, which means supplementing data with several copies of existing negative data or creating artificial data points, for example by including reactions with a yield of zero. But, he says, these types of approach can introduce their own biases.
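
    As a rough illustration of the oversampling idea mentioned above, the Python sketch below duplicates existing low-yield reactions until they form a set fraction of a training set. The dictionary fields and thresholds are hypothetical, and real pipelines would need to be far more careful about the biases such duplication introduces:

    ```python
    import random

    def oversample_negatives(reactions, low_yield=0.2, target_fraction=0.3, seed=0):
        """Duplicate low-yield reactions until they make up roughly
        `target_fraction` of the data set, a crude counterweight to a
        literature dominated by 60-80% yields."""
        random.seed(seed)
        negatives = [r for r in reactions if r["yield"] < low_yield]
        if not negatives:
            return list(reactions)
        augmented = list(reactions)
        n_neg = len(negatives)
        while n_neg / len(augmented) < target_fraction:
            augmented.append(dict(random.choice(negatives)))
            n_neg += 1
        return augmented
    ```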

    Computer scientist Ella Peltonen helped to organize the first International Workshop on Negative Results in Pervasive Computing in 2022 to give researchers an opportunity to discuss failed experiments. Credit: University of Oulu

    Capturing more negative data is now a priority for Takahashi. “We definitely need some sort of infrastructure to share the data freely.” His group has created a website for sharing large amounts of experimental data for catalysis reactions. Other organizations are trying to collect and publish negative data — but Takahashi says that, so far, they lack coordination, so data formats aren’t standardized. In his field, Strieth-Kalthoff says, there are initiatives such as the Open Reaction Database, launched in 2021 to share organic-reaction data and enable training of machine-learning applications. But, he says, “right now, nobody’s using it, [because] there’s no incentive”.

    Smit has argued for a modular open-science platform that would directly link to electronic lab notebooks to help to make different data types extractable and reusable. Through this process, publication of negative data in peer-reviewed journals could be skipped, but the information would still be available for researchers to use in AI training. Strieth-Kalthoff agrees with this strategy in theory, but thinks it’s a long way off in practice, because it would require analytical instruments to be coupled to a third-party source to automatically collect data — which instrument manufacturers might not agree to, he says.

    Publishing the non-positive

    In other disciplines, the emphasis is still on peer-reviewed journals that will publish negative results. Gaillard, a science-studies PhD student at Radboud University in Nijmegen, the Netherlands, co-founded the Journal of Trial & Error after attending talks on how science can be made more open. Gaillard says that, although everyone whom they approached liked the idea of the journal, nobody wanted to submit articles at first. He and the founding editorial team embarked on a campaign involving cold calls and publicity at open-science conferences. “Slowly, we started getting our first submissions, and now we just get people sending things in [unsolicited],” he says. Most years the journal publishes one issue of about 8–14 articles, and it is starting to publish more special issues. It focuses mainly on the life sciences and data-based social sciences.

    In 2008, David Alcantara, then a chemistry PhD student at the University of Seville in Spain who was frustrated by the lack of platforms for sharing negative results, set up The All Results journals, which were aimed at disseminating results regardless of the outcome. Of the four disciplines included at launch, only the biology journal is still being published. “Attracting submissions has always posed a challenge,” says Alcantara, now president at the consultancy and training organization the Society for the Improvement of Science in Seville.

    But Alcantara thinks there has been a shift in attitudes: “More established journals [are] becoming increasingly open to considering negative results for publication.” Gaillard agrees: “I’ve seen more and more journals, like PLoS ONE, for example, that explicitly mentioned that they also publish negative results.” (Nature welcomes submissions of replication studies and those that include null results, as described in this 2020 editorial.)

    Journals might be changing their publication preferences, but there are still significant disincentives that stop researchers from publishing their file-drawer studies. “The current academic system often prioritizes high-impact publications and ground-breaking discoveries for career advancement, grants and tenure,” says Alcantara, noting that negative results are perceived as contributing little to nothing to these endeavours. Plus, there is still a stigma associated with any kind of failure. “People are afraid that this will look negative on their CV,” says Gaillard. Smit describes reporting failed experiments as a no-win situation: “It’s more work for [researchers], and they don’t get anything in return in the short term.” And, jokes Smit, what’s worse is that they could be providing data for an AI tool to take over their role.

    Ultimately, most researchers conclude that publishing their failed studies and negative data is just not worth the time and effort — and there’s evidence that they judge others’ negative research more harshly than positive outcomes. In a study published in August, 500 researchers from top economics departments around the world were randomized to two groups and asked to judge a hypothetical research paper. Half of the participants were told that the study had a null conclusion, and the other half were told the results were sizeably significant. The null results were perceived to be 25% less likely to be published, of lower quality and less important than were the statistically significant findings6.

    Some researchers have had positive experiences sharing their unsuccessful findings. For example, in 2021, psychologist Wendy Ross at the London Metropolitan University published her negative results from testing a hypothesis about human problem-solving in the Journal of Trial & Error7, and says the paper was “the best one I have published to date”. She adds, “Understanding the reasons for null results can really test and expand our theoretical understanding.”

    Fields forging solutions

    The field of psychology has introduced one innovation that could change publication biases — registered reports (RRs). These peer-reviewed reports, first published in 2014, came about largely as a response to psychology’s replication crisis, which began in around 2011. RRs set out the methodology of a study before the results are known, to try to prevent selective reporting of positive results. Daniël Lakens, who studies science-reward structures at Eindhoven University of Technology in the Netherlands, says there is evidence that RRs increase the proportion of negative results in the psychology literature.

    In a 2021 study, Lakens analysed the proportion of published RRs whose results eventually support the primary hypothesis. In a random sample of hypothesis-testing studies from the standard psychology literature, 96% of the results were positive. In RRs, this fell to only 44%8. Lakens says the study shows “that if you offer this as an option, many more null results enter the scientific literature, and that is a desirable thing”. At least 300 journals, including Nature, are now accepting RRs, and the format is spreading to journals in biology, medicine and some social-science fields.

    Yet another approach has emerged from the field of pervasive computing, the study of how computer systems are integrated into physical surroundings and everyday life. About four years ago, members of the community started discussing reproducibility, says computer scientist Ella Peltonen at the University of Oulu in Finland. Peltonen says that researchers realized that, to avoid the repetition of mistakes, there was a need to discuss the practical problems with studies and failed results that don’t get published. So in 2022, Peltonen and her colleagues held the first virtual International Workshop on Negative Results in Pervasive Computing (PerFail), in conjunction with the field’s annual conference, the International Conference on Pervasive Computing and Communications.

    Peltonen explains that PerFail speakers first present their negative results and then have the same amount of time for discussion afterwards, during which participants tease out how failed studies can inform future work. “It also encourages the community to showcase that things require effort and trial and error, and there is value in that,” she adds. Now an annual event, the organizers invite students to attend so they can see that failure is a part of research and that “you are not a bad researcher because you fail”, says Peltonen.

    In the long run, Alcantara thinks a continued effort to persuade scientists to share all their results needs to be coupled with policies at funding agencies and journals that reward full transparency. “Criteria for grants, promotions and tenure should recognize the value of comprehensive research dissemination, including failures and negative outcomes,” he says. Lakens thinks funders could be key to boosting the RR format, as well. Funders, he adds, should say, “We want the research that we’re funding to appear in the scientific literature, regardless of the significance of the finding.”

There are some positive signs of change in attitudes towards sharing negative data: “Early-career researchers and the next generation of scientists are particularly receptive to the idea,” says Alcantara. Gaillard is also optimistic, given the increased interest in his journal, including submissions for an upcoming special issue on mistakes in the medical domain. “It is slow, of course, but science is a bit slow.”

  • Plagiarism in peer-review reports could be the ‘tip of the iceberg’

    Mikołaj Piniewski is a researcher to whom PhD students and collaborators turn when they need to revise or refine a manuscript. The hydrologist, at the Warsaw University of Life Sciences, has a keen eye for problems in text — a skill that came in handy last year when he encountered some suspicious writing in peer-review reports of his own paper.

    Last May, when Piniewski was reading the peer-review feedback that he and his co-authors had received for a manuscript they’d submitted to an environmental-science journal, alarm bells started ringing in his head. Comments by two of the three reviewers were vague and lacked substance, so Piniewski decided to run a Google search, looking at specific phrases and quotes the reviewers had used.

To his surprise, he found that the comments were identical to ones already available on the Internet, in multiple open-access review reports from publishers such as MDPI and PLOS. “I was speechless,” says Piniewski. The discovery prompted him to go back to another manuscript that he had submitted a few months earlier and dig out the peer-review reports he had received for it. There, he found more plagiarized text. After e-mailing several collaborators, he assembled a team to dig deeper.

The team published the results of its investigation in Scientometrics in February1, examining dozens of cases of apparent plagiarism in peer-review reports and identifying identical phrases across reports prepared for 19 journals. The researchers found exact quotes duplicated across 50 publications and say that the findings are just “the tip of the iceberg” when it comes to misconduct in the peer-review system.

    Dorothy Bishop, a former neuroscientist at the University of Oxford, UK, who has turned her attention to investigating research misconduct, was “favourably impressed” by the team’s analysis. “I felt the way they approached it was quite useful and might be a guide for other people trying to pin this stuff down,” she says.

    Peer review under review

Piniewski and his colleagues conducted three analyses. First, they took the five peer-review reports for the two manuscripts that his laboratory had submitted and ran them through a rudimentary online plagiarism-detection tool. The reports showed 44–100% similarity to previously published online content, and the tool provided links to the sources in which the duplications were found.
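The paper does not describe the tool's internals, but the kind of overlap measure that a rudimentary checker computes can be sketched in a few lines. The following is a minimal illustration assuming a simple word-shingle comparison; the function names, shingle size and example texts are hypothetical, not the tool the team used.

```python
# Minimal sketch of a crude text-similarity check, loosely analogous to what a
# basic plagiarism-detection tool reports. The 5-word shingle size and the
# example texts are illustrative assumptions.

def word_ngrams(text: str, n: int = 5) -> set:
    """Return the set of overlapping n-word shingles in lower-cased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(review: str, source: str, n: int = 5) -> float:
    """Share of the review's n-word shingles that also appear in the source."""
    review_shingles = word_ngrams(review, n)
    source_shingles = word_ngrams(source, n)
    if not review_shingles:
        return 0.0
    return len(review_shingles & source_shingles) / len(review_shingles)

# Example: a submitted review fragment compared against a published open review.
published = "The introduction should be expanded to better motivate the study design"
submitted = "The introduction should be expanded to better motivate the study design and aims"
print(f"{similarity(submitted, published):.0%} of shingles duplicated")
```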

    The researchers drilled down further. They broke one of the suspicious peer-review reports down to fragments of one to three sentences each and searched for them on Google. In seconds, the search engine returned a number of hits: the exact phrases appeared in 22 open peer-review reports, published between 2021 and 2023.
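That second analysis amounts to segmenting a report and searching for each piece verbatim. Below is a hypothetical sketch of the segmentation step, assuming simple sentence splitting and quoted search queries; it is not the team's actual procedure, which was carried out manually in Google.

```python
# Hypothetical sketch of the fragment-and-search step: split a review into
# short fragments and turn each into an exact-phrase ("quoted") web query.
# The sentence splitting and query format are illustrative assumptions.

import re
from urllib.parse import quote_plus

def fragments(report: str, sentences_per_fragment: int = 2) -> list:
    """Split a report into fragments of a few sentences each."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", report) if s.strip()]
    return [
        " ".join(sentences[i:i + sentences_per_fragment])
        for i in range(0, len(sentences), sentences_per_fragment)
    ]

def exact_phrase_queries(report: str) -> list:
    """Build search-engine URLs that look for each fragment verbatim."""
    return [
        "https://www.google.com/search?q=" + quote_plus(f'"{frag}"')
        for frag in fragments(report)
    ]

review = ("The paper is interesting. However, the methodology needs work. "
          "Please cite more recent references.")
for url in exact_phrase_queries(review):
    print(url)
```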

    The final analysis provided the most worrying results. They took a single quote — 43 words long and featuring multiple language errors, including incorrect capitalization — and pasted it into Google. The search revealed that the quote, or variants of it, had been used in 50 peer-review reports.

    Predominantly, these reports were from journals published by MDPI, PLOS and Elsevier, and the team found that the amount of duplication increased year-on-year between 2021 and 2023. Whether this is because of an increase in the number of open-access peer-review reports during this time or an indication of a growing problem is unclear — but Piniewski thinks that it could be a little bit of both.

    Why would a peer reviewer use plagiarized text in their report? The team says that some might be attempting to save time, whereas others could be motivated by a lack of confidence in their writing ability, for example, if they aren’t fluent in English.

    The team notes that there are instances that might not represent misconduct. “A tolerable rephrasing of your own words from a different review? I think that’s fine,” says Piniewski. “But I imagine that most of these cases we found are actually something else.”

    The source of the problem

    Duplication and manipulation of peer-review reports is not a new phenomenon. “I think it’s now increasingly recognized that the manipulation of the peer-review process, which was recognized around 2010, was probably an indication of paper mills operating at that point,” says Jennifer Byrne, director of biobanking at New South Wales Health in Sydney, Australia, who also studies research integrity in scientific literature.

    Paper mills — organizations that churn out fake research papers and sell authorships to turn a profit — have been known to tamper with reviews to push manuscripts through to publication, says Byrne.

    However, when Bishop looked at Piniewski’s case, she could not find any overt evidence of paper-mill activity. Rather, she suspects that journal editors might be involved in cases of peer-review-report duplication and suggests studying the track records of those who’ve allowed inadequate or plagiarized reports to proliferate.

Piniewski’s team is also concerned about the rise of duplications as generative artificial intelligence (AI) becomes easier to access. Although the team didn’t look for signs of AI use, the technology’s ability to quickly ingest and rephrase large swathes of text is seen as an emerging issue.

    A preprint posted in March2 showed evidence of researchers using AI chatbots to assist with peer review, identifying specific adjectives that could be hallmarks of AI-written text in peer-review reports.

    Bishop isn’t as concerned as Piniewski about AI-generated reports, saying that it’s easy to distinguish between AI-generated text and legitimate reviewer commentary. “The beautiful thing about peer review,” she says, is that it is “one thing you couldn’t do a credible job with AI”.

    Preventing plagiarism

    Publishers seem to be taking action. Bethany Baker, a media-relations manager at PLOS, who is based in Cambridge, UK, told Nature Index that the PLOS Publication Ethics team “is investigating the concerns raised in the Scientometrics article about potential plagiarism in peer reviews”.

    An Elsevier representative told Nature Index that the publisher “can confirm that this matter has been brought to our attention and we are conducting an investigation”.

    In a statement, the MDPI Research Integrity and Publication Ethics Team said that it has been made aware of potential misconduct by reviewers in its journals and is “actively addressing and investigating this issue”. It did not confirm whether this was related to the Scientometrics article.

One proposed solution to the problem is to check all submitted reviews using plagiarism-detection software. In 2022, exploratory work by Adam Day, a data scientist at Sage Publications, based in Thousand Oaks, California, identified duplicated text in peer-review reports that might be suggestive of paper-mill activity. Day, too, suggested screening reviews with anti-plagiarism software, such as Turnitin.

    Piniewski expects the problem to get worse in the coming years, but he hasn’t received any unusual peer-review reports since those that originally sparked his research. Still, he says that he’s now even more vigilant. “If something unusual occurs, I will spot it.”

  • Algorithm ranks peer reviewers by reputation — but critics warn of bias

An algorithm ranks the reputation of peer reviewers on the basis of how many citations the studies they have reviewed go on to attract.

The tool, outlined in a study published in February1, could help to identify which papers might become high impact during peer review, its creators say. They add that, during peer review, authors should put the most weight on the recommendations and feedback of reviewers whose previously refereed papers have been highly cited.

The study authors extracted citation data from 308,243 papers published in journals of the American Physical Society (APS) between 1990 and 2010 that had accumulated more than five citations each. Information about the referees of these papers was not available, so the authors created imaginary reviewers, which rated papers using an algorithm trained on citation data from the APS data set. The authors then compared how closely the imaginary reviewers’ scores correlated with the review scores that the papers had received in real life (a score of 1 being poor and 5 being outstanding).

    To rank the imaginary reviewers, the study authors tracked the citations accumulated by the papers published between 1990 and 2000 and checked the review scores they were given. Imaginary reviewers that gave high review scores to papers that went on to attract a high number of citations were given a high ranking.

    The authors then tested how effective these reputation rankings were in predicting citation numbers of papers refereed by the same imaginary reviewers in the second decade of the data. The study found that the imaginary reviewers’ recommendations on the 2000–10 papers were in line with the actual citation counts of these papers over that time span, says study co-author An Zeng, an environmental scientist at Beijing Normal University. This suggests that the algorithm is good at predicting high-impact papers, he adds.
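The published model is more elaborate than this, but the core idea, rewarding reviewers whose scores anticipate later citation counts, can be illustrated with a toy calculation. The sketch below assumes a rank correlation between awarded review scores and eventual citations as the reputation measure; the data and the choice of correlation are illustrative, not taken from the study.

```python
# Bare-bones sketch of reputation scoring for reviewers: a reviewer ranks higher
# when the review scores they award (1-5) track the citations those papers later
# accumulate. Using a rank correlation is an illustrative assumption, not the
# model in the published study; the data below are made up.

from statistics import mean

def rank(values):
    """Map each value to its rank (1 = smallest); ties broken by position."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def reputation(scores, citations):
    """Spearman-style correlation between awarded scores and later citations."""
    rs, rc = rank(scores), rank(citations)
    ms, mc = mean(rs), mean(rc)
    cov = sum((a - ms) * (b - mc) for a, b in zip(rs, rc))
    std_s = sum((a - ms) ** 2 for a in rs) ** 0.5
    std_c = sum((b - mc) ** 2 for b in rc) ** 0.5
    return cov / (std_s * std_c)

# Hypothetical reviewers: scores they gave and citations those papers attracted.
reviewers = {
    "reviewer_A": ([5, 4, 2, 3], [120, 60, 4, 15]),   # scores track citations
    "reviewer_B": ([5, 5, 4, 5], [3, 7, 90, 2]),      # scores do not
}
ranking = sorted(reviewers, key=lambda r: reputation(*reviewers[r]), reverse=True)
print(ranking)  # reviewers ordered by this crude reputation score
```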

    More eyes on peer reviewers

    Previous attempts to quantify and predict the reach of studies have been widely criticized for relying too heavily on citation-based metrics, which, critics say, exacerbate existing biases in academia. A 2021 study2 found that non-replicable papers are cited more than replicable studies, possibly because they have more ‘interesting’ results.

    Zeng acknowledges the limitations of focusing on citation metrics, but says that it’s important to evaluate the work of peer reviewers. Solid studies are sometimes rejected because of one negative review, he notes, but there’s little attention given to how professional or reliable that reviewer is. “If this algorithm can identify reliable reviewers, it will give less weight to the reviewers who are not so reliable,” says Zeng.

Journal editors often use search tools to identify candidates to peer review papers, but they have to decide manually whom to contact. If referee activities were ranked and quantified, it would be easier for editors to choose, Zeng points out.

    However, ranking reviewers on their reputation is likely to exacerbate the inequities and biases that exist in peer review, says Anita Bandrowski, an information scientist at the University of California, San Diego.

    As previous data have shown, most of the responsibility of the peer-review process in science falls to a small subset of peer reviewers — typically men in senior positions in high-income nations that are geographically closer to most journal editors.

    Bandrowski notes that the algorithm might favour those with a long history of reviewing, because they’ve had more time to accumulate citations on their refereed papers. “The oldest reviewers by this metric would be the best reviewers and yet the oldest reviewers are going to be retired or dead,” she says.

    Zeng disagrees that his approach will make the selection of peer reviewers more inequitable than it is now. After implementing the reputation ranking, editors might find that some reviewers who are not frequently invited have high reputation scores — in some cases better than those who are inundated with referee requests, he says.

    Capturing the nuance

    Laura Feetham-Walker, a reviewer-engagement manager at the Institute of Physics Publishing in Bristol, UK, worries that the algorithm might not account for incremental studies, negative findings and replications of previous studies, all of which are crucial for science, albeit often not highly cited.

    “Under their system, a reviewer who gave a favourable recommendation on an incremental study — for example, for a journal that does not have novelty as an editorial criterion — would go down in the reviewer reputation ranking, simply because that manuscript would be unlikely to accrue large numbers of citations when published,” she says.

    Neither does the ranking account for researchers who have never reviewed before, Feetham-Walker adds, or at least those who have never reviewed for a particular publisher.

    “We know that a reviewer’s ability to provide a helpful review is dependent not just on their expertise, but also their availability and interest in the subject matter. We also know that reviewers are human, and their reviewing behaviour can change over time depending on various factors,” Feetham-Walker says. “A nuanced algorithm that took all of this into account, as well as adding new reviewers to enrich the pool, would be of genuine value to publishers.”

  • Researchers want a ‘nutrition label’ for academic-paper facts

Inspired by the nutrition-facts labels that have appeared on US food packaging since the 1990s, John Willinsky wants academic publishing to take a similar approach, helping readers to see how closely a paper adheres to scholarly standards.

    A team at the Public Knowledge Project, a non-profit organization run by Willinsky and his colleagues at Simon Fraser University in Burnaby, Canada, has been investigating how such a label might be standardized in academic publishing1.

    Willinsky spoke to Nature Index about what he hopes to achieve with the initiative.

    Why should academic papers have publication-facts labels?

    I, like many others, have grown concerned about research integrity. Through transparency, we want to show how closely journals and authors are adhering to the scholarly standards of publishing. We want to help readers, including researchers, the media and the public, to decide whether an article is worth reporting on or citing.

    The facts that we have selected for the label include publisher and funder names, the journal’s acceptance rate and the number of peer reviewers. The label also shows whether the paper includes a competing-interests statement and an editor list, where the journal is indexed and whether the data have been made publicly available. Averages for other participating journals are listed, for comparison.

    It’s important that such information is readily available. When we conducted an exercise with secondary-school students, asking them to find these facts for a single academic article online, many of them took 30 minutes to do so. Some couldn’t find the information. This finding justifies the need for the label: it shouldn’t take half an hour to establish that a journal adheres to scholarly standards.
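For readers who think in terms of data, the facts Willinsky lists map naturally onto a small structured record. The sketch below is only a guess at what such a record might look like; the field names and values are assumptions for illustration, not the Public Knowledge Project's actual schema.

```python
# Hypothetical sketch of what a publication-facts label's data could look like,
# based only on the facts listed above; field names and values are assumptions,
# not the Public Knowledge Project's actual schema.

from dataclasses import dataclass, asdict

@dataclass
class PublicationFactsLabel:
    publisher: str
    funders: list
    journal_acceptance_rate: float         # fraction of submissions accepted
    peer_reviewers: int                    # number of reviewers for this article
    competing_interests_statement: bool
    editor_list_available: bool
    indexed_in: list
    data_publicly_available: bool
    journal_average_peer_reviewers: float  # average across participating journals

label = PublicationFactsLabel(
    publisher="Example University Press",
    funders=["Example Research Council"],
    journal_acceptance_rate=0.32,
    peer_reviewers=2,
    competing_interests_statement=True,
    editor_list_available=True,
    indexed_in=["Scopus", "DOAJ"],
    data_publicly_available=True,
    journal_average_peer_reviewers=2.4,
)

# Rendering the label in several languages or formats is then a templating step
# applied to this structured record.
print(asdict(label))
```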

    How did you create the label?

    The US nutrition-facts label has been proved to change people’s behaviour, specifically their food-purchasing habits2. Given that so much work went into the label’s development, I thought it would be wise to build on its design.

    On the basis of our early consultations with researchers, editors, science journalists, primary-school teachers and others, we created a prototype with eight elements that reflect scholarly publishing standards. We’re now gathering feedback, and might decide to change some of the facts, or to add others. Some people, for example, suggested that we include the number of days that the peer-review process took to complete.

We’ve built in ways to automatically generate the label, to ensure that the format is standardized across journals and articles and to make the label available in several languages. We have created a third-party verification system, too, to ensure that authors’ identities are not revealed to peer reviewers and vice versa. This relies on authors, reviewers and editors using ORCID, the service that provides researchers with unique identifiers.

    The label will be displayed on the article landing page of the journal website and will be included in the article PDF.

    How are you trialling the label’s use?

    We’ve completed work with ten focus groups involving journal editors and authors in the United States and Latin America. We also interviewed 15 science journalists about what kinds of fact they’d want to see at a glance.

    We built the label specifically for journals using the scholarly publishing workflow system Open Journal System (OJS), run by the Public Knowledge Project. By the middle of the year, we hope to launch a pilot programme involving more than 100 journals using the OJS. The goal is to explore the prospects of industry-wide implementation of the label by next year.

    How could journals be compelled to display such a label?

    Unlike the nutrition-facts label, which was mandated by the US government, the publication-facts label is the result of voluntary concern about research integrity in the publishing industry.

    Although many groups, such as the International Association of Scientific, Technical and Medical Publishers and the Committee on Publication Ethics, manage concerns about research integrity by releasing guidelines on best practices and accumulating tools to flag suspicious activity, we feel that they have not addressed the fact that open access is public access. We need to adapt our practices to cater to the needs of different audiences, not just those in academia.

    Although we’re initially building the label for OJS journals, it is an open-source plug-in that other publishing platforms will easily be able to adapt. The software is currently listed as being ‘under development’ on GitHub and will be shared there on release.

    We want to show the publishing industry that we’ve piloted this in our own environment and that it is readily adaptable. We want to show that, although you could build your own label, for the sake of comprehensibility, it’s better to have a common format.

    This interview has been edited for length and clarity.
