Tag: Research management

  • Should I climb the career ladder as a manager, or will I regret leaving the lab bench behind?

    Cartoon of a man jumping from a lab jacket into a business suit.

    Illustration: David Parkins

    The problem

    Dear Nature,

    I am a chemical engineer with a PhD, working in the food industry. I’m at a point in my career where I need to decide whether I want a managerial career path or should stick with technical, problem-solving work in research and development.

    My biggest worry is that, if I make the wrong decision, my career will go in an unsatisfying direction and I’ll regret it forever. I do not want to be in a situation where I have to spend a lot of time and energy to correct my path.

    I’m looking for some guidance and resources, such as published literature or personality tests, to help me choose. I’d rather spend time considering this now than spend the rest of my career kicking myself for not being more thoughtful in my decisions.

    Thank you — Chem Eng. at a Crossroads

    The advice

    You’re not alone. The transition from technical roles to management is a common theme in the careers of scientists and engineers who work in industry. Deciding whether, when and how to make the move is a serious undertaking.

    In industry, an individual contributor is someone ‘doing the work’ of research and development. They answer to a project manager or supervisor, but do not have anyone who answers to them. Although these jobs are what people tend to think of when they envision a scientist’s work in industry, companies often offer limited opportunities for promotion on this path.

    Lack of chances for advancement in technical or hands-on roles can lead mid-career engineers and scientists to transition to management, even when they don’t have the skills, working style or inclination to succeed in a leadership role. One 2008 study1 found that mid-career engineers who felt ‘derailed’ in their career paths tended to be reluctant, under-prepared managers. They felt passed over for further promotion, experienced little satisfaction in their work and had a reduced sense of personal effectiveness in their work.

    Nature reached out to three scientists for guidance on how to approach this kind of career crossroads.

    Know yourself

    Roni Wright is a molecular biologist who runs a laboratory group at the International University of Catalonia in Barcelona, Spain. She also runs workshops, courses and one-on-one training in career development for scientists at the Barcelona Biomedical Research Park, which brings together research institutes based in the city. The company you work for might have something similar — large organizations and research centres often offer career-development resources for their employees. Wright suggests that the first step is to carry out a self-assessment, reflecting on your skills, working style and values.

    You asked about personality tests. These are hotly debated scientifically, but can be helpful starting points for self-reflection, providing some insight into your behavioural patterns and decision-making style, and thus are often used by large employers to encourage such thinking. Wright suggests that the Myers–Briggs Type Indicator (MBTI) and DISC assessment are popular places to start. The MBTI was developed by US writers Katharine Cook Briggs and Isabel Briggs Myers in 1944, inspired by the work of Swiss psychologist and psychotherapist Carl Jung. Through a series of about 90 questions, the MBTI evaluates the test-taker’s preferences in four aspects of personality (introversion–extraversion, sensing–intuition, thinking–feeling and judging–perceiving) and sorts them into one of 16 types.

    DISC assessments, based on the DISC personality theory developed by US psychologist William Moulton Marston in the 1920s, are specifically geared towards workplace interaction. They categorize the test-taker according to four personality profiles — which Marston called dominance, inducement, submission and compliance — to help them understand their own working style and develop strategies for engaging with others. Since the 1940s, various companies have published assessments based on Marston’s theory, including the publishing company Wiley, with its test Everything DiSC, and Truity Psychometrics in Roseville, California. Most companies update the model and adapt the acronym to their own terminology.

    Versions of both these self-assessments are available to take online for free.

    Honest conversation

    To get a clearer understanding of your own strengths and weaknesses, Nimrod Levin, a vocational psychologist and career-counselling specialist at the University of Lausanne, Switzerland, recommends getting an outside perspective. “Talk to people you trust at all levels of the organization — meaning people that are above you, at the same level and below you — and have an honest conversation about this career move,” suggests Levin. “How do they see it, what do they anticipate being a challenge for you and what do they see that would be an asset for you in one role or the other?” In this “360-degree reflective process”, some recurring themes are likely to reveal themselves.

    Your question alludes to another important, and often-underestimated, factor — the people you would be working with. Levin says that, in his experience, “it’s often more the interpersonal environment, than the specific tasks of the job, that determine to what degree the person is happy”. Instead of framing this as choosing between two job titles, you could look at it as a choice between configurations of co-workers: the groups of people you would be working with and how you would relate to them in either role.

    Personal situation

    Jennifer Hunt offers a personal perspective on career shifts in your field. Hunt is a chemical engineer who worked in research and development for 33 years, first as an individual contributor and then as a project manager for contracts to develop hydrogen fuel cells. When the opportunity arose, she transitioned out of research and into a more people-focused role in applications engineering at Unison Energy. That career move helped Hunt, who is based in California, to find the financial stability she needed at that stage of her life. “I had two small kids. I didn’t have another income coming in from a partner, and I didn’t know if I’d have a job after each contract was up,” she says. “I decided that I needed something else.”

    She continues: “Instead of the hamster wheel of always trying to find funding for the next project, I had a steady income. So that is something to ask yourself — how much of the decision is about finances? On the managerial path, you end up making more money.”

    But it’s not for everyone. “As a manager, you have responsibility over the livelihoods of the people on your team,” Hunt stresses. “They need you to be their guide. It’s a tricky role.” The best bosses, she says, are the ones who are able to teach without demeaning, learn from the people who work for them and act as mentors to their teams. If you can do that, you might find management very fulfilling.

    Hunt doesn’t regret taking the leap, but leaving the lab involved some sacrifice.

    “I will say that I loved working in the lab. I missed the high of being a player in the whole movement of knowledge,” says Hunt. “When you leave the bench, you’re still part of that movement, but in a different way. You get a different perspective on the field.”

    She has used that perspective to draw connections. The company she now works for is not in the business of research and development, but Hunt is using knowledge and connections from her past work to get the firm involved in research projects, kick-starting collaborations with research groups and introducing the company to funding opportunities with the US Department of Energy. These kinds of project helped her to recover some of the thrill and feeling of making a contribution that she loved about lab work. “It’s exciting to help bridge the gap between the technology of the future and the actual industry of today,” she says.

    All three advice-givers agree that there is no shame in pausing to recalibrate or change direction. “Careers are rarely linear,” says Wright. “Lives change, circumstances change, we change and, if we want to be both successful and happy, our careers change with us.”

    In Wright’s years of running career-development workshops, the panellists she has hosted have come from a wide array of scientific backgrounds and diverse career paths. But they tend to offer a certain piece of advice in common, she says. “It always strikes me how the main piece of advice is to follow what makes you happy, what you love doing. As scientists, we all have that passion. Make that first move, try something new, follow your passion and you will land on your feet.”

  • academics lack access to powerful chips needed for research

    Promotional artwork of the NVIDIA H100 NVL GPU.

    Tech giant NVIDIA’s H100 graphics-processing unit is a sought-after chip for artificial-intelligence research. Credit: NVIDIA

    Many university scientists are frustrated by the limited amount of computing power available to them for research into artificial intelligence (AI), according to a survey of academics at dozens of institutions worldwide.

    The findings1, posted to the preprint server arXiv on 30 October, suggest that academics lack access to the most advanced computing systems. This can hinder their ability to develop large language models (LLMs) and do other AI research.

    In particular, academic researchers sometimes don’t have the resources to obtain sufficiently powerful graphics processing units (GPUs), the computer chips commonly used to train AI models, which can cost thousands of dollars each. By contrast, researchers at large technology companies have higher budgets and can spend more on GPUs. “Every GPU adds more power,” says study co-author Apoorv Khandelwal, a computer scientist at Brown University in Providence, Rhode Island. “While those industry giants might have thousands of GPUs, academics maybe only have a few.”

    “The gap between academic and industry models is huge, but it could be a lot smaller,” says Stella Biderman, executive director at EleutherAI, a non-profit AI research institute in Washington DC. Research into this disparity is “super important”, she says.

    Long waits

    To assess the computing resources available to academics, Khandelwal and his colleagues surveyed 50 scientists across 35 institutions. Of the respondents, 66% rated their satisfaction with their computing power as 3 or less out of 5. “They’re not satisfied at all,” says Khandelwal.

    Universities have varying set-ups for GPU access. Some might have a central compute cluster shared by departments and students, where researchers can request GPU time. Other institutions might purchase machines for lab members to use directly.

    Computing shortage: bar chart of survey results showing that academic researchers typically have limited access to graphics processing units, restricting their ability to train machine-learning models.

    Source: Ref. 1

    Some scientists said that they had to wait days to access GPUs, and noted that waiting times were particularly high around project deadlines (see ‘Computing shortage’). The results also highlight global disparities in access. For example, one respondent mentioned the difficulties of finding GPUs in the Middle East. Just 10% of those surveyed said that they had access to NVIDIA’s H100 GPUs, powerful chips designed for AI research.

    This barrier makes the process of pre-training — feeding vast sets of data to LLMs — particularly challenging. “It’s so expensive that most academics don’t even consider doing science on pre-training,” says Khandelwal. He and his colleagues think that academics provide a unique perspective in AI research, and that a lack of access to computing power could be limiting the field.

    “It’s just really important to have a healthy, competitive academic research environment for long-term growth and long-term technological development,” says co-author Ellie Pavlick, who studies computer science and linguistics at Brown University. “When you have industry research, there’s clear commercial pressure and this incentivizes sometimes exploiting sooner and exploring less.”

    Efficient methods

    The researchers also investigated how academics could make better use of less-powerful computing resources. They calculated the time it would take to pre-train several LLMs using low-resource hardware — with between 1 and 8 GPUs. Despite these limited resources, the researchers were able to successfully train many of the models, although it took longer and required them to adopt more efficient methods.

    “We can actually just use the GPUs we have for longer, and so we can kind of make up for some of the differences between what industry has,” says Khandelwal.

    “It’s cool to see that you can actually train a larger model than many people would have assumed on limited compute resources,” says Ji-Ung Lee, who studies neuroexplicit models at Saarland University in Saarbrücken, Germany. He adds that future work could look at the experiences of industry researchers in small companies, who also struggle with access to computing resources. “It’s not like everyone who has access to unlimited compute gets it,” he says.

  • Killer questions at science job interviews and how to ace them

    An illustration showing a repeating pattern of purple question marks

    Credit: Getty

    Nature’s 2024 hiring in science survey

    This article is the third in a short series discussing the results of Nature’s 2024 global survey of hiring managers in science. The survey, created in partnership with Thinks Insights & Strategy, a research consultancy in London, launched in June and was advertised on nature.com, in Springer Nature digital products and through e-mail campaigns. It received 1,134 self-selecting respondents from 77 countries, based in academia, industry and other sectors, including industry responses provided in partnership with Walr, a market-research panel. The full survey data sets are available at go.nature.com/3bgpazn.

    Preparing for a scientific job interview? Knowing in advance the types of questions that recruiters love to ask can give you a considerable edge, and can buy you time to work on your answers. In this article, we’ll look at some of the favourite or most revealing questions that are used by hiring managers. These data were gleaned from Nature’s 2024 global survey of more than 1,100 laboratory heads and research leaders from academia, industry and other sectors.

    The questions listed below are designed to probe your technical knowledge, interest in a given research field, future ambitions and how you manage conflicts with colleagues or other challenges. By understanding these four question types — and the curveball questions you might also get — you’ll be better equipped to showcase your expertise and passion for science.

    Technical knowledge or experience

    Typical questions

    • Tell me about one of your recent research projects.

    • How would you tackle this [specific research question], and how does your background support your approach?

    Why they are asked. Most applicants will expect to answer interview questions about their research and experience. According to hirers who responded to the survey, these can be great starter questions to allow candidates to settle into the interview before facing something more challenging. Such questions provide insights into the applicant’s problem-solving ability, and they also allow the interviewer to gauge someone’s communication and presentation skills when speaking about something they should know well.

    Worth remembering. Hirers often spring technical questions on applicants to unmask anyone who might have exaggerated their skills. Tulio de Oliveira, who heads the Centre for Epidemic Response and Innovation at Stellenbosch University in South Africa, says asking technical questions helps him “separate who will be good at the job” from who is simply “good at doing interviews”. One engineer working in industry in France said that they like to use questions that are premised on ‘false’ or incorrect information. “If the candidate answers it like they know about it, I remove them from the shortlist of potential hires.”

    Curveball questions

    • “I ask a basic maths question. You’d be surprised how often people can’t answer them.” — Academic group leader in the biological sciences in the United Kingdom.

    • “Tell me a story about your best project so far, in five minutes.” — Associate professor in the biological sciences in Sweden.

    Interest in the team or field

    Typical questions

    • What aspects of our group’s research do you find especially interesting, and why?

    • What do you think has been the most important discovery in our field in the past five years?

    Why they are asked. Hirers like to see evidence that candidates have done their homework before an interview. Questions about the hiring lab are a way to test this, and they also help interviewers to understand applicants’ motivations — whether their chief desire is to find any job, or whether it’s this particular job that interests them.

    Worth remembering. Be prepared to talk about research that isn’t your own. Which study you choose might not matter as much as having something to say and how you talk about it. Glenn Geher, a psychology researcher at the State University of New York at New Paltz, says that if a candidate hesitates when asked to talk about other people’s work, they might be driven mainly by external rewards, seeing research as “almost a chore needed to achieve certain outcomes like a degree or tenure”. But if the candidate “excitedly describes an interesting additional line of research”, their motivation is probably more intrinsic, he says.

    Curveball questions

    • “Having read our recent paper on [topic], what would you do next?” — Professor of medical science in Ireland.

    • “Describe the thing that you are best at that you think would be a key contribution to our team.” — Research-group head in the biological sciences at a non-governmental organization in the United States.

    Tulio de Oliveira and Dr Wonderful Tatenda Choga look at a computer in a laboratory

    Tulio de Oliveira (left) asks candidates questions that test their technical knowledge. Credit: Tommy Trenchard/Panos Pictures

    Tackling challenges and conflicts

    Typical questions

    • Describe a situation in which you faced a major challenge at work and explain how you solved it.

    • How would you handle a conflict with a colleague?

    Why they are asked. Interviewers ask about coping with failure to evaluate candidates’ levels of self-awareness and to gauge their conflict-resolving skills. Questions can be about something that actually happened, or can focus on a hypothetical scenario; it’s worth preparing for both of these possibilities.

    Worth remembering. Interviewers will be looking for evidence of introspection and learning, so bear that in mind when choosing which experiences to share. “Anyone with experience as an academic should be able to tell you multiple stories about things not going exactly according to plan,” says Geher. Candidates’ answers can reveal whether they are prepared to take responsibility for problems that emerged, or prefer to shift the blame to others, he says. “If they show signs that they genuinely know that they have a lot to learn — and welcome this fact — that is usually a good sign.” One programme manager in medical research reported giving candidates a ‘prioritization’ challenge, where the applicant must list a number of tasks in the order in which they’d choose to tackle them. One task involves a staff member wanting a five-minute private chat about a personal matter. “We prefer candidates that rank this first, as it demonstrates their humanity.”

    Curveball questions

    • “Research has its ups and downs; what skills do you have that will enable you to get through the tough days?” — Chemistry professor, country unknown.

    • “How would you manage work-related burn-out and health?” — Pharmaceutical lab head in Saudi Arabia.

    Future ambitions and goals

    Typical questions

    • Can you describe your career aspirations for the next five years?

    • How does this role align with your long-term goals?

    Why they are asked. Given that many science jobs are short-term contracts, hirers often want to know what your plans are for when the job ends. For longer-term positions, such as tenure track or equivalent roles, these questions help recruiters to assess what you will bring to a broad department or division. Such questions also test whether candidates understand the demands of a scientific career. One principal investigator who responded to the survey said that the ability to chart a realistic course for career development is one of the skills that candidates nowadays most commonly lack, adding: “Grad school does not teach this.”

    Worth remembering. For short-term positions, there’s nothing wrong with seeing a job as a stepping stone, but make sure that you still explain how your experience and skills will contribute to the team’s success. Several hirers reported that they prefer candidates who express a long-term interest in their research area. That said, although clear long-term career visions might impress recruiters, it’s usually better to be honest if there are aspects of your future that you are unsure about. “It is easy to identify someone who’s not being honest when answering, and I personally prefer the ones that don’t shy away when saying that they don’t know something,” one astronomer working in academia in Chile said.

    Curveball questions

    • “If funding were unlimited, what research problem would you like to tackle?” — Biological sciences lab leader in the United States.

    • “What is your plan if you are not employed in our organization?” — Academic medical researcher in Iran.

  • Can AI review the scientific literature — and figure out what it all means?

    When Sam Rodriques was a neurobiology graduate student, he was struck by a fundamental limitation of science. Even if researchers had already produced all the information needed to understand a human cell or a brain, “I’m not sure we would know it”, he says, “because no human has the ability to understand or read all the literature and get a comprehensive view.”

    Five years later, Rodriques says he is closer to solving that problem using artificial intelligence (AI). In September, he and his team at the US start-up FutureHouse announced that an AI-based system they had built could, within minutes, produce syntheses of scientific knowledge that were more accurate than Wikipedia pages1. The team promptly generated Wikipedia-style entries on around 17,000 human genes, most of which previously lacked a detailed page.

    Rodriques is not the only one turning to AI to help synthesize science. For decades, scholars have been trying to accelerate the onerous task of compiling bodies of research into reviews. “They’re too long, they’re incredibly intensive and they’re often out of date by the time they’re written,” says Iain Marshall, who studies research synthesis at King’s College London. The explosion of interest in large language models (LLMs), the generative-AI programs that underlie tools such as ChatGPT, is prompting fresh excitement about automating the task.

    Some of the newer AI-powered science search engines can already help people to produce narrative literature reviews — a written tour of studies — by finding, sorting and summarizing publications. But they can’t yet produce a high-quality review by themselves. The toughest challenge of all is the ‘gold-standard’ systematic review, which involves stringent procedures to search and assess papers, and often a meta-analysis to synthesize the results. Most researchers agree that these are a long way from being fully automated. “I’m sure we’ll eventually get there,” says Paul Glasziou, a specialist in evidence and systematic reviews at Bond University in Gold Coast, Australia. “I just can’t tell you whether that’s 10 years away or 100 years away.”

    At the same time, however, researchers fear that AI tools could lead to more sloppy, inaccurate or misleading reviews polluting the literature. “The worry is that all the decades of research on how to do good evidence synthesis starts to be undermined,” says James Thomas, who studies evidence synthesis at University College London.

    Computer-assisted reviews

    Computer software has been helping researchers to search and parse the research literature for decades. Well before LLMs emerged, scientists were using machine-learning and other algorithms to help to identify particular studies or to quickly pull findings out of papers. But the advent of systems such as ChatGPT has triggered a frenzy of interest in speeding up this process by combining LLMs with other software.

    It would be terribly naive to ask ChatGPT — or any other AI chatbot — to simply write an academic literature review from scratch, researchers say. These LLMs generate text by training on enormous amounts of writing, but most commercial AI firms do not reveal what data the models were trained on. If asked to review research on a topic, an LLM such as ChatGPT is likely to draw on credible academic research, inaccurate blogs and who knows what other information, says Marshall. “There’ll be no weighing up of what the most pertinent, high-quality literature is,” he says. And because LLMs work by repeatedly generating statistically plausible words in response to a query, they produce different answers to the same question and ‘hallucinate’ errors — including, notoriously, non-existent academic references. “None of the processes which are regarded as good practice in research synthesis take place,” Marshall says.

    A more sophisticated process involves uploading a corpus of pre-selected papers to an LLM, and asking it to extract insights from them, basing its answer only on those studies. This ‘retrieval-augmented generation’ seems to cut down on hallucinations, although it does not prevent them. The process can also be set up so that the LLM will reference the sources it drew its information from.
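The retrieval-augmented pipeline described above can be sketched in a few lines. This is a hypothetical toy illustration, not any vendor's actual system: real tools use a large language model and dense embeddings, whereas here simple word-overlap scoring stands in for retrieval, and the constrained prompt is just assembled and returned. All names and the miniature corpus are invented for illustration.

```python
# Toy sketch of retrieval-augmented generation: rank pre-selected papers
# against the query, then ask the model to answer using only the
# top-ranked sources, citing them by name.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, doc):
    """Cosine similarity over word counts (a crude stand-in for embeddings)."""
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def build_prompt(query, corpus, k=2):
    """Retrieve the k most relevant papers and constrain the answer to them."""
    ranked = sorted(corpus, key=lambda name: score(query, corpus[name]), reverse=True)
    cited = ranked[:k]
    context = "\n".join(f"[{name}] {corpus[name]}" for name in cited)
    prompt = (
        "Answer the question using ONLY the sources below, citing them by name.\n"
        f"{context}\nQuestion: {query}"
    )
    return prompt, cited

# Invented three-paper corpus for demonstration.
corpus = {
    "Smith 2021": "GPU clusters accelerate training of large language models.",
    "Lee 2022": "Dietary fibre intake correlates with gut microbiome diversity.",
    "Chen 2023": "Language models hallucinate citations without retrieval grounding.",
}
prompt, cited = build_prompt("Do language models hallucinate references?", corpus)
print(cited)
```

Because the model only ever sees the retrieved passages, every claim in its answer can be traced back to a named source, which is what makes the citations in tools of this kind "definitely real".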

    This is the basis for specialized, AI-powered science search engines such as Consensus and Elicit. Most companies do not reveal exact details of how their systems work. But they typically turn a user’s question into a computerized search across academic databases such as Semantic Scholar and PubMed, returning the most relevant results.

    An LLM then summarizes each of these studies and synthesizes them into an answer that cites its sources; the user is given various options to filter the work they want to include. “They are search engines first and foremost,” says Aaron Tay, who heads data services at Singapore Management University and blogs about AI tools. “At the very least, what they cite is definitely real.”

    These tools “can certainly make your review and writing processes efficient”, says Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark in Odense, who trains academics in AI tools and has designed his own, called Research Kick. Another AI system called Scite, for example, can quickly generate a detailed breakdown of papers that support or refute a claim. Elicit and other systems can also extract insights from different sections of papers — the methods, conclusions and so on. There’s “a huge amount of labour that you can outsource”, Bilal says.

    Laptop screen with an AI-powered tool called Elicit with papers' summary.

    Elicit, like several AI-powered tools, aims to help with academic literature reviews by summarizing papers and extracting data. Credit: Nature

    But most AI science search engines cannot produce an accurate literature review autonomously, Bilal says. Their output is more “at the level of an undergraduate student who pulls an all-nighter and comes up with the main points of a few papers”. It is better for researchers to use the tools to optimize parts of the review process, he says. James Brady, head of engineering at Elicit, says that its users are augmenting steps of reviewing “to great effect”.

    Another limitation of some tools, including Elicit, is that they can only search open-access papers and abstracts, rather than the full text of articles. (Elicit, in Oakland, California, searches about 125 million papers; Consensus, in Boston, Massachusetts, looks at more than 200 million.) Bilal notes that much of the research literature is paywalled and it’s computationally intensive to search a lot of full text. “Running an AI app through the whole text of millions of articles will take a lot of time, and it will become prohibitively expensive,” he says.

    Full-text search

    For Rodriques, money was in plentiful supply, because FutureHouse, a non-profit organization in San Francisco, California, is backed by former Google chief executive Eric Schmidt and other funders. Founded in 2023, FutureHouse aims to automate research tasks using AI.

    This September, Rodriques and his team revealed PaperQA2, FutureHouse’s open-source, prototype AI system1. When it is given a query, PaperQA2 searches several academic databases for relevant papers and tries to access the full text of both open-access and paywalled content. (Rodriques says the team has access to many paywalled papers through its members’ academic affiliations.) The system then identifies and summarizes the most relevant elements. In part because PaperQA2 digests the full text of papers, running it is expensive, he says.

    The FutureHouse team tested the system by using it to generate Wikipedia-style articles on individual human genes. They then gave several hundred AI-written statements from these articles, along with statements from real (human-written) Wikipedia articles on the same topic, to a blinded panel of PhD and postdoctoral biologists. The panel found that human-authored articles contained twice as many ‘reasoning errors’ — in which a written claim is not properly supported by the citation — as did ones written by the AI tool. Because the tool outperforms people in this way, the team titled its paper ‘Language agents achieve superhuman synthesis of scientific knowledge’.

    Group of scientists standing and sitting in the FutureHouse office, with a bird drawing on the wall. The team is behind the PaperQA and WikiCrow AI tools.

    The team at US start-up FutureHouse, which has launched AI systems to summarize scientific literature. Sam Rodriques, their director and co-founder, is on the chair, third from right.Credit: FutureHouse

    Tay says that PaperQA2 and another tool called Undermind take longer than conventional search engines to return results — minutes rather than seconds — because they conduct more-sophisticated searches, using the results of the initial search to track down other citations and key phrases, for example. “That all adds up to being very computationally expensive and slow, but gives a substantially higher quality search,” he says.

    Systematic challenge

    Narrative summaries of the literature are hard enough to produce; systematic reviews are harder still. They can take people many months or even years to complete2.

    A systematic review involves at least 25 careful steps, according to a breakdown from Glasziou’s team. After combing through the literature, a researcher must filter their longlist to find the most pertinent papers, then extract data, screen studies for potential bias and synthesize the results. (Many of these steps are done in duplicate by another researcher to check for inconsistencies.) This laborious method — which is supposed to be rigorous, transparent and reproducible — is considered worthwhile in medicine, for instance, because clinicians use the results to guide important decisions about treating patients.

    In 2019, before ChatGPT came along, Glasziou and his colleagues set out to achieve a world record in science: a systematic review in two weeks. He and others, including Marshall and Thomas, had already developed computer tools to reduce the time involved. The menu of software available by that time included RobotSearch, a machine-learning model trained to quickly identify randomized trials from a collection of studies. RobotReviewer, another AI system, helps to assess whether a study is at risk of bias because it was not adequately blinded, for instance. “All of those are important little tools in shaving down the time of doing a systematic review,” Glasziou says.

    The clock started at 9:30 a.m. on Monday 21 January 2019. The team cruised across the line at lunchtime on Friday 1 February, after a total of nine working days3. “I was excited,” says epidemiologist Anna Mae Scott at the University of Oxford, UK, who led the study while at Bond University; everyone celebrated with cake. Since then, the team has pared its record down to five days.

    Could the process get faster? Other researchers have been working to automate aspects of systematic reviews, too. In 2015, Glasziou founded the International Collaboration for the Automation of Systematic Reviews, a niche community that, fittingly, has produced several systematic reviews about tools for automating systematic reviews4. But even so, “not very many [tools] have seen widespread acceptance”, says Marshall. “It’s just a question of how mature the technology is.”

    Elicit is one company that says its tool helps researchers with systematic reviews, not just narrative ones. The firm does not offer systematic reviews at the push of a button, says Brady, but its system does automate some of the steps — including screening papers and extracting data and insights. Brady says that most researchers who use it for systematic reviews have uploaded relevant papers they find using other search techniques.

    Systematic-review aficionados worry that AI tools are at risk of failing to meet two essential criteria of the studies: transparency and reproducibility. “If I can’t see the methods used, then it is not a systematic review, it is simply a review article,” says Justin Clark, who builds review automation tools as part of Glasziou’s team. Brady says that the papers that reviewers upload to Elicit “are an excellent, transparent record” of their starting literature. As for reproducibility: “We don’t guarantee that our results are always going to be identical across repeats of the same steps, but we aim to make it so — within reason,” he says, adding that transparency and reproducibility will be important as the firm improves its system.

    Specialists in reviewing say they would like to see more published evaluations of the accuracy and reproducibility of AI systems that have been designed to help produce literature reviews. “Building cool tools and trying stuff out is really good fun,” says Clark. “Doing a hardcore evaluative study is a lot of hard work.”

    Earlier this year, Clark led a systematic review of studies that had used generative AI tools to help with systematic reviewing. He and his team found only 15 published studies in which the AI’s performance had been adequately compared with that of a person. The results, which have not yet been published or peer reviewed, suggest that these AI systems can extract some data from uploaded studies and assess the risk of bias of clinical trials. “It seems to do OK with reading and assessing papers,” Clark says, “but it did very badly at all these other tasks”, including designing and conducting a thorough literature search. (Existing computer software can already do the final step of synthesizing data using a meta-analysis.)

    Glasziou and his team are still trying to shave time off their reviewing record through improved tools, which are available on a website they call the Evidence Review Accelerator. “It won’t be one big thing. It’s that every year you’ll get faster and faster,” Glasziou predicts. In 2022, for instance, the group released a computerized tool called Methods Wizard, which asks users a series of questions about their methods and then writes a protocol for them without using AI.

    Rushed reviews?

    Automating the synthesis of information also comes with risks. Researchers have known for years that many systematic reviews are redundant or of poor quality5, and AI could make these problems worse. Authors might knowingly or unknowingly use AI tools to race through a review that does not follow rigorous procedures, or which includes poor-quality work, and get a misleading result.

    By contrast, says Glasziou, AI could also encourage researchers to do a quick check of previously published literature when they wouldn’t have bothered before. “AI may raise their game,” he says. And Brady says that, in future, AI tools could help to flag and filter out poor-quality papers by looking for telltale signs such as P-hacking, a form of data manipulation.

    Glasziou sees the situation as a balance of two forces: AI tools could help scientists to produce high-quality reviews, but might also fuel the rapid generation of substandard ones. “I don’t know what the net impact is going to be on the published literature,” he says.

    Some people argue that the ability to synthesize and make sense of the world’s knowledge should not lie solely in the hands of opaque, profit-making companies. Clark wants to see non-profit groups build and carefully test AI tools. He and other researchers welcomed the announcement from two UK funders last month that they are investing more than US$70 million in evidence-synthesis systems. “We just want to be cautious and careful,” Clark says. “We want to make sure that the answers that [technology] is helping to provide to us are correct.”


  • Grass-roots grant-writing approaches can help researchers at small institutions to succeed


    Community support groups help researchers to write successful grant proposals.Credit: Virojt Changyencham/Getty

    Chemist David Sanabria-Ríos was no stranger to receiving the cold shoulder from the US National Institutes of Health (NIH).

    Several times, he had applied for funding for his research on synthesizing new small molecules at the Inter American University of Puerto Rico Metropolitan Campus in San Juan. But Sanabria-Ríos says that his proposals often fared worse than being rejected — they were not even discussed or scored by reviewers at the NIH, the largest biomedical research funder in the world. Although this outcome stings, it’s fairly common.

    The NIH receives tens of thousands of grant proposals every year and it can give dedicated feedback to only a fraction of those. An even smaller fraction is ultimately funded.

    Sanabria-Ríos says that although his science was sound, his problem was a lack of effective grant writing. Part of this issue stemmed from a language barrier he faced when writing grant proposals in English instead of his native Spanish. However, a lack of grant-related resources at his university, such as a grants office to assist in editing proposals, added to this disconnect.

    “My university is mainly an undergraduate institution,” Sanabria-Ríos says. “We don’t have specific programmes” to help with grant writing, such as those found at the Massachusetts Institute of Technology in Cambridge and other institutions.

    Sanabria-Ríos is not alone. Writing grant proposals is necessary to advance in the scientific world, but the challenges that have to be overcome to get research funded can feel more intense at small and less research-intensive universities.

    For example, in the United States, research-intensive universities that regularly receive top-tier grants from institutions such as the NIH are known as R1 or ‘very high research activity’ institutions. They typically have administrative offices dedicated to moving grant paperwork along and infrastructure to support researchers who take time off from teaching to write proposals. However, for researchers at smaller institutions in the United States that mainly serve undergraduates and have a large proportion of students from minority backgrounds, such resources for grant writing are scarce.

    To bridge that gap, researchers on these campuses are using their shared experiences to help each other stay on track and overcome what might be unfamiliar logistical obstacles in their grant proposals, such as crafting a realistic research budget or carving out time in their busy schedules to write. For some researchers, this might mean holding informal writing sessions together and sharing goals over coffee; for others, it means finding mentorship outside their university.

    Katia Del Rio-Tsonis missed out on this kind of community support when she began her research career at the National Autonomous University of Mexico in Cuernavaca in the 1990s. Informal support and mentorship between colleagues when writing grants might be even more valuable than resources that are offered by institutions, says Del Rio-Tsonis, who is now a biologist at Miami University in Oxford, Ohio. Miami University is an R2 institution — defined as having ‘high research activity’ — that has a large proportion of undergraduate students compared with postgraduate students.

    “There has been an incredible change in the support,” she says. “A lot of us try to find colleagues who can be helpful.”

    Pulling together from grass roots

    When biologist Kelly Tseng arrived at the University of Nevada, Las Vegas (UNLV), in 2012, writing grants was just one of many new challenges she faced. UNLV is a minority-serving institution — that is, it has a significant number of students from one or more minority groups including Indigenous people and those from Black, Hispanic, Asian and Pacific Islander backgrounds — and it achieved R1 status in 2018.

    For new investigators, Tseng says, there can be a lot of obstacles that eat away at the time researchers can dedicate to grant writing, such as setting up their independent laboratory or taking on a full teaching load. The grant-writing process itself can also be confusing for those with limited experience, she says.

    “It’s not just writing the proposal but there are many other documents that a researcher needs to prepare, such as a budget, that need to be submitted at the same time,” Tseng says. “And sometimes you focus so much on the proposal that you forget about the other parts.”

    Biologist Melissa Harrington is the associate vice-president of the research-development team at Delaware State University in Dover, an R2 institution and a historically Black college and university. Harrington says that a lot of new investigators who arrive at Delaware State are starting their grant experience from scratch.

    Many, she says, “have never seen a grant proposal; they don’t even know what it looks like. I see that as the biggest obstacle.”

    Having earned a PhD from Harvard University in Cambridge, Massachusetts, before arriving at UNLV, Tseng was familiar with what a good grant proposal looked like. However, she still faced a steep learning curve when it came to submitting her own proposals. One resource that helped her to work through those growing pains was attending informal grant clubs hosted by faculty members in her department.

    “This was started by a couple of senior faculty members who came from research-intensive institutions, who had success with grant writing for the NIH,” Tseng says.

    The idea behind the club, she explains, is that anyone working on a proposal in the cellular biology department could sign up for a weekly slot to bring in a section of their draft proposal and receive feedback from two senior faculty members. These draft sections were also shared with any other faculty members who were interested in attending the meetings and they could listen in on the feedback given.

    “Many people who participated found it really helpful to clarify their proposal,” Tseng says. “Sometimes, when you spend a lot of time writing a proposal, it becomes hard to see the weaknesses in it.”

    For Wendy Beane, a biologist at Western Michigan University in Kalamazoo, the accountability she needed to keep her proposals on track was missing. She says that although her university, an R2 institution with a high percentage of undergraduate students compared with postgraduate students, offers some support for grant writing, colleagues also turn to each other for help with staying on top of grant deadlines.

    “The biggest issue with grant writing is that it’s probably the most important thing you need to do, but it always gets put to the bottom of the list” when you have an assay experiment to run or a deadline for submitting a talk, Beane says. “Holding each other accountable is something we did at the grass-roots level.”

    When she was a junior faculty member, Beane says, she and a group of her peers would come together and set goals with each other, either through in-person conversations or by e-mail, to help them achieve milestones during their grant writing, such as submitting a proposal by the end of a grant application cycle. Beane says that the cohort also held small group-writing sessions in a colleague’s office once a week for about an hour.

    “We’d say ‘we’re going to get together in so-and-so’s office, bring your coffee’ and then we would just sit in the same room and type,” Beane says. A strict no-talking rule was implemented during dedicated writing time.

    The importance of mentorship

    Although group support can be important for success, it doesn’t necessarily replace one-on-one guidance through mentorship, Beane says. As she sees it, there are three levels of mentorship that are important to draw on when writing a grant: feedback from someone outside your field, feedback from someone in your field but outside your institution and your ‘work best friend’ who will be candid with you.

    The mentor from an external institution can be particularly beneficial, says Del Rio-Tsonis, who has been a mentor to Tseng. “Cross-institutional mentoring is important because then you don’t have a bias and you don’t have to deal with departmental politics or jealousy,” she says. “You’re just helping with the science.”

    But it’s not always easy for new investigators to make these mentorship connections. At Delaware State, Harrington says, such connections are supported by an NIH programme called Centers of Biomedical Research Excellence (COBRE) that puts in place a more formal mentorship programme, which is both internal and external to an institution.

    A grant-writing training session at a 2022 Interactive Mentoring to Enhance Research Skills (iMERS) workshop at the University of Kentucky in Lexington.Credit: University of Kentucky Photography

    For Sanabria-Ríos, mentorship came from a programme at the University of Kentucky in Lexington called Interactive Mentoring to Enhance Research Skills (iMERS), which offers free mentorship to faculty members at minority-serving institutions who are looking to land an NIH grant.

    Melissa Nickell is the centre administrator for iMERS’ sister programme, the SuRE Resource Center, which is also based at the University of Kentucky. Nickell says that these programmes work mainly with resource-limited institutions that have researchers who have great scientific ideas, but might lack some of the nuts and bolts for successful grant writing. For example, some scientists might not fully appreciate the details that are needed to make a grant proposal both compliant and competitive, she says.

    Sanabria-Ríos first began working with his iMERS mentor virtually during the COVID-19 pandemic, in May 2020. His mentor, Sarah D’Orazio, a microbiologist at the University of Kentucky’s College of Medicine, advised him on how to write persuasively and for a non-specialist audience, because NIH reviewers are not always in a researcher’s field or subfield. Taking that advice to heart, Sanabria-Ríos submitted his grant proposal to the NIH in February 2021 and received a score and constructive feedback for the first time — but the proposal was ultimately rejected.

    “When I received my score, I was happy,” he says. “It was a good score, but not a fundable score. But I recognized it as an invitation for resubmission.”

    In June 2022, Sanabria-Ríos met D’Orazio in person in Lexington, and they worked together to provide targeted revisions in response to the harshest bits of feedback on his proposal. He resubmitted the proposal in February last year for NIH R15 funding, which supports small-scale research projects at mainly undergraduate institutions and funds researchers who have not previously received significant NIH grants. He proposed to develop synthetic fatty acids, which can form holes in bacterial membranes and ultimately lead to cell death, as a new type of antibiotic that might be difficult for bacteria to develop resistance against. His resubmission won approval.

    “This is the first R15 grant that my institution has received in its history,” he says. “We are working hard to enhance the level of research at our institution. This is a specific example of moving in that direction.”

    Even for researchers at small, low-resourced institutions, support for grant writing will look different at different universities. What could be helpful for some scientists, Beane says, might come across as micromanaging for others. What remains crucial is the sense of community and support that researchers find with each other.

    “Most of us face more noes than yeses” when it comes to grant proposals being funded, Tseng says. “It’s always helpful to have others to talk with about it and to learn from each other’s experiences. It’s really just a support to keep writing and keep submitting.”


  • Science communication will benefit from research integrity standards


    An anti-vaccination protester in New York City. Researchers are aiming to improve public trust in science by discussing uncertainty in their communications.Credit: Michael Nigro/Pacific Press/LightRocket/Getty

    “Twenty seconds, professor, and no long words.” This is what a BBC producer once told Ian Fells, a chemical engineer at Newcastle University, UK, shortly before Fells was due to appear on a live broadcast. It was more than 30 years ago, at a time when few researchers were trained in how to condense science into sound bites, while staying true to the accuracy of their message.

    Today, that challenge could be even bigger. The smartphone makes every researcher a potential writer, audio producer or broadcaster. Although many scientists have taken to communicating directly with the public, others are afraid to do so, not least because social-media platforms offer few guardrails or protections against disinformation. Another reason for their hesitancy is that principles that are fundamental to research — such as the scientific process, uncertainty around the results and the context — are difficult to fit into fast and short content formats.

    Rhys Morgan, head of research policy, governance and integrity at the University of Cambridge, UK, has a fairly radical — or at least unusual — proposal. In a report published last month by the League of European Research Universities (LERU), a network of 24 institutions, Morgan proposes that public-facing science-communication work should adhere to the same research-integrity principles that are used for scholarly publications, and suggests that universities should support scientists who do so (see go.nature.com/4hxw4ag). In journal articles, researchers describe the methods used to obtain their findings and whether, for example, animals or artificial-intelligence tools were used in experiments; they explain how a finding fits in with the current knowledge in a field and declare conflicts of interest.

    The idea deserves more attention from universities, companies and campaigning organizations — all of which are now much more involved in science communication than at any time in the past. It might not work in all contexts and there will be challenges to its implementation, but the concept should be discussed more widely.

    There’s a view in the world of professional communication — for example, in companies that provide media training — that people prefer certainty to uncertainty. There are also studies that support this idea, not least the work of Daniel Ellsberg (published before he became famous for revealing a classified US study on the Vietnam war). The problem with emphasizing certainty as the default option when communicating science to a wider audience is that this is not how researchers discuss their findings in scholarly journals. In such instances, data are often communicated as a range, with levels of confidence in the results. Most researchers are careful not to overstate a finding, or use language that could be misinterpreted to mean certainty. Communicating results that sound certain when they are provisional could also harm a researcher’s reputation. Public trust in science, already under strain in many countries, could be further reduced (C. Dries et al. Public Underst. Sci. 33, 777–794; 2024).

    The LERU report doesn’t go into how Morgan’s proposals could be implemented. But there are important implications for corporate, government and university media offices. Many press officers work closely with scientists to ensure science is communicated accurately both on social media and in conventional mass media. They go out of their way to find researchers who have knowledge about and passion for what they do. However, at some institutions, staff members have fewer resources to communicate research results than in the past, according to a 2022 report on the changing role of university press officers by science-communication consultant Helen Jamison for the Science Media Centre in London (see go.nature.com/3ccqxba). This is in part because many senior leaders in universities regard press-office communication as mainly about boosting their institute’s profile and reputation. Morgan’s and Jamison’s reports suggest that scientists need to be supported better by their institutions and recognized for their efforts in research communications, too.

    Communicating uncertainty is often difficult, but there are tools and research available to those willing to try. ‘How to Communicate Uncertainty’, a 2020 report by researcher Dora-Olivia Vicol at the University of Oxford, UK, summarizes some of the literature nicely and provides helpful suggestions, such as how to effectively discuss a range of values and what the impact on audiences is when different words are used to describe uncertainty (see go.nature.com/3ufox9j). It was published by a consortium of fact-checking organizations: Africa Check in Johannesburg, Chequeado in Buenos Aires and London-based Full Fact. This shows that the ideas proposed by Morgan were already on the radar in this communications sector.

    Science communication can do more to embrace uncertainty. It’s up to everyone who talks about research to consider describing both the process and the outcomes of the work — even in a 20-second sound bite with no long words.


  • Is there a ‘Goldilocks zone’ for paper length?


    An unusually long and complex research paper has caught the attention of the scientific community, sparking questions about the ideal length of a paper. The study, by computational biologist Manolis Kellis and his colleagues, was published in Nature1 in July. Spanning 35 pages, it contains more than 20,000 words, and has 16 figures — or 61, if those in the Supplementary information are included. It describes changes in the genes, cellular pathways and cell types of people with Alzheimer’s disease across six regions of the brain, and provides a detailed atlas of gene expression.

    When Kellis, who runs a computational-biology laboratory at the Massachusetts Institute of Technology (MIT) in Cambridge, shared the paper on X (formerly Twitter), the size of it seemed to divide his peers. Some were complimentary: “It must have taken a lot of effort and resources to get this done, so all in all, it is a great paper,” one response read. Others were concerned about its usefulness. “How can anyone read this article, let alone review and critique the work?” asked another user.

    An analysis of paper characteristics across scientific fields2, published in 2023, suggests that this study is an outlier in the medical and health sciences, where papers typically hover around ten pages in length. However, it is not so unusual when compared with papers in subject areas such as mathematics, law or the humanities — all of which often exceed 20 pages.

    Kellis’s work raises the question of how accessible research papers should be, and how readers in and beyond academia are expected to consume them. For example, is it better to publish large data sets alongside long and dense papers, to keep the information contained in one place? Or should researchers home in on specific topics and publish their results across several papers?

    Alireza Haghighi, a geneticist at Harvard Medical School in Boston, Massachusetts, says that there is value in the former approach, particularly at a time when data sets are becoming increasingly large. “Although focus has traditionally been important in publications, we must acknowledge the complexity of new methods and the huge volumes of data generated today,” says Haghighi. “Not all papers can or should be understood in one hour.”

    Does size matter?

    Papers that provide broad, detailed overviews and extensive data sets — sometimes called ‘atlases’, in the omics fields of genomics, transcriptomics and proteomics — allow researchers to see the big picture, says Haghighi. They enable readers to “identify connections across different areas, and generate new hypotheses”, he explains, and adds that he sees them as drivers of innovation that can guide large-scale, integrative research initiatives better than a more focused paper might.

    Responding to the discussion on X, Kellis said he understands that some people will be overwhelmed by his lab’s paper. He likened the work to “a good book with many chapters and many pages”, and said that “each paragraph, parenthesis, panel, supplementary figure, can hide potential hints and secrets that the authors themselves may have missed”. Kellis also suggested that for those who were overwhelmed by the results, tools such as the ChatGPT Consensus app, which is regarded as an academic search engine, could be useful for summarizing some of the paper’s findings.

    Li-Huei Tsai, a neuroscientist at MIT and a corresponding author on the paper, told Nature Index that she is proud of the work, which has “produced important insights into genomic underpinnings of Alzheimer’s vulnerability and resilience”. Kellis did not respond to Nature Index’s request for comment.

    Researchers who spoke to Nature Index flagged a number of issues with big, data-dense articles. Luke Dabin, an epigeneticist at the Indiana University School of Medicine in Indianapolis, is a “huge fan” of big data sets and atlas papers, because they have the potential to be a hotbed for generating hypotheses and can inform the design of future experiments. But Dabin says that such papers can sometimes be difficult to interpret — even by scientists working in the same field — and can have quality-control issues. “The Kellis paper has 475 figure panels and is difficult for me to digest, let alone someone with no training or experience in single-cell omics,” Dabin says. Haghighi agrees that accuracy can become a problem in large papers. “We should appreciate that atlas maps are more prone to inaccuracies due to their scope and complexity,” he says.

    Such papers can also be resource-heavy for journal editors to publish. It took almost two years for Kellis’s paper to progress from acceptance to publication, although it might not have been under review the entire time. A spokesperson for Nature noted that “the length of the review process for papers submitted to Nature varies considerably from manuscript to manuscript”, and said that its primary focus is “to ensure that a rigorous peer-review process takes place”. (Nature Index’s news and supplement content is editorially independent of its publisher, Springer Nature.)

    On X, Kellis noted that “it was a Herculean task by the reviewers and editors, as it was of course for the authors, to go through every figure, every panel, and every result” as part of the publishing process.

    A case for brevity?

    Some researchers argue that there is simply not enough time to read such long and dense papers. “The readership on most academic papers is low anyway, so writing a long paper is just inviting it not to be read even more,” says Daniel Price, an astrophysicist at Monash University in Melbourne, Australia, and former editor-in-chief of the journal Publications of the Astronomical Society of Australia, which publishes research on data-heavy topics such as modelling and computational astronomy.

    Price says it’s unlikely that anyone has ever read the entirety of one of his monster astrophysics papers3, which clocks in at 82 pages and has 57 figures. “It’s definitely too long,” he says of the paper, admitting that it could easily have been 60 pages instead. The problem with going long, he adds, is that it’s “undisciplined” and compromises the ability to self-edit.

    Haghighi says some improvements could be made to long, data-heavy papers. He suggests that publishers standardize the way such papers are formatted and published by introducing new editorial guidelines and implementing “a dynamic, continuous review process” that allows authors to update their work regularly over time, after publication. “I appreciate that this might not be easy,” says Haghighi, but “it would make the review process more effective and consistent and make it easier for the scientific community”.

    Formatting guidelines at most major journals tend to favour shorter articles with fewer figures. Nature, for instance, suggests the typical length of biological, clinical and social sciences papers should not normally exceed 8 pages, or 4,300 words, and 5–6 figures. That said, it does not enforce specific limits, and instead leaves this up to the editor’s discretion.

    In astrophysics, a field that is characterized by vast data sets that are often analysed by large, international teams, there are some examples of how a research finding can be broken down into more digestible parts. For instance, after the first image of a black hole was captured by the Event Horizon Telescope — a global network of radio telescopes run by a group of more than 300 scientists — the team published 6 papers in a special edition of The Astrophysical Journal Letters. Each paper presented an aspect of the research, looking at methodology, specific features of the black hole and the image itself.

    Price thinks paper series such as this are “definitely a better idea” than one long paper, and adds that there is a lot to be said for concise papers. He points to a 2016 paper published by the LIGO Scientific Collaboration4 — a conglomerate of more than 100 institutions collaborating in the search for gravitational waves — after its seminal detection of gravitational waves using instruments in Washington and Louisiana. “It’s eight pages [ten, including references] and it revolutionized astrophysics,” he says.  

  • I had to let a student go and I feel as though I failed them — how do I do better next time?

    Cartoon showing a scientist climbing a ladder made of DNA cutting a climbing rope with a hand reaching up from below.

    Illustration: David Parkins

    The problem

    Dear Nature,

    A PhD student in my laboratory was consistently unmotivated and failed to do the most basic things that I consider essential for research, such as keeping an up-to-date notebook. This is one of the requirements outlined in the lab manual and I ask all members of my team to sign an agreement committing to abide by it.

    I tried to help the student, but nothing seemed to get through to them, and after giving them many warnings I asked them to find training elsewhere.

    I feel it was the right thing to do for the sake of the lab, but I’m also left with feelings of guilt and personal failure. I’m a woman of colour and have regularly faced colleagues who didn’t give me a fair opportunity to develop my research and advance my career. As a result, when I started my own lab more than a year ago, I was determined that I would not hinder anyone’s progress. I’m now left with a sense that I contributed to the same gatekeeping I experienced.

    Was the way I handled the situation wrong? Could I have done more to support the student? And how can I do things differently next time to ensure that I don’t feel this way again? — A rueful molecular biologist

    The advice

    Nature asked two careers advisers and a research-group leader to answer your questions. They all agree that letting go of a lab member who is unmotivated and not responding to your efforts to help is the best thing for everyone involved. However, they did have some advice on how you might prevent the situation arising again and ensure that you are doing all you can to support the student, even after they leave.

    Harmit Malik, a geneticist at the Fred Hutch Cancer Center in Seattle, Washington, makes sure that his team members meet any prospective addition to the lab and can give their opinions on a candidate’s suitability and attitude. “It is our job to look past what they’ve achieved before — because it could be a function of privilege or something else — and really focus on motivation, their interest in the lab and their curiosity for science,” says Malik.

    Malik adds that it can be hard to stay mindful of these factors as a new principal investigator. “The oppressive nature of an empty lab means that you’re dying to fill it with people,” he says. But, at this stage, it’s even more important to be vigilant: taking on someone who needs a lot of attention and monitoring will add unnecessary stress. “Hiring the wrong person is worse than hiring no person at all,” Malik says.

    Making your expectations clear is the next step. Raquel Salinas, director of student affairs and career development at the MD Anderson Cancer Center in Houston, Texas, says that putting together a lab manual and asking any new lab members to read and sign it, as you did, works well. “This just outlines what your expectations are as a faculty member, and what the student should expect from you.” She says this should be explicit, achievable and in clear language. New team members must also feel able to discuss any aspects they are unsure about in an open, non-intimidating environment.

    If a lab member isn’t fulfilling the responsibilities that they have agreed to, you need to consider all the potential reasons why. Ashley Ruba, based in Seattle, works as a careers consultant for PhD students. She says that some students who are struggling might be finding it hard to navigate what she terms the ‘hidden curriculum’ of an academic research career: the social and cultural norms and responsibilities, which might not be explicitly taught, such as building a professional network, developing a research compass and maintaining a healthy work–life balance.

    Salinas says that supervisors have a responsibility to ask whether there are any external factors that might be influencing a student’s work, such as their mental or physical health. This can be a difficult topic to discuss, both for you and the student. “We might frame it as ‘Is there something I should know about that’s affecting your work?’ or ‘Can I help connect you with some resources that might help?’,” says Salinas. The student doesn’t have to share anything, and you shouldn’t expect them to, but asking shows that you recognize that problems arise and that you’re open to discussing them.

    Ultimately, when a lab member fails to meet expectations, you need to have an open and honest discussion, which you did. However, simply asking someone “How can I help you to succeed?” places the onus on them and is unlikely to result in effective suggestions, says Salinas.

    Malik says that having standardized paperwork can make these conversations easier, and ensure that both you and the student are clear about what the problem is and what you expect from each other going forwards. Having a written record of these discussions will also help further down the line when assessing how well the student has achieved the goals you agreed on.

    For these discussions, Malik uses a sheet from the individualized development plan developed by Angela DePace, a systems biologist at Harvard Medical School in Boston, Massachusetts, and her colleagues1. This has sections covering accomplishments, research goals and professional and personal targets. Whenever anyone commits what Malik considers to be a serious breach of lab protocol, he works through it with them. “When issues come up, that form is our default option in terms of discussing what went wrong and what I would like the student to do, and then we both sign it,” he says.

    If a student repeatedly fails to meet the expectations set, then you are entitled to ask them to leave the lab, Malik says. Having the humility to recognize that this situation isn’t necessarily any fault of your own is the best way to avoid feelings of guilt or failure. Salinas also suggests helping the student to find a lab that might be a better fit. “You can just acknowledge that ‘I don’t think I’m the right mentor match for you, but I want to help you transition on to a lab that might be a better scientific fit or a better working-style fit’,” she says.

    Nevertheless, Salinas says that if you have to let someone go, it’s always a good idea to question the reasons why. “The faculty member, being new, is right to reflect on their practices,” she says. Ruba adds that if you do find yourself letting go of more people in the future, you should seek help, advice and feedback on your mentoring style.

    Salinas says that mentoring should be “a two-way street”, and you should always be receptive to feedback from those you supervise. Everyone can improve, and it’s important to be self-critical in a constructive manner without being saddled with guilt. Ruba is more direct: “If it’s just one student, it might not be you,” she says. “But if it’s multiple students in your lab who are leaving, then it probably is you.”

  • How we pivoted to studying Ukrainian researchers during the war

    The Odesa Technical College is seen damaged with smashed windows and a smashed clock

    Many universities and public buildings in Ukraine have been destroyed during the war. Credit: Yulii Zozulia/Future Publishing/Getty

    After Berdiansk in eastern Ukraine was occupied by Russian forces in March 2022, living and working safely became nearly impossible, especially for those unwilling to work under the Russian authorities. As a result, our university, Berdyansk State Pedagogical University, now operates only virtually.

    Our academic community will never forget this period. No one knew what lay ahead: constant power cuts, unstable Internet, the fear of missile attacks and frequent air-raid alarms. We spent our days searching for a phone signal, food and electricity — these resources were scarce. Whenever we found mobile coverage, we called and wrote to each other — colleagues, students and the administration — sharing information, discussing concerns and trying to address the many questions about surviving under occupation. These calls made both of us realize the importance of support, communication and having like-minded individuals around us. At the same time, the experience taught us to value things we once took for granted.

    After being forcibly displaced to different regions in Ukraine, we realized that the best way to express ourselves and draw attention to the crisis was through research. Shifting our focus from nanotechnology (Y.S.) and teaching excellence in higher education (N.T.), we decided to collaborate to investigate the impact of the war on Ukrainian academics.

    Before the war, we were a university administrator (Y.S.) and an associate professor of psychology (N.T.) — we often felt that our work was underfunded, and that we were in a rut when it came to our scientific contributions. Now, however, our new research focus has become our raison d’être, redefining our academic purpose. By sharing our findings, we reach policymakers, international organizations and academic communities.

    Our studies have included several rounds of surveys of more than 1,500 Ukrainian academics, investigating the challenges they face and their needs during the war, as well as around 100 interviews with displaced researchers, exploring their experiences of adapting, resuming scientific activity and managing their mental health. Our studies also document the war’s long-term impacts on higher education and science. For us, this is more than just research — it’s a way to tell the truth about the war’s impact on the academic community, and to guard the scientific front while soldiers on the front lines defend our country.

    At the beginning of the war, our research focused on the challenges faced by academics at relocated universities. We investigated the ability of both those who relocated and those who remained under occupation to continue their research, and explored how to support our colleagues. We also studied the university relocation process, documenting these experiences in publications.

    Science under air raids

    Our research had two key findings1. First, Ukrainian academics forced to work under wartime conditions face challenges that make scientific activity nearly impossible. Issues such as poor Internet connectivity, loss of infrastructure and the growing sense that pre-war research projects are no longer relevant significantly hinder progress. Therefore, targeted support from the global scientific community has been crucial to address the daily struggles of Ukrainian academics, who remain determined to continue their work, whether in bomb shelters, remotely from various Ukrainian cities (including those under Russian occupation) or from abroad.

    Natalia Tsybuliak stands at a lectern to present research on mental health

    Natalia Tsybuliak presents her research on the mental health of Ukrainian researchers during the war. Credit: Natalia Tsybuliak

    Second, although mobility programmes offered new research opportunities abroad, many displaced Ukrainian academics struggled to return to their pre-war levels of scientific activity. For many, the psychological and logistical challenges of resettlement hampered their ability to contribute fully. And many displaced scientists had to shift their research focus. Given these challenges, supporting researchers who remain in Ukraine through remote collaborations could be a more effective solution for both short- and long-term outcomes.

    We also studied the university relocation process, focusing on the resilience of institutions such as Berdyansk State, which adopted a ‘university without walls’ model: giving up on a physical location, and instead running as an entirely virtual university2. Despite losing access to its physical campus, the university continues to operate online, carrying out its three core functions: teaching, research and serving the displaced community. We estimate that there are 28 other relocated Ukrainian universities that have developed various strategies to continue operating.

    We analysed key hurdles, such as the loss of material resources, personnel retention and maintaining academic continuity. Despite these difficulties, the university retained around 80% of its staff and student body after the virtual transition. The rapid shift to remote teaching and learning highlighted the importance of reliable digital platforms. Studying the experiences of Berdyansk State and other relocated universities helps to inform policy development aimed at better supporting the academic community in military conflict zones, especially if university buildings are reduced to rubble — as has been seen not only in Ukraine, but also in other conflict regions, such as Gaza.

    Health in wartime

    Another focus was the mental health of Ukrainian academics. We have already uncovered alarming rates of burnout3. Our findings show that burnout is most severe among those who have been internally or externally displaced. Unsurprisingly, 48% of men and 61% of women experienced high levels of emotional exhaustion during the war. Contributing factors include deteriorating security, economic instability and increased professional workloads.

    Displaced academics have taken on heavier teaching loads, adapting their course content and teaching methods to digital, flexible formats. Many are also mentoring more students, offering support for each student during wartime and managing increased administrative duties as they navigate new operational challenges. Displaced academics, in particular, have limited resources and must rebuild their work in new environments. They also reported dealing with isolation, losing social ties in their original communities and struggling to form new ones.

    We further investigated anxiety among Ukrainian academics two years into the full-scale war: 44% of participants reported experiencing moderate to severe anxiety4. In peacetime, anxiety levels among academic staff typically ranged from 26% to 38%. Male academic staff members reported higher anxiety levels than did their female counterparts, a reversal of typical peacetime trends. Displaced academics experienced higher levels of severe anxiety than did those who remained in place.

    In times of crisis, science has become our way to survive, to understand and to contribute to something greater than ourselves. In the darkest moments, we found light in our research, which continues to guide us as we navigate these stormy waters. Science is not just a job. It’s a path to understanding ourselves and the abnormality around us — a path that gives us strength and faith.

  • How I’m learning to navigate academia as someone with ADHD

    A photograph of Ana Bastos

    Rather than being a hindrance, an ADHD diagnosis helped Ana Bastos to excel as a scientist. Credit: Antje Gildmeister

    Some years ago, an advert caught my eye: “become a bus driver”. I felt tempted. I was in my second postdoctoral programme, juggling several projects, my first supervision duties and teaching — all on top of adjusting to a new country and managing a long-distance relationship. I was exhausted. I told my doctor that my deepest wish was to fall asleep and never wake up. They said this wasn’t good news.

    I had been depressed before, so I knew I needed professional help. What I didn’t know was that there was a deeper reason for my permanent anxiety, troubled sleep and frequent cycles of feeling overworked and burnt out. I would find out only years later that these were signs of attention-deficit hyperactivity disorder (ADHD).

    This one piece of information changed everything for me, and helped me to see myself through the lens of neurodiversity: not as an outsider, but as someone whose mental make-up is different from that of many other people.

    But what could I do about it? Academia is an environment characterized by high competition and uncertainty, with constant pressure to do more and work faster; social interactions are important, and stiff bureaucracy abounds. These aspects pose major challenges to motivation-driven, highly observant — and thus distractible — brains, such as mine, which often exhibit emotional dysregulation, poor impulse control and a low tolerance for frustration.

    While on sick leave, which I took following the burnout that led to my diagnosis, I had time to focus deeply on my thoughts. I realized that my ADHD was part of the reason I had progressed in my career, fuelling, for example, my desire to work across different disciplines. Over the decades of going through school and work undiagnosed, I had developed habits and tools that were very helpful for coping with the neurotypical world.

    Here are some of the aspects that helped me most on my way from postgraduate study to a professorship, but the list is far from exhaustive. I hope that these tips might help early-career scientists who are struggling like I did — and sometimes still do.

    Keep moving

    I realized that the periods when my mental health was at its best coincided with times when I was very physically active, through sports and dancing, for example, and when I kept my brain stimulated by pursuing hobbies such as learning new languages. I make an effort to work sports activities into my weekly schedule, and come up with athletic goals to keep myself motivated.

    Physical activity helps me to sleep better and reduces my anxiety. Sometimes, I even go to the pool to swim specifically to think about complicated problems or develop proposal ideas.

    Manage energy, rather than time

    I have lost count of the hours I have spent trying to implement standard time-management tools, only to ignore countless reminders to take a break while debugging code or staring at the screen, feeling nauseous, trying to ‘eat the frog’ — that is, do the hardest task first.

    Instead of managing time, I now manage my motivation by setting daily and weekly goals. On Monday, I add to my planner goals for each day of the week — no more than one big task per day, as well as smaller tasks, and mark the urgent ones. I avoid adding tasks that will require focus on days I know I’ll be prone to distraction. I switch non-urgent tasks between days if I’m just not in the mood to tackle them.

    I start my day early so that I can have some distraction-free time, during which I can hyperfocus on tasks I find most motivating, such as writing or analysing data, or cross urgent tasks off my list. The positive kick then helps to keep me going through the day.

    Use external motivators

    Some tasks still feel difficult without proper motivation. To get those done, I try to find other ways to motivate myself. I prepare a cup of tea and put on my reading glasses and headphones, and my brain is ready to go. Music keeps my brain stimulated and can induce certain moods. For example, I have a playlist I listen to when preparing proposal defences, and dance frantically for a few minutes before giving one.

    I assign rewards, such as going swimming or watching a film, for completing certain demanding tasks. I set my own deadlines for everything, typically days or weeks before the real ones, to ramp up urgency-driven motivation. Plus, the buffer allows time for final revisions to fix mistakes.

    Organize

    Keeping an organized space and workflow reduces distractions and helps me to avoid mistakes and keep track of things I would otherwise forget. I always carry my weekly planner, and use it not just to manage tasks, but also to keep notes, ideas and travel plans in one place. I write down everything important — and when I know I will need a reminder about a deadline, I mark it in my planner and set a reminder on my mobile phone.

    I break large projects into small tasks, which feel less overwhelming, and then plan backwards, scheduling each task on the basis of the timeline I’ve set.

    Seek help and find allies

    Academia can be a lonely place when you are struggling with your mental health or adjusting to new career demands. The pressure to shine, along with the social stigma surrounding mental illness, can make it hard to be open about the challenges you’re facing. Institutional support for mental health is often missing or inadequate. Seeking advice from a physician is key to avoiding a downward spiral. I am sure I would not have made it this far in my career without medical help and therapy.

    Equally important was meeting trusted colleagues and mentors to whom I could turn for advice or to vent. They gave me the strength to keep going and showed me that, under the glitter of exciting news and success stories, there’s a less perfect but more authentic and empathetic side of academia.

    Take charge, one step at a time

    As your career progresses, new responsibilities and challenges will arrive. I now accept that I need more than a year to adjust to a new environment, and that, in that time, I will probably have to develop new habits to suit the new setting.

    It is not always easy to identify the sources of stress, let alone determine what changes in behaviour or perceptions might help in adjusting to new situations. I try to be kind to myself when everything feels overwhelming or when I fail to keep up with expectations. I know that by patiently embracing this path, I will eventually, but slowly, regain my balance.

    I realize now that a career in science can be a great option for naturally curious, creative, observant, tenacious and highly energetic minds. But accommodating these individuals requires acknowledging diverse ways of thinking, working and communicating, and promoting inclusive working environments. All would benefit from this approach, neurotypical and neurodivergent alike.
