Tag: Lab life

  • Should I climb the career ladder as a manager, or will I regret leaving the lab bench behind?

    Cartoon of a man jumping from a lab jacket into a business suit.

    Illustration: David Parkins

    The problem

    Dear Nature,

    I am a chemical engineer with a PhD, working in the food industry. I’m at a point in my career where I need to decide whether I want a managerial career path or should stick with technical, problem-solving work in research and development.

    My biggest worry is that, if I make the wrong decision, my career will go in an unsatisfying direction and I’ll regret it forever. I do not want to be in a situation where I have to spend a lot of time and energy to correct my path.

    I’m looking for some guidance and resources, such as published literature or personality tests, to help me choose. I’d rather spend time considering this now than spend the rest of my career kicking myself for not being more thoughtful in my decisions.

    Thank you — Chem Eng. at a Crossroads

    The advice

    You’re not alone. The transition from technical roles to management is a common theme in the careers of scientists and engineers who work in industry. Deciding whether, when and how to make the move is a serious undertaking.

    In industry, an individual contributor is someone ‘doing the work’ of research and development. They answer to a project manager or supervisor, but do not have anyone who answers to them. Although these jobs are what people tend to think of when they envision a scientist’s work in industry, companies often offer limited opportunities for promotion on this path.

    Lack of chances for advancement in technical or hands-on roles can lead mid-career engineers and scientists to transition to management, even when they don’t have the skills, working style or inclination to succeed in a leadership role. One 2008 study1 found that mid-career engineers who felt ‘derailed’ in their career paths tended to be reluctant, under-prepared managers. They felt passed over for further promotion, experienced little satisfaction in their work and had a reduced sense of personal effectiveness in their work.

    Nature reached out to three scientists for guidance on how to approach this kind of career crossroads.

    Know yourself

    Roni Wright is a molecular biologist who runs a laboratory group at the International University of Catalonia in Barcelona, Spain. She also runs workshops, courses and one-on-one training in career development for scientists at the Barcelona Biomedical Research Park, which brings together research institutes based in the city. The company you work for might have something similar — large organizations and research centres often offer career-development resources for their employees. Wright suggests that the first step is to carry out a self-assessment, reflecting on your skills, working style and values.

    You asked about personality tests. These are hotly debated scientifically, but they can be helpful starting points for self-reflection, providing some insight into your behavioural patterns and decision-making style, which is why large employers often use them to encourage such reflection. Wright suggests that the Myers–Briggs Type Indicator (MBTI) and DISC assessment are popular places to start. The MBTI was developed by US writers Katharine Cook Briggs and Isabel Briggs Myers in 1944, inspired by the work of Swiss psychologist and psychotherapist Carl Jung. Through a series of about 90 questions, the MBTI evaluates the test-taker’s preferences in four aspects of personality (introversion–extraversion, sensing–intuition, thinking–feeling and judging–perceiving) and sorts them into one of 16 types.

    DISC assessments, based on the DISC personality theory developed by US psychologist William Moulton Marston in the 1920s, are specifically geared towards workplace interaction. They categorize the test-taker according to four personality profiles — which Marston called dominance, inducement, submission and compliance — to help them understand their own working style and develop strategies for engaging with others. Since the 1940s, various companies have published assessments based on Marston’s theory, including the publishing company Wiley, with its test Everything DiSC, and Truity Psychometrics in Roseville, California. Most companies update the model and adapt the acronym to their own terminology.

    Versions of both these self-assessments are available to take online for free.

    Honest conversation

    To get a clearer understanding of your own strengths and weaknesses, Nimrod Levin, a vocational psychologist and career-counselling specialist at the University of Lausanne, Switzerland, recommends getting an outside perspective. “Talk to people you trust at all levels of the organization — meaning people that are above you, at the same level and below you — and have an honest conversation about this career move,” suggests Levin. “How do they see it, what do they anticipate being a challenge for you and what do they see that would be an asset for you in one role or the other?” In this “360-degree reflective process”, some recurring themes are likely to reveal themselves.

    Your question alludes to another important, and often-underestimated, factor — the people you would be working with. Levin says that, in his experience, “it’s often more the interpersonal environment, than the specific tasks of the job, that determine to what degree the person is happy”. Instead of framing this as choosing between two job titles, you could look at it as a choice between configurations of co-workers: the groups of people you would be working with and how you would relate to them in either role.

    Personal situation

    Jennifer Hunt offers a personal perspective on career shifts in your field. Hunt is a chemical engineer who worked in research and development for 33 years, first as an individual contributor and then as a project manager for contracts to develop hydrogen fuel cells. When the opportunity arose, she transitioned out of research and into a more people-focused role in applications engineering at Unison Energy. That career move helped Hunt, who is based in California, to find the financial stability she needed at that stage of her life. “I had two small kids. I didn’t have another income coming in from a partner, and I didn’t know if I’d have a job after each contract was up,” she says. “I decided that I needed something else.”

    She continues: “Instead of the hamster wheel of always trying to find funding for the next project, I had a steady income. So that is something to ask yourself — how much of the decision is about finances? On the managerial path, you end up making more money.”

    But it’s not for everyone. “As a manager, you have responsibility over the livelihoods of the people on your team,” Hunt stresses. “They need you to be their guide. It’s a tricky role.” The best bosses, she says, are the ones who are able to teach without demeaning, learn from the people who work for them and act as mentors to their teams. If you can do that, you might find management very fulfilling.

    Hunt doesn’t regret taking the leap, but leaving the lab involved some sacrifice.

    “I will say that I loved working in the lab. I missed the high of being a player in the whole movement of knowledge,” says Hunt. “When you leave the bench, you’re still part of that movement, but in a different way. You get a different perspective on the field.”

    She has used that perspective to draw connections. The company she now works for is not in the business of research and development, but Hunt is using knowledge and connections from her past work to get the firm involved in research projects, kick-starting collaborations with research groups and introducing the company to funding opportunities with the US Department of Energy. These kinds of project helped her to recover some of the thrill and feeling of making a contribution that she loved about lab work. “It’s exciting to help bridge the gap between the technology of the future and the actual industry of today,” she says.

    All three advice-givers agree that there is no shame in pausing to recalibrate or change direction. “Careers are rarely linear,” says Wright. “Lives change, circumstances change, we change and, if we want to be both successful and happy, our careers change with us.”

    In Wright’s years of running career-development workshops, the panellists she has hosted have come from a wide array of scientific backgrounds and diverse career paths. But they tend to offer a certain piece of advice in common, she says. “It always strikes me how the main piece of advice is to follow what makes you happy, what you love doing. As scientists, we all have that passion. Make that first move, try something new, follow your passion and you will land on your feet.”

  • How being multilingual both helps and hinders me and my science

    Orange coloured ropes tangled in a knot on beige background

    Credit: MirageC/Getty

    When I first arrived in the United States as an international student from India, I was immediately struck by the steep learning curve involved in communicating effectively in English. I’m a former research fellow at the Indian Institute of Technology Delhi in New Delhi, and a Bengali speaker, as well as being fluent in Hindi and another common language in India, Telugu.

    My education from preschool onwards was conducted in English. But although I’m fluent, interacting with people who have English as their first language can still present challenges for me. I don’t always feel confident with the technical jargon, idioms and cultural references that they use.

    Now I study RNA biology as a postgraduate researcher at Yale University in New Haven, Connecticut, where the challenge I face is not just to master the technical jargon but also to find my voice in a language that feels foreign in social and professional settings. This language gap can feel like an invisible wall that keeps me slightly detached from others and can make me feel like an imposter, afraid that I’ll say the wrong thing or fail to fully convey my ideas.

    Public speaking, whether during lab meetings with peers, presenting my work at conferences, or giving lectures to visiting summer undergrads, often feels like a delicate dance. My mind scrambles to find the right words, leaving me replaying awkward moments long after the event has passed. At times, I hesitate to speak up, even when I have something valuable to contribute. But these moments have taught me the importance of patience — learning to navigate the challenge of expressing complex ideas while juggling languages is an evolving process for me as a scientist.

    Experiments have no accents

    Being reticent has its benefits, helping me to retreat into my own bubble — where the distractions of the outside world fade, and all that remains is the work. Sometimes, the lab can be a haven, a place where I don’t need to rely on perfect language skills. Experiments have no accents, and pipettes don’t care about vocabulary. It’s here that I find comfort and, in a way, fluency.

    My first language is the one I rely on most when a sudden burst of creativity or problem-solving takes me. I’ve often found myself thinking more clearly in my mother tongue. Something about being in quiet spaces, away from the pressure of speaking English, allows my brain to piece together solutions with clarity and focus. In those moments, it feels as if I’m giving my thoughts room to breathe — without the constraints of translation.

    But there are advantages to being multilingual, of course. Chief among them is the deep sense of collaboration it fosters in the lab. People who don’t speak English as a first language often gravitate towards each other, developing camaraderie. In our shared lab space, housing about 25 people, only a few have English as their first language, and the rest of us are international students. We might stumble over our words, but we understand each other’s struggle. By finding common ground, we help one another with experiments and ideas, and can even share a laugh about mutual frustrations.

    However, I still often feel isolated, not just because I’m far from home, but because I live in two linguistic worlds. In one, I’m confident, expressive and full of ideas; in the other, I’m an introvert, hesitant to speak up for fear of tripping over words or misinterpreting cultural cues. Being multilingual sometimes feels like having multiple personalities — each tied to a different language, with its own strengths and vulnerabilities. I can be brilliant in my native tongue and timid in English.

    But what’s clear is that my journey as a multilingual scientist has shaped not just how I work, but who I am. This balancing act has forced me to develop resilience, empathy and creative problem-solving skills — qualities I wouldn’t trade for anything. To others in the same position, I’d say: view your background not as a barrier, but as a unique foundation that empowers you to think differently and contribute meaningfully.

    Embracing my multilingualism

    Coming from a small town in southern India, I once questioned whether I’d belong in a place such as Yale. But the truth is, every challenge along the way has taught me that our backgrounds are not obstacles, but powerful tools that shape our perspectives. I’ve learnt that embracing my multilingualism allows me to contribute uniquely to the scientific community.

    For anyone embarking on a similar journey, no matter where you’re from or what languages you speak, I’d say that your experience equips you with unique strengths. Being multilingual is a superpower that allows you to bridge worlds and ideas. I’ve found that it gives me the tools to think critically and creatively in ways that others might not.

    For example, during a particularly challenging experiment, my colleagues and I were struggling to interpret some complex patterns in the data; the standard approaches weren’t providing clarity. I mentally translated the problem into my native language, breaking it down into simpler terms and concepts familiar to me. This process unveiled an overlooked variable that was affecting our results. When I shared this insight with lab mates, we adjusted our methodology accordingly, opening up fresh avenues for our research and leading to a successful outcome. It was a moment that highlighted how thinking in my mother tongue can solve problems that seem insurmountable.

    So, is being multilingual a disadvantage in science? Absolutely not. It’s a special aptitude that you’ll learn to master as you go, one that makes your journey all the more remarkable.

  • Killer questions at science job interviews and how to ace them

    An illustration showing a repeating pattern of purple question marks

    Credit: Getty

    Nature’s 2024 hiring in science survey

    This article is the third in a short series discussing the results of Nature’s 2024 global survey of hiring managers in science. The survey, created in partnership with Thinks Insights & Strategy, a research consultancy in London, launched in June and was advertised on nature.com, in Springer Nature digital products and through e-mail campaigns. It received 1,134 self-selecting respondents from 77 countries, based in academia, industry and other sectors, including industry responses provided in partnership with Walr, a market-research panel. The full survey data sets are available at go.nature.com/3bgpazn.

    Preparing for a scientific job interview? Knowing in advance the types of questions that recruiters love to ask can give you a considerable edge, and can buy you time to work on your answers. In this article, we’ll look at some of the favourite or most revealing questions that are used by hiring managers. These data were gleaned from Nature’s 2024 global survey of more than 1,100 laboratory heads and research leaders from academia, industry and other sectors.

    The questions listed below are designed to probe your technical knowledge, interest in a given research field, future ambitions and how you manage conflicts with colleagues or other challenges. By understanding these four question types — and the curveball questions you might also get — you’ll be better equipped to showcase your expertise and passion for science.

    Technical knowledge or experience

    Typical questions

    • Tell me about one of your recent research projects.

    • How would you tackle this [specific research question], and how does your background support your approach?

    Why they are asked. Most applicants will expect to answer interview questions about their research and experience. According to hirers who responded to the survey, these can be great starter questions to allow candidates to settle into the interview before facing something more challenging. Such questions provide insights into the applicant’s problem-solving ability, and they also allow the interviewer to gauge someone’s communication and presentation skills when speaking about something they should know well.

    Worth remembering. Hirers often spring technical questions on applicants to unmask anyone who might have exaggerated their skills. Tulio de Oliveira, who heads the Centre for Epidemic Response and Innovation at Stellenbosch University in South Africa, says asking technical questions helps him “separate who will be good at the job” from who is simply “good at doing interviews”. One engineer working in industry in France said that they like to use questions that are premised on ‘false’ or incorrect information. “If the candidate answers it like they know about it, I remove them from the shortlist of potential hires.”

    Curveball questions

    • “I ask a basic maths question. You’d be surprised how often people can’t answer them.” — Academic group leader in the biological sciences in the United Kingdom.

    • “Tell me a story about your best project so far, in five minutes.” — Associate professor in the biological sciences in Sweden.

    Interest in the team or field

    Typical questions

    • What aspects of our group’s research do you find especially interesting, and why?

    • What do you think has been the most important discovery in our field in the past five years?

    Why they are asked. Hirers like to see evidence that candidates have done their homework before an interview. Questions about the hiring lab are a way to test this, and they also help interviewers to understand applicants’ motivations — whether their chief desire is to find any job, or whether it’s this particular job that interests them.

    Worth remembering. Be prepared to talk about research that isn’t your own. Which study you choose might not matter as much as having something to say and how you talk about it. Glenn Geher, a psychology researcher at the State University of New York at New Paltz, says that if a candidate hesitates when asked to talk about other people’s work, they might be driven mainly by external rewards, seeing research as “almost a chore needed to achieve certain outcomes like a degree or tenure”. But if the candidate “excitedly describes an interesting additional line of research”, their motivation is probably more intrinsic, he says.

    Curveball questions

    • “Having read our recent paper on [topic], what would you do next?” — Professor of medical science in Ireland.

    • “Describe the thing that you are best at that you think would be a key contribution to our team.” — Research-group head in the biological sciences at a non-governmental organization in the United States.

    Tulio de Oliveira and Dr Wonderful Tatenda Choga look at a computer in a laboratory

    Tulio de Oliveira (left) asks candidates questions that test their technical knowledge. Credit: Tommy Trenchard/Panos Pictures

    Tackling challenges and conflicts

    Typical questions

    • Describe a situation in which you faced a major challenge at work and explain how you solved it.

    • How would you handle a conflict with a colleague?

    Why they are asked. Interviewers ask about coping with failure to evaluate candidates’ levels of self-awareness and to gauge their conflict-resolving skills. Questions can be about something that actually happened, or can focus on a hypothetical scenario; it’s worth preparing for both of these possibilities.

    Worth remembering. Interviewers will be looking for evidence of introspection and learning, so bear that in mind when choosing which experiences to share. “Anyone with experience as an academic should be able to tell you multiple stories about things not going exactly according to plan,” says Geher. Candidates’ answers can reveal whether they are prepared to take responsibility for problems that emerged, or prefer to shift the blame to others, he says. “If they show signs that they genuinely know that they have a lot to learn — and welcome this fact — that is usually a good sign.” One programme manager in medical research reported giving candidates a ‘prioritization’ challenge, where the applicant must list a number of tasks in the order in which they’d choose to tackle them. One task involves a staff member wanting a five-minute private chat about a personal matter. “We prefer candidates that rank this first, as it demonstrates their humanity.”

    Curveball questions

    • “Research has its ups and downs; what skills do you have that will enable you to get through the tough days?” — Chemistry professor, country unknown.

    • “How would you manage work-related burn-out and health?” — Pharmaceutical lab head in Saudi Arabia.

    Future ambitions and goals

    Typical questions

    • Can you describe your career aspirations for the next five years?

    • How does this role align with your long-term goals?

    Why they are asked. Given that many science jobs are short-term contracts, hirers often want to know what your plans are for when the job ends. For longer-term positions, such as tenure track or equivalent roles, these questions help recruiters to assess what you will bring to a broad department or division. Such questions also test whether candidates understand the demands of a scientific career. One principal investigator who responded to the survey said that the ability to chart a realistic course for career development is one of the skills that candidates nowadays most commonly lack, adding: “Grad school does not teach this.”

    Worth remembering. For short-term positions, there’s nothing wrong with seeing a job as a stepping stone, but make sure that you still explain how your experience and skills will contribute to the team’s success. Several hirers reported that they prefer candidates who express a long-term interest in their research area. That said, although clear long-term career visions might impress recruiters, it’s usually better to be honest if there are aspects of your future that you are unsure about. “It is easy to identify someone who’s not being honest when answering, and I personally prefer the ones that don’t shy away when saying that they don’t know something,” one astronomer working in academia in Chile said.

    Curveball questions

    • “If funding were unlimited, what research problem would you like to tackle?” — Biological sciences lab leader in the United States.

    • “What is your plan if you are not employed in our organization?” — Academic medical researcher in Iran.

  • Why AI-generated recommendation letters sell applicants short

    A close up photo of a man's hand writing on a piece of paper

    Maroun Khoury emphasizes the need to draw on personal interactions and experiences when writing letters of recommendation. Credit: Damircudic/Getty

    As a principal investigator in a research laboratory that specializes in advanced therapies, I am routinely asked to write recommendation letters for my students, colleagues and associates. I’m also often on the receiving end of such letters from candidates applying for job openings in our group. Over the past year, I have noticed an interesting but concerning development: many of these letters are seemingly being produced not by hand, but by using artificial intelligence (AI) tools such as ChatGPT.

    Such chatbots are undeniably remarkable. They can automatically generate letters of reference by drawing on data and patterns to create coherent and grammatically correct compositions. Researchers with limited time, or for whom English is not their first language, can use such aids to make the process of drafting more manageable, ensuring that recommendations are effectively communicated.

    Nevertheless, as someone who values personal relationships, I find that most of these AI-generated letters lack one key quality, which disadvantages the candidate: the personal touch.

    Here, I provide tips for writing an effective recommendation letter and highlight ways to use AI without short-changing the applicant.

    1. Get personal

    A meaningful reference letter requires you to reflect on your personal interactions and experiences with the person concerned. It involves sharing specific examples of instances when you witnessed the candidate’s competencies and behaviours, and incorporating those anecdotes into a narrative that accentuates their strengths and contributions.

    AI lacks that foundation, producing text that might be coherent and even accurate, but that lacks emotion or specificity. Instead of a personal endorsement, the result is a letter without passion or subtlety, because it is not based on first-hand experience. And that does the candidate a disservice, because it fails to capture the nuances of their achievements, character, strengths and potential.

    2. Provide a genuine assessment

    Your job in writing a recommendation letter is to go beyond what the candidate has achieved, and predict whether they will thrive in their next role. To do that, you must look beyond cold metrics to consider how they coped with a tough project, developed over time or strengthened the team — insights that can help to contextualize the candidate’s skills and attributes, and perhaps increase their marketability.

    For instance, suppose that one of your team members has demonstrated exceptional leadership and project-management skills. You might highlight how they successfully led the team to complete a major project under tight deadlines, detailing the specific challenges they faced. By providing concrete examples, you demonstrate not only their skills, but also their value to potential employers.

    A portrait of Maroun Khoury

    Referral letters should adopt a personal touch that goes beyond dry facts and emotionless prose, says Maroun Khoury. Credit: Center IMPACT/Rolando Oyarzun

    When I write about someone’s qualifications, I’m not just sharing facts and figures, I’m also sharing my confidence in them. This authentic endorsement is something that AI just cannot replicate. It’s missing that personal touch and emotional engagement that comes from having lived experiences and memories.

    Think about it from the recipient’s point of view: if you couldn’t be bothered to physically write a letter in support of the candidate, who might have been a member of your team for years, why would they want to hire them?

    3. Use AI – but sparingly

    Although ChatGPT and other AI tools possess remarkable capabilities, they are deficient in domains in which human input and discernment are essential. Certainly, you can use AI to polish your text, correct your grammar or turn detailed thoughts into prose that you can then refine. But, in the context of composing reference letters, the deficiency is not a lack of linguistic proficiency; rather, it is the absence of personal connection and authenticity that are derived from tangible human experience.

    The contemporary world is increasingly digitalized; nevertheless, professional interactions and personal relationships still require human involvement. An effective reference letter is predicated on genuine introspection and a personal recommendation that only you, the writer, can provide. When both requests and replies are driven by artificial intelligence, there is a risk that meaningful conversations could be reduced to mere copying and pasting.

    4. Help, my supervisor gave me an AI-generated letter!

    Although letters of recommendation are generally confidential, the candidate might still be able to obtain a copy — for instance, if they are asked to upload the letter themselves to the hiring site.

    If, as a candidate, you receive a generic or AI-heavy letter of recommendation, don’t hesitate to reach out to the hiring or funding committee to explain the letter’s limitations and supplement it with materials or personal experiences that highlight your strengths. Alternatively, consider asking for a recommendation from a different referee. Remember, you’ve put a lot of time and effort into your work; it’s OK to ask for a little of your supervisor’s time to support your application.

    Whatever your role, don’t let AI ruin an important career opportunity. Both candidates and referees benefit from mastering the art of personalized communication — without relying on algorithmic touches.

    Competing Interests

    M.K. is a full professor at the University of the Andes, Chile. He is chief scientific officer at Cells for Cells, a cell-therapy spin-off from the same institute; and also at Regenero, a publicly and privately funded Chilean consortium that develops therapies for osteoarthritis, pulpitis and cardiac failure. He reports grants from private and public funders, including the National Research and Development Agency of Chile, and has the following patents pending: WO2014135924A1, WO2017064670A2, WO2017064672A1 and WO/2019/051623.

  • Can robotic lab assistants speed up your work?

    A photo showing Berkeley Lab researcher Yan Zeng looking at a robot arm unloading a crucible filled with mixed powder precursors

    Researcher Yan Zeng looks over the machinery at the A-Lab, a fully automated laboratory at the Lawrence Berkeley National Laboratory in California. Credit: Marilyn Sargent/Berkeley Lab

    Stephan Noack’s official title is bioprocess engineer. In simple terms, he is a problem-solver. His colleagues at the Jülich Research Centre in Germany knock on his office door armed with some of their thorniest questions about the process of coaxing bacteria, algae and other microbes into mass-producing valuable chemicals, such as ethanol and amino acids. Optimizing such processes requires making tiny adjustments to several variables, including the microbes’ food source and growing temperature. It’s trial and error — mostly error. “During the set-up of these workflows, a lot of failure happens,” Noack says.

    Even the most efficient laboratories, with plenty of students to conduct the trials, can fail to complete the lengthy, laborious process. “It was a huge bottleneck,” he says. Noack and his engineers have therefore been turning to robotics and automation to speed up the process of growing microorganisms on plates of gelatinous agar. By combining a range of equipment, from robotic arms to liquid handlers, researchers have been able to swap out large single plates of agar for ones containing 96 or 384 tiny wells. This has increased throughput nearly 100-fold, according to Noack.

    Although they are common in large industrial research facilities, robotics and automation have only begun to trickle down into smaller academic labs in the past five years, says Ian Holland, a postdoctoral researcher at the University of Edinburgh, UK. Historically, he says, academia has relied on large populations of students and postdocs to do the time-consuming work. But with scientific advances requiring ever-increasing amounts of data generation and analysis, lab workers can’t work quickly enough. But robots can.

    The advances include robotic arms that can pipette more accurately than can human scientists1 and fully automated ‘cloud’ labs that experimenters can access online and command a robot workforce to perform their instructions from anywhere in the world2. Researchers who are leaning towards automation hope that the shift will decrease cost, save time and generate fewer errors while improving reproducibility.

    But these changes don’t come without challenges. Scientists need a deep understanding of their experiments to program machinery and to prevent the propagation of errors. The equipment can be expensive and require hours of labour to fix and maintain. If done correctly, however, laboratory automation can transform science, according to Dennis Knobbe, a roboticist at the Technical University of Munich in Germany. “It’s not about excluding the human from these processes,” Knobbe says. “It’s instead about using robotics to enhance researchers’ capabilities.”

    Rise of the machines

    In 2012, Matheus Carvalho, a research technician and fisheries biologist at the Southern Cross University in Lismore, Australia, encountered AutoIt, a programming language originally created for automating Microsoft Windows tasks. Around the same time, he came across a toy robotic arm that could be controlled through a computer. Carvalho reasoned that if he could combine the toy robot with AutoIt, he could automate some of his tedious sampling tasks in the lab. Although the first robotic arm broke almost immediately, Carvalho convinced his supervisor to purchase a higher-quality, second-hand arm, which was built into an automated sampling machine that continues to operate more than a decade later. Carvalho was quickly sold on the idea of laboratory automation, which is the topic of a book he published in 2017.

    A yellow plastic robotic arm is seen on a desk

    Matheus Carvalho at the Southern Cross University in Lismore, Australia, created an automated sampling machine using a reprogrammed toy robotic arm. Credit: Matheus Carvalho de Carvalho

    He aimed to automate more lab procedures without precluding human involvement. His lab used non-radioactive isotopes to understand organic material in water samples — a process that requires weighing and measuring tiny amounts of powders, often to a fraction of a milligram. Every powder they tested had a different grain size and texture, which made it impossible to program a robot to measure out all the samples. Instead, Carvalho devised a protocol that allowed people and machines to each do what they were best at: a human lab technician weighed out the powder samples, and a small, mobile robot was programmed to retrieve containers and calibrate the scales. “It’s better to automate what is easy but leave the hard parts for us humans,” Carvalho says.

    In the 2010s, Dina Zielinski, who was then a technician at the Whitehead Institute in Cambridge, Massachusetts, faced similar challenges with automation while working on a different type of test. She wanted to sequence tissue samples from people with Parkinson’s disease to understand the genes contributing to the condition. The job required pipetting — a lot of pipetting. Zielinski saw the task in front of her as a fast track to repetitive strain injury.

    “Molecular biology essentially entails combining minuscule clear volumes with other minuscule clear volumes,” Zielinski says. “If you didn’t combine the right tiny volumes, you would have wasted a ton of money on sequencing.”

    Even worse, she says, these samples were rare and hard to obtain. Yaniv Erlich, who was then a principal investigator, and his late collaborator Susan Lindquist, a biomedical researcher at the Whitehead Institute, began investigating various robotics, including automated liquid handlers, to speed up the process and to save Zielinski’s hands from injury. But none of the robots they investigated could provide both the precision and flexibility that the lab needed. So, Zielinski, Lindquist and Erlich, who is now chief executive of Eleven Therapeutics in Cambridge, UK, decided to build something different.

    A photo showing iPipet in use

    The iPipet app can be used to illuminate sections of a 96-well plate and help researchers to ensure they combine the correct samples. Credit: Dina Zielinski

    The idea they came up with didn’t handle the pipetting itself. Instead, the team built an iPad app that users could program to help them pipette the correct samples into the correct position. The iPipet app illuminates sections of 96- or 384-well plates to enable a scientist to ensure they combine the correct samples3. When Zielinski pitted a researcher using iPipet against a top-of-the-line robot, the app-assisted human was the clear winner. “The error was much lower with human pipetting than with the liquid-handling robot,” she says.
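
    The guidance such an app provides boils down to simple plate bookkeeping: mapping each sample in a worklist to a row and column on the plate. The short Python sketch below illustrates that idea for a standard 96-well plate (8 rows, A to H, by 12 columns); the function and sample names are illustrative assumptions and are not taken from iPipet itself.

      ROWS = "ABCDEFGH"
      COLUMNS = 12

      def well_name(sample_index: int) -> str:
          """Map a zero-based sample index (0-95) to a well label such as 'A1' or 'H12'."""
          if not 0 <= sample_index < len(ROWS) * COLUMNS:
              raise ValueError("a 96-well plate holds indices 0-95")
          row, column = divmod(sample_index, COLUMNS)
          return f"{ROWS[row]}{column + 1}"

      # A guiding app would highlight the next well for each sample in a worklist.
      worklist = ["sample_007", "sample_012", "control_A"]
      for i, sample in enumerate(worklist):
          print(f"Pipette {sample} into well {well_name(i)}")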

    Mind the gap

    What makes efforts such as these so trailblazing isn’t their complexity but rather their simplicity. The goal is to find a middle ground between the expensive instruments that can perform every aspect of an experiment and the labour of a single student performing all their tasks manually, Holland says. Ideally, such technology would make it possible for researchers to spend time planning experiments and analysing results instead of pipetting samples.

    “If automation can take some of the load off you, you can do more things and be a better researcher,” Holland says. And the academic environment is well-suited for this melding of human and machine. “You’ve got engineering students looking for projects and we’ve got biologists who have problems that need solving.”

    However, the changes come at a cost, says Holland. Since the dawn of the industrial revolution, people have invested time, resources and money into developing machinery to make products more quickly and cheaply. In commercial settings, the benefits were clear, says Holland — investment in automation paid off because it allowed production of more commodities with low labour costs.

    Academia is different. Industry focuses on profit, whereas academic labs place a greater emphasis on training the next generation of scientists and producing knowledge. A steady flow of students who are willing to work long hours — some of whom have their own grants and stipends as salary — means that labour costs aren’t as important. What’s more, the focus on teaching and training means that many scientists have conventionally seen automation as anathema to their mission as educators.

    Julia Tenhaef (left) and Stephan Noack (right) stand in front of the “AutoBioTech” platform in their lab

    Postdoc researcher Julia Tenhaef and bioprocess engineer Stephan Noack at the Jülich Research Centre in Germany use an automated laboratory system called the AutoBioTech platform. Credit: Stephan Noack

    “In academia, you could spend US$100,000 on this machine, but it’s only going to make your output a bit faster,” says Holland. “That’s a lot harder to justify.” As a result, many academic labs have much less robotic equipment than do commercial and industrial labs — something Holland refers to as the automation gap4.

    Joshua Pearce faced down these technological costs when he founded his lab at Michigan Technological University in Houghton in the mid-2000s. Now an engineer at Western University in London, Canada, Pearce was developing methods to build better photovoltaic systems to generate electricity from sunlight. He wanted to improve solar cells’ ability to absorb different wavelengths of light, but the automated filter wheel changer, which adjusted the wavelengths on his custom-built machine, broke. The replacement was $2,500 (an exorbitant price for a simple part) and had a five-month lead time.

    Pearce realized that he was at a university filled with budding engineers, so he hired some students to help him 3D print the necessary components. What resulted was a bespoke device crafted entirely from open-source hardware and software that cost $50 and did exactly what Pearce needed it to. “It was something that wasn’t available on the market,” Pearce says. “You can make really high-end equipment, exactly what you want, and do it fairly easily for extremely low cost.”

    With his equipment that could automatically adjust light wavelengths for his tests, Pearce began campaigning about the potential of open-source design as a cost-effective way to reap the benefits of lab automation5. He is now editor-in-chief of the journal HardwareX, a publication that allows researchers to share their code and blueprints — while also helping to bolster their CVs and tenure qualifications.

    Pearce’s experiences challenge the idea that investing in automation hampers a scientist’s ability to train students, along with the opinion that robotics are prohibitively expensive.

    Plain and simple

    When it comes to the future of lab robotics, Knobbe thinks that inventions such as those created by Pearce, Carvalho and Zielinski will be key: modular, multipurpose and budget-friendly. “We don’t want to just build a huge machine, like an encapsulated system,” Knobbe says. “We want to integrate these robotic systems into everyday laboratories.”

    He also imagines fully fledged robotic lab assistants that can perform basic experimental tasks with minimal supervision. Although this technology is nowhere near ready, Knobbe says, he thinks researchers will be able to deploy modular automated systems that can interact with each other and be controlled by a robotic assistant in the next ten years. One of the biggest challenges will be balancing robustness, flexibility, error detection and knowing when to ask for help.

    A robotic lab assistant in Dennis Knobbe’s lab at the Technical University of Munich in Germany uses finger-like appendages to pipette autonomously. Credit: Dennis Knobbe/TUM, 2024

    Building or buying a top-of-the-line machine that only does pipetting would force lab technicians to work around the machine. Knobbe wanted a robot that would work with his team, follow basic commands and scan the environment for obstacles. He is therefore building a robotic pipette with finger-like appendages. Early testing shows that this machine has met industry standards, he says.

    Although reducing variability and mistakes has long been one of the selling points of robotics and automation, Knobbe says that robots can also propagate errors1,4. He also speculates that robots might create new types of catastrophic failure.

    A cautionary tale emerged in November last year, when scientists from Google DeepMind in London, the University of California, Berkeley, and the Lawrence Berkeley National Laboratory in California teamed up to predict nearly 400,000 new compounds using artificial intelligence (AI) and then to synthesize these compounds in a fully automated laboratory, called A-Lab. The project was an endeavour to identify new high-performance, low-cost materials by automating both the physical synthesis of compounds and their subsequent analysis. A resulting Nature paper6 seemed to showcase the benefits of automation.

    “It was a high-risk, high-reward project,” says co-author Yan Zeng, a former researcher at the Lawrence Berkeley National Laboratory who started her own lab at Florida State University in Tallahassee this year. “It was a little bit crazy, to be fully automated.”

    Several weeks later, however, some scientists began raising questions about the AI’s ability to predict truly new materials. What seemed to be new in the computer’s modelling might have been different versions of known compounds. “This paper did not at all live up to its claims,” says Leslie Schoop, a chemist at Princeton University in New Jersey.

    To Zeng, however, the study was as much about the process — demonstrating how such a system could be built, operated and used by materials scientists — as it was about the results. In fact, Zeng says, the robotic synthesis aspects of the study performed exactly as expected. She concedes that the initial programming steps took months and required a team of technicians to troubleshoot the process. But they quickly recouped the lost time as the robots required minimal human contact.

    Zeng is now working to automate parts of her lab in Florida. Her first target is hydrothermal synthesis — a process that requires high temperatures and pressurized tubes. It’s a complex project, but her time at Berkeley gave her valuable experience in breaking down complex robotics into more manageable steps, and she hopes to begin automating this process as she scales up her lab.

    Despite the scepticism over A-Lab, she remains optimistic about automation. Robotics could provide the key to future breakthroughs, she says, equipping researchers with the freedom and flexibility to think up the experiments of tomorrow. “This is a rising field, and it’s rising up pretty fast,” says Zeng.

  • Can AI review the scientific literature — and figure out what it all means?

    When Sam Rodriques was a neurobiology graduate student, he was struck by a fundamental limitation of science. Even if researchers had already produced all the information needed to understand a human cell or a brain, “I’m not sure we would know it”, he says, “because no human has the ability to understand or read all the literature and get a comprehensive view.”

    Five years later, Rodriques says he is closer to solving that problem using artificial intelligence (AI). In September, he and his team at the US start-up FutureHouse announced that an AI-based system they had built could, within minutes, produce syntheses of scientific knowledge that were more accurate than Wikipedia pages1. The team promptly generated Wikipedia-style entries on around 17,000 human genes, most of which previously lacked a detailed page.

    Rodriques is not the only one turning to AI to help synthesize science. For decades, scholars have been trying to accelerate the onerous task of compiling bodies of research into reviews. “They’re too long, they’re incredibly intensive and they’re often out of date by the time they’re written,” says Iain Marshall, who studies research synthesis at King’s College London. The explosion of interest in large language models (LLMs), the generative-AI programs that underlie tools such as ChatGPT, is prompting fresh excitement about automating the task.

    Some of the newer AI-powered science search engines can already help people to produce narrative literature reviews — a written tour of studies — by finding, sorting and summarizing publications. But they can’t yet produce a high-quality review by themselves. The toughest challenge of all is the ‘gold-standard’ systematic review, which involves stringent procedures to search and assess papers, and often a meta-analysis to synthesize the results. Most researchers agree that these are a long way from being fully automated. “I’m sure we’ll eventually get there,” says Paul Glasziou, a specialist in evidence and systematic reviews at Bond University in Gold Coast, Australia. “I just can’t tell you whether that’s 10 years away or 100 years away.”

    At the same time, however, researchers fear that AI tools could lead to more sloppy, inaccurate or misleading reviews polluting the literature. “The worry is that all the decades of research on how to do good evidence synthesis starts to be undermined,” says James Thomas, who studies evidence synthesis at University College London.

    Computer-assisted reviews

    Computer software has been helping researchers to search and parse the research literature for decades. Well before LLMs emerged, scientists were using machine-learning and other algorithms to help to identify particular studies or to quickly pull findings out of papers. But the advent of systems such as ChatGPT has triggered a frenzy of interest in speeding up this process by combining LLMs with other software.

    It would be terribly naive to ask ChatGPT — or any other AI chatbot — to simply write an academic literature review from scratch, researchers say. These LLMs generate text by training on enormous amounts of writing, but most commercial AI firms do not reveal what data the models were trained on. If asked to review research on a topic, an LLM such as ChatGPT is likely to draw on credible academic research, inaccurate blogs and who knows what other information, says Marshall. “There’ll be no weighing up of what the most pertinent, high-quality literature is,” he says. And because LLMs work by repeatedly generating statistically plausible words in response to a query, they produce different answers to the same question and ‘hallucinate’ errors — including, notoriously, non-existent academic references. “None of the processes which are regarded as good practice in research synthesis take place,” Marshall says.

    A more sophisticated process involves uploading a corpus of pre-selected papers to an LLM, and asking it to extract insights from them, basing its answer only on those studies. This ‘retrieval-augmented generation’ seems to cut down on hallucinations, although it does not prevent them. The process can also be set up so that the LLM will reference the sources it drew its information from.
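
    In code, retrieval-augmented generation over a pre-selected corpus amounts to ranking the uploaded papers against the question and pasting the best matches, with their identifiers, into the prompt. The Python sketch below is a minimal illustration of that pattern under stated assumptions: it uses TF-IDF similarity from scikit-learn for retrieval and invented abstracts as the corpus, and it stops at printing the assembled prompt rather than calling a hosted LLM.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Invented abstracts standing in for a corpus of pre-selected papers.
      papers = {
          "Smith 2021": "Mitochondrial dynamics regulate neuronal energy supply ...",
          "Lee 2023": "Disrupted mitochondrial fission is linked to neurodegeneration ...",
          "Patel 2022": "A review of autophagy pathways in cardiac tissue ...",
      }

      def retrieve(question, k=2):
          """Return the k corpus entries most similar to the question."""
          ids, texts = zip(*papers.items())
          vectorizer = TfidfVectorizer().fit(list(texts) + [question])
          scores = cosine_similarity(vectorizer.transform([question]),
                                     vectorizer.transform(texts))[0]
          ranked = sorted(zip(scores, ids, texts), reverse=True)[:k]
          return [(paper_id, text) for _, paper_id, text in ranked]

      question = "How does mitochondrial fission relate to neurodegeneration?"
      context = "\n".join(f"[{pid}] {text}" for pid, text in retrieve(question))
      prompt = ("Answer the question using ONLY the excerpts below, and cite the "
                f"bracketed source IDs you rely on.\n\n{context}\n\nQuestion: {question}")
      print(prompt)  # a real system would send this prompt to an LLM

    Because the model is instructed to answer only from the supplied excerpts and to cite their identifiers, its output can be checked against real sources, which is what reduces, though does not eliminate, hallucinated references.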

    This is the basis for specialized, AI-powered science search engines such as Consensus and Elicit. Most companies do not reveal exact details of how their systems work. But they typically turn a user’s question into a computerized search across academic databases such as Semantic Scholar and PubMed, returning the most relevant results.

    An LLM then summarizes each of these studies and synthesizes them into an answer that cites its sources; the user is given various options to filter the work they want to include. “They are search engines first and foremost,” says Aaron Tay, who heads data services at Singapore Management University and blogs about AI tools. “At the very least, what they cite is definitely real.”
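
    The search step itself is conventional: the tool turns the question into a keyword query against an academic database and retrieves metadata for candidate papers. The sketch below shows that step using Semantic Scholar’s public Graph API; the endpoint and field names reflect that API as publicly documented, but they should be verified against the current documentation before use.

      import requests

      def search_papers(question, limit=5):
          """Query the Semantic Scholar Graph API for papers matching a question."""
          response = requests.get(
              "https://api.semanticscholar.org/graph/v1/paper/search",
              params={"query": question, "limit": limit,
                      "fields": "title,year,abstract"},
              timeout=30,
          )
          response.raise_for_status()
          return response.json().get("data", [])

      for paper in search_papers("automated systematic literature review"):
          print(paper.get("year"), "-", paper.get("title"))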

    These tools “can certainly make your review and writing processes efficient”, says Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark in Odense, who trains academics in AI tools and has designed his own, called Research Kick. Another AI system called Scite, for example, can quickly generate a detailed breakdown of papers that support or refute a claim. Elicit and other systems can also extract insights from different sections of papers — the methods, conclusions and so on. There’s “a huge amount of labour that you can outsource”, Bilal says.

    Laptop screen showing the AI-powered tool Elicit displaying paper summaries.

    Elicit, like several AI-powered tools, aims to help with academic literature reviews by summarising papers and extracting data. Credit: Nature

    But most AI science search engines cannot produce an accurate literature review autonomously, Bilal says. Their output is more “at the level of an undergraduate student who pulls an all-nighter and comes up with the main points of a few papers”. It is better for researchers to use the tools to optimize parts of the review process, he says. James Brady, head of engineering at Elicit, says that its users are augmenting steps of reviewing “to great effect”.

    Another limitation of some tools, including Elicit, is that they can only search open-access papers and abstracts, rather than the full text of articles. (Elicit, in Oakland, California, searches about 125 million papers; Consensus, in Boston, Massachusetts, looks at more than 200 million.) Bilal notes that much of the research literature is paywalled and it’s computationally intensive to search a lot of full text. “Running an AI app through the whole text of millions of articles will take a lot of time, and it will become prohibitively expensive,” he says.

    Full-text search

    For Rodriques, money was in plentiful supply, because FutureHouse, a non-profit organization in San Francisco, California, is backed by former Google chief executive Eric Schmidt and other funders. Founded in 2023, FutureHouse aims to automate research tasks using AI.

    This September, Rodriques and his team revealed PaperQA2, FutureHouse’s open-source, prototype AI system1. When it is given a query, PaperQA2 searches several academic databases for relevant papers and tries to access the full text of both open-access and paywalled content. (Rodriques says the team has access to many paywalled papers through its members’ academic affiliations.) The system then identifies and summarizes the most relevant elements. In part because PaperQA2 digests the full text of papers, running it is expensive, he says.

    The FutureHouse team tested the system by using it to generate Wikipedia-style articles on individual human genes. They then gave several hundred AI-written statements from these articles, along with statements from real (human-written) Wikipedia articles on the same topic, to a blinded panel of PhD and postdoctoral biologists. The panel found that human-authored articles contained twice as many ‘reasoning errors’ — in which a written claim is not properly supported by the citation — as did ones written by the AI tool. Because the tool outperforms people in this way, the team titled its paper ‘Language agents achieve superhuman synthesis of scientific knowledge’.

    Group of scientists standing and sitting in the FutureHouse office, with a bird drawing on the wall. The team is behind the PaperQA and WikiCrow AI tools.

    The team at US start-up FutureHouse, which has launched AI systems to summarize scientific literature. Sam Rodriques, their director and co-founder, is on the chair, third from right. Credit: FutureHouse

    Tay says that PaperQA2 and another tool called Undermind take longer than conventional search engines to return results — minutes rather than seconds — because they conduct more-sophisticated searches, using the results of the initial search to track down other citations and key phrases, for example. “That all adds up to being very computationally expensive and slow, but gives a substantially higher quality search,” he says.

    Systematic challenge

    Narrative summaries of the literature are hard enough to produce, but systematic reviews are even more demanding. They can take people many months or even years to complete2.

    A systematic review involves at least 25 careful steps, according to a breakdown from Glasziou’s team. After combing through the literature, a researcher must filter their longlist to find the most pertinent papers, then extract data, screen studies for potential bias and synthesize the results. (Many of these steps are done in duplicate by another researcher to check for inconsistencies.) This laborious method — which is supposed to be rigorous, transparent and reproducible — is considered worthwhile in medicine, for instance, because clinicians use the results to guide important decisions about treating patients.

    In 2019, before ChatGPT came along, Glasziou and his colleagues set out to achieve a world record in science: a systematic review in two weeks. He and others, including Marshall and Thomas, had already developed computer tools to reduce the time involved. The menu of software available by that time included RobotSearch, a machine-learning model trained to quickly identify randomized trials from a collection of studies. RobotReviewer, another AI system, helps to assess whether a study is at risk of bias because it was not adequately blinded, for instance. “All of those are important little tools in shaving down the time of doing a systematic review,” Glasziou says.

    The clock started at 9:30 a.m. on Monday 21 January 2019. The team cruised across the line at lunchtime on Friday 1 February, after a total of nine working days3. “I was excited,” says epidemiologist Anna Mae Scott at the University of Oxford, UK, who led the study while at Bond University; everyone celebrated with cake. Since then, the team has pared its record down to five days.

    Could the process get faster? Other researchers have been working to automate aspects of systematic reviews, too. In 2015, Glasziou founded the International Collaboration for the Automation of Systematic Reviews, a niche community that, fittingly, has produced several systematic reviews about tools for automating systematic reviews4. But even so, “not very many [tools] have seen widespread acceptance”, says Marshall. “It’s just a question of how mature the technology is.”

    Elicit is one company that says its tool helps researchers with systematic reviews, not just narrative ones. The firm does not offer systematic reviews at the push of a button, says Brady, but its system does automate some of the steps — including screening papers and extracting data and insights. Brady says that most researchers who use it for systematic reviews have uploaded relevant papers they find using other search techniques.

    Systematic-review aficionados worry that AI tools are at risk of failing to meet two essential criteria of the studies: transparency and reproducibility. “If I can’t see the methods used, then it is not a systematic review, it is simply a review article,” says Justin Clark, who builds review automation tools as part of Glasziou’s team. Brady says that the papers that reviewers upload to Elicit “are an excellent, transparent record” of their starting literature. As for reproducibility: “We don’t guarantee that our results are always going to be identical across repeats of the same steps, but we aim to make it so — within reason,” he says, adding that transparency and reproducibility will be important as the firm improves its system.

    Specialists in reviewing say they would like to see more published evaluations of the accuracy and reproducibility of AI systems that have been designed to help produce literature reviews. “Building cool tools and trying stuff out is really good fun,” says Clark. “Doing a hardcore evaluative study is a lot of hard work.”

    Earlier this year, Clark led a systematic review of studies that had used generative AI tools to help with systematic reviewing. He and his team found only 15 published studies in which the AI’s performance had been adequately compared with that of a person. The results, which have not yet been published or peer reviewed, suggest that these AI systems can extract some data from uploaded studies and assess the risk of bias of clinical trials. “It seems to do OK with reading and assessing papers,” Clark says, “but it did very badly at all these other tasks”, including designing and conducting a thorough literature search. (Existing computer software can already do the final step of synthesizing data using a meta-analysis.)

    Glasziou and his team are still trying to shave time off their reviewing record through improved tools, which are available on a website they call the Evidence Review Accelerator. “It won’t be one big thing. It’s that every year you’ll get faster and faster,” Glasziou predicts. In 2022, for instance, the group released a computerized tool called Methods Wizard, which asks users a series of questions about their methods and then writes a protocol for them without using AI.
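
    Tools of that kind are, in essence, structured questionnaires feeding a fixed template. A minimal sketch of the idea might look like the following; the questions and template text are invented placeholders, not Methods Wizard's actual content.

        # Toy template-filling (no AI involved): questionnaire answers are slotted
        # into a protocol template. Fields and wording are invented placeholders.
        TEMPLATE = (
            "Review question: {question}\n"
            "Databases to be searched: {databases}\n"
            "Titles and abstracts screened in duplicate: {duplicate}\n"
        )

        answers = {
            "question": "Does intervention X reduce outcome Y in adults?",
            "databases": "PubMed; Embase; CENTRAL",
            "duplicate": "yes",
        }

        print(TEMPLATE.format(**answers))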

    Rushed reviews?

    Automating the synthesis of information also comes with risks. Researchers have known for years that many systematic reviews are redundant or of poor quality5, and AI could make these problems worse. Authors might knowingly or unknowingly use AI tools to race through a review that does not follow rigorous procedures, or which includes poor-quality work, and get a misleading result.

    By contrast, says Glasziou, AI could also encourage researchers to do a quick check of previously published literature when they wouldn’t have bothered before. “AI may raise their game,” he says. And Brady says that, in future, AI tools could help to flag and filter out poor-quality papers by looking for telltale signs such as P-hacking, a form of data manipulation.
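
    As a concrete, deliberately crude illustration of the kind of telltale sign Brady mentions, one simple heuristic is to ask whether the P values reported in a paper pile up just below the 0.05 threshold compared with just above it. The sketch below counts values in narrow windows either side of the cut-off; it is a toy check with an arbitrary flagging rule, not Elicit's method, and real screening would need far more careful statistics.

        # Crude "caliper" check for possible P-hacking: are reported P values bunched
        # just below 0.05 compared with just above it? A toy heuristic only.
        def caliper_counts(p_values: list[float], threshold: float = 0.05, width: float = 0.01):
            """Count P values in narrow windows just below and just above the threshold."""
            below = sum(threshold - width <= p < threshold for p in p_values)
            above = sum(threshold <= p < threshold + width for p in p_values)
            return below, above

        reported = [0.049, 0.047, 0.048, 0.044, 0.052, 0.21, 0.003, 0.046, 0.049, 0.30]  # invented
        below, above = caliper_counts(reported)
        if below > 2 * above:  # arbitrary flagging rule for this sketch
            print(f"suspicious bunching below 0.05: {below} values just below vs {above} just above")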

    Glasziou sees the situation as a balance of two forces: AI tools could help scientists to produce high-quality reviews, but might also fuel the rapid generation of substandard ones. “I don’t know what the net impact is going to be on the published literature,” he says.

    Some people argue that the ability to synthesize and make sense of the world’s knowledge should not lie solely in the hands of opaque, profit-making companies. Clark wants to see non-profit groups build and carefully test AI tools. He and other researchers welcomed the announcement from two UK funders last month that they are investing more than US$70 million in evidence-synthesis systems. “We just want to be cautious and careful,” Clark says. “We want to make sure that the answers that [technology] is helping to provide to us are correct.”


  • Your dissertation is your business card!

    Your dissertation is your business card!


    Stacks of bound papers lie on a desk

    Credit: Romieg/Shutterstock

    Finishing your doctoral dissertation is a challenging and often stressful process — at least, it was for me. Your focus is on finalizing your research, polishing your writing and preparing your defence, but also on getting a job. Ideally, the result is a job offer. The process also produces a book: years of doctoral research condensed into a (usually) dry, bound thesis on a library shelf. What if, instead, we could use our dissertations as an opportunity to reach a wider audience and increase our chances in the job market?

    My dissertation concerned the cognitive decision-making processes of entrepreneurs. As I was writing it up, I noticed a colleague posting a very appealing photo of her dissertation on the social-media site LinkedIn, full of charming fonts and colours. This version helped her to increase her visibility and stand out from the crowd. As scientists, we are taught never to judge a book by its cover, but I realized that it is not just the inside that matters; first impressions do count. I wanted to have a personalized dissertation, too.

    Ways to personalize your dissertation

    I started with some online searches. There are platforms out there, such as the freelance marketplace Fiverr, that can help you to identify talented creatives who can design affordable covers, figures, websites and more. I reviewed the portfolios of several designers, considering which style would be appropriate for a scientific publication, and selected the designer I thought could best bring to life what I had in mind.

    After discussing what I was looking for, my designer and I tried several designs before deciding on the final cover. I then edited the text, images and tables to make them clearer and more compact and reformatted everything to a larger size. I also developed a proper introduction, expanded on the practical implications through a longer conclusion, and extended my literature review so that the dissertation became a smooth story with more than just the mandatory essays. On the back, I added an executive summary and a short biography, including a photo of myself.

    Within a few weeks, I managed to have my dissertation designed in a style that really appealed to me. I paid about €150 (US$160) for the designer, plus €500 for 50 printed copies. In the months that followed, I would take one of my personalized copies with me whenever I went to meet colleagues or have a conversation about a possible job opportunity. The reactions were overwhelmingly positive (“Almost too timely!”, “I can’t wait to read it”). I had announced several months earlier on LinkedIn that my dissertation was available online, but my post about the personalized version seemed to have a greater impact. (It amassed 314 reactions and 48 comments, including 10 requests for copies of my thesis, compared with 257 reactions, 27 comments and 2 requests for copies in response to the original post.) I have heard anecdotally that this version helped me to stand out to job-search committees.

    If you’re looking to promote your thesis work to a wider audience, here are a few things you can try.

    Downsize your dissertation

    One way to make your dissertation more appealing is to create a downsized version. In my case, I went from an A4 paper format (210 × 297 mm) to A5 (148 × 210 mm) with larger font sizes and figures, but you might even consider a version that could fit in your pocket.

    The cover of Bob Bastian’s thesis "Entrepreneurship Under Radical Uncertainty"

    The front cover of Bob Bastian’s PhD thesis. Credit: Bob Bastian

    Another option is reducing your dissertation to a more accessible version with shorter stories or chapters. For example, you could craft summaries that feature the essential aspects of your research, with its key points and findings, and emphasize how the research can be applied in real-world scenarios. You could also share that more-accessible digital version in e-mail signatures, for instance with a hyperlink. In my case, I handed out hard copies of my work personally to people I met, and shared a PDF version with those who contacted me online.

    Make the most of your channels

    Using your dissertation as a business card also means making the most effective use of the promotional platforms you have available. These could include social media, a personal website, blogs and any other communication tools you use to reach your audience. In my case, by posting pictures of the personalized dissertation on LinkedIn, I gained visibility, expanded my network and was invited to collaborate with a researcher whose work I admire.

    I launched a website to promote my dissertation research and used the designed cover for branding and visual identity. I was also invited by a network for professionals and educators to blog about my research and to talk about my insights during several webinars.

    Talk it out

    Rather than read your research, some people might prefer to listen to you talk about it. So, consider discussing your work over several episodes of a podcast. Podcasting can be seen as an innovative way of educating your audience, and can demonstrate marketable skills (read: ones that are attractive to employers).

    In the past, when people asked me about my research, I would ‘pitch’ it in a couple of sentences. But this approach rarely leaves a lasting impression, especially with those outside your specialized field. By creating a personalized dissertation, I turned that dynamic upside down: people would reach out to me, flip through the pages and ask me questions. The dissertation served, in a way, as a business card.

    Of course, in the end, my personalized dissertation was not the only reason that I got a new academic position, but it undoubtedly helped to increase my visibility, extend my network and, perhaps most importantly, distinguish myself as a young researcher.

    Graduate students generally work independently, spending long hours on their research projects, and sometimes don’t know how to discuss their work with their colleagues. Or maybe they experience impostor syndrome. All of this can hinder job opportunities, but personalizing my dissertation helped me to overcome these barriers. Ultimately, whether your dissertation ends up on a desk or on a bookshelf, customizing it increases the chances that it will really stand out.


  • The antibodies don’t work! The race to rid labs of molecules that ruin experiments

    The antibodies don’t work! The race to rid labs of molecules that ruin experiments


    Carl Laflamme knew what protein he wanted to study, but not where to find it. It is encoded by a gene called C9ORF72, which is mutated in some people with the devastating neurological condition motor neuron disease, also known as amyotrophic lateral sclerosis. And Laflamme wanted to understand its role in the disease.

    When he started his postdoctoral fellowship at the Montreal Neurological Institute-Hospital in Canada, Laflamme scoured the literature, searching for information on the protein. The problem was that none of the papers seemed to agree where in the cell this mysterious molecule operates. “There was so much confusion in the field,” Laflamme says.

    He wondered whether a reagent was to blame, in particular the antibodies that scientists used to measure the amount of the protein and track its position in the cell. So, he and his colleagues decided to test the antibodies that were available. They identified 16 commercial antibodies that were advertised as able to bind to the protein encoded by C9ORF72. When the researchers put them through their paces, only three performed well — meaning that the antibodies bound to the protein of interest without binding to other molecules. But not one published study had used these antibodies. About 15 papers described experiments using an antibody that didn’t even bind the key protein in Laflamme’s testing. And those papers had been collectively cited more than 3,000 times1.

    Laflamme’s experience isn’t unusual. Scientists have long known that many commercial antibodies don’t work as they should — they often fail to recognize a specific protein or non-selectively bind to several other targets. The result is a waste of time and resources that some say has contributed to a ‘reproducibility crisis’ in the biological sciences, potentially slowing the pace of discovery and drug development.

    Laflamme is part of a growing community that wants to solve the problem of unreliable antibodies in research. He teamed up with molecular geneticist Aled Edwards at the University of Toronto, Canada, to set up Antibody Characterization through Open Science (YCharOS, pronounced ‘Icarus’), an initiative that aims to characterize commercially available research antibodies for every human protein.

    There are also efforts under way to produce better-performing antibodies, to make it easier for researchers to find them and to encourage the research community to adopt best practices when it comes to choosing and working with these molecules. Antibody vendors, funding agencies and scientific publishers are all getting in on the action, says Harvinder Virk, a physician–scientist at the University of Leicester, UK. “It’s hard to imagine that a problem that has been going on so long will suddenly change — but I’m hopeful.”

    Putting antibodies to the test

    The immune system produces antibodies in response to foreign substances, such as viruses and bacteria, flagging them for destruction. This makes antibodies useful in laboratory experiments. Scientists co-opt this ability by using them to mark or quantify specific biological molecules, such as a segment of a protein. To be effective, these molecular tags need to have both specificity — a strong affinity for the target — and selectivity — the ability to leave other proteins unmarked.

    For decades, scientists created these antibodies themselves. They injected proteins into animals, such as rabbits, whose immune systems would generate antibodies against the foreign molecules. To create a longer-term, more consistent supply of antibodies, researchers extracted immune cells from animals and combined them with immortalized cancer cells. When reagent companies began the mass production of antibodies in the 1990s, most researchers shifted to purchasing antibodies from a catalogue. Today, there are around 7.7 million research antibody products on the market, sold by almost 350 antibody suppliers around the world.

    In the late 2000s, scientists began reporting problems with both the specificity and selectivity of many commercially available antibodies, leading researchers to call for an independent body to certify that the molecules work as advertised. Over the years, a handful of groups have launched efforts to evaluate antibodies.

    What sets YCharOS apart is the level of cooperation that it has obtained from companies that sell antibodies. When Laflamme and Edwards set out to start YCharOS, they called every single vendor they could find; more than a dozen were interested in collaborating. YCharOS’s industry partners provide the antibodies for testing, free of charge. The partners, along with the funders of the initiative (which include various non-profit organizations and funding agencies), are given the chance to review characterization reports and provide feedback before they are published.

    YCharOS tests antibodies by comparing their specificity in a cell line that expresses the target protein at normal biological levels against their performance in what’s called a knock-out cell line that lacks the protein (see ‘Ways to validate’).

    Ways to validate: graphic that shows three ways to test antibodies to ensure their efficacy.
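
    The logic of that knock-out comparison can be shown with a toy calculation: an antibody is judged to have passed only if its signal largely disappears in the cell line lacking the target. The intensities and the pass threshold below are invented for illustration and are not YCharOS's actual criteria.

        # Toy knock-out (KO) validation: compare an antibody's signal in a cell line
        # expressing the target with its signal in a KO line lacking it. All numbers
        # and the pass threshold are invented for illustration only.
        def ko_validated(signal_wt: float, signal_ko: float, max_ko_fraction: float = 0.2) -> bool:
            """Pass if the KO-line signal is a small fraction of the expressing-line signal."""
            if signal_wt <= 0:
                return False  # no detectable signal against the target at all
            return signal_ko / signal_wt <= max_ko_fraction

        antibodies = {
            "antibody-A": (1000.0, 50.0),  # strong signal, nearly gone in the KO line
            "antibody-B": (800.0, 700.0),  # signal barely drops: likely off-target binding
            "antibody-C": (5.0, 4.0),      # little signal against the target at all
        }

        for name, (wt, ko) in antibodies.items():
            print(name, "passes" if ko_validated(wt, ko) else "fails", "KO validation")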

    In an analysis published in eLife last year, the YCharOS team used this method to assess 614 commercial antibodies, targeting a total of 65 neuroscience-related proteins2. Two-thirds of them did not work as recommended by manufacturers.

    “It never fails to amaze me how much of a hit or miss antibodies are,” says Riham Ayoubi, director of operations at YCharOS. “It shows you how important it is to include that negative control in the work.”

    Antibody manufacturers reassessed more than half of the underperforming antibodies that YCharOS flagged in 2023. They issued updated recommendations for 153 of them and removed 73 from the market. The YCharOS team has now tested more than 1,000 antibodies that are meant to bind to more than 100 human proteins.

    “There’s still a lot of work ahead,” Laflamme says. He estimates that, of the 1.6 million commercially available antibodies to human proteins, roughly 200,000 are unique (many suppliers sell the same antibodies under different names).

    “I think the YCharOS initiative can really make a difference,” says Cecilia Williams, a cancer researcher at the KTH Royal Institute of Technology in Stockholm. “But it’s not everything, because researchers will use these antibodies in other protocols, and in other tissues and cells that may express the protein differently,” she says. The context in which antibodies are used can change how they perform.

    Other characterization efforts are trying to tackle this challenge. Andrea Radtke and her collaborators were part of a cell-mapping consortium called the Human BioMolecular Atlas Program when they set up the Organ Mapping Antibody Panels (OMAPs). OMAPs are collections of community-validated antibodies used in multiplex imaging — a technique that involves visualizing several proteins in a single specimen. Unlike YCharOS, which focuses on conducting rigorous characterizations of antibodies for various applications in one specific context, the OMAPs effort looks at a single application for the antibodies, but in several contexts, such as different human tissues and imaging methods. To do so, it recruits scientists from both academia and industry to conduct validations in their own labs.

    “Vendors cannot test all possible applications of their antibodies, but as a community we can say ‘let’s try this’,” says Radtke, who now works as a principal scientist at the instrumentation company Leica Microsystems in Bethesda, Maryland. “People are testing things that you would never think you could test.”

    Expanding the toolbox

    Even if good antibodies are available, they are not always easy to find. In 2009, Anita Bandrowski, founder and chief executive of the data-sharing platform SciCrunch in San Diego, California, and her colleagues were examining how difficult it was to identify antibodies in journal articles. After sifting through papers in the Journal of Neuroscience, they found that 90% of the antibodies cited lacked a catalogue number (codes used by vendors to label specific products) — making them almost impossible to track down. To replicate an experiment, it’s important to have the right reagents — and proper labelling is crucial to finding them, Bandrowski says.

    After seeing that a similar problem plagued other journals, Bandrowski and her colleagues decided to create unique, persistent identifiers for antibodies and other scientific resources, such as model organisms, which they called research resource identifiers, or RRIDs. Catalogue numbers can disappear if a company discontinues a product — and because companies create them independently, two different products might end up with the same one. RRIDs solve this.
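
    In practice, an RRID is a short, stable tag added alongside the vendor and catalogue number when a reagent is cited in a methods section; antibody identifiers take the form 'RRID:AB_' followed by digits. The snippet below checks a citation string for such a tag; the vendor, catalogue number and RRID shown are invented placeholders, not real products.

        import re

        # Minimal check that an antibody citation carries an identifier of the expected
        # form ("RRID:AB_" plus digits). Vendor, catalogue number and RRID are invented.
        RRID_PATTERN = re.compile(r"RRID:AB_\d+")

        citation = "anti-ExampleProtein antibody (ExampleVendor Cat# 12345, RRID:AB_0000000)"

        match = RRID_PATTERN.search(citation)
        if match:
            print(f"found identifier {match.group(0)}")
        else:
            print("no RRID found; the reagent may be hard to track down")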

    In 2014, Bandrowski and her team started a pilot project3 with 25 journals, in which they asked authors to include RRIDs in their manuscripts. In the years since, more than 1,000 journals have adopted policies that request these identifiers. “We currently have nearly one million citations to RRIDs from papers,” says Bandrowski.

    Ultimately, the hope is that authors of every journal article will clearly label the resources they used, such as antibodies, with RRIDs, Bandrowski says. “That won’t change reproducibility by itself, but it is the first step.”

    In addition to being able to track down antibodies, researchers need a way to choose which ones to use. In 2012, Andrew Chalmers, who was then a researcher at the University of Bath, UK, co-founded CiteAb, a search engine to help researchers find the most highly cited antibodies. Over the years, the platform has grown to include more than seven million antibodies — and now also includes, when available, information regarding validations. In May, CiteAb began integrating YCharOS’s characterization data onto its site.

    “The big challenge is that antibodies are just used in so many different ways, for so many different species that you can’t tick off that an antibody is good or bad,” Chalmers says. Many say that knock-out validation is key, but less than 5% of antibodies on CiteAb have been validated in this way, either by suppliers or through other independent initiatives, such as YCharOS. “There’s a long way to go,” Chalmers says.

    Stakeholders get involved

    Like many others, Virk developed an interest in antibody reliability after a personal experience with bad antibodies. In 2016, Virk received a big grant to study the role of a protein called TRPA1 in airway inflammation. But one of his colleagues, drawing on their own experience, mentioned that the antibodies Virk was working with might not be reliable.

    When Virk put TRPA1 antibodies to the test, he discovered that his colleague was right: of the three most-cited antibodies used to study TRPA1, two didn’t detect the human protein at all, and the other detected several other proteins at the same time. “That was a shock,” Virk says. “At that point, I wanted to leave science — because if things are really this unreliable, what’s the point?”

    Instead of leaving academia, Virk co-founded the Only Good Antibodies (OGA) community last year, with the aim of bringing together stakeholders — such as researchers, antibody manufacturers, funding agencies and publishers — to tackle the problem of poorly performing antibodies. In February, the OGA community hosted its first workshop, which brought individuals from these various groups together to discuss how to improve the reproducibility of research conducted with antibodies. They were joined by the NC3Rs, a London-based scientific organization and funder that focuses on reducing the use of animals in research. Better antibodies mean fewer animals are used in the process of producing these molecules and conducting experiments with them.

    Currently, the OGA community is working on a project to help researchers choose the right antibodies for their work and to make it easier for them to identify, use and share data about antibody quality. It is also piloting a YCharOS site at the University of Leicester — the first outside Canada — which will focus on antibodies used in respiratory sciences. The OGA community is also working with funders and publishers to find ways to reward researchers for adopting antibody-related best practices. Examples of such rewards include grants for scientists taking part in antibody-validation initiatives.

    Manufacturers have also been taking steps to improve antibody performance. In addition to increasingly conducting their own knock-out validations, a number of suppliers are also altering the way some of their products are made.

    The need to modify antibody-production practices was brought to the fore in 2015, when a group of more than 100 scientists penned a commentary in Nature calling for the community to shift from antibodies generated by immune cells or immune–cancer-cell hybrids, to what are known as recombinant antibodies4. Recombinant antibodies are produced in genetically engineered cells programmed to make a specific antibody. Using these antibodies exclusively, the authors argued, would enable infinite production of antibodies that do not vary from batch to batch — a key problem with the older methods.

    A few manufacturers are shifting towards making more recombinant antibodies. For example, Abcam, an antibody supplier in Cambridge, UK, has added more than 32,000 of them to its portfolio. “Facilitating the move towards recombinants across life-science research is a key part of improving reproducibility,” says Hannah Cable, the vice-president of new product development at Abcam. “That’s something that antibody suppliers should be doing.”

    Rob Meijers, director of the antibody platform at the Institute for Protein Innovation in Boston, Massachusetts, a non-profit research organization that makes recombinant antibodies, says that this shift simply makes more business sense. “They’re much more reproducible, you can standardize the process for them, and the user feedback is very positive,” he says.

    CiteAb’s data have revealed that scientists’ behaviour around antibody use has shifted drastically over the past decade. About 20% of papers from 2023 that involved antibodies used recombinants. “That’s a big change from where we were ten years ago,” says Chalmers, who is now CiteAb’s chief executive.

    Although the ongoing efforts to improve antibody reliability are a move in the right direction, changing scientists’ behaviour remains one of the biggest challenges, say those leading the charge. There are cases in which researchers don’t want to hear that an antibody they’ve been using for their experiments isn’t actually doing what it’s meant to, Williams says. “If somebody is happy with the result of an antibody, it’s being used regardless, even if it’s certain that it doesn’t bind this protein,” she says. Ultimately, she adds, “you can never get around the fact that the researcher will have to do validations”.

    Still, many scientists are hopeful that recent efforts will lead to much needed change. “I’m optimistic that things are getting better,” Radtke says. “What I’m so encouraged by is the young generation of scientists, who have more of a wolf-pack mentality, and are working together to solve this problem as a community.”


  • I had to let a student go and I feel as though I failed them — how do I do better next time?

    I had to let a student go and I feel as though I failed them — how do I do better next time?


    Cartoon showing a scientist climbing a ladder made of DNA and cutting a climbing rope, with a hand reaching up from below.

    Illustration: David Parkins

    The problem

    Dear Nature,

    A PhD student in my laboratory was consistently unmotivated and failed to do the most basic things that I consider essential for research, such as keeping an up-to-date notebook. This is one of the requirements outlined in the lab manual and I ask all members of my team to sign an agreement committing to abide by it.

    I tried to help the student, but nothing seemed to get through to them, and after giving them many warnings I asked them to find training elsewhere.

    I feel it was the right thing to do for the sake of the lab, but I’m also left with feelings of guilt and personal failure. I’m a woman of colour and have regularly faced colleagues who didn’t give me a fair opportunity to develop my research and advance my career. As a result, when I started my own lab more than a year ago, I was determined that I would not hinder anyone’s progress. I’m now left with a sense that I contributed to the same gatekeeping I experienced.

    Was the way I handled the situation wrong? Could I have done more to support the student? And how can I do things differently next time to ensure that I don’t feel this way again? — A rueful molecular biologist

    The advice

    Nature asked two careers advisers and a research-group leader to answer your questions. They all agree that letting go of a lab member who is unmotivated and not responding to your efforts to help is the best thing for everyone involved. However, they did have some advice on how you might prevent the situation arising again and ensure that you are doing all you can to support the student, even after they leave.

    Harmit Malik, a geneticist at the Fred Hutch Cancer Center in Seattle, Washington, makes sure that his team members meet any prospective addition to the lab and can give their opinions on a candidate’s suitability and attitude. “It is our job to look past what they’ve achieved before — because it could be a function of privilege or something else — and really focus on motivation, their interest in the lab and their curiosity for science,” says Malik.

    Malik adds that it can be hard to stay mindful of these factors as a new principal investigator. “The oppressive nature of an empty lab means that you’re dying to fill it with people,” he says. But, at this stage, it’s even more important to be vigilant: taking on someone who needs a lot of attention and monitoring will add unnecessary stress. “Hiring the wrong person is worse than hiring no person at all,” Malik says.

    Making your expectations clear is the next step. Raquel Salinas, director of student affairs and career development at the MD Anderson Cancer Center in Houston, Texas, says that putting together a lab manual and asking any new lab members to read and sign it, as you did, works well. “This just outlines what your expectations are as a faculty member, and what the student should expect from you.” She says this should be explicit, achievable and in clear language. New team members must also feel able to discuss any aspects they are unsure about in an open, non-intimidating environment.

    If a lab member isn’t fulfilling the responsibilities that they have agreed to, you need to consider all the potential reasons why. Ashley Ruba, based in Seattle, works as a careers consultant for PhD students. She says that some students who are struggling might be finding it hard to navigate what she terms the ‘hidden curriculum’ of an academic research career: the social and cultural norms and responsibilities, which might not be explicitly taught, such as building a professional network, developing a research compass and maintaining a healthy work–life balance.

    Salinas says that supervisors have a responsibility to ask whether there are any external factors that might be influencing a student’s work, such as their mental or physical health. This can be a difficult topic to discuss, both for you and the student. “We might frame it as ‘Is there something I should know about that’s affecting your work?’ or ‘Can I help connect you with some resources that might help?’,” says Salinas. The student doesn’t have to share anything, and you shouldn’t expect them to, but asking shows that you recognize that problems arise and that you’re open to discussing them.

    Ultimately, when a lab member fails to meet expectations, you need to have an open and honest discussion, which you did. However, simply asking someone “How can I help you to succeed?” places the onus on them and is unlikely to result in effective suggestions, says Salinas.

    Malik says that having standardized paperwork can make these conversations easier, and ensure that both you and the student are clear about what the problem is and what you expect from each other going forwards. Having a written record of these discussions will also help further down the line when assessing how well the student has achieved the goals you agreed on.

    For these discussions, Malik uses a sheet from the individualized development plan developed by Angela DePace, a systems biologist at Harvard Medical School in Boston, Massachusetts, and her colleagues1. This has sections covering accomplishments, research goals and professional and personal targets. Whenever anyone commits what Malik considers to be a serious breach of lab protocol, he works through it with them. “When issues come up, that form is our default option in terms of discussing what went wrong and what I would like the student to do, and then we both sign it,” he says.

    If a student repeatedly fails to meet the expectations set, then you are entitled to ask them to leave the lab, Malik says. Having the humility to recognize that this situation isn’t necessarily any fault of your own is the best way to avoid feelings of guilt or failure. Salinas also suggests helping the student to find a lab that might be a better fit. “You can just acknowledge that ‘I don’t think I’m the right mentor match for you, but I want to help you transition on to a lab that might be a better scientific fit or a better working-style fit’,” she says.

    Nevertheless, Salinas says that if you have to let someone go, it’s always a good idea to question the reasons why. “The faculty member, being new, is right to reflect on their practices,” she says. Ruba adds that if you do find yourself letting go of more people in the future, you should seek help, advice and feedback on your mentoring style.

    Salinas says that mentoring should be “a two-way street”, and you should always be receptive to feedback from those you supervise. Everyone can improve, and it’s important to be self-critical in a constructive manner without being saddled with guilt. Ruba is more direct: “If it’s just one student, it might not be you,” she says. “But if it’s multiple students in your lab who are leaving, then it probably is you.”


  • Nine reasons we love our spooky, kooky model organisms

    Nine reasons we love our spooky, kooky model organisms


    Halloween, celebrated on 31 October, originated with the ancient Celtic festival of Samhain, during which people lit bonfires and wore costumes to ward off spirits. Today, it’s a holiday synonymous with not just witches and ghouls, but also crows, bats, owls, snakes and other ‘spooky’ creatures. Nature asked nine scientists what inspired them to study unorthodox animals and plants and what they want the world to know about their favourite organisms, and gave them the chance to correct misconceptions around the much-maligned reputations of these flora and fauna.

    IVO JACOBS: The ‘playful genius’ of crows

    Ivo Jacobs studies the evolution of cognition at Lund University in Sweden.

    A photo of Ivo Jacobs holding a raven

    Ivo Jacobs says that ravens compete to participate in his group’s research on cognition. Credit: Ivo Jacobs

    Corvids, birds in the crow family, have quite the mythological status — from ominous tricksters to playful geniuses — across diverse cultures. I have always been fascinated by how creatures with walnut-sized brains and no hands have cognitive capacities similar to those of great apes1, despite the evolutionary gap of 320 million years. This suggests that complex cognition has evolved independently several times. Cognition is a solution to buffer against environmental changes that occur faster than evolution. I examine the problem-solving abilities that help corvids, such as the common raven (Corvus corax), to adjust to changing conditions.

    Misconceptions about corvids are rife: they are often viewed as unpleasant and dangerous birds, exhibiting behaviours such as pecking out of frustration. However, they are more likely to gently preen your eyebrows than to try plucking out your eyes. My chances of leaving the aviary with a peck mark are lower than the likelihood of them undoing my shoelaces, stealing my hat or stashing food in my pockets. I once had the surreal experience of explaining to airport security why a piece of rotten liver had fallen out of my jacket.

    Our corvids enjoy participating in research — essentially playtime for food rewards — so much that they compete to enter the testing room. Another widespread myth is that corvids have a proclivity to steal shiny things. Our research revealed that they prefer round objects that are not shiny2. Sometimes, they will even forgo food to take a small wooden ball. Their extensive play with objects fuels their innovative tool use.

    LINFA WANG: Bats help to unravel infectious diseases

    Linfa Wang studies zoonotic diseases and bat immunology at Duke-NUS Medical School in Singapore.

    A cave nectar bat sticks its tongue out to receive watermelon juice

    Studying viruses in bats can help to better understand human disease, says Linfa Wang. Credit: Dr Feng Zhu/Duke-NUS

    I became interested in bats because the viruses that my research group studies, including Hendra virus, Nipah virus and SARS-CoV-1, are transmitted by them. The question of why bats can carry so many viruses without showing signs of disease propelled me to study bat genomics and immunology. The more I study these creatures, such as the cave nectar bat (Eonycteris spelaea), the more fascinated I am by their unique traits — from their relatively long lifespan to their resistance to cancer. We are motivated to discover bat-inspired therapeutics to treat disease in people.

    Many people mistakenly blame bats for viral outbreaks of severe acute respiratory syndrome and COVID-19. I try to emphasize that it is not the bats’ fault. They have co-existed with these viruses for millions of years, and it is human activities that have led to these spillover events.

    This field was very small 15–20 years ago, and it was hard to get funding and standard research tools. As a pioneering ‘batman’, I worked to expand the field. Here are some tips to do so for your favourite organism: first, focus on gathering essential tools and reagents, such as cell lines, a breeding colony, sequenced genomes and organism-specific antibodies. Second, share your resources with as many groups as possible. Third, actively promote and participate in activities in your organism’s research community, including workshops and symposiums. Fourth, convince funding bodies to expand this area of research. And finally, solicit commercial companies to support your efforts to develop products inspired by the organism.

    DAVID HU: Physics behind cat tongues and wombat poo

    David Hu is a biophysicist at the Georgia Institute of Technology in Atlanta.

    A close up image of the surface of a cat's tongue

    Cat tongues look creepy in a close-up, and inspire better brush designs. Credit: Alexis Noel and David Hu

    I study the physics of animal form and movement, with the goal of designing bio-inspired robots and devices. I have studied how a cat’s tongue functions as a cleaning brush, how wombats produce cube-shaped poo and how elephants reach with their trunks. My work gathers inspiration from a variety of unconventional model organisms, and they have a key role in bio-inspired design of complex materials and soft adaptable robots.

    Many people think bio-inspiration is simply about searching for ideas by watching animal videos on the Internet. Long ago, biologists taught me to work with animals in person, filming them and collecting biological samples. I have since worked with zoos, aquariums, museums and field stations to find animal subjects. This process takes longer, but leads to so many discoveries.

    Animals do not usually do what you want. Studying each species requires techniques that I have learnt from animal specialists. An hour of networking can save days or months in the laboratory or field, turning seemingly impossible experiments into achievable ones.

    A big misperception about animals is that scientists already know everything about them. The simplest-looking movements, such as the leap of a cat or shake of a dog, cannot be robustly replicated by robots. Understanding how animals can move so well in unpredictable environments will help us to build devices that can do the same.

    MOYUAN CAO: Cacti offer a prickly way to collect water

    Moyuan Cao is a materials scientist at Nankai University in Tianjin, China.

    After millions of years of evolution, plants and animals have developed superb abilities to manipulate and collect fluids. These processes are efficient, energy-saving and diverse — and offer numerous ideas to improve technology to collect water in arid environments. My work focuses on fluid-transport processes on bio-inspired interfaces that mimic cactus spines. In cactus clusters, a water droplet on the conical spine moves from the spine tip to its root. When the droplet reaches the root, the hydrophilic trichome — a fine outgrowth on the surface — rapidly absorbs it into the stem.
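
    The directional motion described above is usually explained with a simplified Laplace-pressure argument: the spine radius under the droplet is smaller near the tip than near the root, so the two ends of the droplet sit at different curvatures and feel different capillary pressures. As a rough scaling, with surface tension γ and local spine radii r_tip < r_root under the droplet,

        \Delta P \sim \gamma \left( \frac{1}{r_\mathrm{tip}} - \frac{1}{r_\mathrm{root}} \right) > 0,

    so the higher pressure at the thin end pushes the droplet towards the thicker base, where the trichome absorbs it. This is a back-of-the-envelope scaling offered for orientation, not the full wetting analysis, which also depends on the cone angle, the spine's surface chemistry and gravity.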

    My team has designed a fog collector, which acts like a cactus spine. These cactus-like devices are useful for collecting water in arid, foggy regions with no surface water, for example deserts near the sea, such as the Namib desert in southern Africa, and coastal mountain ranges, such as those near Antofagasta, Chile. Nature might have already found the best solution for unique environments, and my role is to identify the environmental conditions where those solutions can be best applied.

    DRIES KUIJPER: Debunking the false folklore about wolves

    Dries Kuijper is an ecologist at the Mammal Research Institute of the Polish Academy of Sciences in Białowieża.

    Centuries ago, the balance between humans and wolves was different — there was more wilderness, more wolves and fewer people. Back then, most wolf–human accidents involved wolves with rabies, whose behaviour is very unnatural, and this led to the folklore that wolves are dangerous. Interestingly, a review of all documented cases of wolves attacking humans from 2002 to 2020 shows that, despite large increases in wolf numbers in human-dominated landscapes of Europe, there has not been an increase in attacks (see go.nature.com/4iuop). No fatal attacks, and only a few bite incidents, have been documented in the past 20 years in Europe. Wolves are not dangerous to us, but people should respect the boundaries needed to keep wolves wild.

    I study how grey wolves (Canis lupus) affect the functioning of ecosystems in the Białowieża forest in eastern Poland and in other places in Europe. I was inspired by the reintroduction of wolves in Yellowstone National Park in Montana, Idaho and Wyoming, and how it caused trophic cascading effects: the wolves decreased the density of prey species, such as deer, which reduced the deer’s feeding on young trees, and that facilitated tree regeneration. But I realized pretty quickly that the Białowieża forest is very different from Yellowstone’s vast wilderness. Outside the Białowieża National Park, the forest hosts plenty of human activities. People live and hunt in the forest, and many tourists visit it. This directly or indirectly influences the behaviour of deer, which the wolves prey on, and how wolves use the landscape to generally avoid humans.

    Wolves are not afraid of human-dominated landscapes in Europe and have recolonized many countries. That has resulted in more human–wolf conflicts — especially due to livestock predation — but it also raises the scientific question of how the presence of wolves can reshape their environment. In human-dominated landscapes that have been modified and restructured, wolves often engage in interactions with other species in different ways, which can have different influences on ecosystems3.

    RIZMOON N. ZULKARNAEN: Ghost orchids ‘haunt’ the forest

    Rizmoon N. Zulkarnaen is a plant conservationist at the National Research and Innovation Agency in Jakarta, Indonesia.

    A photograph of a Ghost Orchid growing in the wild

    The ghost orchid floats above the forest floor — a fragile sentinel of a healthy ecosystem. Credit: Rizmoon Nurul Zulkarnaen

    As a plant conservationist, I am fascinated by the biodiversity of endemic and threatened plants, and recognize the urgent need for conservation, particularly in Indonesia. I study Didymoplexis pallens, known as the ghost orchid because of its pale, ethereal appearance, which makes it look as though it is floating. These orchids are entirely leafless, lack chlorophyll and often grow on decaying plant matter in dense forests.

    Ghost orchids are epiphytic plants, meaning they grow on the surface of other plants: they depend on relationships with mycorrhizae, or symbiotic fungi, for nutrients. In the past few years, bamboo litter in their habitat, an important source of organic material for mycorrhizae, has significantly decreased or even disappeared. This was due to land clearing in the Bogor Botanical Gardens in Indonesia, for beautification and land management. As a result, the population of ghost orchids has drastically declined.

    I found that ghost orchids have a significant role in their ecosystem, acting as indicators of environmental health owing to their reliance on specific conditions, such as soil type, humidity and the amount of light, to grow. I do fieldwork with students to foster a community centred on plant conservation. We have reframed ghost orchids from merely rare plants to fragile components of a larger ecosystem that are crucial for biodiversity conservation.

    JANE HILL: Corpse plants reflect nature’s cleverness

    Jane Hill is a chemical engineer at the University of British Columbia in Vancouver, Canada.

    Jane Hill stands in front of a Corpse plant

    Jane Hill with a corpse plant, which can mimic odours produced during human and animal diseases. Credit: Jane E. Hill

    The rare flowering of the corpse plant is a wonderful example of the artistry and cleverness of nature. Plants must attract creatures to help them reproduce, and the corpse plant (Amorphophallus titanum) uses striking colours and odours to attract insect pollinators. My research team investigates molecules related to metabolism in human and animal disease, which we test as potential diagnostic biomarkers. We discovered that, during certain human infections, the odours that people emit are similar to those of corpse plants. We want to know how the plant evolved to mimic the smells of humans and other animals.

    Very few people study this rare plant, which grows in the tropical rainforest in Sumatra, Indonesia. Currently, my team studies the corpse plant as a hobby, using our tools to discover volatile molecules that give rise to odour. Although my team is not highly connected to the botanists studying corpse plants, our curiosity about which volatile molecules attract which insects, for example, has led to discussions with people studying insects, genetics and ecology. These fruitful exchanges stimulate our work, which helps us to better understand the smelly, odour-causing molecules produced during human disease. Those molecules might one day allow us to develop tools to diagnose diseases more quickly.

    DANIEL RABOSKY: Protect lizards to preserve ecosystem diversity

    Daniel Rabosky is an ecologist at the University of Michigan in Ann Arbor.

    Dan Rabosky kneels on the ground with a yellow-spotted monitor in the Australian Outback

    Ecologist Daniel Rabosky, holding a yellow-spotted monitor (Varanus panoptes), says we understand very little about most lizard species. Credit: Alison Davis Rabosky

    I became obsessed with reptiles, especially snakes, as a child. My parents did not like snakes, but they took me to local swamps to catch them — and they even helped me to house my collection of live snakes and turtles. Just before graduate school, I read work by the late ecologist Eric Pianka on the spectacular diversity of lizards in Australian deserts. Those environments have more species of lizard than anywhere else on Earth, even tropical rainforests. This is a striking outlier, and I wanted to study what regulates species diversity in time and space — a very important issue now, given the fast pace of global ecological change.

    A common misunderstanding is that, because lizards are vertebrates, scientists have a good handle on their basic biology and ecology. But the truth is that we have an incredibly poor understanding of the natural history of most lizard species. We are wiping out populations around the world and losing crucial information needed to understand and preserve the biodiversity of these reptiles.

    One group that I lean on tremendously for support is the natural-history collection and museum community. Its members are passionate about building global knowledge infrastructure to support basic biodiversity science, including my group’s research on snakes and lizards. These folks inspired my choice to serve as a museum curator, and I hope I can continue mentoring the next generation of biodiversity scientists.

    YORAM GUTFREUND: Owls find their way in the dark

    Yoram Gutfreund is a neuroscientist at the Technion — Israel Institute of Technology in Haifa.

    To truly understand the brain, we must learn how animals adapt their behaviours to natural settings. The barn owl (Tyto alba), a nocturnal predator, excels at detecting and capturing small prey in very-low-light environments, making it a great model to study sensory-based responses. Barn owls have been a focus of neuroscience research since the 1970s, but we still don’t know how they integrate information from several senses to guide their behaviour and filter out irrelevant sensory inputs.

    My group and others have shown that barn owls’ senses are surprisingly similar to people’s. Their stereo vision (the ability to perceive depth using two eyes), the auditory frequency range they can perceive and the way their brains interpret sound are more akin to traits of people than of most other mammals studied in neuroscience. Moreover, like humans, barn owls rely on vision as their dominant sense in cases of sensory conflict. Their sensory attention, like ours, is drawn to salient events and objects in their surroundings. Watching an alert barn owl perched on a branch, scanning its environment with a human-like gaze, you can easily imagine a ‘wise’ creature considering its next move.

    The barn owl research community is small but close-knit, with only about six labs worldwide. We all know each other well, and there is a strong spirit of collaboration. My work on the hippocampus brain region and spatial processing connects me to a much larger field, but being the only researcher focusing on this topic in barn owls allows me to offer a fresh, unique perspective.
