Can AI solve cancer’s diagnostic woes?

In brief

Catching cancer early is one of the best ways to ensure a good prognosis, but current diagnostic tools are imperfect, and hunting for new alternatives is a slow process. Artificial intelligence could speed things up; the technology can rapidly scan genetic databases and image files to identify cancerous patterns that humans miss. But AI is also flawed, sometimes amplifying racism and sexism. Computer scientists and oncologists are working to find ways to capture AI’s promise while limiting its biases. The hunt for new biomarkers is now firmly in the digital world.

Cancer, in all its pernicious forms, kills about 10 million people each year, roughly the population of Michigan. And the incidence rate is projected to rise by 55% between 2020 and 2040, according to the nonprofit Cancer Research UK. Resisting this trend relies on early detection, which in turn rests on good screening and diagnostic tools. But innovative techniques have not emerged as quickly as cancer experts would like.

This matters because most cancers are treatable, especially when caught early. Diagnostic biomarkers can provide physicians with important information about a person’s cancer. The same goes for prognostic biomarkers, which suggest the likely health outcomes for people with cancer. If doctors could paint a fuller picture of someone’s cancer, they could help that person make more informed treatment decisions.

One proposed solution is to put artificial intelligence on the case, both to identify new predictive biomarkers for cancer screening tests and to help in that screening itself.

That idea has garnered a lot of interest among scientists. A Google Scholar search for “artificial intelligence cancer biomarkers” returns more than 200,000 results. But researchers have yet to work out how to eradicate societal biases, such as misogyny and racism, from algorithms. Computer scientists and clinicians alike are grappling with how to use AI to improve cancer management without furthering prejudice.

Credit: Wellcome Images/Science Source

Scanning electron micrograph of a cluster of prostate cancer cells. Each cell is approximately 10 µm in diameter.

The search for solutions

“Cancer is quite a formulaic disease. You get a patient, you classify them, and then you apply the right treatment. AI can help us with all that,” says physician Stephen Hughes, a senior lecturer at Anglia Ruskin University’s Medical Technology Research Centre. “But sometimes AI gives us the wrong answers, and that’s the last thing you want in medicine.”

Unless you are careful with how you train your algorithm, Hughes says, the results can be unhelpful or even actively harmful. A model trained on uncurated information, for example, “just sees what’s out there on the internet, and what you end up getting back is what’s reflected on the internet and not necessarily what’s in the literature. What you need is a clinical model that’s based on clinical evidence,” he says.

Cancer is quite a formulaic disease. You get a patient, you classify them, and then you apply the right treatment. AI can help us with all that.

Stephen Hughes, senior lecturer, Anglia Ruskin University

To counter this accuracy issue, one of Hughes’s medical students, Talha Mehmood, is developing an AI model trained only on data from the UK’s National Institute for Health and Care Excellence (NICE), an official advisory body that reviews new health-care technologies for the National Health Service and publishes data and official advice. The algorithm acts as a patient avatar from which medical students can practice taking medical notes. But Hughes hopes it will go on to do much more. “We’re initially limiting this to minor ailments, but we could extend this to cancer if it’s successful,” he says.

Diagnosis starts with people consulting their primary care physicians, Hughes notes, and tools like the one his student is developing could streamline the beginning of the process by suggesting how to triage people who might have cancer: a sort of digital second opinion. But the algorithm’s validity would depend on the reliability of the input data and on the common sense of the human using the output.

Ultimately, doctors are hoping that AI programs will not only improve current cancer diagnostic and triage protocols but also help create entirely new ones by finding novel biomarkers for disease hidden in existing data.

“The potential of AI in discovering new biomarkers lies in its ability to process and learn from complex biological data by integrating data from multiple sources,” says Ganna Pogrebna, executive director of the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University. The discovery of new biomarkers is especially pressing because cancer diagnostic and prognostic tests are less accurate than we might like to think.

Calculating the concerns

Take prostate cancer, for example. In a 2018 study led by Philipp Dahm at the University of Minnesota Twin Cities, the research team looked at data from more than 700,000 men who had been screened for prostate-specific antigen (PSA) and found that about 15% of those with a PSA level below 4 ng/mL had false-negative results—meaning the test failed to detect their cancer. The false-positive rate was even more concerning—approximately two-thirds of men whom the test flagged didn’t have cancer (BMJ 2018, DOI: 10.1136/bmj.k3519).


PSA test problems

Two-thirds of positive prostate-specific antigen tests are false positives, and 15% of negative prostate-specific antigen tests are false negatives.

Source: BMJ 2018, DOI: 10.1136/bmj.k3519.

Papers like these specifically refer to study participants as men rather than people with prostates, making it hard to know whether the trends hold true for transgender and nonbinary people.

Physicians use the test, which measures levels of PSA in the blood, as an indicator that someone might have prostate cancer. High PSA levels in the blood usually lead to a prostate biopsy, which is a pretty invasive and painful procedure that comes with the risk of adverse effects such as rectal bleeding, urinary retention, and fever.
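The arithmetic behind the study’s striking false-positive figure is worth spelling out: when only a small fraction of the people screened actually have cancer, even a reasonably accurate test will flag mostly healthy people. The sketch below illustrates this with invented numbers; the population, prevalence, sensitivity, and specificity values are assumptions chosen only to land in the same ballpark as the PSA figures above, and are not taken from the BMJ study.

```python
# A minimal sketch of why a screening test can flag mostly healthy
# people. All numbers here are hypothetical, chosen only to give
# rates of the same order as those reported for PSA screening.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Count true and false positives for a hypothetical screen."""
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity            # cancers the test catches
    false_pos = healthy * (1 - specificity)  # healthy people wrongly flagged
    return true_pos, false_pos

tp, fp = screening_outcomes(population=100_000, prevalence=0.05,
                            sensitivity=0.85, specificity=0.90)
print(f"False positives as a share of all positives: {fp / (tp + fp):.0%}")
```

With these assumed inputs, about two-thirds of positive results are false positives, even though the test classifies individual people correctly 85–90% of the time. That base-rate effect, not sloppy chemistry, is the main reason screening tests flag so many healthy people.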

The problem of false positives and unnecessary extra tests isn’t isolated to prostate cancer. A 2020 review by researchers at University College London, the University of Tübingen, the Mount Vernon Cancer Centre, and the University of Hertfordshire looked at the accuracy of commonly used biomarkers for bladder cancer diagnoses (Urol. Oncol.: Semin. Orig. Invest., DOI: 10.1016/j.urolonc.2020.08.016). The researchers concluded that the biomarkers approved by the US Food and Drug Administration “almost uniformly suffer from high false positive rates as a result of benign inflammatory conditions.”

Meanwhile, another study has shown that about 11% of women in the US receive false-positive results from breast cancer screenings. And as many as 20% of patients get false-negative results from X-rays that screen for lung cancer, blood tests that measure ovarian cancer antigens, and fecal tests that look for colorectal cancers. The list goes on.

Searching for better

None of this means we shouldn’t trust current testing methods; despite being imperfect tools, they are still the best options available. When used in concert with other tests and observations, existing tests give doctors valuable data. But the question is whether things can be improved.


Credit: Shutterstock

Sections of prostate tissue showing prostate cancer

“It’s better than nothing,” Hughes says. “Before the PSA, it was just the clinician’s finger. When it was introduced, PSA was a pretty damn good test, but it can commonly be falsely positive.”

It has been 38 years since the FDA approved the test for PSA, and oncologists and urologists now feel that the field is overdue for another leap.

There are already trials underway that seek to use AI to improve the way doctors diagnose prostate cancer. Pathologists at Oxford University Hospitals are using AI-based image analysis to help read prostate biopsy slides. The software alerts pathologists to any suspicious imagery that might warrant particular attention. But while this approach might improve the diagnostic methods that doctors currently have at their disposal, it’s not quite the breakthrough that people are expecting.

The real goal is to find entirely new diagnostic and prognostic tools that would reduce unwarranted and intrusive biopsies. In the US, the Early Detection Research Network consortium within the National Cancer Institute has been working to combine liquid biomarker data with genetic information and image processing to personalize prostate cancer treatment plans for people with the disease.

AI algorithms can identify patterns and relationships that may not be evident to human researchers.

Ganna Pogrebna, executive director of the Artificial Intelligence and Cyber Futures Institute, Charles Sturt University

In an earlier example of a study that sought new diagnostic biomarkers, researchers at the University of Nottingham and Nottingham Trent University led by Desmond Powe used machine learning to trawl through a publicly available database of genes that prostate cancer tumors express. The algorithm returned with a suggestion: it had noticed high expression of the DLX2 gene, meaning that the DLX2 protein, which is important in cell proliferation, could be a potential biomarker (Br. J. Cancer 2016, DOI: 10.1038/bjc.2016.169).

The Nottingham scientists screened 192 prostate tumor samples and discovered that the presence of DLX2 in a biopsy was a statistically significant predictor of a cancer’s metastasis probability. “AI algorithms can identify patterns and relationships that may not be evident to human researchers,” Pogrebna says.
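The general pattern described above, scoring every gene in an expression dataset by how strongly it tracks an outcome and surfacing the top candidates, can be sketched in a few lines. Everything in this example is synthetic and hypothetical: the data are randomly generated, gene 7 is wired to the label by construction, and no detail here reflects the Nottingham team’s actual methods beyond the broad idea of ranking genes against a metastasis label.

```python
# Hypothetical sketch of expression-based biomarker ranking: score each
# gene by how strongly it correlates with a metastasis label. The data
# are synthetic; gene 7 drives the label purely by construction.
import random

random.seed(0)
n_samples, n_genes = 192, 50  # 192 mirrors the Nottingham sample count

# Synthetic expression matrix and a label that gene 7 (plus noise) drives.
X = [[random.gauss(0, 1) for _ in range(n_genes)] for _ in range(n_samples)]
y = [1 if row[7] + random.gauss(0, 0.5) > 0 else 0 for row in X]

def abs_correlation(values, labels):
    """Absolute Pearson correlation between one gene and the label."""
    n = len(values)
    mv, ml = sum(values) / n, sum(labels) / n
    cov = sum((v - mv) * (l - ml) for v, l in zip(values, labels))
    var_v = sum((v - mv) ** 2 for v in values)
    var_l = sum((l - ml) ** 2 for l in labels)
    return abs(cov / (var_v * var_l) ** 0.5)

scores = [abs_correlation([row[g] for row in X], y) for g in range(n_genes)]
top_gene = max(range(n_genes), key=scores.__getitem__)
print(f"Strongest candidate biomarker: gene_{top_gene}")  # gene 7 by construction
```

Real pipelines use far more sophisticated models than a single correlation score, but the core appeal is the same: a machine can rank thousands of genes against an outcome in seconds, leaving humans to validate the shortlist in the lab.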

That Nottingham study was published back in 2016, and work is still ongoing to refine our understanding of DLX2 protein concentrations and how doctors might best use them to understand a person’s prognosis. A 2022 study, for example, looked at how DLX2 expression can help doctors understand not only the chances of a cancer’s spreading but also how patients might respond to specific cancer therapies (Dis. Markers, DOI: 10.1155/2022/6512300).

This timeline highlights that although AI can speed up the early stages of discovery, it’s still a long road from the initial study to hospitals using the findings. Agencies like NICE and the FDA understandably need a lot of data before they offer advice or approve new medical tests.

Bringing AI-identified biomarkers into clinical use will also be pricey because they will likely be subjected to additional scrutiny. “Translating AI-discovered biomarkers into clinical practice requires rigorous validation and regulatory approval, which can be time consuming and challenging,” Pogrebna says.


AI diagnostic dollars

The market for artificial intelligence in oncology is big and set to get bigger.

The 2024 market size for artificial intelligence in oncology is $4.18 billion; the forecast for 2030 is $19.17 billion.

Source: Grand View Research.

The cost of that validation is one that investors, tech firms, and pharmaceutical companies are probably willing to shoulder because the potential payoff is substantial, Hughes says. Analysts expect the global oncology market to grow from $147 billion in 2022 to $312 billion by 2032—about the same as the entire gross domestic product of Finland.

“I don’t know what Google, etc. are working on; you can’t know,” Hughes says. “But I have no doubt commerce and industry are working on [AI diagnostics] right now.”

Several AI start-ups have already emerged to hunt for new cancer biomarkers, a sign that investors see profit in the area. Harbinger Health, based in Cambridge, Massachusetts, is one example. It uses AI in a bid to detect cancer at earlier stages, and the company has a clinical trial underway to examine blood samples from people deemed to be at high risk of developing cancer.

Santa Barbara, California–based Artera, which is trying to create better prostate cancer tests with AI, is another firm attracting investment. The firm launched in March 2023 with $90 million in funding, and it has since raised an extra $20 million.

The sand in the microchip

For all the promise and hyperbole that surround AI, there are several serious caveats. Not least is whether humans will trust an anonymous algorithm. “Ultimately, practitioners are responsible for the diagnosis, and they have significant behavioral barriers to adopting views they get from black box algorithms,” Pogrebna says. “We have not resolved these trust issues.”

That’s a valid concern; if doctors are making life-and-death decisions based on AI, they have a responsibility to question the technology. “This is why in our AI and Cyber Futures Institute we never just have the digital health team working on a particular solution. We often have sprint teams, which would have digital health experts alongside engineers, lawyers, and psychologists,” Pogrebna says. “Practitioners need to understand where an AI diagnosis is coming from and how the algorithm works.”

If those trust issues are ever to be resolved, the input data will need to be improved, Hughes says. And Pogrebna agrees—saying it’s a racial justice issue as much as anything else.

In 2020, the FDA released a report detailing the demographics of participants in clinical trials that took place that year. That report stated that 75% of trial participants were White; 11%, Hispanic; 8%, Black; and 6%, Asian. “We do not have enough data on [people of color], and they may have important differences to the rest of the population,” Pogrebna says. This lack of data is especially problematic when it comes to diseases that disproportionately affect people of color.

Take the prostate cancer example: Black men are twice as likely to develop the disease as other men, according to Prostate Cancer UK, but they are underrepresented in the data and therefore less likely to influence the algorithms hunting for novel biomarkers. Left unaddressed, the imbalance will only worsen.

Figuring out how to harness the power of AI in an equitable way is not an easy task.

Translating AI-discovered biomarkers into clinical practice requires rigorous validation and regulatory approval.

Ganna Pogrebna, executive director of the Artificial Intelligence and Cyber Futures Institute, Charles Sturt University

In February, Google released an AI-powered chatbot known as Gemini. It soon became apparent that Gemini was refusing to depict White people in generated images even when doing so would have been historically accurate, such as in response to requests for images of the US’s founding fathers or German soldiers from World War II. After widespread backlash, the company apologized and paused the program.

Media reports have since suggested that Google might have been trying to counter the bias against people of color and women that users reported in rival AI platforms, but it ended up going too far the other way. Whatever the cause, the Gemini episode highlights a substantial bug. AI models run the risk of incorporating some of society’s most cancerous biases. The question of how to deal with this problem while deploying the technology in the medical world is troublesome and as yet without a clear answer.

Benjamin Plackett is a freelance writer based in London.

