How trustworthy AI can fuel innovation in medicine
Medical imaging has the power to reveal diagnoses, unveil new therapies and save lives. But analyzing imaging data at scale can be cumbersome, tying up resources and time. That’s where artificial intelligence (AI) comes in. AI offers a tremendous opportunity to automate both routine imaging tasks and image analysis for medical researchers and providers. But, as a researcher or clinician, how can you ensure you’re using trustworthy AI?
In this blog, we explore the importance of AI in imaging research as well as the concept behind trustworthy AI and its implementation.
The value of AI in medical imaging research
It’s true that the logistics of imaging data management can weigh down researchers and impede their ability to work efficiently. In fact, researchers say they spend only 20% of their time analyzing data, while the rest of their time is spent on more menial tasks, including finding, curating and organizing data. With AI in place, researchers and clinicians can spend more time deriving insights from their data, which is the more crucial — and enjoyable — part of data work.
It’s not just the time savings of AI that are appealing: Used correctly, AI can increase the accuracy and quality of insights derived from medical imaging, a capability that can prove meaningful in a variety of health contexts. For instance, researchers using Flywheel built an AI model that correctly distinguished COVID-19 from non-COVID-19 pneumonia 94% of the time, versus 85% for thoracic radiologists.
Furthermore, AI can accelerate the drug-discovery process to identify cures for complicated illnesses more quickly and comprehensively, reducing clinical development costs and saving more lives. This value transfers to patients, helping to reduce medical costs while expanding treatment potential.
Investors seem to have taken note: Optimistic forecasts anticipate that the market for AI in medical imaging could grow nearly 20-fold over the next decade, reaching $14.3 billion by 2032.
Potential risks with AI
Although the growth prospects for medical imaging remain exciting, becoming overly reliant on AI software without the right safeguards in place could generate adverse outcomes. Here are the key risks to consider when incorporating AI into medical imaging workflows:
- Data gathering and analysis bias: We’re living in a society that is in a constant state of transformation. As demographics evolve, it’s critical to update the data sets that AI algorithms use in tandem with these changes. Failure to do so can lead to inaccurate predictions and imprecise drug development, treatment protocols or diagnoses among patient cohorts.
- Oversimplification of themes: Because AI lacks the key critical thinking abilities that humans have, the software occasionally struggles to deliver insights with depth. It can overlook patient nuances and external life factors that offer valuable context about the “why” behind a patient’s condition.
- Lack of oversight and accountability: AI is autonomous software that can, at times, limit user input and be opaque about how it arrived at certain outcomes. Together, these traits can limit some providers’ trust in models.
Proactively recognizing the potential shortcomings in model development and implementing technical tools that promote transparent algorithms can help to avoid these issues.
Defining trustworthy and transparent AI
For an AI model to be considered trustworthy and transparent, it must satisfy the following criteria:
- The AI algorithm needs to be well-trained, having had exposure to data sets that are diverse and fully representative of the patient populations the algorithm is ultimately designed to serve. It must also enable users to apply the algorithm however they see fit to solve the healthcare cases at hand. (A quick representativeness check is sketched after this list.)
- Transparent AI must provide accurate, end-to-end visibility into the data it was trained on, and it must disclose how it weights the variables it considers.
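To make the first criterion concrete, here is a minimal sketch of a representativeness check, assuming each training scan carries a demographic label and you know the target population’s mix. The group names and proportions below are hypothetical placeholders, not part of any Flywheel API.

```python
# A chi-square goodness-of-fit test comparing the training cohort's
# demographic mix against the target patient population.
from collections import Counter

from scipy.stats import chisquare

# Hypothetical demographic label for each scan in the training set.
train_groups = ["A", "A", "B", "B", "B", "C", "A", "B", "C", "C"]

# Hypothetical share of each group in the population the model will serve.
target_proportions = {"A": 0.40, "B": 0.35, "C": 0.25}

counts = Counter(train_groups)
observed = [counts[group] for group in target_proportions]
expected = [share * len(train_groups) for share in target_proportions.values()]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
# A small p-value suggests the training mix deviates from the target
# population, signaling that the data set may need rebalancing.
print(f"chi-square={stat:.2f}, p={p_value:.3f}")
```

A significant result is only a prompt to investigate; decisions about rebalancing or stratified sampling should still involve domain experts.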
Safeguards to consider
To ensure that AI is (and remains) trustworthy, constant monitoring of algorithms’ techniques and the data they use will be necessary – as data evolves, algorithms’ validity and accuracy can fluctuate. To evaluate the trustworthiness and transparency of an AI algorithm, the following framework can serve as a valuable guide:
- Use representative data sources: Rich, diverse data sources that reflect the diversity of patients can train AI to extrapolate more realistic and relevant insights. Diverse data also reduces the influence of outlier data points.
- Ensure models are explainable: As a researcher or provider, you should be able to understand and explain how an AI model arrived at its conclusion. This is important both to minimize errors and to learn from the model for professional growth (see the occlusion-sensitivity sketch after this list).
- Adhere to privacy requirements: Ensure the data you use is anonymized or pseudonymized and used only with patient consent under strict protocols. Implement privacy-enhancing technologies, and keep employees up to date on privacy training to minimize the risk of breaches (a minimal de-identification sketch follows this list).
- Monitor models: Given the dynamic nature of healthcare, AI models should be monitored – and tweaked – to stay calibrated to current environments. Changing demographics should prompt developers to refresh the data used to train AI and ensure optimal impact (a simple drift check is sketched below).
- Score their performance: Evaluate outcomes against both other medical imaging models and existing clinical processes. Evaluations should offer substantive feedback, with key takeaways and actionable insights, to drive meaningful improvements (a brief scoring sketch appears below).
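For the explainability safeguard, one model-agnostic option is an occlusion-sensitivity map: mask regions of the image and watch how the prediction changes. Below is a minimal sketch; `predict` is a hypothetical stand-in for your own model’s inference call, returning the probability of the class of interest for a single 2D image.

```python
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 16) -> np.ndarray:
    """Mask square patches and record how much the prediction drops."""
    baseline = predict(image)
    heat = np.zeros(image.shape, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()
            # A large drop means the model leaned heavily on this region.
            heat[y:y + patch, x:x + patch] = baseline - predict(occluded)
    return heat
```

Gradient-based methods such as Grad-CAM run faster on deep networks, but occlusion maps need no access to model internals and are easier to explain to clinical colleagues.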
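For the privacy safeguard, a de-identification step might look like the sketch below, which uses the open-source pydicom library. It scrubs only a few common identifiers; a production pipeline should implement a complete profile such as the DICOM standard’s confidentiality profiles (PS3.15).

```python
import pydicom

def deidentify(in_path: str, out_path: str) -> None:
    """Blank common patient identifiers and drop private tags."""
    ds = pydicom.dcmread(in_path)
    for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
        if keyword in ds:
            setattr(ds, keyword, "")  # replace the value with an empty string
    # Vendor-specific private tags can also hide identifying information.
    ds.remove_private_tags()
    ds.save_as(out_path)
```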
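For monitoring, one lightweight drift check is to compare the distribution of a logged per-scan feature (mean intensity, patient age, slice thickness) between training-era and recent data. This sketch uses a two-sample Kolmogorov–Smirnov test; the feature values here are simulated placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=15.0, size=500)  # training-era values
current = rng.normal(loc=110.0, scale=15.0, size=500)    # recent values

stat, p_value = ks_2samp(reference, current)
if p_value < 0.01:
    # The two samples likely come from different distributions:
    # re-validate (and possibly retrain) the model before continued use.
    print(f"Possible drift: KS statistic={stat:.3f}, p={p_value:.4f}")
```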
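And for scoring, a minimal comparison of a model against an existing baseline on the same held-out labels might look like this; the labels and scores are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # held-out ground truth
model_scores = np.array([0.2, 0.8, 0.7, 0.3, 0.9, 0.4, 0.6, 0.85])
baseline_scores = np.array([0.4, 0.6, 0.5, 0.5, 0.7, 0.45, 0.55, 0.6])

# Higher AUC means the model ranks positive cases above negative ones
# more reliably.
print("model AUC:   ", roc_auc_score(y_true, model_scores))
print("baseline AUC:", roc_auc_score(y_true, baseline_scores))
```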
Employing trustworthy AI with Flywheel
Flywheel offers researchers ways to aggregate disparate datasets, increase efficiency, and speed the path to new treatments and therapies. Our medical imaging AI platform empowers teams from academic institutions to pharma to develop trustworthy AI in several ways:
- Collect, curate and standardize your data through Flywheel Core.
- Incorporate vetted AI algorithms through Flywheel Gears.
- Supplement your existing data with datasets from like-minded institutions through Flywheel Exchange.
- Develop and compare AI models against your data using integrations with Azure Machine Learning Studio and NVIDIA MONAI.
- Create audit trail reports with our Validated Core.
To learn more about how Flywheel works and how we can help you enable trustworthy AI, get in touch.