Artificial intelligence, or AI, is a hot topic across many industries as new technologies that simulate human intelligence emerge each day. In healthcare, AI is making cutting-edge advancements possible and transforming patient care. However, as the current and potential uses of AI in healthcare are explored, there is still much to learn about its benefits as well as its ethical, privacy, and regulatory considerations.
The term AI is everywhere right now, but what exactly is artificial intelligence? Dr. Crotty explained that it is a form of advanced computing and mathematics designed to think and reason like a person, presenting, at face value, as indistinguishable from a human being.
Modern AI can be classified into two categories: narrow AI, which is purpose-specific, and artificial general intelligence, which describes systems that can think and reason much like humans. In healthcare, narrow AI is already in wide use, supporting tasks such as electrocardiogram (EKG) interpretation and medical image analysis. AI is also behind important developments like computer-assisted mammograms and diagnostic tools such as stroke risk assessment.
According to Dr. Crotty, it is the possibilities of artificial general intelligence in healthcare that have been gaining the most attention and excitement recently. An AI technology of this nature could, for example, process information from medical charts, medical images, and DNA data to build an understanding of a complicated clinical case, a capability that could be available to healthcare professionals in the not-so-distant future.
Considering the future of AI in healthcare, Dr. Crotty noted that AI is already being used to improve patient care and make healthcare more accessible and efficient. One such use is remote monitoring, where AI helps patients manage conditions like diabetes and high blood pressure in the comfort of their homes, with human oversight, of course. In another example, AI can enhance diagnostics by analyzing images of the eye to assess cardiovascular risk and other health indicators with high accuracy.
Ambient intelligence, AI that processes the audio of a physician-patient interaction, is one of the more commonly used AI tools in clinical settings, and one Dr. Crotty adopted early on. “For the most part, patients are appreciative and welcoming of it, and of course, it has all of the security and electronic data safeguards as the rest of their electronic health information, but generally, patients are welcoming because it’s helping me do my job to care for them better.”
Patient privacy is critical as AI becomes more integrated into healthcare. Dr. Crotty shared that information privacy is a common and understandable concern for patients, and that administrative, legal, and technical protections are in place to safeguard patient information.
“There are a lot of legal safeguards around healthcare data but there are also legally permissible ways to share data, such as with insurance companies, but they have very strict legal agreements around who can share what, when, and for what purpose,” Dr. Crotty said.
As the use of AI continues to grow in healthcare, the topic of bias is important to consider. As Dr. Crotty explained, AI technologies rely on the data they are trained with, and healthcare datasets often contain biases, such as underrepresentation of certain populations or skew toward those who are already sick.
To address these biases, efforts are being made to ensure inclusivity in research data collection. The All of Us Research Program at Froedtert & the Medical College of Wisconsin, for example, aims to build a large, diverse health database that is more representative of the population.
Additionally, organizations should assess their AI algorithms locally to make sure they apply to the specific patient populations being served. Patients and clinicians can also advocate for unbiased healthcare by questioning how AI models and predictions apply to their specific cases and by encouraging more inclusive data collection practices.
As AI in healthcare quickly evolves, the federal government is actively developing new regulations for its use, and various agencies are providing guidance on how AI should be implemented. Dr. Cate emphasized the importance of transparency, validation, and ongoing assessment and monitoring of AI systems. “We want to move at a pace that is responsible… and at the same time, these (AI technologies) can provide help today, so we want to make sure we’re not too slow because they can be very helpful to people not in an abstract sense, but in a very real sense. We need to strike that balance.”
Want to hear more from Dr. Crotty about the use of AI in healthcare? Watch the full episode of Coffee Conversations with Scientists here: