The AI Prescription: Innovation, Responsibility, and the Future of Patient Care
Source: YaleNews
Author:
Artjom Gavryshev
Editor:
Aya Samaha
At some point in life, we’ve all been caught by a sudden sharp pain in the chest, prompting a frantic Google search that convinces us we have multiple untreatable diseases and little chance of survival. While this scenario seems humorous, it highlights a broader issue: generalized symptoms and inaccurate inputs produce inferior information outputs. And while we place our trust in the healthcare system, the reality is unsettling: in the U.S., nearly 1 in 4 hospitalized patients experiences a harmful event, and about one-third of patients over 70, as well as more than half of patients over 85, leave the hospital in a worse state than they arrived. These challenges underscore the urgent need for innovative solutions to enhance healthcare efficiency and accuracy, especially in the U.S. healthcare system.
Artificial Intelligence (AI) could be a viable response to such inefficiency. AI is transforming medicine by improving early disease detection and making hospital administration more efficient. Despite heavy skepticism, AI is becoming increasingly sophisticated at executing tasks traditionally performed with human intelligence. In medicine, AI has great potential to enhance efficiency, accuracy, and scalability. From early-stage disease detection to drug development and administrative automation, AI is set to reshape the landscape of modern medicine and revitalize healthcare, with its market projected to reach nearly $200 billion by 2030.
Current Landscape
Current trends in the implementation of AI in medicine suggest it is shifting healthcare from a reactive to a proactive model: enhancing the capacity to predict medical emergencies, chronic diseases, and other complications further in advance, giving medical staff a time advantage to respond adequately. Wearable devices, such as Apple Watches and Fitbit trackers, collect real-time health data, which AI can analyze to detect abnormalities and encourage healthier behavior. For example, AI-driven analytics can identify individuals at high risk of developing sepsis, allowing healthcare systems to take preemptive measures such as moving high-risk patients to high-acuity units before symptoms worsen.
Current Administrative Shortcomings
Another acute problem in healthcare is administrative burden, which not only imposes a significant strain on healthcare providers but also contributes to physician burnout and, in turn, underperformance and higher turnover. Annually, this burden is estimated to cost the U.S. healthcare system in excess of $260 billion. In 2022, the U.S. spent $4.5 trillion on healthcare, with administrative spending estimated at roughly $1 trillion, and administrative costs accounting for 30% of excess spending. In parallel, studies indicate that physicians would require 26.7 hours per day to fulfill all their clinical and administrative responsibilities, a stark reflection of the untenability of current healthcare workflows. With 90% of clinicians indicating that excess administration in healthcare contributes to burnout, revitalization should be considered.
The current U.S. administrative system in healthcare is visibly flawed. Per capita, the U.S. spends $937 on healthcare administration, four times more than any other nation. The higher spending is matched by the complexity of the insurance system and the multi-layered structure of healthcare coverage, such as Medicare and Medicaid. With roughly 320,000 health plans in the U.S. and a 14-step payment process for each transaction, the system requires immense human input.
AI-Powered Wearables: Proactive Healthcare Monitoring
AI is seeing broader and more sophisticated implementation in wearable devices. These devices not only collect data but often analyze it with built-in algorithms to recognize worrying trends or anomalies in real time. In a study by the American Heart Association, using wearables for heart monitoring led to earlier detection of atrial fibrillation (AFib), a serious arrhythmia that increases stroke risk fivefold. Catching AFib early through wearable alerts allowed patients to get treatment sooner, preventing severe complications and reducing the burden on healthcare systems. In hospital settings, studies have shown that using wearables to automatically log patient vitals significantly eases staff workload. For example, during the COVID-19 pandemic, a Vietnamese hospital deployed wearable pulse oximeters for continuous oxygen monitoring; this automated vital-sign collection reduced healthcare staff workload by eliminating frequent manual checks. Proactive monitoring means clinicians spend less time reacting to crises and more time intervening early. In short, AI-driven wearables improve patient outcomes through real-time surveillance, while also reducing the downstream administrative load of managing advanced illness or emergency care that early detection might have avoided.
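To make the idea concrete, the kind of real-time anomaly detection a wearable might run can be sketched in a few lines of Python. This is only a toy illustration with invented data, a simple rolling z-score, and arbitrary thresholds; commercial devices use far more sophisticated, clinically validated algorithms (for AFib, typically beat-to-beat interval irregularity rather than raw heart rate):

```python
from statistics import mean, stdev

def flag_anomalies(heart_rates, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    A toy rolling z-score detector: each new reading is compared against
    the mean and standard deviation of the previous `window` readings.
    """
    alerts = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(heart_rates[i] - mu) / sigma > threshold:
            alerts.append((i, heart_rates[i]))
    return alerts

# Illustrative resting heart-rate stream with one abrupt spike
stream = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62, 110, 62, 63]
print(flag_anomalies(stream))  # [(10, 110)]
```

The point of such on-device logic is that the alert fires the moment the anomaly occurs, rather than waiting for a clinician to review logged data later.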
AI’s Role in Streamlining Administrative and Clinical Workflows
AI-driven automation offers a solution by streamlining administrative functions, optimizing resource allocation, reducing clinician workload, and eliminating the need for constant manual oversight and procedural assessments. For instance, at Northwell Health, a major New York health system, delegating patient scheduling to AI has decreased scheduling conflicts by 20% and boosted staff satisfaction by 15%. Moreover, AI-driven electronic health record (EHR) systems automate documentation, enabling clinicians to allocate more time to direct patient care. Mount Sinai Hospital in Toronto uses AI-automated medical record transcription, freeing an average of 30 extra minutes per patient. Allowing AI to access and analyze patient data, from appointment scheduling to record transcription for patients, staff, and supplies, boosts efficiency and productivity. A broader shift to AI in healthcare administration could save up to 30% of administrative costs, or up to $300 billion, within the next five years.
AI can also be a solution to the widening gap in the global healthcare workforce. In 2013, the European Union (EU) reported a shortfall of 1.6 million healthcare workers, a deficit projected to reach 4.1 million by 2030 if the status quo holds. Achieving the necessary growth rate of more than 2% annually has proven challenging, making AI an essential tool for augmenting healthcare delivery. AI-enabled clinical decision support systems (CDSS) can assist less experienced practitioners in diagnosing and managing conditions with greater accuracy, ensuring that patients continue to receive adequate care even in understaffed medical facilities.
These inefficiencies also extend to emergency response times, often a determining factor in patient survival. According to one study, every 10 km straight-line increase in distance from a person’s home to the nearest hospital is associated with a 1% absolute increase in mortality, and each minute of ambulance delay has been found to increase mortality risk by 8-17% across all emergencies. AI-reconfigured digital emergency services could better interpret patients’ symptoms and alert nearby ambulances in advance. By leveraging AI to analyze distress signals, prioritize cases, and alert emergency responders preemptively, healthcare systems could significantly reduce preventable deaths.
Regulatory, Ethical, and Bias Challenges in AI Healthcare Adoption
While AI has the potential to transform patient care, it comes with an array of challenges that inhibit its wide-scale implementation.
In the U.S. policy framework, the Food and Drug Administration (FDA) oversees the approval and monitoring of AI-driven medical technologies. The FDA’s current regulatory pathways were built for traditional medical devices and drugs, not for rapidly evolving AI software. The agency itself has noted that its “traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies,” meaning even minor algorithm updates might trigger new reviews under existing rules. This can lead to lengthy approval timelines for AI-driven tools, slowing their deployment. For example, an AI diagnostic system may require iterative improvements, but each change could necessitate fresh FDA clearance, creating delays and discouraging frequent updates.
In parallel, AI models may reflect biases present in their training data, leading to disparities in diagnosis and treatment, particularly for underrepresented groups. AI systems need immense data pools to learn from, in which entire segments of the population can be misrepresented or absent. For instance, a seriously flawed algorithm developed by Optum, a health innovation company owned by UnitedHealth Group, learned to suggest that Black patients be given half the amount of care given to equally sick white patients. Reinforcement of systemic biases exacerbates existing inequalities and hinders healthcare service delivery to marginalized groups on a catastrophic scale.
Large-scale data collection and training raise another problem: privacy and ethical concerns. Acknowledging that AI thrives on data, we must carefully scrutinize its sourcing and handling. The Health Insurance Portability and Accountability Act (HIPAA), written in 1996, fails to cover health data collected by modern AI tools. While HIPAA regulates how healthcare providers handle patient data, it does not apply to consumer health apps, wearable AI, or third-party tech companies. This means companies like Google, Apple, and Fitbit can legally collect and use personal health metrics, such as heart rate, sleep patterns, and exercise data, with relatively few restrictions so long as they are not medical providers. The spread of personal data across many companies also increases the likelihood of leaks: one unsecured database exposed over 61 million records related to Apple and Fitbit fitness trackers, compromising user privacy. Such failures highlight the broader need to strengthen data protection law.
Steps Toward Responsible AI in Healthcare
To unlock AI’s full potential while minimizing risks, clear, proactive steps must be taken in policy, regulation, and technological safeguards.
The FDA must catch up. Expanding current regulatory frameworks, such as the Total Product Lifecycle approach and Predetermined Change Control Plans for AI software, would allow developers to pre-specify potential changes to their algorithms after FDA approval, reducing the long waiting times that come with new submissions. These amendments should be considered at the federal level to reduce the current inconsistency among state laws on data consent.
Policymakers must expand HIPAA’s scope to cover AI-driven health technologies, introduce stricter protections for personal health data, and establish a clear record of who is accessing patient data and why.
While HIPAA allows the use of de-identified patient data without consent, AI can potentially re-identify anonymized data given enough cross-reference information. To address this challenge, we should require explicit patient consent for using even de-identified data in AI development, an extra safeguard not mandated today. Obtaining consent from millions of patients is logistically difficult, however, so streamlined mechanisms are needed. For instance, standardized digital consent mechanisms, such as a universal opt-in framework for AI use in healthcare, would allow patients to control how their data is used without excessive administrative burden.
To address biases in the datasets AI software is trained on, proper checks and audits of the data are necessary before deployment. Independent fairness audits should be required before AI deployment, assessing whether models perform equally well across racial, socioeconomic, and gender groups. One initiative focused on eliminating biases in datasets is STANDING Together, which aims to curate the datasets used in healthcare toward more equal and fair representation. Expanding such programs and establishing regulatory standards for bias mitigation would ensure AI-driven decisions do not reinforce existing disparities but instead promote a more inclusive and fair healthcare system.
Lastly, the use of AI in end-of-life patient care raises serious ethical dilemmas about the morality of its decision-making. Machine learning models can now predict the likelihood of a patient dying within a certain timeframe, potentially prompting earlier hospice referrals or discussions about withdrawing aggressive treatment. Acting on such predictions prematurely could demoralize both patients and staff, producing poor outcomes. To navigate these challenges, AI models used in end-of-life care should be treated as decision-support tools rather than determinative predictors: for now, a helping hand to medical professionals rather than a stand-alone decision-maker.