Artificial Intelligence (AI) promises to transform industries, and in rapidly digitizing healthcare it could transform lives. While AI's potential to drive medical advances, streamline processes, and reduce costs is impressive, it also presents challenges that require careful consideration.
A recent JAMA study of 450 doctors, nurses, and physician assistants highlighted both the potential benefits and the pitfalls of AI. Clinicians were shown cases of patients hospitalized with acute respiratory failure. On their own, clinicians diagnosed these cases with about 73% accuracy; with the help of a standard AI model, their accuracy rose by 4.4 percentage points, demonstrating how AI can improve patient outcomes.
However, the same study also revealed AI's potential shortcomings. When clinicians reviewed cases accompanied by systematically biased AI model predictions, their accuracy fell 11.3 percentage points below baseline. Even though these cases included explanations of the factors the model considered, the model's bias still skewed potential patient diagnoses.
AI promises to fundamentally change the continuum of care for providers, payers, doctors, and patients alike. To ensure these changes are beneficial, however, we must establish ethical guidelines for the safe, secure, and effective use of AI. While we are excited about the advancements and improvements on the horizon, organizations should take a deliberate approach to bringing AI into their processes. By mapping out ethical boundaries during implementation, they position themselves to catch and remove potential bias and inaccuracies.
Here at InterSystems, we believe that upholding transparency, responsibility, and explainability is integral to safeguarding patient safety and welfare in AI-powered healthcare products. Creating transparency around AI-generated content and its source data, including where that content has been modified, instills trust and ensures clarity around the use of AI in healthcare settings. To that end, InterSystems has created its own framework for advancing responsible AI across our business, guided by principles that foster safe and effective innovation:
Guarding Privacy and Data Ownership:
AI development and deployment must prioritize privacy and data ownership. Quality AI relies not only on good data but also on ethically managed data that respects privacy and ownership rights while meeting compliance requirements. Upholding these principles maintains data protection standards in AI development and builds trust among stakeholders.
Transparency and Responsibility:
Transparency, responsibility, and explainability are crucial to AI integration. Transparent use of AI in healthcare products fosters clarity and trust, and stakeholders must be able to fully understand AI-generated outcomes and their context in order to safeguard patient welfare and safety.
Augmenting Human Potential:
AI should enhance human capabilities, not replace them. Education and deployment strategies should focus on augmenting human creativity and decision-making. This approach supports human workers and mitigates negative impacts on employment dynamics.
Avoiding Harmful Bias and Discrimination:
Everyone working with AI must combat harmful bias. Drawing on diverse data sources and prioritizing fairness help mitigate harm to individuals and groups, and continuously auditing for bias ensures the technology improves over time (a simple example of such an audit follows this list of principles).
Accountability for Safety and Security:
Rigorous governance is crucial, particularly for AI. Companies should validate AI systems through evidence-based approaches and collaboration, ensuring quality patient care while upholding safety and security standards.
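To make the bias-auditing principle above concrete, here is a minimal sketch of what a recurring subgroup audit might look like. It is illustrative only: the record format, the group labels, and the five-point tolerance are assumptions made for the example, not part of any InterSystems product or of the JAMA study cited earlier.

```python
# Minimal sketch of a recurring bias audit: compare a model's diagnostic
# accuracy across patient subgroups and flag gaps that exceed a tolerance.
# Record format, group labels, and threshold are illustrative assumptions.

from collections import defaultdict

# Hypothetical audit records: (subgroup, model_prediction, true_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]

def subgroup_accuracy(rows):
    """Per-subgroup accuracy of model predictions against ground truth."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in rows:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

TOLERANCE = 0.05  # flag accuracy gaps larger than 5 percentage points

acc = subgroup_accuracy(records)
gap = max(acc.values()) - min(acc.values())
for group, value in sorted(acc.items()):
    print(f"{group}: {value:.1%}")
if gap > TOLERANCE:
    print(f"WARNING: accuracy gap of {gap:.1%} exceeds tolerance; review model and data.")
```

In practice, an audit like this would run against held-out clinical data on a regular cadence, with any flagged gap triggering review of both the model and the data it was trained on.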
As we harness the power of AI to revolutionize healthcare, we should remember that advancements must benefit patients, providers, and society, not simply chase the latest hype cycle. Let's navigate this transformative journey together with integrity and purpose, ensuring that AI serves as an advisor, an assistant, and an enabler of healthcare innovation for years to come.