A point-of-view on how AI is transforming healthcare and the resulting ethical implications.
Along with predicting epidemics, diagnosing disease, and counselling patients, artificial intelligence (AI) is finally proving its worth in healthcare delivery, enabling a better patient experience.
By making sense of the unwieldy mass of medical data trapped in creaky healthcare systems, and by tapping into the collective knowledge gathered from thousands of healthcare providers and millions of patient visits, doctors can now start to analyze which treatments work best, and when.
AI can now recommend a course of action even in clinically challenging situations: assisting radiologists with routine cases, suggesting a first line of treatment before patients see a doctor, and helping to monitor health and medication in chronic conditions.
Changes in healthcare delivery and its ever-evolving expectations are forcing greater adoption of technology to manage patient information and assist in the decision-making process.
The need for data mining and improved analytics and decision-making has put AI at the heart of this transformation. Healthcare is becoming a leading beneficiary of the greater accessibility, relevance and actionability of information that AI brings.
A recent Infosys study of 1,600 IT and business decision-makers confirms the view of AI’s transformational power in healthcare: the pharmaceuticals and life sciences sector leads all other industry groups in current AI adoption.
The healthcare sector, which faces particularly tough challenges with technology advances, including concerns around privacy, data silos and ethics, has AI adoption levels ahead of industries such as retail and financial services. And the deployment of AI is projected to grow dramatically.
The prognosis is that pharmaceutical companies, medical device companies, providers, caregivers, and technology companies will eventually come together to improve overall healthcare beyond selling their individual offerings. When this happens, AI will be a key driver of ‘connected healthcare’, addressing patient needs end to end, and even helping to prevent people from falling ill.
According to the Infosys study, 52% of pharmaceutical respondents are building AI into the fabric of their companies and are beginning to see results. New innovations are changing the face of drug discovery by leveraging AI and machine learning for search and analysis, research, simulation studies and even hyper-targeting, potentially slashing the long lead time for launching a new drug in the market.
Purposefulness from ethics
However, as machine intelligence continues its inexorable march, the ethics of allowing machines into decision making is never far behind. Nowhere is this truer than in healthcare as the industry learns to face tricky dilemmas in adopting AI in medical diagnosis.
As long as algorithms are used to make recommendations to the medical practitioner based on input symptoms and histories, we are on firm ground. But if actual, non-curated medical advice is given by the machine, we are moving into potentially shaky territory.
Also, what happens if the algorithm makes a mistake, such as seeing a false pattern where none exists, or acquiring a bias because its input data is skewed or incomplete?
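The risk of acquiring a bias from skewed or incomplete input data can be made concrete with a minimal sketch (using hypothetical labels and proportions, not any real clinical dataset): a naive model trained on a sample where disease cases are rare can report high accuracy while detecting no disease at all.

```python
from collections import Counter

# Hypothetical training labels: 95% "healthy", 5% "disease" -- a skewed sample.
train_labels = ["healthy"] * 95 + ["disease"] * 5

# A naive model that simply learns the majority class from its training data.
majority_class = Counter(train_labels).most_common(1)[0][0]

# On a test set with the same skew, overall accuracy looks excellent...
test_labels = ["healthy"] * 95 + ["disease"] * 5
predictions = [majority_class] * len(test_labels)
accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)

# ...but recall on the disease class -- the cases that matter -- is zero.
disease_found = sum(p == "disease" == t for p, t in zip(predictions, test_labels))
recall = disease_found / test_labels.count("disease")

print(f"accuracy: {accuracy:.2f}")       # 0.95 -- looks impressive
print(f"disease recall: {recall:.2f}")   # 0.00 -- misses every sick patient
```

A headline accuracy figure can therefore mask exactly the kind of bias described above, which is one reason clinical AI evaluation needs metrics beyond accuracy.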
Finally, where does one draw the line? For instance, just because we now have a machine that can predict with 80% accuracy when a person with a defective heart will succumb to it, should we pass on that information?
In addition, while AI and machine learning rely solely on the analysis of millions of data points for diagnosis, a healthcare practitioner also draws on intuition and experience in making a medical decision. Going forward, are we willing to let go of the value that experience offers? These are tough questions that we must eventually work our way through.
In the Infosys survey mentioned earlier, 53% of pharmaceuticals and life sciences respondents said that their organization has fully considered the ethical implications of AI.
In my view, many ethical issues can be confronted by setting the boundaries for what AI can and cannot do. The key principle is to leverage the full power of AI in clinical settings to assist, enable and co-work with human professionals, who continue to remain in charge of big decisions, such as diagnosis and treatment.
It will benefit society to maintain an open mind about how decision-makers in healthcare organizations can work alongside AI and selectively rely on it to inform and improve care.
As an early leader in adopting the technology, the healthcare industry can help dispel the prejudices and myths surrounding AI, and build basic awareness and education among working professionals, in medicine and beyond.
The industry also needs to establish ethical standards and obligations for the organization, as well as metrics to assess the performance of AI systems.
As people displaced from their current roles by automation are retrained and reskilled for new ones, redirecting a significant share of that talent to operate and manage this ethical oversight will prove beneficial.
Finally, practitioners must allow adequate time for any issues in the system to surface. How long depends on many factors, including the maturity of the organization and the complexity of the technologies being deployed.
These measures will go a long way in ensuring that AI fulfills its promise to transform healthcare delivery not just efficiently, but purposefully.
Michael Breggar is a managing partner at Infosys Consulting, and runs the firm’s life sciences practice in North America. He has 25+ years of industry experience, having run advisory work for some of the biggest firms in the world in this space. Mike can be contacted via LinkedIn or at email@example.com.