This article was originally published on Medium and reprinted with permission by the author.
Also published on The Future Society’s site: http://www.thefuturesociety.org/perspectives/
Artificial Intelligence (AI) unlocks enormously beneficial innovations in healthcare. Personalized and precision medicine, more accurate and faster diagnostics, and accessible health apps increase access to quality medical care for millions. Chatbots offer 24/7 free therapy, wearables monitor biometric data in real time, and robotic devices improve surgical outcomes.
Leveraging data from doctors' visits, digital devices, and wearables, AI systems can take into account a patient's unique history, genetics, lifestyle, diet, environment, and even the bacterial composition of the gut. At Memorial Sloan Kettering Cancer Center, IBM Watson uses machine learning to suggest personalized medical treatments, drawing on analysis of scores of medical research papers, a myriad of drug interactions, and treatment outcomes for patients with similar genetic makeup, background, and cancer strains.
This rise of AI 'Healthtech' is enabled by developments in machine learning algorithms, the proliferation of digital and biometric data, increasing computing power, and advances in biological and medical sciences, including genomic sequencing.
New rewards, new risks
However, AI brings new policy and ethical dilemmas. The stakes are high: human health and lives are at risk. Healthcare brings the central trade-off of the "AI revolution" into sharp focus: how do we support beneficial innovation while minimizing risks?
- Equity and inclusion: Will AI democratize healthcare or serve elites, leading to inequality or even biological superiority?
- Data privacy and security: How should we trade off private and secure health data for medical breakthroughs and innovations?
- Interpretable and explainable AI: Is it necessary for ‘black box’ algorithms to be interpretable and explainable when lives are at risk?
- Algorithmic bias and representative data: How can we ensure training data are representative, so that machine learning algorithms are generalizable and safe for all?
- Liability and accountability: Who is responsible when harm is caused by an AI system or machine?
- Building trust in AI: How can we have confidence in treatment recommendations based on complex or 'black box' algorithms?
- Automation and unemployment: What is the impact of automation on employment for doctors, nurses, and medical professionals?
- Ethics of labor unions: Is it unethical for medical offices or hospitals to block technologies that are safer, better, or cheaper, just to protect jobs?
- Right to access: Where possible, should doctors be required to offer AI-enabled diagnostics or treatments if safer, better, or cheaper?
So far, no country has comprehensive regulations or concrete policies to manage these questions. The EU General Data Protection Regulation (GDPR) may help address data privacy and the interpretability of machine learning algorithms, but it has loopholes and its robustness is yet to be tested.
Let’s zoom in on three polarizing debates:
Equity and inclusion: AI for Democratization or Elites-Only?
AI can democratize access to high quality and affordable healthcare for millions, including rural and low-income communities and developing countries.
Remote diagnostic applications allow users to upload photos of snake bites or suspected skin cancers for real-time diagnosis and treatment suggestions, without having to travel long distances to see a doctor. In developing countries, approximately 330 million people live with heart disease. Livecare, a Romania-based startup, has developed a small wearable patch that, simply taped onto the chest, uses machine learning (specifically an LSTM recurrent neural network) to monitor heart conditions and provide feedback. Rural and low-income patients can avoid travelling potentially hundreds of kilometers, and paying high costs, to see a cardiologist.
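For readers curious what "an LSTM recurrent neural network" means in practice, here is a purely illustrative NumPy sketch of a single LSTM cell stepped over a toy signal window. It is not Livecare's actual model: the weights are random and untrained, and the sine wave merely stands in for a window of heart-sensor readings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; the four gates are slices of a stacked projection."""
    H = h_prev.shape[0]
    z = W @ np.atleast_1d(x_t) + U @ h_prev + b  # shape (4H,)
    i = sigmoid(z[0:H])          # input gate: how much new info to admit
    f = sigmoid(z[H:2*H])        # forget gate: how much old state to keep
    o = sigmoid(z[2*H:3*H])      # output gate: how much state to expose
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # updated long-term memory
    h = o * np.tanh(c)           # updated hidden state
    return h, c

def run_lstm(signal, W, U, b, hidden):
    """Run the cell over a 1-D signal; return the final hidden state."""
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x_t in signal:
        h, c = lstm_step(x_t, h, c, W, U, b)
    return h

# Random, untrained weights -- for illustration only.
rng = np.random.default_rng(0)
hidden, n_features = 8, 1
W = rng.normal(scale=0.1, size=(4 * hidden, n_features))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

# A sine wave standing in for one window of heart-sensor readings.
ecg_window = np.sin(np.linspace(0, 8 * np.pi, 200))
features = run_lstm(ecg_window, W, U, b, hidden)
risk_score = sigmoid(features @ rng.normal(size=hidden))  # toy logistic head
```

In a real system, the weights would be trained on labeled recordings, and the final hidden state would feed a trained classifier rather than a random logistic head; the recurrence is what lets the model pick up patterns that unfold over time, such as irregular rhythms.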
Innovation can boost inclusion and quality of life for more people. Low-cost or free technologies include therapy chatbots (e.g. Woebot), emotional therapy robots for the elderly (e.g. PARO), computer vision tools for the visually impaired, and exoskeletons and robotics for the physically impaired (e.g. ROBEAR). While the number of doctors is limited and public medical systems are over-stretched, healthcare delivered over digital devices can scale to reach millions of people.
Yet readers of Yuval Noah Harari's Homo Deus are aware of a host of new technologies that, if accessible only to an elite, could segment the human species in new ways. For the first time in human history, elites may become faster, smarter, stronger — biologically, genetically, or 'bionically' superior to others.
Life extension treatments (e.g. at Calico Labs or Human Longevity Inc.), gene editing, synthetic biology, and human enhancement and bionics could widen the gap between the physical and intellectual capabilities of humans. The same technologies that correct human deficiencies can be used to upgrade healthy bodies. Technologies to enhance human traits could push humanity toward transhumanism, opening the door to a next phase of evolution in which humans control our own augmentation.
Data privacy and security: Is healthcare worth the cost?
Sensitive health data, including genetic test results and biometrics, are used to train the machine learning algorithms behind new drug discoveries and cures, more accurate diagnostics, and personalized treatments. But at what price? How should policymakers trade off risks to privacy and security against opportunities for healthcare breakthroughs?
Data is susceptible to hacking and privacy breaches, such as the 2017 cyberattack on the United Kingdom's NHS. Meanwhile, biometric data collected from wearables can be hacked, or sold to public- or private-sector actors to target advertising — or real and "fake" news — for political or social campaigns. Anonymization and data protection regulation are a start, but fall far short of guaranteeing security. Blockchain technologies offer a more robust way to protect data from tampering.
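To see why blockchain-style designs are attractive for tamper protection, consider this minimal, hypothetical Python sketch: each record is linked to the hash of the previous one, so altering any earlier entry invalidates every hash that follows. (Real blockchain systems add digital signatures, consensus, and distribution across many parties; none of that is modeled here.)

```python
import hashlib
import json

def block_hash(prev_hash, record):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "data": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a hash chain, starting from an all-zero hash."""
    chain, prev = [], "0" * 64
    for rec in records:
        h = block_hash(prev, rec)
        chain.append({"prev": prev, "data": rec, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edit to earlier data breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Toy, anonymized readings -- illustrative data only.
records = [{"patient": "anon-17", "hr": 62}, {"patient": "anon-17", "hr": 121}]
ledger = build_chain(records)
assert verify(ledger)          # untouched chain checks out
ledger[0]["data"]["hr"] = 60   # tamper with an early reading...
assert not verify(ledger)      # ...and verification fails
```

The design choice worth noting is that integrity comes from recomputation, not secrecy: anyone holding the chain can detect tampering, which is why such structures suit audit trails for sensitive health records.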
Should public health records and data be provided, without monetary compensation, for the public benefit? At the World Economic Forum summit in Davos in January 2018, historian Yuval Noah Harari asked the audience:
“Does my data about my DNA, brain, body, life, belong to me, a corporation, government, or the human collective?”
A ‘Right’ to AI?
In 2015, the American Society of Anesthesiologists campaigned against an FDA-approved, safe, and cheap AI-enabled anesthesiology machine. Johnson & Johnson's Sedasys could deliver anesthesia to patients for $150–$200, compared to $2,000 for an anesthesiologist — one of healthcare's highest-paid specialties in the United States.
Should hospitals be required to offer patients the option of an AI diagnostic or treatment program that is statistically safer, more accurate, cheaper, or all of the above?
Is it unethical for hospitals or medical offices to fail to offer technologies shown to be safer, cheaper, or better, because they threaten human jobs? The anesthesiologists argued that the machine could not replace their skills, yet the threat of losing their jobs may well have been a main driver of the campaign. Skill degradation — as doctors lose opportunities to practice medicine — is a further concern, one that can deepen over-reliance on machines.
AI raises a host of new risks to manage alongside its innumerable benefits. Both are particularly pronounced in healthcare, where human lives are at stake. Governing the rise of AI, in healthcare and beyond, involves trade-offs between risks and rewards that implicate ethics, safety, and human values.
I originally presented this at City.AI’s Bucharest.AI meetup; more information from the other AI & Health speakers here!
Thank you for reading! Please comment if you liked the article and help share it!
Feedback is welcome:
Yolanda@ai-initiative.org // Twitter: @YolandaLannqist //
European Commission Press Release (25 April 2018), "Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines," http://europa.eu/rapid/press-release_IP-18-3362_en.htm.
Google Corporate Communications, "Google announces Calico, a new company focused on health and well-being," https://googlepress.blogspot.com/2013/09/calico-announcement.html.
PwC (2017), What Doctor? Why AI and Robotics Will Define New Health.
Wadhwa, Vivek, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, HarperCollins Publishers India.
Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow, 2015.
Yolanda is an AI Policy Researcher at The Future Society, a non-profit "think-and-do tank" incubated at Harvard Kennedy School to manage the rise of emerging technologies, including artificial intelligence, blockchain, IoT, and bioethics. She has a Master in Public Policy from the Harvard Kennedy School and a Bachelor's degree in Economics and European Studies from Barnard College, Columbia University, with Phi Beta Kappa honors.