By Panos Constantinides and David Fitzmaurice
The introduction of digital technologies such as robotic implants, home monitoring devices, wearable sensors and mobile apps in healthcare has produced significant amounts of data, which need to be interpreted and put to use by physicians and healthcare systems across disparate fields.
Most often, such technologies are implemented at the patient level, with patients becoming their own producers and consumers of personal data, which leads them to demand more personalised care.
This digital transformation has led to a move away from a ‘top-down’ data management strategy, which, as Anthony Chang, of CHOC Children’s and UC Irvine School of Medicine, says, “entailed either manual entry of data with its inherent limitations of accuracy and completeness, followed by data analysis with relatively basic statistical tools… and often without definitive answers to the clinical questions posited”.
We are now in an era of a ‘bottom-up’ data management strategy that involves real-time data extraction from various sources (including apps, wearables, hospital systems, etc.), transformation of that data into a uniform format, and loading of the data into an analytical system for final analysis.
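The three steps of that ‘bottom-up’ strategy - extraction, transformation and loading - can be sketched in a few lines of Python. The source names and record fields below are illustrative assumptions, not any real hospital system:

```python
# Minimal sketch of an extract-transform-load (ETL) flow, assuming
# hypothetical data sources and record fields for illustration only.

def extract(sources):
    """Pull raw readings from each source (app, wearable, hospital system)."""
    for name, records in sources.items():
        for record in records:
            yield name, record

def transform(name, record):
    """Normalise each reading into one uniform format."""
    return {
        "source": name,
        "metric": record["type"],
        "value": float(record["value"]),  # raw values arrive as strings
    }

def load(rows):
    """Stand-in for loading rows into an analytical system."""
    return list(rows)

sources = {
    "wearable": [{"type": "heart_rate", "value": "72"}],
    "app": [{"type": "weight_kg", "value": "80.5"}],
}

table = load(transform(n, r) for n, r in extract(sources))
print(table)
```

In a real deployment each step would of course be far richer, but the shape - many heterogeneous sources funnelled into one uniform analytical table - is the same.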
All this data, however, poses a serious challenge for physicians: the challenge of limitless choice. According to a white paper by Stanford Medicine, “the sheer volume of healthcare data is growing at an astronomical rate: 153 exabytes (one exabyte = one billion gigabytes) were produced in 2013 and an estimated 2,314 exabytes will be produced in 2020, translating to an overall rate of increase of at least 48 per cent annually.”
With so much data on the daily decisions of millions of patients about their physical activity, dietary intake, medication adherence, and self-monitoring (for example, blood pressure and weight), to name but a few, physicians are at a loss as to which data to focus on, what to search for, and towards which desired outcome.
Increased data storage, high computing power and exponential learning capabilities together enable computers to learn much faster than humans and address the challenge of limitless choice.
Artificial intelligence (AI) is the development of intelligent systems, capable of taking what AI think tank Future Advocacy says is “the best possible action in a given situation”.
To develop such intelligent systems, machine learning algorithms are required to enable dynamic learning capabilities in relation to changing conditions.
As Pedro Domingos points out in his book The Master Algorithm, machine learning takes many different forms and is associated with many different schools of thought, including philosophy, psychology, and logic (with learning algorithms based on inverse deduction), neuroscience and physics (with learning algorithms based on backpropagation), genetics and evolutionary biology (with learning algorithms based on genetic programming), statistics (with learning algorithms based on Bayesian inference) and mathematical optimisation (with learning algorithms based on support vector machines).
Each of these schools of thought can apply its learning algorithms to different problems. However, none of these algorithms is perfect at solving all possible problems, and none has reached a level of ‘superintelligence’ - as Nick Bostrom, of Oxford University, would describe it - that would be able to predict, diagnose and give recommendations for treating complex medical conditions.
Still, when competently combined - and provided they are fed the appropriate data to learn from - these algorithms can generate what has been called a ‘master algorithm’, which could potentially solve much more complex problems than humans can.
How will machine learning help deal with cardiovascular disease?
Machine learning can positively impact cardiovascular disease prediction and diagnosis by developing algorithms that can model representations of data, much faster and more efficiently than physicians can.
For example, currently, a physician who wishes to predict the readmission of a patient with congestive heart failure needs to screen a large but unstructured electronic health record (EHR) dataset, which includes variables such as the International Classification of Diseases (ICD) billing codes, medication prescriptions, laboratory values, physiological measurements, imaging studies, and encounter notes.
Such a dataset makes it extremely difficult to decide a priori which variables should be included in a predictive model and what type of methods should be applied in the model itself, as Kipp Johnson, of Mount Sinai Health System, and his team found.
Such predictive models can be produced with ‘supervised learning’ algorithms that require a dataset with predictor variables and labelled outcomes.
For example, a recent study investigated the predictive value of a machine-learning algorithm that “incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from physiological hypertrophy seen in athletes”.
The study’s results showed a positive impact of machine-learning algorithms in assisting in “the discrimination of physiological versus pathological patterns of hypertrophic remodelling… for automated interpretation of echocardiographic images, which may help novice readers with limited experience”.
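To make the idea of ‘supervised learning’ concrete - predictor variables paired with labelled outcomes - here is a toy sketch using a simple nearest-neighbour rule. The two features, their values and the labels are fabricated for illustration and bear no relation to the actual study’s algorithm or data:

```python
# Toy supervised learning: classify a new case by the label of its
# closest training example. Features and labels are invented.

def nearest_neighbour_predict(train, new_point):
    """Return the label of the training example closest to new_point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], new_point))
    return label

# Hypothetical (wall thickness, strain) measurements -> labelled outcome
train = [
    ((1.6, -10.0), "pathological"),
    ((1.5, -11.0), "pathological"),
    ((1.3, -18.0), "physiological"),
    ((1.2, -19.0), "physiological"),
]

print(nearest_neighbour_predict(train, (1.55, -10.5)))  # → pathological
```

Real studies train on thousands of cases and far richer feature sets, but the principle is the same: labelled examples in, a predictive rule out.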
A separate set of algorithms used in cardiology are called ‘unsupervised learning’ algorithms, which focus on discovering hidden structures in a dataset by exploring relationships between different variables.
For example, one study investigated the use of such learning algorithms to identify temporal relations among events in electronic health records; these temporal relations were then examined to assess whether they improved model performance in predicting initial diagnosis of heart failure.
Thus, results from unsupervised learning algorithms can feed into supervised learning algorithms for predictive modelling.
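In the spirit of that study, a toy sketch of an unsupervised step: counting which events tend to precede which others across patient records. The event names are invented, and this is far simpler than the study’s actual method:

```python
# Toy unsupervised discovery of temporal relations: count ordered
# event pairs across hypothetical patient records.
from collections import Counter

def temporal_pairs(sequence):
    """Yield (earlier_event, later_event) pairs from an ordered record."""
    for i, earlier in enumerate(sequence):
        for later in sequence[i + 1:]:
            yield (earlier, later)

records = [
    ["hypertension", "oedema", "heart_failure"],
    ["hypertension", "heart_failure"],
]

counts = Counter(p for rec in records for p in temporal_pairs(rec))
print(counts.most_common(1))  # → [(('hypertension', 'heart_failure'), 2)]
```

Frequently recurring pairs such as these could then be added as candidate features to a supervised predictive model, which is the hand-off the paragraph above describes.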
A third set of algorithms are reinforcement learning algorithms, which Johnson says “learn behavior through trial and error given only input data and an outcome to optimize”.
Designing dynamic treatment regimens, such as managing the rates of re-intubation and regulating physiological stability in intensive care units, is one area where the application of reinforcement learning algorithms may hold great potential.
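As a toy illustration of the trial-and-error loop Johnson describes, the sketch below uses an epsilon-greedy bandit to learn which of two hypothetical regimens yields better outcomes. The regimen names, success probabilities and reward function are all invented:

```python
# Toy reinforcement learning: learn action values by trial and error,
# given only actions taken and outcomes observed. All data is simulated.
import random

random.seed(0)

actions = ["regimen_a", "regimen_b"]
value = {a: 0.0 for a in actions}   # running estimate of outcome per action
counts = {a: 0 for a in actions}

def outcome(action):
    """Hypothetical stand-in for an observed clinical outcome."""
    success_prob = {"regimen_a": 0.3, "regimen_b": 0.8}
    return 1.0 if random.random() < success_prob[action] else 0.0

for step in range(500):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    r = outcome(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]  # incremental mean update

print(max(actions, key=value.get))  # the regimen with the better estimate
```

Real dynamic treatment regimens involve evolving patient states and sequential decisions rather than a single repeated choice, but the core loop - act, observe an outcome, update - is the one quoted above.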
What are the benefits of using AI in cardiology?
Evidently, the potential benefits of AI in cardiology are enormous. However, such benefits are not without challenges.
First, there are clear benefits for improving work productivity. There are currently fewer physicians to care for an ever-increasing ageing population.
AI can support, rather than replace, physicians, generating time and cost savings for them and their patients, and enabling more compassionate and thorough interactions.
However, as more tasks become automated, it is possible that fewer physicians will be required to work, or that fewer will do so on a full-time basis, since many tasks could be delivered through platforms by part-time, freelance physicians.
This, according to the Taylor review, may impact the relationship between patients, physicians and administrative staff in healthcare systems.
Second, as discussed earlier, machine-learning algorithms can scan through larger volumes of health data enabling faster identification of predictive, diagnostic, as well as treatment options for different cardiovascular diseases.
This feeds into the current demand for more personalised care. At the same time, however, many patients now express the need for more transparency about the types of data shared, who it is used by and for what purpose.
With the General Data Protection Regulation (GDPR) now in full force across Europe, there are important implications for the security and privacy of the data from which machine-learning algorithms keep learning.
The recent scandal involving Google DeepMind and the Royal Free London NHS Foundation Trust, which led to the transfer of identifiable patient records across the entire trust without explicit consent, is a case to be avoided.
As a study has shown, the architecture of the digital infrastructure supporting AI and machine learning across different localities and between applications and platforms needs to be carefully designed, in order to maintain the security and privacy of healthcare data.
Beyond the issue of seeking consent before any access and use of data, there are also issues around the transparency of algorithmic objectives and outcomes (how do algorithms work and to what end) and of the accountability for the potential misuse of data.
Will AI help physicians reduce mistakes?
As a recent report has pointed out, informed consent from all possible patients may not always be feasible because of the way data is shared across platforms and for different purposes. Algorithmic transparency, even though sought after, may be difficult to achieve because of the dynamic learning and evolution of algorithms. And accountability for data use may raise challenging ethical questions if, in the end, such data use leads to improved patient outcomes. What matters most is the clinical efficacy of algorithms and their use of data.
Finally, although both AI and physicians can make errors in their clinical judgment, whether through never having seen a particular case before or through poor training, combining the two - AI and human expertise - can reduce the number of clinical errors.
In this context, there are opportunities for revisiting the training of individual physicians, as well as multi-disciplinary teams, to learn to interact with AI.
We believe this is of paramount importance and new policies should be developed towards an improved and enhanced training of physicians, which will also enable more effective and efficient clinical judgment.
In conclusion, it is important that we avoid placing ‘exaggerated hope’ on the potential impact of AI, but also that we do not fall victim to ‘exaggerated fear’ because we cannot identify with the technology.
As Joanna Bryson and Philip Kime say in their paper: “The real dangers of AI are no different from those of other artefacts in our culture: from factories to advertising, weapons to political systems.
"The danger of these systems is the potential for misuse, either through carelessness or malevolence, by the people who control them.”
The possibilities of improving clinical efficacy and healthcare outcomes through AI are enormous, but we need to be aware of the associated risks and challenges and try to minimise those through multi-disciplinary research, and renewed legal and ethical policies.
Panos Constantinides is Associate Professor of Digital Innovation and Academic Director of the AI Innovation Network. He lectures on Strategic Global Outsourcing and Offshoring on the Distance learning MBA, Digital Innovation in the Healthcare Industry on the Executive MBA and Digital Business Strategy on the MSc Management of Information Systems & Digital Innovation.
Follow Panos Constantinides on Twitter @C_Panos.
David Fitzmaurice is Professor of Cardiorespiratory Primary Care at Warwick Medical School.