When it comes to increasing the accuracy of medical diagnoses, reducing worker burnout, and providing cheaper universal healthcare, AI seems like a natural solution. AI appears to have secured a prominent role in the medical industry as both entrepreneurs and policymakers extol the immense potential of incorporating machine learning and deep neural networks into a doctor’s daily routine. Decades’ worth of medical data collected from every appointment, procedure, and survey sit untouched in databases while algorithms wait hungrily for training data.

The most prominent applications of AI in predictive diagnostics are image-based diagnosis and preemptive prediction through machine learning. Yet amidst the bustling excitement over AI in healthcare, the medical industry maintains its slow and sluggish pace in adopting new technologies. The fear that medical professionals will become obsolete, the lack of clear financial incentives for healthcare providers, blurred boundaries around which entity takes the blame for failures, the black-box nature of AI algorithms, and concerns over data privacy are largely to blame for this hesitation.

Image-based diagnostics are primarily carried out by radiologists and pathologists, who visually examine scans and samples to discern abnormalities. Computer vision technologies have enhanced these predictions, catching the minute abnormalities even a trained specialist might miss. CheXNet, an algorithm recently developed by Stanford researchers, detected pneumonia more reliably than the radiologists it was benchmarked against, and was extended to flag 13 other thoracic conditions. Computer-aided detection (CAD) has played a growing role in hospitals for the past 40 years.
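To make the mechanics concrete, here is a minimal sketch of how a CheXNet-style classifier might be structured: an ImageNet-pretrained DenseNet-121 backbone with a multi-label head, one sigmoid output per condition. This is an illustration in Python with PyTorch, not the Stanford implementation; the preprocessing values, label count, and training details are assumptions drawn from common practice.

```python
# Sketch of a CheXNet-style classifier (illustrative, not the Stanford code):
# a DenseNet-121 backbone fine-tuned to output probabilities for 14 conditions.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CONDITIONS = 14  # pneumonia plus 13 other labels (assumed setup)

# Load an ImageNet-pretrained DenseNet-121 and replace its classifier head
# with a multi-label output: one sigmoid probability per condition.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Sequential(
    nn.Linear(model.classifier.in_features, NUM_CONDITIONS),
    nn.Sigmoid(),
)

# Standard ImageNet-style preprocessing applied to each X-ray before inference.
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(image):
    """Return per-condition probabilities for a single PIL chest X-ray."""
    model.eval()
    with torch.no_grad():
        batch = preprocess(image.convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
        return model(batch).squeeze(0)                         # (NUM_CONDITIONS,)
```

In practice such a model would be fine-tuned on labeled chest X-rays before its outputs mean anything clinically; the sketch only shows the shape of the pipeline, not a validated diagnostic tool.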

However, as CAD increases in accuracy and usage, radiologists and pathologists risk becoming obsolete. AI can provide cheaper and more accurate predictions, but in turn the healthcare industry loses the financial gains of added procedures and sheds jobs. Additionally, doctors are still held accountable for, and obligated to handle the repercussions of, a misdiagnosis. Since medical professionals face large consequences for any failure, the healthcare industry is slow to place full trust in AI.

Currently, CAD plays a supporting role to the medical professional: it adds insight to the diagnosis of a pathologist or radiologist without replacing their expertise. While this arrangement keeps healthcare workers relevant and leaves them clearly culpable for a misdiagnosis, it undermines the financial benefits of an AI solution, and image-based diagnosis remains expensive for the patient. In order to maximize the benefits of AI systems that are steadily outdoing their human counterparts, the healthcare industry must accept the role of AI in diagnostics and redirect its workers towards roles that require a human touch.

Others have proposed that radiologists and pathologists shift their roles to that of “information specialists” who are equipped to manage data rather than conduct the analysis themselves. Patients and policies must separate the process of diagnosis from treatment, holding AI developers accountable for the algorithms and hospitals responsible for treatment.

There is growing excitement surrounding the use of machine learning on medical data for the preemptive diagnosis of disease. Models can be trained on a patient’s medical history, family medical history, genetic sequences, and data from patients with similar symptoms to provide accurate predictions long before a health issue arises. Facial recognition software has been employed to recognize rare genetic diseases that correlate with certain phenotypes, and deep learning techniques can detect cancerous tissue early on. Early detection plays a key role in accurately diagnosing a health issue and in the survival of patients. However, massive data leaks, inconsistencies in data entry between healthcare providers, and the black-box nature of these algorithms dampen the zealous flame of AI in preemptive diagnostics.
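As an illustration of what such preemptive prediction can look like in practice, the sketch below trains a risk model on structured patient features and scores how well the predicted risk ranks patients who later developed a condition. The file name, feature columns, and target label are hypothetical; the point is only that history-derived features can be turned into a ranked risk score.

```python
# Illustrative sketch of preemptive risk prediction from structured patient data.
# "patient_history.csv" and its columns are assumed for the example.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per patient, with history-derived features
# and a binary label marking whether the condition later developed.
records = pd.read_csv("patient_history.csv")
features = records[["age", "bmi", "systolic_bp", "hba1c", "family_history"]]
labels = records["developed_condition"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

# Gradient-boosted trees are a common, reasonable baseline for mixed clinical features.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Evaluate how well the predicted risk ranks the patients who actually fell ill.
risk = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, risk))
```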

Medical data is by nature sensitive, and healthcare providers are already under fire for internal failures in managing it. Medical professionals are reluctant to loosen their grip on such data in the face of frequent data breaches. Policies should enforce higher standards for security and privacy related to healthcare data and ensure that more resources are funnelled towards this purpose. Additionally, different healthcare providers use separate, often incompatible data-entry conventions, which places additional limitations on the usability of the data. New tools such as Google Cloud’s Healthcare API work to counteract this barrier, but such technologies are still in their infancy.
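The data-entry problem is easiest to see with a small example. The sketch below shows two hypothetical provider record formats being normalized into one shared, FHIR-like Patient structure before any model ever sees the data; the field names are invented for illustration, and this is not the Google Cloud Healthcare API itself, only the kind of harmonization such tools aim to automate.

```python
# Illustrative sketch: two providers record the same patient differently,
# so both formats are mapped to one shared, FHIR-like Patient dict.
# The provider field names below are assumptions for the example.
from datetime import datetime

def normalize_provider_a(rec: dict) -> dict:
    # Provider A stores "DOB" as MM/DD/YYYY and the full name in one field.
    family, given = rec["full_name"].split(",")
    return {
        "resourceType": "Patient",
        "name": [{"family": family.strip(), "given": [given.strip()]}],
        "birthDate": datetime.strptime(rec["DOB"], "%m/%d/%Y").strftime("%Y-%m-%d"),
    }

def normalize_provider_b(rec: dict) -> dict:
    # Provider B already uses ISO dates but splits the name into two fields.
    return {
        "resourceType": "Patient",
        "name": [{"family": rec["last"], "given": [rec["first"]]}],
        "birthDate": rec["birth_date"],
    }

a = normalize_provider_a({"full_name": "Doe, Jane", "DOB": "07/04/1980"})
b = normalize_provider_b({"first": "Jane", "last": "Doe", "birth_date": "1980-07-04"})
assert a == b  # both collapse to the same shared representation
```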

Lastly, algorithms involving neural networks operate largely as black boxes: they output a prediction with little to no explanation of the reasoning behind it. Because both doctors and patients have little insight into how a diagnosis was reached, the incentive to trust AI-driven predictions is low. To counteract the black-box nature of such algorithms, entities like the European Union are enforcing policies that protect a patient’s right to an explanation of any AI-driven diagnosis. While this makes it harder for developers to ship effective AI quickly, it gives users a foundation of trust when using the software.
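For simpler models, an explanation can come straight from the model itself. The sketch below fits a logistic regression on synthetic data and reports each feature’s contribution to an individual prediction, a rough stand-in for the kind of per-patient explanation a right-to-explanation policy asks for. The feature names and data are invented; for genuinely black-box models, model-agnostic tools such as SHAP or LIME serve a similar purpose.

```python
# Illustrative sketch of attaching an explanation to a prediction: for a
# linear model, each feature's contribution to the log-odds can be reported
# alongside the predicted risk. Data and feature names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "bmi", "systolic_bp", "hba1c"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))           # standardized features
y = (X @ np.array([0.8, 0.3, 0.5, 1.2]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(patient: np.ndarray) -> None:
    """Print the predicted risk and each feature's contribution to the log-odds."""
    risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * patient
    print(f"predicted risk: {risk:.2f}")
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")

explain(X[0])
```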

AI will inevitably play a defining role in healthcare, but the steps taken to effectively utilize such software must be carefully calibrated to AI’s limitations. In a new medical system where doctors operate hand-in-hand with AI, both policies and attitudes must be readjusted to separate the process of diagnosis from the process of treatment. Healthcare can capture the benefits of AI while minimizing its limitations by ensuring that software is transparent about the reasoning behind its predictions and that data is handled securely. Cheaper, faster, and more accurate diagnostics lie ahead in a future where the medical industry not only uses AI but comes to embrace it.

