
Artificial intelligence (AI) uses computers and software to simulate critical thinking and intelligent behavior comparable to that of humans. One of the companies applying AI algorithms in medicine is Epic Systems, among the most prominent electronic health records organizations, maintaining medical information for over 180 million patients throughout the United States. The company's portfolio includes around 20 proprietary artificial intelligence algorithms that identify various illnesses or predict the length of hospital stays.

However, users have no reliable way of knowing whether these AI programs actually work or are largely a marketing strategy. The details of Epic's black-box algorithms are closely guarded, and independent testing is scarce. One of the most consequential of these systems is the algorithm used to predict sepsis, a leading cause of death in hospitals. Sepsis occurs when an infection causes the body to overreact, releasing chemicals into the bloodstream that can lead to organ failure or tissue damage. Early detection saves lives, but sepsis is rarely caught in its early stages.

The claims made for these AI tools do not always match clinical reality. For example, Epic reports that the Epic Sepsis Model is between 76% and 83% accurate, yet no reliable, independent test has been performed on any of its algorithms. A study published in JAMA Internal Medicine reviewed 38,455 hospitalizations and found that the algorithm correctly flagged only 843 of the 2,552 patients who actually had sepsis, meaning it missed 67% of sepsis cases.

Just as striking, 88% of the patients the algorithm flagged as having sepsis did not have the illness. This is a serious problem because it invites misdiagnosis: relying on the algorithm alone, 88% of the patients it flagged would have been treated for a condition they did not have, while 67% of the patients who actually had sepsis would have been missed. Clinicians should therefore treat this AI technology with caution to avoid misdiagnosis.
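To make the arithmetic concrete, here is a minimal sketch in Python that recomputes the miss rate from the figures reported above. The number of true sepsis cases, the number the model correctly flagged, and the total hospitalizations all come from the article; the total number of patients the model flagged is not given here, so the 88% false-alert figure is quoted rather than recomputed.

```python
# Figures as reported in the article (JAMA Internal Medicine study of the Epic Sepsis Model).
hospitalizations = 38_455    # total hospitalizations reviewed
true_sepsis_cases = 2_552    # patients who actually developed sepsis
correctly_flagged = 843      # sepsis patients the model identified

sensitivity = correctly_flagged / true_sepsis_cases
miss_rate = 1 - sensitivity

print(f"Sensitivity: {sensitivity:.0%}")  # ~33% of sepsis patients detected
print(f"Miss rate:   {miss_rate:.0%}")    # ~67% of sepsis patients missed

# The article separately reports that 88% of patients flagged by the model
# did not have sepsis (a positive predictive value of roughly 12%). The
# total number of flagged patients is not given, so it is not recomputed here.
```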

A recent investigation by STAT, a health-oriented news outlet affiliated with the Boston Globe, reached a similar conclusion. The investigators published an article, "Epic's AI algorithms," discussing how a corporate firewall shields the technology from outside scrutiny. The report further stated that the algorithms are delivering unreliable and inaccurate information on critically ill patients.