Study of Medical AI Boasts Impressive Accuracy, But Doesn’t Tell the Full Story

A new study published recently in Nature Medicine and covered in Quartz suggests that AI systems may someday take the diagnostic reins from physicians, at least when it comes to common childhood diseases. The study’s deep-learning system was so successful, in fact, that it outperformed some doctors in correctly identifying a range of conditions. The study, though promising, is not without its limitations.

As anyone familiar with how these models work will tell you, such systems are ultimately only as good as the data on which they’re trained, and in this instance, the data came entirely from a single medical center in China. Sure, the system successfully found diagnostic patterns when subsequently put to the test within this very specific community, but can we really assume it would be just as successful in, say, Manhattan (NY, not Kansas), having had no training on that vastly different population? There are certainly models out there – like this one I recently wrote about – that perform quite well in zero-shot settings, but the amount and variety of data required to make that happen is staggering.
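The generalization worry above is easy to demonstrate in miniature. The toy sketch below (entirely hypothetical, not the study's actual model or data) trains the simplest possible classifier – a single threshold on one synthetic "diagnostic" feature – on one simulated population, then evaluates it on a second population where that feature is distributed differently, mimicking a model trained at one medical center being deployed somewhere demographically distinct:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_population(n, mean_pos, mean_neg):
    """Simulate a population: half diseased (label 1), half healthy (label 0),
    with one numeric diagnostic feature drawn from class-specific normals."""
    x = np.concatenate([rng.normal(mean_pos, 1.0, n // 2),
                        rng.normal(mean_neg, 1.0, n // 2)])
    y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
    return x, y

# "Training" population: the two classes are well separated
x_train, y_train = make_population(2000, mean_pos=2.0, mean_neg=-2.0)

# Shifted population: same disease, but the feature shows up differently
x_shift, y_shift = make_population(2000, mean_pos=-0.5, mean_neg=-3.5)

# Fit a one-parameter "model": the midpoint between the class means
# observed in the training population
threshold = (x_train[y_train == 1].mean() + x_train[y_train == 0].mean()) / 2

def accuracy(x, y, t):
    # Predict "diseased" whenever the feature exceeds the learned threshold
    return ((x > t) == y.astype(bool)).mean()

print(f"in-distribution accuracy:    {accuracy(x_train, y_train, threshold):.2f}")
print(f"shifted-population accuracy: {accuracy(x_shift, y_shift, threshold):.2f}")
```

The threshold that looks near-perfect on the population it was fit to loses a large chunk of its accuracy on the shifted one, even though the underlying disease is identical. Real clinical deployment shifts are subtler than this two-Gaussian cartoon, but the failure mode is the same.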