Dozens of AI disease-prediction models were trained on dubious data

Researchers have discovered that some artificial intelligence models used to predict stroke and diabetes risk were trained on questionable data sets. These data sets, found on the platform Kaggle, lack clear origins and contain anomalies suggesting they may be fabricated. Despite this, some of the models have been used in hospitals in Indonesia and Spain, and one even appears in a medical-device patent application. Experts warn that unreliable data can produce incorrect medical predictions, potentially leading to inappropriate treatments and patient harm. They urge that data sources be disclosed and that journals reject studies built on dubious data. The findings underscore the importance of transparency and accuracy in medical AI applications to ensure patient safety.

QUESTION: How might the use of unreliable data in AI models impact trust in technology and healthcare?
