With permission from regulators, Stanford Medical School is now testing an AI system that diagnoses eye disease without ever accessing patients' personal details. The app uses technology from Oasis Labs, a start-up out of Berkeley, that guarantees the data cannot be leaked or misused. AI holds huge potential to diagnose, understand and cure diseases, but to realize that potential, its machine-learning algorithms must be trained on large amounts of medical data, much of which is sensitive private information.
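The article does not say how Oasis Labs' guarantee works, but one widely used family of techniques for learning from sensitive records is differential privacy: each query's answer is perturbed with calibrated noise so that no single patient's record can be inferred from the output. The sketch below is a toy differentially private mean, purely illustrative and not Oasis Labs' actual method; the function names and parameters are assumptions for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of `values`.

    Each record is clipped to [lower, upper], then Laplace noise is added,
    scaled so that adding or changing any one record barely moves the answer.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # One record can shift the clipped mean by at most (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical usage: an aggregate statistic over patient readings that a
# researcher can see, while individual readings stay protected.
readings = [0.2] * 50 + [0.8] * 50
private_average = dp_mean(readings, lower=0.0, upper=1.0, epsilon=1.0)
```

A smaller `epsilon` means stronger privacy but a noisier answer; real systems combine mechanisms like this with secure hardware and access controls rather than relying on noise alone.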
If brought to scale, such access would greatly accelerate and improve our ability to advance drug efficacy, disease prevention and personalized medicine. There are significant legal barriers and justified privacy concerns around making medical data accessible, and these must be thoroughly discussed and addressed before broader access is granted. Many countries hold the majority of their health data centrally; the UK's NHS is one example. Such centralization would be hugely valuable for these systems, but it equally poses a greater risk of harm if hacked.
The promise of this technology is not limited to healthcare. If successful, its methods could be applied to protect other kinds of sensitive data, such as financial records, purchasing habits or search histories.
What new risks could come to light?