In hearings yesterday and today at the Howard University Law School in Washington, D.C., the Federal Trade Commission is gathering expert perspectives on the current state of artificial intelligence technologies and techniques: how they're being put to work in real-world practice; what ethical and consumer protection issues might be at play; how industries are being reshaped by them; and how federal policy should evolve accordingly.
WHY IT MATTERS
Speaking at the event on Nov. 13 were Teresa Zayas Cabán, chief scientist at the Office of the National Coordinator for Health IT, and Dr. Michael Abramoff, founder and CEO of IDx Technologies. His is the first company to get clearance from the FDA for an autonomous AI diagnostic device: its IDx-DR, which uses an algorithm to assist in detection of diabetic retinopathy and has already been rolled out at some health systems.
Their panel discussion, "Understanding Algorithms, Artificial Intelligence, and Predictive Analytics Through Real World Applications," included leaders from companies such as Adobe and Visa, who showed how they're using AI (to improve consumers' creative experience and protect against fraud, respectively, among other uses), but later turned to healthcare applications.
As an ophthalmologist and longtime proponent of AI, Abramoff said his nickname among his clinical peers used to be "The Retinator."
"In 2010, my colleagues were thinking, 'He's like a Terminator, he will destroy jobs and he's also not being safe for patients,'" he said.
"Now, they think very differently, but that shows you how this fear of AI is not new. It's there and it's real and we need to manage it," said Abramoff.
But his presentation – which traced AI's evolution year by year over the past decade-plus, gaining technological maturity and a social and regulatory acceptance along the way – made the case that, properly managed and deployed, with rigorous assurances that it meets "standards of safety and efficacy," the benefits of autonomous AI can outweigh the risks.
Abramoff pointed to a quote from a recent New Yorker article about a former Google engineer, who's at the center of a big controversy around the AI behind self-driving vehicles: "If it is your job to advance technology, safety cannot be your No. 1 concern," said the engineer. "If it is, you’ll never do anything."
That's simply not true, said Abramoff. In fact, the opposite is.
"Technology used in a lab does not directly transfer to what we do in healthcare," he said. "Patient safety is paramount. And if we don't do it right, (the technology) will be pushed back and we will lose all of the advantages that AI can have in healthcare, for better quality, lower costs and better accessibility."
THE LARGER TREND
Healthcare AI investment is accelerating, with industry-wide spending exceeding $1 billion and more than half of hospitals either deploying or planning deployments of various shapes and sizes in the near future. But major ethical questions and patient safety concerns remain.
With those in mind, there are several healthcare-specific challenges that need to be addressed in the years ahead, said Zayas Cabán, who cited the findings of a 2018 JASON report that listed them:
- Acceptance of AI applications in clinical practice will require immense validation
- Ability to leverage the confluence of personal networked devices and AI tools
- Availability of and access to high-quality training data from which to build and maintain AI applications in health
- Executing large-scale data collection to include missing data streams
- Building on the success in other domains, creating relevant AI competitions
- Understanding the limitations of AI methods in health and healthcare applications
ON THE RECORD
Despite those hurdles, there are clearly just as many possibilities for AI, "emerging applications in health and healthcare, from public health to clinical health, as well as prevention and treatment," said Zayas Cabán.
ONC's role, she said, is to "work with other agencies to identify what those possibilities are. Our focus is on making data interoperable to be able to support the development of AI, understanding the data infrastructure issues and what kinds of standards are needed to support this vision."
"Not all AI is created equal, and we need to educate the public on how to assess different types of healthcare applications," added Abramoff in a separate statement. "It is critical that autonomous AI applications are thoroughly validated and developed in a clinically safe and explainable way that builds trust with patients and clinicians. Safety needs to be the number one concern for responsible AI companies."