Biden Administration calls for feedback on AI safety, R&D

The White House this past week publicized new efforts to safely advance the development and deployment of artificial intelligence, and is calling on the public to help shape the government’s AI strategies.

The Biden Administration announced new efforts focused on AI research and development, with emphasis on an updated roadmap from the White House Office of Science and Technology Policy.

This National AI R&D Strategic Plan – with key updates from OSTP for the first time in four years – describes the priorities and goals for federal investments, and draws on expertise from across the federal government and the public.

“This plan makes clear that when it comes to AI, the federal government will invest in R&D that promotes responsible American innovation, serves the public good, protects people’s rights and safety, and upholds democratic values,” said administration officials. “It will help ensure continued U.S. leadership in the development and use of trustworthy AI systems.”

Meanwhile, the White House is also asking for public input on some critical issues related to AI safety and efficacy.

OSTP this past week put out a Request for Information seeking perspectives on how to prioritize mitigation efforts for AI safety risks while also capitalizing on technology innovation. This RFI is part of an "ongoing effort to advance a cohesive and comprehensive strategy to manage AI risks and harness AI opportunities," said the Biden Administration. "It complements work happening across the federal government to engage the public on critical AI issues."

The Biden Administration also recently published a report from the U.S. Department of Education’s Office of Educational Technology focused on the opportunities and risks of AI for teaching and learning. AI and machine learning tools are already in place at medical schools and CME programs nationwide, but many med students fear artificial intelligence – especially its potential to disrupt job prospects in pathology, radiology, anesthesiology and more.

These efforts come as more attention is focused on the enormous potential risks posed by runaway artificial intelligence development. On May 30, the New York Times reported that leaders from top AI companies such as OpenAI and Google DeepMind are warning that the technology might “one day pose an existential threat” to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” said a statement, published by the nonprofit Center for AI Safety, and signed by the CEOs of those companies and 350 other AI experts.

The White House has been paying attention to these issues for some time – as with its recent blueprint for an AI Bill of Rights and other executive actions focused on AI’s potential and potential risks.

But in recent months, healthcare experts have been sounding the alarm that the healthcare industry must take a more proactive approach to AI safety, and needs to devise guardrails to ensure the technology is deployed transparently in clinical settings.

“AI is one of the most powerful technologies of our time, with broad applications,” said the White House statement. “President Biden has been clear that in order to seize the opportunities AI presents, we must first manage its risks.”

Toward that end, the Biden Administration “has taken significant action to promote responsible AI innovation that places people, communities, and the public good at the center, and manages risks to individuals and our society, security and economy.”

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: [email protected]

Healthcare IT News is a HIMSS publication.
