Biden administration issues executive order regulating artificial intelligence

November 03, 2023

Media contact: Jill Rosen
Email: [email protected]
Office phone: 443-997-9906
Cell phone: 443-547-8805
Twitter: JHUMediaTeam

Anton (Tony) Dahbura is co-director of the Johns Hopkins Institute for Assured Autonomy, the executive director of the Johns Hopkins University Information Security Institute, and an associate research scientist in computer science at the university's Whiting School of Engineering. His research focuses on AI assurance, security, fault-tolerant computing, distributed systems, and testing.

What do we need to know about the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence?

The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is the first major step the federal government has taken to regulate AI. It establishes standards for AI safety and security and requires that the most powerful AI systems be extensively tested by third parties to reduce the chance of unintended consequences. Among numerous other initiatives, the EO also includes provisions for building up American expertise in AI to remain competitive; increased funding for AI research; help for small businesses and startups to commercialize AI; and mechanisms for promoting international cooperation on AI.

What does this mean for the average American? What are the implications for researchers developing AI technologies?

AI is an incredibly powerful set of technologies that is making its way into virtually all aspects of our lives, from how we drive our cars to how we are approved for loans, how we are diagnosed and treated for medical conditions, and even how we use our phones. It can also drive the next waves of scientific discovery and advances in health care. At the same time, there are more controversial application areas, such as using AI in the criminal justice system to single out suspected criminals in surveillance video or to recommend prison sentences.

The vast promise of AI is tempered by its many potential pitfalls. We're in the research stage of understanding what can go wrong with AI and how to overcome those challenges to successfully integrate AI into society. For instance, societal bias can easily creep into AI-based systems and is notoriously difficult to detect and remediate; AI-based systems can inadvertently leak information about individuals or present other forms of security vulnerabilities; AI systems can mass-produce disinformation and scams; AI-enabled applications can present new ethical dilemmas; and of course, there will always be edge cases that cause AI to behave in unpredictable and undesirable ways.

There's little doubt that researchers can develop the tools and methodologies to ensure that AI performs safely, ethically, securely, and equitably. However, it will take time and require a concerted effort by academia, private industry, all levels of government, and philanthropic organizations to achieve those lofty goals.

What happens next as a result of this new executive order? When will we see an impact from these guidelines?

The EO comes at a critical time as companies large and small barrel forward to introduce all manner of AI-enabled technology to the market. While the EO is impressively comprehensive, this journey is just beginning. It's going to take ongoing proactive leadership across the government over the long haul to keep up with the astonishing pace of AI development and ensure that AI is deployed responsibly. There will be unanticipated twists and turns that require flexibility, vision, and the ability to bring all sides together so that we can rightly trust AI.

Source: Johns Hopkins University
