AI benefits/risks

Biden’s AI executive order: A small step in the right direction

Biden EO on AI

The White House released a new executive order (EO) this week that seeks to increase federal oversight of rapidly expanding AI systems, promote the safety and security of AI development, and reduce its risks for consumers and national security.

The EO has arrived at a critical time, as artificial “general” intelligence has become a reality faster than many expected. Lots of people were surprised this year by the transformational power of ChatGPT, but advances in AI promise to be exponentially more powerful in the year ahead. The implications of these intelligent systems are world-changing – in good ways and bad – and the government needs to act fast if it hopes to effectively manage those impacts.

The release of the EO stands as an important step in the right direction, setting us on a path to effectively harness the enormous potential of AI to make our lives better, while also keeping security and safety top of mind. It introduces several components that are certain to improve the way we create and interact with AI, but it also presents a few gaps and areas for continued development as the order’s guidelines are actioned by public and private sector companies.

The EO’s advantages

Enhanced AI safety and security is perhaps the greatest and most obvious benefit that the executive order brings to the table, but there are several additional components buried beneath these that create other kinds of positive impacts. To name a few:

  • Greater protections for consumers: Machine learning models rely on vast amounts of data to generate useful outputs, but these large data volumes come with inherent privacy risks. Consumers are becoming increasingly aware and concerned about how their data gets collected, stored, and used. The pressure on organizations to respect consumer privacy will only continue to mount, so it will be important to have more stringent guidelines in place to drive transparency around how data privacy gets prioritized to build consumer trust in AI. Additionally, by establishing watermarking guidelines that help distinguish AI-generated content, the EO encourages greater consumer protections against AI-enabled fraud and deception. Today’s widespread use of ChatGPT means that we are constantly encountering – and sometimes unwittingly using – content that’s inauthentic. These guidelines should help promote greater transparency around content origins, minimizing misinformation and fraudulent activity.
  • An AI talent surge: Though the use and adoption of AI has accelerated, the pace of AI skills and talent development has lagged behind. Studies have shown that more than half of organizations don’t have the right mix of skilled AI talent, and that a lack of skilled talent has become the leading barrier to progressing their AI initiatives. The EO should start to reverse this trend by expanding the nation’s pool of skilled AI workers, through grants to develop domestic talent as well as initiatives to attract and retain foreign talent.
  • Increased demand for AI-native products in the tech industry: While the EO mainly targets federal agencies, technology companies in the private sector will soon see benefits trickle down. Once centralized government funding gets allocated against the EO’s guidelines, federal agencies can acquire specific AI products and services faster and more cheaply through rapid and efficient contracting. This should bolster demand for AI-native products and ultimately increase both productivity and security for the agencies that will rely on these tools.

Where the EO falls short

The industry will likely find it challenging to determine the level of transparency around red-teaming. The EO sets more rigorous standards for red-team testing before the public release of AI models. Most notably, the EO stipulates that developers share their safety test results with the U.S. government. While developers may agree to sharing how they are tackling vulnerabilities, few may want to proactively share what those vulnerabilities are to avoid exposing their organization to risk or scrutiny.

While the EO effectively covers ways to promote safe AI development, what’s missing is a component around promoting protections to defend against adversarial AI. As an email security vendor, we have a front row seat to how attackers have already weaponized generative AI to scale the volume of their attacks, and government agencies are highly attractive targets given their access to sensitive data and control over critical infrastructure. The EO does little to acknowledge the risks that result from bad actors using these tools, or prevent them from doing so.

By the same token, there are untapped opportunities to proactively use AI for good. The EO largely focuses on minimizing the risk of “bad AI,” but there’s enormous potential for “good AI” to help in this fight. As cybersecurity defenses become increasingly AI-enabled, government bodies should consider ways to nurture the development of offensive AI.

At the end of the day, the EO is a set of guidelines, rather than a permanent law, which makes it difficult to effectively enforce. While it’s helpful for steering the AI industry in the right direction, we’ll still need to develop a practical implementation framework.

Additionally, we have to watch that we don’t overregulate. Too much regulation could slow the progress of AI innovation, particularly for AI start-ups that may not have the capital to meet extensive testing and regulatory requirements like the AI giants can. Over-regulation could cause a lack of grassroots innovation in AI that will continue to give adversaries the upper hand.

It will be interesting to see what tangible impacts the EO will have as federal agencies begin to take action on its guidelines. While only a first step, we can bet that it’s a step in the right direction overall. There has never been a more critical time to rally the entire technology ecosystem around building stronger safety, security, and trust in AI systems.

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. Mike builds and maintains Abnormal Security’s customer trust program, performing vendor risk analysis, and protecting the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms and Conditions and Privacy Policy.