The EO contains eight goals along with specifics of how to implement them, which on the surface sounds good. However, it may turn out to be more inspirational than effective, and it faces a series of intrinsic challenges that could prove insurmountable. Here are six of my top concerns:
First off, it’s great that the EO is a coordinated, federal government-wide approach to using AI safely and securely. But given the broad collection of agencies and the difficulty of coordinating even a small subset of them in the past on tech policies, this may be an impossible task.
The EO creates a special AI Council to be run by the White House, consisting of more than two dozen cabinet secretaries and others such as the chairman of the Joint Chiefs of Staff and various other executive branch policy directors. That’s a lot of folks to coordinate anything simple, let alone something as complex as AI policy.
For example, the Department of Commerce has to coordinate best AI safety practices over the next nine months among three other cabinet agencies. The goal is to develop a generative AI equivalent to the NIST AI 100-1 document, among other reports. Typically, these documents are consensus-based and take years to produce, although this particular working group was created this past summer, so it has a head start. But there are dozens of agencies and various working groups that require coordination throughout the EO.
Second, ensuring that the private sector can produce and maintain ethical and equitable AI products and services won’t be easy. Creating the right combination of regulations and incentives could end in failure and neglect, as happened with efforts to make social media services ethical and equitable.
One hint of the challenges ahead is this document on a federal ethics framework, which was created in the pre-AI era and hasn’t aged well. Another challenge is assembling a group of civil rights lawyers conversant enough with how AI systems operate to evaluate them in any meaningful way.
Third, one bright opportunity is that the feds can take a de facto leadership position in how they use AI tech themselves. There are sections of the EO on how to best do this. It calls for designation of chief AI officers in each agency within the next two months, along with the creation of AI governance boards to coordinate each agency’s AI activities, as well as other best practices and standards. Whether this will result in any effective use of AI technology or become another bureaucratic burden remains to be seen.
StackArmor Chief Executive Gaurav Pal told SiliconANGLE that “the federal government can influence the development of safe and secure AI in two dimensions – as a regulator and as one of the largest buyers of AI technologies.” He cites the Federal Risk and Authorization Management Program that has been used to regulate government use of cloud technologies as a potential template.
Part of this process is how AI tech can be properly tested before and during its use. Ian Swanson, CEO of Protect AI Inc., was glad to see some of this language in the EO. “In order to build and ship AI that is secure and trusted, organizations must rigorously test their AI and understand the total composition of elements used to create that AI,” he told SiliconANGLE.
And hiring all this talent to manage AI’s various specialties, including security details, tech policy, regulatory action, ethical operations and legalities, is also a tall order. The EO proposes ways to streamline government hiring practices, especially for non-U.S. candidates, including changes to various work visas and a modernization of the H-1B program. Good luck with all of that, especially given the political football that immigration policy has become.
Fourth, although it’s great to see some attention on how to ensure privacy in the new AI-based world order, the EO may have set its bar on privacy expectations too high. One security researcher was bullish on the fact that the EO mentions such terms as differential privacy and red teaming efforts.
But contrast these hopes with how disappointing privacy regulation has been in the pre-AI era. There is still no single federal privacy law, and the more state-level laws that are passed, the more complex it becomes to reconcile the various regulatory requirements and penalties. AI-related legislation will be even more difficult to craft.
Michael Leach, a compliance manager at cybersecurity firm Forcepoint LLC, said that the EO “will hopefully lead to the establishment of more cohesive privacy and AI laws that will assist in overcoming the fractured framework of the numerous, current state privacy laws with newly added AI requirements.” Nevertheless, StackArmor’s Pal thinks the EO is moving in the right direction and can offer helpful guidance on privacy regs.
Fifth, will there be actual teeth in the EO’s enforcement? It has dozens of collections of different regulations that will first need to be formulated in how the federal government operates, but it’s short on what will be required from the private sector.
For example, it suggests new regulations for identifying and reporting on foreign resellers and AI providers. None of these laws has been written, and getting them passed in the next six months, as stated in the EO, might be a tall order. “Enforcement is an open question for the private sector, and any powerful change in government position on AI would likely require intervention from the legislative branch,” said David Brauchler, principal security consultant at NCC Group.
And that could spell trouble, given that the current Congress has trouble with passing a budget, let alone agreeing on policy issues. “We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just and prosperous, rather than surveilled, silenced and stereotyped,” said Maya Wiley, CEO of The Leadership Conference on Civil and Human Rights.
Veridas Technologies LLC CEO Eduardo Azanza also noted that “the White House has taken a global perspective necessary for implementing regulations that account for risks and benefits, security, privacy, innovation and nondiscrimination.”
But others are less enthused about the intended government role outlined in the EO. “It would be a mistake for the federal government to try to centralize assessment and AI licensing for all of the industry,” said Frame AI CEO George Davis.
For his part, Brauchler is sitting on the fence. “Time will tell whether this order will accomplish what the government aims to achieve,” he said. “I’m wary of the risk that the government could damage competition in the AI space or the open source community, along with user privacy, but this EO isn’t necessarily a positive or negative step in either direction.”
Tech blogger Shelly Palmer brought up the contrarian view when he wrote in his recent newsletter that the EO “is misguided political theater, adding bureaucratic overhead without efficacy. It may stimulate industry conversation, but it’s unlikely to yield substantive regulation.”
Finally, one of the biggest weaknesses of the EO is that it assumes government is here to help advance the cause of AI and that it will be effective at doing so. Both assumptions are open questions.
Speaking of which, the EO ignores the world of open systems, the place where much initial AI work originated. “I do wish we had seen something on open source and open science in the EO,” said Mark Surman, president of the Mozilla Foundation. “Openness and transparency are key if we want the benefits of AI to reach the majority of humanity, rather than seeing them applied only to use cases where profit is the primary motivator.”
Also missing is any mention of public/private partnerships that can bring about its numerous goals, policy directions and good ideas. One exception is the mention of the expansion of the National AI Research Institute program, which has 25 academic institutions that are funded by the National Science Foundation. Will any private AI providers be part of this program? That isn’t clear, but they should be.
And then there’s the belief by some Big Tech critics that this amounts to “regulatory capture.” That is, was all this regulatory superstructure put in place at the urging of Big Tech, which can afford to deal with regulations, so that it can keep the AI field to itself and stifle innovation? Perhaps, but some aren’t so sure.
“Effective regulation can actually accelerate progress,” said Standard AI CEO Jordan Fisher. “By ensuring smaller companies can participate, democratizing access, and putting safeguards in place, productization can move faster, and more confidently.”
But as Azanza of Veridas said, “It is paramount that we strike a balance between reaping the benefits of AI and mitigating its potential downsides.” I agree, but the balancing act will certainly be difficult.
All in all, the EO is still a good initial step toward understanding AI’s complexities and how the feds will find a niche that balances all these various — and sometimes seemingly contradictory — issues. If it can evolve as quickly as generative AI has done in the past year, it may succeed. If not, it will be a wasted opportunity to provide leadership and move the industry forward.