
Education will be key to good AI regulation: A view from the USA

Computer Weekly sat down with Salesforce’s vice-president of federal government affairs, Hugh Gamble, to find out how the US is forging a path towards AI regulation, and how things look from Capitol Hill

It is a balmy, perfect April day on the streets of Washington DC, although the air already carries the unmistakable fecund scent of the humid East Coast summer that is coming. Inside a downtown convention centre, Salesforce executives on their World Tour whip up a frenzy of whooping, shouting out attendees who have commuted in from the capital’s vast suburban hinterland in the surrounding states of Maryland and Virginia.

With US federal agencies figuring out how to work with the terms of last year’s Executive Order (EO) on use of artificial intelligence (AI) from the White House, and the upper house of the American legislature, the Senate, expected to soon release a report or whitepaper on AI regulation, talk at this year’s DC edition of the World Tour was dominated by AI – especially on how its use will be controlled and regulated within the government.

On the record, Salesforce is hopeful that progress can be made in Congress before things start winding down over the summer ahead of the contentious November 2024 presidential election – although conversation in the halls of its DC event suggests this may be a forlorn hope.

Hugh Gamble, Salesforce vice-president of federal government affairs, regards Biden’s EO as a good start. Gamble, who started out as a software engineer before going to law school and then spending the best part of a decade steeped in US political culture as counsel to senators from Mississippi and Georgia, describes it as a roadmap, but points out that it is currently not much more than that.

“The EO was a great first step and it was nice to see the US leading a little bit. But an EO has limitations – it is not legislation, it is dictating what the Executive Branch will do,” he explains.

“It told Executive Branch agencies how to, essentially, approach the problems, looking at it through potential harms, ways that they should analyse products in the future, and making sure they do have the skills and individuals necessary to evaluate those products.

“In addition to their own procurement of products and use of products they have to think about their charge, what their mission is and how those products could potentially be used in their arena.

“What we are in right now is a period where each of these agencies is ramping up as fast as they possibly can and trying to get to a place of competency, looking at their own internal work in the future, but also how they will handle it insofar as they are a regulatory agency.”

Ultimately, even though the Executive Branch is a huge consumer of IT and the decisions it takes on procurement and rules regarding safeguards and protections will have a market impact, says Gamble, it will take more than just one EO to move the needle.

“Realistically, we need Congress to act to pass something that is applicable outside of the scope of the Executive Branch,” he says, the concern being that failing to do so risks every federal agency and body in the US developing its own approach, which would be unhelpful.

“The government always runs into that problem,” says Gamble. “There’s a programme called FedRAMP and each agency treats it differently and that’s been a bone of contention at times in the public sector. But there’s at least some fundamental understanding and cooperation that you’re working off of similar guidelines.

“I think that’s what we’re hoping for at this point. Each agency has a different mission, and so we understand that they will interpret and apply what’s been told to them in different ways, and that’s the nature of government. What we would hope for is legislation to come out of Congress that will provide some guardrails for the private sector, so that we can provide some confidence in the technology products that people are using.”

Gamble cannot yet point to any real-world examples of what that might look like, simply because the work is ongoing, but he is hopeful that the various bodies involved are eager to collaborate on it.

“What we are seeing is that they are paying attention, they’re communicating, they’re learning from each other. They are using similar terminologies and understanding of the technology. And so when they see something being done in a smart way, they will, in some ways, learn from it and either iterate on that or incorporate it in totality,” he says.

“But…I really do think that the Senate whitepaper is going to be our first real indication of how Congress is looking at the issue and how they think that they’ll start to tackle the issue there.”

There will clearly be extensive debate following its publication, but ahead of time, Gamble says he is pleased not to have seen any huge disagreements, even though “they’re coming and they’ll come in places we don’t expect them”. For now, everybody on the Hill appears to be working in good faith to get things as far along as possible.

“Once you start drafting a bill, that’s when you start counting – it becomes a math problem at that point and you want to get it right. You’ve got to get to 60 votes in the Senate, you’ve got to get to a majority in the House [of Representatives]. And that’s when political compromises will become part of that conversation,” he says.

For now, Congress has been playing its cards close to its chest in terms of what it might recommend, but according to Gamble, those involved have been “very thoughtful” and talking to the right people. This includes Salesforce, which is keen to be in the room where it happens because its customers will run screaming if it isn’t.

“The part of the tech industry we occupy requires us to have a level of accuracy and fidelity to truth – our customers are not going to put up with 95% accuracy, so we hold ourselves to a higher standard that puts us in a different position than companies out there moving fast and rolling out new products to perfect later on,” he explains.

“We go in and talk about certainty, privacy, a risk-based framework that looks at the utility of AI, and we can feel confident that if they follow those guidelines we’re going to clear the hurdles they put out there to demonstrate we are proficient.”

AI not just for enterprises

However, AI is not just an enterprise play. It affects consumers, and unlike Salesforce, these consumers are often registered voters.

As such, one thing Gamble is alert to in his conversations with politicians is the possibility of a cyber incident involving AI deepfakes or disinformation during this year’s contentious presidential election. Such an incident risks inflaming public opinion and pushing the next administration, particularly if it is led by Donald Trump, down a path of overly restrictive regulation.

Salesforce is an enterprise software company and clearly does not sell consumer tech products or services, but with a second Trump presidency a real possibility at the time of writing, this is one area Gamble and his team have been focusing on, helping politicians understand that it is unwise to view AI, or the tech industry, as a monolith.

“The side of technology that we occupy is enterprise technology and that is separate and distinct in various ways from some of the more consumer-facing technology products that are out there. We understand there’s a danger of conflation there, but that’s why we’ve been going in and having the conversations over the past year to make sure that we make people understand the distinction between the two and that there’s not collateral damage if there is something that causes kneejerk action,” says Gamble.

Having these conversations has been no mean feat. With 535 members of Congress – 100 senators and 435 representatives – each with a different level of understanding, it has been a bespoke operation.

“But we’ve been working with industry associations and committees and leadership to make sure there’s a baseline understanding among the people who hold the pen in such circumstances,” says Gamble.

“We’ve put an awful lot of effort towards that. It is exhausting, but it’s the job. Our job is one of education and advocacy, and right now, to be a good advocate for good AI policy.”

Global collaboration

Gamble’s focus is on the federal government, but of course, the US government does not operate in isolation, and global consensus-building is just as important as consensus-building within the corridors of power in Washington.

Gamble is alert to the need for grace and respect, given that different governments will take different approaches, but believes things are moving in the right direction.

“What we have [also] encouraged lawmakers and the Executive Branch to do is to at least make sure we have some commonality with international partners on things like definitions and understanding of the AI landscape, so that we’re not doing an apples to oranges comparison when we look at what the EU, or UK, is doing, and what the US tries to do,” he says.

“Even if we don’t reach the exact same legislation in the end, we’re using similar terminology and understanding.”

What does good regulation look like?

Asked what successful or failed AI legislation would look like, Gamble says he doesn’t have much of an opinion on failure.

But as for success – anything falling short of it would be a matter of degree rather than complete failure – what Salesforce wants is a regulatory regime that understands the risk-based application of AI.

“So, whatever tool you roll out, understand how much risk it presents to the general public and its utility. And it’s given a level of scrutiny and government attention based off that,” he says.

“The rudimentary example I’ve heard others use is if you’ve got a chatbot that’s helping people learn how to cook for the first time, it doesn’t need the same level of government scrutiny as something that impacts a person’s human or civil rights.

“So, understanding the difference, what those utilities can do and what their use is, the law should reflect and understand that, and that will allow for a lot of space for innovation where harms are lessened. We don’t want to squelch positive innovation.

“Realistically,” he concludes, “that requires a nuanced education, and so that’s what we’re going in and trying to make happen.”

Read more about AI regulation

  • Amid data privacy issues spawned by proliferating AI and generative AI applications, GDPR provisions need some updating to provide businesses with more specific AI guidelines.
  • Legislation is needed to seize the benefits of artificial intelligence while minimising its risks, says Lord Holmes, but the government’s ‘wait and see’ approach to regulation will fail on both fronts.
  • The EU has taken a lead in regulating artificial intelligence through its AI Act. The UK government needs to respond or it risks losing the UK’s status as an AI innovation frontrunner.
