The chances of a big federal AI rules push in the US are looking as slim as ever these days. But Miriam Vogel says the technology is far from a free-for-all, regulation-wise.
In fact, the longtime president and CEO of the nonprofit EqualAI and chair of the White House’s National AI Advisory Committee said to expect plenty of court cases to shape how the law relates to AI in coming years. Vogel recently co-authored a new book, Governing the Machine, which lays out a general playbook for creating AI that’s above-board legally and ethically.
We spoke to Vogel about how AI regulation might come from the courts, how to build trust in AI when distrust abounds, and what she’s learned from working with the government.
This conversation has been edited for length and clarity.
It doesn’t seem like there’s going to be any major federal AI legislation anytime soon, but some states have been attempting to fill that void. What does the current regulatory landscape in the US mean for AI governance?
That is really one of my motivations in writing this book. I talked to too many audiences who thought that AI is completely unregulated. Yes, it’s true—there are not many AI-specific regulations in the US. But as a lawyer, I recognized all the areas of existing law that would be applicable to AI, and we’ve only started to see them applied.
So last September, I published a New York Law Review article with [former Homeland Security Secretary Michael Chertoff] because we both shared this concern: Too many companies are exposing themselves to a lot of liability and potentially creating a lot of harms by not recognizing that the basic frameworks of law they were already practicing under, whether contract law, tort liability, product liability, intellectual property, privacy law, and so forth—all of those laws would be equally applicable in the AI space.
And the hard part is that those cases have only started to be brought and litigated. And so the danger for a company or an organization using AI without recognizing this connection to our current frameworks of law is that they are creating these harms and liabilities for themselves. Whereas if they had been thinking about this proactively, whether it’s IP ownership or privacy laws—HIPAA, COPPA, etc.—they could be building things with these safeguards in place that would both ensure that people were protected in the way the laws had envisioned and that they wouldn’t be found liable a few years down the road in court cases…There are many bad outcomes if you’re not intentionally, proactively mindful of the potential liabilities and harms. [In the] book, we have a few chapters really trying to uncover for people some of the different ways that this has played out [and] could play out based on laws on the books in the US and across the globe.
So in the coming years, we’ll see more court cases interpreting laws in relation to AI?
Yes, if we’re going to place bets, then expecting a significant increase in liability is a sure thing. Between the potential harms and the deep pockets involved, there’s just the basic recipe for upcoming litigation. And the other challenge is, if you’re building an AI governance system in your organization, it often takes 18 months to two years, and so waiting for whether it’s state regulations or to see how the court cases play out—you really don’t have time. It’s like having a company on the internet and putting a cybersecurity framework in place after a cyberintrusion. It’s just too late at that point. So a huge intention of the book is showing people best practices so they can avoid the harms and liabilities down the road that will play out if they’re not preparing for it.
How do companies build trust when many people already distrust AI off the bat?
I think that’s your job if you’re a company using AI, and I’d say most companies are now AI companies. It is in all of their best interests to engage their consumer base and their executives. And it’s not just the stories we’ve heard about. It’s a whole set of fears.
One interesting exercise that we had in the book was thinking about all the different reasons and ways people are afraid of AI, and we came up with nine different categories of fears and risks…For some people, it’s privacy. For some people, it’s the hallucinations. For a lot of young people, it’s the environment; I talked to so many college students who don’t want to use AI because of the impacts on the environment. Obviously a lot of people are worried about workforce displacement. So I think one thing that we try to do with Governing the Machine is just say, upfront, most people are afraid, and that is OK. That makes you a reasonable person, that shows judgment. It does not mean that you’re an outlier. It does not mean that you cannot be a thoughtful AI user. It means that we haven’t done a good job of addressing your fears appropriately.
There are three messages that we need to be communicating: [First] is, good news, you are [already] using AI…People [don’t] realize all these use cases are AI. Second, if you are afraid, don’t be intimidated or ashamed of it; it is very rational. These are legitimate concerns. So instead of minimizing or avoiding them, let’s talk about why you have those fears. Let’s name them, and let’s talk about mitigation strategies, because we have to get to [the third part]. We have to get to the opportunity. We have to talk about the ways that it will benefit you and your community and your family and your company. Because if you’re abstaining, if you’re consciously trying to not use it, you will be holding yourself back. So let’s marry those two conversations, risk and opportunity, which are too often kept separate, into one meaningful, honest conversation.
What are some of the biggest lessons that you’ve learned as chair of the National AI Advisory Committee?
First of all, I was very impressed with the wisdom of Congress to create that committee, and some of the ways they created it were really thoughtful. They mandated that the composition of the committee be multi-stakeholder…So anything coming out of the committee would have had the buy-in, the discussion, the feedback from civil society, academia, and industry, and so I think that was exactly the right way. I think that’s the only way we can proceed with AI governance and policy—it has to be multi-stakeholder.
The other thing that was interesting was how we evolved with the evolution of AI. So when we first launched, we followed the mandate for our annual report, and then, because of the way AI was transforming so rapidly, and recognizing that government needed counsel in more real time, we quickly changed our deliverables to every few months, if not monthly, delivering a recommendation or a finding.
And the other change to the mandate that I thought was really powerful and fun and effective was that instead of just hearing from the experts they appointed, we invited experts and voices from across the public to weigh in on the different issues we were considering, so that we could learn and make sure more voices were participating, and we ended up turning those into findings.
When you heard from members of the public, was there anything that surprised you about those testimonies?
Yeah, it was when we really started understanding the depth of the fear and the discomfort. We were appointed as a committee to advise the president and the White House on AI policy, and there were several listed areas in particular: workforce, international relationships, R&D, and so forth. And it was interesting how in each of those areas, within each of those communities, we were hearing deep reservations. And again, it came back to pretty much those nine different categories. But whereas we were trying to figure out the best way to support safe and trustworthy development of AI and safe deployment of AI, some of the conversations were, “Why AI? Should there even be AI?” So we had to rethink some of the conversations to meet people where they were, questioning the question of whether there should even be AI policy.