How to Achieve Socially Beneficial AI
I haven’t been writing much lately, something I’m hoping to change starting today. The last 5 years have transformed me, as I’m sure they’ve transformed many of you. Between wars, a pandemic, and the sudden, divisive popularity of AI, the world today looks nothing like it did in 2019 when I became a leading voice in AI ethics, and it’s time for me to evolve alongside it.
TL;DR:
I used to be a hard-line activist
Now I help run an AI safety tooling startup
I’ve found that the tactics many activists use are falling flat in today’s world
Most governments aren’t interested in technology bans or strict barriers
Instead, I’ve been advising the government in other ways
Knowing how the government really works now…
The best thing I can do to help bring about socially beneficial AI is to work on tools, standards, and voluntary commitments that help companies’ bottom lines
Hello again, world!
I still remember the day in 2019 when I first logged onto Twitter and gleefully updated my profile, calling myself for the first time an “AI Activist”. It’s bittersweet, today, to reflect on how the last 5 years have changed me, and how at this moment in time I no longer identify with the AI ethics activists leading the charge. I don’t regret the risks I took that got me here. It was terrifying at the time, but my tiny contribution helped re-spark the “AI risk” conversation in a moment of great societal distrust. There are, however, a number of things I do wish I’d done differently, and I’d like to tell some of that story today. After half a decade working on this mission of socially-beneficial AI, I’ve realized a few harsh truths about how well the “No Tech for XYZ” movement is working in 2024, and it starts with something simple: If we want a future where humanity beneficially coexists with AI, we need to stop living in extremist fantasylands.
Any viable strategy to address AI risk needs participation from every stakeholder in the AI ecosystem, including the public, governments, civil society, and the companies who build and deploy it. It’s time to evolve the conversation, and replace online outrage with radical collaboration wherever possible.
Some Background
In 2019, I blew the whistle on a buzzy AI company for its work on Project Maven. I had figured out that the company, which had previously declined all military work, was open to building lethal autonomous weapons systems, and I quit the next day. I drank from the firehose of civil society and became an aggressive, critical voice on the risks of AI.
After quitting Clarifai, I struck out to contribute to regulation that would rule out AI’s most pressing risks so we could safely enjoy its promise. At the time, the activists who recruited me championed a narrative that all AI was snake oil: unfit for human consumption, incurably racist, and foisted onto society by evil capitalists. I’ve never been quite as pessimistic about AI as the most hard-line edges of the AI ethics movement… My first few days at Clarifai were like witnessing magic become real. I quickly learned about AI’s limitations (spoiler alert: AI is NOT magic).
And yet, even with its many flaws, since 2017 I’ve seen that AI can be pretty cool and useful. Sometimes, like with AlphaFold, it can fundamentally transform our world for the public good.
Speaking publicly about AI’s limitations landed me a spot on the country’s first National AI Advisory Committee, my first real “seat at the table”. Suddenly, I had a direct line from our reports to Congress and President Biden’s desk. As the committee’s token critic, I knew my role was to challenge the “AI at any cost” narrative, and I did my best to live up to that expectation.
Working WITH the Government Changed Me
Over time, I started to notice something. My work was transforming from the noisy, aggressive tactics of shaming companies on social media into the quiet work of persuasion within a consensus-seeking body.
Where before I sought out conflict and encouraged outrage on Twitter, now I had the power to change a few words in a document that could impact millions of lives.
I was forced to negotiate with people I’d once considered mortal enemies from afar. I found, more than anything, that our histories had shaped our goals to different ends. All too often, where I had once inferred malice, it turned out that they were also doing what they thought would best benefit humanity. Our priorities were simply different. Their goal was to uphold democracy against increasingly powerful authoritarian forces around the world, while mine was to protect democracy at home in the USA. I say this not because they’ve somehow managed to change my mind; quite the opposite. Rather, I say it because despite our vast ideological divide, we were able to find some common ground.
Our work on the NAIAC is facilitated by NIST, in cooperation with other federal agencies. And so, over the last 2 years I’ve gotten to know quite a few civil servants and policy professionals on a personal level. The staff at NIST are absolutely incredible, inspiring people. A few have privately cheered on my principled stances in the face of overwhelming odds. But they also coached me on how to achieve better outcomes for the public good. They encouraged us not to miss any chance to help people, even if the victories felt small. When I lamented that our first year’s report didn’t go far enough, they chuckled and said that nothing ever would.
In particular, one piece of advice has stuck in my head: to win in such a blended environment full of conflicting goals, it’s better to show up with a desire to improve things than to hold onto immutable positions and a desire just to fight.
A Few Things I’ve Learned Since 2019
On social media, the fighting is the point. And in hindsight, it’s not surprising that a lot of my activist training happened on Twitter. The more rage I put out into the world, the more reporters came knocking for incendiary quotes. Fueled by that particular dopamine machine, it was easy for me to mistake engagement for victory as my follower count grew. But, and this is critical: the followers aren’t the point; helping people is.
National policies are rarely written unilaterally, and so if we want socially beneficial AI, we have to accept the fact that right now, its driving forces lie in corporations, not government. And those corporations don’t care how much you yell at them online.
Sometimes, I wonder if activists’ fear machine can cause more harm than good. There are many examples of this, but one in particular leaps to mind: the vitriol and rage we flung at technological solutions for controlling the spread of COVID-19. Against the backdrop of the 2020 Civil Rights protests, Google and Apple released a Bluetooth-based exposure notification protocol to limit the spread of the virus. It was a perfect case of techno-solutionism, ripe for critique in a society that was quickly losing trust in technology companies. Activists, myself included, slammed the solution as too limited, potentially unfair, and “security theater” with little hope of success. And predictably, in the US we saw low adoption of this protocol, even though it was widely available. It certainly wouldn’t have been a magic bullet, but how many lives might have been saved if not for that outrage? Even if only a hundred, or a thousand, wouldn’t that have been worth it?
I’ve since learned that it’s so easy to criticize, and often so much harder to build.
If anything, this rage and vitriol have breathed new life into the “accelerationist” flame, and created a movement of people who seem to think it’s best to barrel forward into an AI world without any guardrails at all. On social media, they criticize AI ethicists/safetyists as Luddites, and claim we think they’re all evil, genocidal maniacs. One of the startup industry’s leading investors even went so far as to call us (among others) “The Enemy”. I’m partially responsible for this caricature, one that’s often used to reductively describe our work at Vera, the startup we’re building to address AI risks, and one that couldn’t be farther from the truth.
Believe it or not, I actually consider myself an accelerationist, but in a slightly more pragmatic way. I work on AI risk reduction to clear the road for AI to flourish, not to slow it down.
We’re building a company focused on AI risks because I believe we need the promise of AI: to open doors to faster scientific research, and to create economic efficiencies that can help level the societal playing field for everyone, especially people with disabilities. And we need to do all of this in a thoughtful, responsible way, or else the entire field will suffer, because every public AI controversy chips away at the public trust, and one too many chips could mean another AI winter.
As an extremely-online Millennial, I remember and do not miss the days of paper maps, hardback encyclopedias, and phones that hung on walls.
Nowadays, it feels like knee-jerk, anti-technology rhetoric misses the mark on what most Americans want for their lives. Even my closest friends, who are well aware of the societal risks, install Alexas and IoT devices all over their homes.
I believe we should be striving for a world where it’s possible to enjoy the convenience of technology safely, instead of a world where these technologies simply go away.
Strong AI Regulation Isn’t Coming
If we want perfect laws to stamp out the scariest risks of AI, we will need the help of Congress, which can hardly decide whether and how to keep the government open. I know, I know. With all the headlines, task forces, and public hearings on the topic, it certainly feels like this body is working up to something big. But I’ll share something with you here that was shared with me when I first joined the NAIAC: no meaningful legislation gets done in an election year. And even setting that aside, agreement on issues as thorny as those posed by AI is highly unlikely to come quickly, especially when it comes to labor, artists’ rights, disinformation, discrimination, election integrity, and other issues for which finding consensus is a challenge. This is not to say it’s impossible, but we need to be realistic about the timeline to success.
Over the last few years, I’ve found little appetite (more likely: capability) among US federal regulators for extremely strict AI regulation. In general, my attempts to persuade these regulators to adopt bans and barriers for particularly worrying AI have proved nonstarters. There are, however, quite a few things we’ve been able to achieve, and I couldn’t be prouder of them. Among our modest government contributions lie provisions that uplift social science research, call for immigration reform, defend worker rights, and impose strict requirements for government use of AI in risky categories including prisons and child safety. Some of these have even been codified into official policy, where millions of American residents will be safer because of the few sentences we’ve been able to sneak in.
Impracticality Hurts Everyone
To the activists who trained me, these improvements are all but worthless. Many of them continue to discount any and all uses of AI, even as millions around the world learn to work and play with tools like ChatGPT. They would rather hold out for the bans and criminal liabilities that align with their principles, even if it’s highly unlikely they’ll ever actually materialize. We need those voices too, even if their hope of success is relatively small and far off in the distance. But it’s worth mentioning that the main reason there is no federal privacy law in the USA is California’s hard-line refusal to bring the bipartisan ADPPA to the House floor in 2022. And every last one of us is worse off because of it.
Recognizing what is reasonable to achieve in this country’s current state, I’m much more interested in what can be done right now, and I’ve turned my focus to the tools, standards, and voluntary commitments we can use today to prevent and mitigate harm.
My career in startups has taught me many things, but two in particular have come to define me. First, giving up doesn’t help anything; in fact, it only makes things worse. And second, if you’ve tried something and it doesn’t work, it’s time to try something else. It’s been a long, slow transition, but that’s why I’m so excited about the current work streams at NIST (the AI Safety Institute, among other things), and about what we’re building at Vera, where we remain dedicated to achieving a socially beneficial AI future for the world. No platform or standard is ever going to fully fix the problems that AI poses to society, but we have to start somewhere.
And let’s face it, AI’s problems are so sociotechnically complex that they will require a true “Swiss cheese” defense: part regulation, part activism, and yes, part technological solutions too.
So, how will we achieve socially beneficial AI?
There’s only one answer to this question, and it is: through all of us. It’s easy to say we need a diverse set of perspectives to create human-aligned AI, but living this reality can be hard, especially when our perspectives significantly differ. This means academia, civil society, government, the public, and industry, too, all need to come together and earnestly search for the common ground we share.
None of this will matter if the tools to enforce NIST’s forthcoming AI standards fail to coalesce in a way that is actionable for industry. For me personally, that means founding a venture-backed startup to build these tools, combining all the AI ethics research I’ve absorbed over the last 5 years with the tech industry experience I’ve earned over the last *cough* 13.
My cofounder and I decided to do this work together in a startup so that we could begin with the world’s most cutting-edge technology and a clean slate. We don’t have tech debt or bureaucracy to slow us down. From outside the big hyperscalers, our team can continue to generate research that pokes holes in the frothiest narratives of AI hype. We can build a team with diverse perspectives and demographics. We can build solutions that are model-agnostic, and encourage big enterprises to take advantage of the vibrant open source model ecosystem. If we had chosen to build these tools from within a nonprofit entity, we’d be fundraising forever, and beholden to philanthropists whose donations could evaporate at any moment.
With venture capital behind us, we can build a self-sustaining economic engine that scales to address this concern at every company, big and small, all over the world.
Our hope is that by offering solutions that are attractive to business, we might craft a world where AI and humanity coexist to our benefit, rather than blindly hoping that AI will just go away.
One Last Thought…
Don’t get me wrong: I still think that we’ll look back on the moment we gave guns to robots and see it was a bad idea. If it were up to only me, I’d love to be entirely free from facial recognition wherever I go. But it’s simply too critical that we land this AI plane safely, and we need to do it fast. Knowing what I now know about how the government really works, one thing has become exceedingly clear: