Amba Kak creates policy recommendations to address AI concerns

Image Credits: Amba Kak

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who've contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Amba Kak is the executive director of the AI Now Institute, where she helps create policy recommendations to address AI concerns. She was also a senior AI advisor at the Federal Trade Commission and previously worked as a global policy advisor at Mozilla and as a legal advisor to India's telecom regulator on net neutrality.

Briefly, how did you get your start in AI? What attracted you to the field?

It's not a straightforward question, because "AI" is a recent term for practices and systems that have been evolving for a long time now. I've worked in technology policy for over a decade, across several regions of the world, and watched as everything was about "big data" and then everything became about "AI." But the core issues we were concerned with — how data-driven technologies and economies affect society — have stayed the same.

I was drawn to these questions early on in law school in India where, amid decades-old and sometimes century-old precedent, I found it energizing to work in a field where the "pre-policy" questions — the normative questions of what world we want and what role technology should play in it — remain open and contested. At the time, the big global debate was whether the internet could be regulated at the national level at all (which now seems like a very obvious yes!), and in India there were heated debates about whether a biometric ID database of the entire population was creating a dangerous vector for social control. In the face of narratives of inevitability around AI and technology, I believe regulation and advocacy can be a powerful tool to shape the trajectories of tech to serve the public interest rather than the bottom lines of companies or merely the interests of those who hold power in society. Of course, over the years I've also learned that regulation is often thoroughly co-opted by these same interests, and can work to maintain the status quo rather than challenge it. So, that's the work!

What work are you most proud of (in the AI field)?

Our 2023 Landscape report, released in April amid a surge of ChatGPT-fueled AI excitement, was part diagnosis of what should worry us about the AI economy and part action-oriented manifesto aimed at the broader civil society community. It met the moment — a moment when both the diagnosis and what to do about it were sorely missing, and in their place were narratives about AI's omniscience and inevitability. We argued that the AI boom was further entrenching the concentration of power within a very narrow slice of the tech industry, and I think we effectively cut through the hype to refocus attention on AI's impacts on society and on the economy — and to refuse the assumption that any of this was inevitable.

Later in the year, we were able to bring this argument to a room full of government leaders and top AI executives at the UK AI Safety Summit, where I was one of only three civil society voices representing the public interest. It was a lesson in the power of a compelling counter-narrative that refocuses attention when it's easy to get swept up in the tailored, and often self-serving, narratives from the tech industry.

I'm also really proud of a lot of the work I did during my time as Senior Advisor on AI to the Federal Trade Commission, working on emerging technology issues and some of the key enforcement actions in that domain. It was an incredible team to be part of, and I also learned the crucial lesson that even one individual in the right room at the right time really can influence policymaking.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

The tech industry, and AI in particular, remains overwhelmingly white and male and geographically concentrated in very wealthy urban bubbles. But I prefer to reframe away from AI's white male problem — not just because it's now well known, but because this framing can sometimes create the illusion of quick fixes or diversity theater that on their own won't address the structural inequalities and power asymmetries embedded in how the tech industry currently operates. It doesn't address the deep-rooted "solutionism" that's responsible for many harmful or exploitative uses of tech.

The real issue we need to contend with is the concentration of power in a small group of companies and, within those companies, a handful of individuals who have accumulated unprecedented access to capital, networks, and power, reaping the rewards of the surveillance business model that fueled the last decade of the internet. And this concentration of power is poised to get much, much worse with AI. These individuals operate with impunity, even as the platforms and infrastructures they control have enormous social and economic effects.

How do we navigate this? By exposing the power dynamics the tech industry works very hard to obscure. We talk about the incentives, infrastructures, labor markets, and environment that produce these waves of technology and shape the direction they take. This is what we've been doing at AI Now for nearly a decade, and when we do it well, we make it hard for policymakers and the public to look away — crafting counter-narratives and alternative visions for the proper role of technology in society.

What advice would you give to women seeking to enter the AI field?

For women, and for other minoritized identities or perspectives making critiques from outside the AI industry, the best advice I can give is to stand your ground. This is an industry that regularly and systematically tries to discredit critique, especially when it comes from people without conventional STEM backgrounds — and that's easy to do, given that AI is such an opaque industry that it can feel like you're always struggling to push back from the outside. Even when you've been in the field for decades, as I have, powerful voices in the industry will try to undermine you and your valid critique simply because you're challenging the status quo.

You and I have as much of a stake in the future of AI as Sam Altman does, because these technologies will affect us all — and may disproportionately harm people of minoritized identities. Right now, we're in a battle over who gets to claim expertise and authority on questions of technology in society, so we really need to claim that space and hold our ground.
