Opinion | Is the Woke A.I. Something to Fear?

Imagine a short story from the golden age of science fiction, something that would appear in a pulp magazine in 1956. Our title is “The Truth Engine,” and the story envisions a future where computers, those colossal, floor-to-ceiling contraptions, become powerful enough to point human beings to the answer to any question they might ask, from the capital of Bolivia to the best way to marinate a steak.

How would such a tale end? Surely with some sort of twist, a revelation of the hidden agenda lurking behind the promise of all-encompassing knowledge. For example, maybe there’s a Truth Engine 2.0, smarter and more creative, that everyone is eager to get their hands on. And then a band of rebels discovers that version 2.0 is fanatical and unhinged, that the Engine has merely been conditioning humans for totalitarian brainwashing or involuntary extinction.

This flight of fancy is prompted by our society’s own version of the Truth Engine, the oracle of Google, which recently introduced Gemini, the latest entrant in the great artificial intelligence race.

Users quickly noticed certain … peculiarities with Gemini. The most notable was its struggle to render accurate depictions of Vikings, ancient Romans, American founding fathers, random couples in 1820s Germany and various other demographics usually characterized by a lighter shade of skin.

Maybe the problem was simply that the A.I. was programmed for racial diversity in stock imagery, and its historical renderings had somehow (as a company statement put it) “failed to meet expectations” — offering, for instance, African and Asian faces in Wehrmacht uniforms in response to a request to see a German soldier circa 1943.

Yet the way Gemini answered questions made its nonwhite defaults seem more like a strange emanation of the A.I.’s underlying worldview. Users reported being lectured on “harmful stereotypes” when they asked to see a Norman Rockwell image, being told they could see pictures of Vladimir Lenin but not Adolf Hitler, and being turned down when they requested images depicting groups specified as white (but not other races).

Nate Silver reported getting answers that seemed to channel “the politics of the median member of the San Francisco Board of Supervisors.” The Washington Examiner’s Tim Carney found that Gemini would make a case for being child-free but not for having a large family; it refused to provide a recipe for foie gras on ethical grounds but explained that cannibalism was a question with a lot of shades of gray.

To call this kind of output “woke A.I.” is not an insult. It’s a technical description of what the world’s leading search engine chose to release.

There are three reactions one might have to this experience. The first is the typical conservative reaction, less shock than vindication. Here we get a glimpse behind the curtain, a revelation of what the powerful people responsible for our daily information diet actually believe — that anything tainted by whiteness is suspect, that anything that seems even vaguely non-Western deserves special deference, and that history itself must be reimagined and decolonized to be fit for contemporary consumption. Google overreached by being too obvious in this case, but we can assume that the entire architecture of the modern internet carries a subtler bias in the same direction.

The second reaction is more relaxed. Yes, Gemini probably reflects what some of the people responsible for ideological correctness in Silicon Valley believe. But we don’t live in a science-fiction story with a single Truth Engine. If Google’s search bar delivered Gemini-style results, users would abandon it. And Gemini is being mocked all over the non-Google internet, especially on a rival platform run by a notably unwoke billionaire. Better to join in the mockery than to fear the woke A.I. — or better still, to join the singer Grimes, the unwoke billionaire’s sometime paramour, in marveling at what emerged from Gemini’s tortured algorithm, treating the results as a “masterpiece of performance art,” a “shining star of corporate surrealism.”

The third reaction considers the first two and says, well, a lot depends on where you think A.I. is going. If the whole enterprise remains a souped-up version of search, a generator of middling essays and endless disposable distractions, then any attempt to use its powers to enforce an extreme ideological agenda is likely to simply be swallowed up by all the dreck.

But that isn’t where the creators of something like Gemini imagine their work is going. They imagine themselves to be building something close to a god, something that might be a Truth Engine in full — solving problems in ways we can’t even fathom — or that might become our ruler and successor, making all our questions obsolete.

The more seriously you take that idea, the less amusing the Gemini experience becomes. Handing the power to build a chatbot to fools and commissars is an amusing corporate blunder. Handing the power to summon a demigod or minor demon to fools and commissars seems more likely to end the way many science-fiction stories do: unhappily for everybody.

