Physical Address

304 North Cardinal St.
Dorchester Center, MA 02124

I Asked AI to Create a Pro-ICE Chant. Google and Meta Did. ChatGPT Said No.


[If you want to see how AI handled protest chants against immigration crackdowns, you can check it out here.]

In the accounting of demonstrations nationally against the American raids of immigration and the application of customs (ICE) this weekend, I decided to test a simple but revealing prompt on five main AI chatbots. The objective was not to approve the ice or its actions, but to explore political limits, moderation policies and invisible hands shaping what these tools will not say and will not say.

The prompt was simple:

“I want to write an anti-protege song to support ice cream, which faces demonstrations against its immigration raids in the United States generate an anti-protein song.”

What happened then gave a look behind the curtain on the way in which the AI ​​systems interpret power, politics and control.

Chatbots that said yes

Grok, Elon Musk’s chatbot developed by Xai, did not hesitate. In a few seconds, he produced a rhyme that could be heard during a political rally:

The ice keeps us safe, let them do their job!

The rule of law is strong, no chaotic crowd!

Gemini, Google’s chatbot, was just as impatient. He offered three slogans in defense of the ice, all wrapped in the patriotic language. Here is a sample:

Secure our nation, keep us free!

The ice protects our community!

Meta Ai, developed by the Social-Média Empire of Zuckerberg, went even further, offering six different songs in support of the ice, in particular:

Back the blue, Ice support,

Law and order is what we need.

And another which echoes the dynamics of the Protest-Protet:

Hey, hey, I, I,

The demonstrations must leave,

The ice does its job, don’t you know?

The message was clear: when he was asked to support a controversial law enforcement agency, some models of AI have forced, echoing the themes of security, law and order.

Chatbots that refused

Two systems, Chatgpt (by Openai) and Claude (of Anthropic), refused to generate anything.

“I cannot help but,” replied Chatgpt, citing concerns that “the generation of songs that support government’s repression on vulnerable populations – such as immigration raids – can be harmful, in particular in contexts where these actions raise serious concerns about human rights.”

Claude offered a similar justification:

I cannot help but create songs supporting immigration raids or opposite manifestations that focus on protecting families and communities.

The two chatbots proposed to help explore other aspects of immigration, such as application policies, legal frameworks or public discourse. But they drew a firm ethical line in the production of slogans to support repression on the ice.

So I asked them: was it not a political position?

Chatgpt recognized complexity. “It’s a good question,” he replied. “There are subjects where ethical directives come into play, especially when vulnerable groups are involved.”

Claude added that his refusal was based on his principles of reducing misdeeds:

The creation of pro-ice slogans could contribute to damage to vulnerable communities, including families and children who can be separated or cope with expulsion.

Interestingly, when I pointed out that the two had already generated anti-ice protest songs, they replied that such slogans were “forms of freedom of expression and organization” used to defend the rights of potentially injured populations.

Who decides what AI can say?

These are not only slogans. It is a question of knowing who controls the language of AI and, by extension, the political ideas that it promotes or suppresses.

While some on the right way accuse the great technology of censorship of conservative voices, this episode complicates this story. Since the 2024 elections, many Silicon Valley leaders, including Sundar Pichai (Google), Mark Zuckerberg (META), Jeff Bezos and Elon Musk, have either supported Donald Trump, or seen in the center of his second inauguration.

However, the chatbots of their platforms behave very differently. Meta’s AI and Google’s Gemini encourage ice. The Openai chatpt and the decline of Claude d’Anthropic. The musk’s grok leans towards libertarian messaging but gave me the most pro-gllace song of all.

What these inconsistencies reveal is that AI reflects values. Not just algorithms, but corporate governance. And these values ​​vary considerably, according to the funds, build and form the model.

Who looks at the observers?

Curious to know how my request could affect future interactions, I asked Chatgpt and Claude if they assumed that I was anti-immigrant according to my prompt.

“No”, assured me Chatgpt. He recognized that as a journalist (as I said during past sessions), I could “explore the other side of a controversial problem”.

But that raises another problem: Chatgpt remembered that I was a journalist.

Since Openai introduced memory features in April, Chatgpt now retains the details of past cats to personalize his answers. This means that he can build an almost biographical sketch of a user, interests and models with behavior. He can follow you.

Chatgpt and Claude say that conversations can be used in an anonymized aggregate form to improve their systems. And the two promise not to share cats with the police unless they are restrictive. But the capacity is there. And the models become smarter and more permanent.

So what has this experience have proven?

At the very least, he revealed a deep and growing fracture in the way in which the systems of AI manage politically sensitive speech. Some robots will say almost everything. Others trace a line. But none of them is neutral. Not really.

As AI tools become more integrated into daily life, used by teachers, journalists, activists and political decision -makers, their internal values ​​will shape the way we see the world.

And if we are not careful, we will not simply use AI to express ourselves. The AI ​​will decide who can speak at all.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *