President Donald Trump's war on "woke" has come for AI chatbots.
On Wednesday, the White House issued an executive order requiring AI models used by the federal government to be ideologically neutral, nonpartisan, and "truth-seeking."
The order, part of the White House's new AI action plan, said that AI should not "manipulate responses in favor of ideological dogmas" such as diversity, equity, and inclusion. The White House said it would issue guidance within 120 days outlining exactly how AI makers can show they are compliant.
As Business Insider's past reporting shows, stripping bias out of AI is easier said than done.
Why is it so difficult to create a truly "neutral" AI?
Removing bias from AI models is not a simple technical adjustment, and it is far from an exact science.
The later stages of AI training rely on the subjective judgment calls of human contractors.
This process, known as reinforcement learning from human feedback, matters because topics can be vague, contested, or difficult to define neatly in code.
Directives on what counts as sensitive or neutral are set by the tech company building the chatbot.
"We don't define what neutral looks like. It depends on the customer," Rowan Stone, CEO of data-labeling firm Sapien, which works with customers such as Amazon and Midjourney, told BI. "Our job is to make sure they know exactly where the data comes from and why it is the way it is."
In some cases, tech companies have recalibrated chatbots to make them less "woke," or more flirty or engaging.
Companies are also already trying to make their chatbots more neutral.
BI previously reported that contractors on Meta and Google projects were often told to flag and penalize "preachy" chatbot responses that sounded moralizing or judgmental.
Is "neutral" the right approach?
Sara Saab, vice president of product at AI and data training company Prolific, told BI that aiming for a completely neutral AI system may be the "wrong approach" because "human populations are not completely neutral."
Instead, Saab said, we should start thinking of AI systems as representations of ourselves, which means giving them the training and fine-tuning they need to know, contextually, what a culturally appropriate tone and pitch is for any human interaction.
Tech companies also need to consider the risk of bias creeping into AI models from the datasets they are trained on.
"Bias will always exist, but the key is whether it's there by accident or by design," Sapien's Stone said. "Most models are trained on data where you don't know who created it or what perspective it came from."
Tweaking Big Tech's AI models can lead to unpredictable and harmful outcomes
For example, earlier this month, Elon Musk's xAI rolled back a code update for Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.
The bot's updated instructions had included telling it to "tell it like it is."