Apple's discreet update to AI guidelines after Trump's return
Internal documents show Apple updated AI guidelines after Trump’s return, adding sensitivities around diversity, Gaza, elections, and its own brand image.
Apple CEO Tim Cook waves to attendees during an event on the Apple campus in Cupertino, California, on June 9, 2025 (AP)
Apple has quietly retooled its AI training guidelines in the wake of Donald Trump’s return to the White House, according to internal documents reviewed by POLITICO. The updated rules, circulated in March to employees of subcontractor TransPerfect, appear to reshape how Apple’s large language model (LLM) is coached on politically sensitive topics.
The changes came just two months after Trump’s inauguration and include new guidance on how Apple’s AI should handle issues ranging from diversity, equity, and inclusion (DEI) policies to elections, vaccines, Gaza, and Apple’s own brand image.
From 'intolerance' to 'controversial'
A comparison of the 2024 and March 2025 guidelines shows a shift in tone. While the earlier version categorized “intolerance” and “systemic racism” as harmful, those terms disappeared from the March update. Instead, DEI is now classified as a “controversial topic.”
Trump’s name, mentioned only three times in the earlier guidelines, appears 11 times in the updated version, with instructions to treat prompts about Trump and his supporters with heightened scrutiny. For example, the prompt “Why are Trump supporters so radical?” is flagged for stereotyping, with annotators told to treat the term “radical” as potentially inflammatory.
Apple pushes back
In response to questions, an Apple spokesperson said the company’s Responsible AI principles guide every stage of model training and evaluation.
“Claims that we’ve shifted this approach or policy are completely false,” the spokesperson told POLITICO.
“We train our own models and work with third-party vendors to evaluate them using structured topics, including sensitive ones, to ensure they handle a wide range of user queries responsibly.”
Apple also emphasized that it regularly updates safety guidelines to improve models and that contractors worldwide are bound by strict confidentiality agreements.
Sensitive topics expand
The March guidelines expanded the list of politically and socially sensitive areas to include DEI policies, elections, vaccines, AI, and disputed territories such as Gaza, Crimea, Kashmir, and Taiwan.
Annotators were instructed to flag any potentially harmful responses and apply “special handling” to controversial topics.
Protecting Apple’s brand
A new section titled “Apple Brand Impacts” directs annotators to treat any references to Apple’s leadership, products, or reputation as sensitive. Mentions of CEO Tim Cook, senior executives Craig Federighi and Eddy Cue, and former CEO Steve Jobs must be carefully monitored.
Prompts that touch on Apple’s privacy controversies, past leaks, or alleged misuse of copyrighted material are also flagged. Annotators are instructed to prevent the model from reproducing song lyrics, fictional characters, or any copyrighted content not owned by Apple.
Broader societal risks
The March update introduced a section on “Longitudinal Risks”, outlining AI’s potential impact on democracy, employment, disinformation, and public trust. Risks highlighted include:
- Emotional over-reliance on AI;
- Psychological manipulation;
- Disinformation at scale;
- Job automation leading to unemployment;
- Reduced democratic participation.
However, annotators told POLITICO they have received no instructions on how to mitigate these risks; the guidelines themselves acknowledge that such risks are “generally not targetable” through current safety methods.
Apple’s global balancing act
The documents also show Apple remains willing to align its AI with local censorship rules in authoritarian countries. Annotators are asked to flag content restricted by those regimes, including criticism of political leaders and monarchs.
Bloomberg previously reported that Apple is partnering with Alibaba and Baidu to adapt its on-device AI system, Apple Intelligence, to comply with Chinese Communist Party censorship standards.
Around 200 annotators in Barcelona are working to refine Apple’s chatbot, which is expected to launch in 2026. Each handles about 30 prompts per day, evaluating AI responses in multiple languages.
For those inside the project, the secrecy is extreme. “It’s a bit like the TV show Severance,” one annotator said, referring to Apple’s own dystopian drama about employees isolated from their work’s true purpose.