
AI Bias in Action: ChatGPT Warns Republican Fundraising Links Are Unsafe, Democrat Links Are Fine
by Lucas Nolan
A marketing expert revealed on Friday that the ChatGPT platform displayed safety warnings for links to Republican fundraising websites while not showing similar alerts for Democratic fundraising sites. OpenAI blamed the bias of its AI system, which is the subject of the first chapter of the new book CODE RED, on a “technical glitch.”
The New York Post reports that OpenAI announced Friday that a “technical error” caused its ChatGPT platform to display safety warnings for links to Republican fundraising websites while not showing similar alerts for Democratic fundraising sites. The issue came to light when users discovered that links to WinRed, the official Republican Party donation platform, were being flagged as potentially unsafe, while links to ActBlue, the primary Democratic campaign fundraising platform, generated no such warnings.
WILD. ChatGPT universally marks @WinRed links as potentially unsafe.
Of course ActBlue links are totally fine. pic.twitter.com/DXzPuwSP80
— Mike Morrison 🦬 (@MikeKMorrison) March 20, 2026
Digital marketer Mike Morrison first brought attention to the discrepancy when he posted about his discovery on X. Morrison asked ChatGPT to generate links for various Democratic and Republican political campaign merchandise stores. The AI system provided links to GOP stores hosted by WinRed, but accompanied them with a warning message asking users to “check this link is safe.”
The warning message further stated that the link was not verified and might contain data from the user’s conversation that could be shared with a third-party site. Users were cautioned to ensure they trusted the link before proceeding. No such warning appeared when Morrison clicked on a link to an ActBlue-run store.
“WILD. ChatGPT universally marks [WinRed] links as potentially unsafe,” Morrison wrote in his social media post. “Of course ActBlue links are totally fine.”
OpenAI responded swiftly to the incident. Spokesperson Kate Waters said the behavior should not be occurring and was being addressed, and provided a statement explaining the company’s investigation into the matter.