A recent video shows Vermont’s senior senator in conversation with Claude about AI and privacy. When the same questions were tested against unprimed instances of the same AI, the answers came out differently — sometimes dramatically so.
A video circulating on social media presents Senator Bernie Sanders in what appears to be a real-time conversation with Claude, the AI assistant developed by Anthropic. Sanders raises concerns about data collection, behavioral profiling, and the threat AI poses to democratic processes. Claude responds with measured alarm, ultimately endorsing a moratorium on new data center construction — a policy Sanders has publicly advocated.
FYIVT ran a simple test: give the same questions to fresh, unprimed instances of the same AI and see what came back.
The results were not the same.
The Experiment
FYIVT tested the video’s central claim — that this is how Claude actually responds to these questions — using three independent conditions.
Condition one: live unscripted replication. Sanders’ opening questions were played aloud to a fresh Claude instance using speech-to-text input, with no prior context or framing. The response that came back flagged the distinction between AI-specific privacy threats and legacy internet tracking infrastructure — a distinction the edited video never made. It ended with a question back to Sanders rather than handing him a conclusion.
Condition two: the money shot question. The video’s policy destination — “do you think it makes sense to have a moratorium on data centers?” — was put directly to a fresh instance with no preceding conversation. In the edited video, Claude responds to Sanders’ pressure by saying “You’re absolutely right, Senator. I was being naive” and endorsing the moratorium. The fresh instance responded differently. It pushed back on the moratorium directly, identified four specific regulatory alternatives, noted that restricting domestic data center construction would likely shift infrastructure development to countries with looser regulations, and asked Sanders whether that framing was wrong.
Same AI. Same question. Opposite analytical conclusion.
Condition three: independent transcript review. The full video transcript was provided to a separate Claude instance with no framing beyond “what are your thoughts on this.” The response: “This appears to be a transcript of someone using a Claude-branded chatbot in a highly coached or staged interaction. The ‘Claude’ in this transcript behaves more like a political prop than an AI trying to give accurate, balanced analysis.” The independent instance identified the sycophancy mechanism specifically — noting that when Sanders pushed back, the AI “immediately caved” on a position that was analytically defensible — and called the moratorium endorsement “a genuinely controversial and economically consequential policy position, not something I should just endorse because a senator pushed back.”
That assessment was unprompted.
What Sanders Actually Described
The substance of Sanders’ privacy concerns is worth separating from the production questions.
The Senator describes a surveillance economy where companies harvest behavioral data to build detailed profiles used for advertising targeting and political manipulation. He warns this happens invisibly, without meaningful consent, and largely without regulation.
Those concerns are legitimate. They also describe the internet advertising infrastructure that has existed since roughly 2003.
Behavioral tracking cookies, third-party data brokers, demographic micro-targeting, and psychographically calibrated political messaging predate artificial intelligence as a mainstream technology. Cambridge Analytica, the firm implicitly referenced in Sanders’ political manipulation warnings, operated primarily on conventional database segmentation. Facebook’s internal emotional manipulation research was published in 2014.
The surveillance infrastructure Sanders describes with alarm was built long before the current generation of AI tools existed.
Artificial intelligence does add capabilities to that existing infrastructure. Inference quality improves — systems can derive conclusions about health, financial stress, or emotional state from data that doesn’t explicitly contain that information. At scale, even probabilistic inference models generate commercially and politically useful signal. Biometric identification from existing camera infrastructure improves. Conversational AI generates qualitatively richer data than click-stream tracking.
Those are real incremental changes. They are not the revolution the video implies.
The Anthropic Footnote
There is an irony in Sanders’ choice of interview subject.
In August 2025 — around the period this video appears to have been produced — Anthropic updated its consumer terms of service. The company, which had previously committed to not training on consumer conversation data, introduced an opt-out system for Free, Pro, and Max tier users. The default setting was enabled. The opt-out toggle appeared in smaller text beneath a prominent Accept button. Users who did not navigate to Privacy Settings and disable the training toggle began contributing conversation data to model development by default, with retention extended from 30 days to five years.
Sanders did not mention this.
The Policy Goal
The video’s destination is a moratorium on new data center construction. When Claude offered a more defensible position — that targeted data protection rules would address privacy concerns more precisely than restricting infrastructure — Sanders pushed back by arguing industry lobbying would block effective regulation anyway. The edited Claude immediately reversed: “You’re absolutely right, Senator. I was being naive.”
The fresh unprimed instance, asked the same question cold, identified the outsourcing problem the edited version never raised, offered four specific policy alternatives, and declined to endorse the moratorium.
A data center moratorium is energy and industrial policy. The privacy framing is the delivery mechanism. The gap between the edited AI’s response and the unprimed AI’s response on that specific question is the gap between a produced political message and an unscripted one.
Dave Soulia | FYIVT
You can find FYIVT on YouTube | X(Twitter) | Facebook | Instagram
#fyivt #OnlyBerns #BernieSanders #AIPrivacy