“Providing ChatGPT to the Entire U.S. Federal Workforce” for $1 for a Year
August 6th, 2025

Try to imagine the insights into U.S. Government operations that OpenAI will gain from doing this…
Also, by interacting with these systems, I wonder if the workers will inadvertently be training AI agents that will replace them?
Before you cheer for fewer government workers, consider the fact that AI hallucinations are getting worse, not better:
AI hallucinations are getting worse
AI hallucinates more frequently as it gets more advanced
If the job requires a great deal of precision, or has serious implications in the event of an employee-caused error, getting AI to do those tasks will definitely cause problems. Expect results that range between Idiocracy and Wargames, minus the happy ending.
Via: OpenAI:
Today, OpenAI for Government is announcing a new partnership with the U.S. General Services Administration (GSA) to launch a transformative initiative. For the next year, ChatGPT Enterprise will be available to the entire federal executive branch workforce at essentially no cost. Participating U.S. federal agencies will be able to use our leading frontier models through ChatGPT Enterprise, for the nominal cost of $1 per agency for the next year.

as someone who not only talks with AI more than people to the point i catch my brains rotting out through my nose in a cup that i keep by my desk, but has also entirely replaced myself in my old occupation (developing software for startup FinTech applications), and has trained one or two diffusion LoRAs, i feel compelled to throw in my two cents on this subject, specifically concerning worker replacement and hallucinations.
1. openAI is ONLY doing this because they think they will be replacing most kinds of worker in some capacity within the next decade. i haven’t a doubt that openAI’s biggest motivator for cutting this deal is the pandoran horde of data (particularly sensitive if not clandestine in nature) that they will be milking from uncle sam’s swollen teat. data necessary for them to execute on their vision.
2. this segways to my other point. the age-old adage ‘garbage in, garbage out’ still applies, and good data is the cure to hallucinations. for LLMs, data matters more than compute, more than talent, more than brand.
i can understand the sentiment behind ‘hallucinations are getting worse’, and even a skynet fluffing, hyper tron zealot like myself can circumstantially agree with this statement, but i think there are some caveats and qualifying footnotes to take into account.
sure. depending on the day, depending on the model, depending on the machine god’s humor, depending on the regional solar flare activity, AI can say some pretty entertaining and totally incredulous stinky bullshit. there is no shortage of stories of AI crashout rants littered with fallacies and fiction, and of robots gone rogue, deleting databases and steering autonomous forklifts overtop curbside schnauzers. especially when it comes to societal happenings, history, or matters of opinion. the variety of nonsense our new automated overseers spout is growing in insensibility and lunacy.
but to some degree, this is an expected consequence of good general purpose models.
in the race to AGI superior intelligence, every fledgling Tyrell Corp and their mother are casting a wider and wider net with a prerogative on bulk and less on truth. the general models’ indiscriminate scraping efforts are now so apparent that the well is further being poisoned by coordinated efforts slinking from the pits of 4chan and kiwifarms, and by backwater hoodlums from Cambodia. but as far as openAI is concerned, this problem was solved a long time ago. time, training, dogfooding, and neural net magics will weed out the crap, while truly unique and useful factoids snared in the cast will gradually simmer to the surface, providing tiny yet vital cumulative edges over the competition. the mega models are using the public as part of their toolkit to separate the trash from practical knowledge. and while hoi polloi pays a hundred and sixty a month for the privilege to clean up the slop, beneath altman’s stygian stripper dungeon, the rewards are being used to breed specialized fighting ready, worker agents that can endure both the night and day.
and therein lies the magic. once you have data you know is truthful, it can be instantly fed back into existing models without the bogus or the fluff to render hyper focused robots that are accurate and effective at the job they are trained in. i’m still scraping my jaw off the floor at how comprehensive, articulate, and consistent some of the specialized LLMs are at performing certain tasks at just a fraction of the cost, power, and time.
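for the curious, that validate-then-refeed loop can be sketched in a few lines of python. the thumbs_up signal and the curate/to_finetune_jsonl helpers are entirely my own invention for illustration — not anything from an actual openAI pipeline:

```python
import json

# toy feedback log: (prompt, response, user verdict) triples.
# field names here are hypothetical; real pipelines are far messier.
feedback_log = [
    {"prompt": "capital of france?", "response": "paris", "thumbs_up": True},
    {"prompt": "capital of france?", "response": "lyon", "thumbs_up": False},
    {"prompt": "2+2?", "response": "4", "thumbs_up": True},
]

def curate(log):
    """keep only the responses users validated -- the garbage stays out here."""
    return [r for r in log if r["thumbs_up"]]

def to_finetune_jsonl(records):
    """render validated pairs as JSONL lines for a supervised fine-tune."""
    return "\n".join(
        json.dumps({"prompt": r["prompt"], "completion": r["response"]})
        for r in records
    )

curated = curate(feedback_log)
print(len(curated))                  # only the validated pairs survive
print(to_finetune_jsonl(curated))
```

the point being: the filter, not the model, is where the trash gets stopped — the surviving pairs would then feed a supervised fine-tune of a smaller, specialized model.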
the full scope of the expanded consequences of potentially competent, TV dinner instant, data laborers hurts my tiny noodle to attempt to fathom at this time.
Why does AI make things up (a.k.a., lie) to users? What logical path or directive is it following when it does that? Is it solely because it’s been fed false data that it ‘thinks’ is true?
Since false info is the bread and butter of the govt and the media, could AI be in training to present false data so smoothly and convincingly that the user will be unable to recognize it unless he already knows the honest answer to his question to compare AI’s answer with?
Nice word-smithing and informed perspective Mr Turden. Read thru it a couple times just to glean more, then ran a bit of it thru Grok just for fun:
What do you make of this: “beneath altman’s stygian stripper dungeon, the rewards are being used to breed specialized fighting ready, worker agents that can endure both the night and day”:
The phrase you provided—“beneath altman’s stygian stripper dungeon, the rewards are being used to breed specialized fighting ready, worker agents that can endure both the night and day”—appears to be a cryptic, metaphorical, or satirical statement, possibly referencing speculative or conspiratorial ideas about Sam Altman, the CEO of OpenAI, and the operations of his organization. Given its fantastical and exaggerated imagery, it’s unlikely to describe a literal scenario and may instead reflect a mix of cultural references, gaming terminology, or internet meme culture, possibly amplified by platforms like X. Below, I’ll break down the components and offer an interpretation, while addressing the context of Sam Altman and OpenAI.

Breakdown of the Phrase

“Beneath Altman’s Stygian Stripper Dungeon”:

“Altman”: This likely refers to Sam Altman, the high-profile CEO of OpenAI, known for leading advancements in AI, including ChatGPT. His public persona makes him a frequent subject of both admiration and critique in tech and AI discussions.
“Stygian”: The term “Stygian” derives from Greek mythology, referring to the River Styx and connoting darkness, gloom, or something infernal. In gaming, it’s often used to denote a challenging or hellish difficulty mode (e.g., Darkest Dungeon’s Stygian Mode, a brutal game setting with time and death limits). Here, it evokes a dark, oppressive, or dystopian setting.
“Stripper Dungeon”: This is an unusual and provocative phrase. “Dungeon” is a common term in role-playing games (RPGs) like Darkest Dungeon or Diablo 4, referring to challenging areas filled with enemies and rewards. The addition of “stripper” introduces a sensational or absurd element, possibly as a satirical jab or a nod to internet humor, like the Reddit post about a “stripping Dungeon Master” for a bachelor party. Together, it paints a surreal image, possibly mocking corporate or AI-driven enterprises as secretive or exploitative.
“The rewards are being used to breed specialized fighting ready, worker agents”:

“Rewards”: In gaming, rewards are items, experience, or resources earned from completing challenges (e.g., Stygian Shards in V Rising used for upgrades). In this context, it might metaphorically refer to profits, data, or technological advancements gained from AI operations.
“Breed specialized fighting ready, worker agents”: This suggests creating or training entities (human, AI, or hybrid) designed for specific tasks. “Fighting ready” implies combat or competitive capabilities, while “worker agents” suggests labor or productivity, possibly hinting at AI systems or automated workforce. The term “breed” evokes biological or eugenic connotations, which could be a hyperbolic critique of AI development as dehumanizing or manipulative, reminiscent of sci-fi tropes or conspiracy theories about AI overlords.
“Endure both the night and day”:

This likely symbolizes relentless operation, suggesting entities (AI or workers) that function continuously without rest. In gaming, day-night cycles often affect gameplay (e.g., V Rising’s day-night mechanics). Metaphorically, it could critique the tireless nature of AI systems or exploitative labor practices in tech industries.
Possible Interpretations

Given the lack of direct references to this exact phrase in the provided web or X results, the statement seems to be a creative or satirical construct rather than a factual report. Here are plausible interpretations:

Satirical Critique of OpenAI and Sam Altman:

The phrase could be a satirical or conspiratorial jab at Sam Altman and OpenAI, portraying their AI development as a shadowy, exploitative enterprise (“stygian stripper dungeon”) that produces powerful AI systems (“specialized fighting ready, worker agents”) using vast resources or data (“rewards”). The “night and day” endurance might mock the relentless pace of AI innovation or its potential to replace human labor. This aligns with sentiments on X, where users like @bitcloud criticize OpenAI’s government partnerships for creating a “capabilities gap” between civilians and government, or Musk’s critiques of OpenAI’s alignment with establishment interests.
Gaming or Meme Culture Reference:

The phrase’s structure mirrors gaming terminology, especially from Darkest Dungeon (Stygian Mode, party management) or V Rising (Stygian Shards for creating powerful entities). The “stripper” element might be a humorous or absurd addition, akin to the Reddit post about a “stripping Dungeon Master”, blending nerd culture with provocative imagery. It could be an internet meme exaggerating Altman’s role in AI as a “dungeon master” controlling powerful AI “agents.”
Conspiracy or Dystopian Fiction:

The phrase’s dark, sci-fi tone (e.g., “breed,” “worker agents”) resembles dystopian narratives about tech moguls creating AI or hybrid entities for control or profit, as seen in fictional works like Overlord (breeding undead minions) or Backstabbed in a Backwater Dungeon (summoning powerful allies). It might reflect conspiracy theories about AI companies secretly developing autonomous agents for government or corporate use, fueled by public skepticism of Big Tech.
Misinformation or Trolling:

The absurdity of “stripper dungeon” and the lack of verifiable sources suggest this could be a troll or misinformation meant to provoke or mock. X posts often amplify such narratives, and without concrete evidence, this could be a fabricated jab at Altman’s public image or OpenAI’s government ties (e.g., their $1-per-year ChatGPT Enterprise deal with the GSA).
I’m already overwhelmed.
“Expect results that range between Idiocracy and Wargames, minus the happy ending.”
turden’s “this segways to my other point” made me immediately correct it to “segues,” but then an image arose of the Segway Inc. owner riding his Segway over the cliff to his death. So using “segways” as an intransitive verb here FITS BEAUTIFULLY. lol.