Senator Al Novstrup (R-3/Aberdeen) defends his lazy reliance on artificial intelligence for his Legislative work by saying reciting robot text is just like going to the library or reading newspapers:
In an interview, the 24-year lawmaker told South Dakota Searchlight he’s used Gemini and a handful of other generative AI tools to build arguments for or against bills, likening it to pre-AI internet searches, and to pre-internet sourcing for floor speeches.
“Prior to Google searches, we relied on the Dewey Decimal System,” Novstrup said, referencing a cataloging framework used by libraries to sort books and other physical media. “Before that, we relied on our friends and neighbors, or the newspaper or the radio” [John Hult, “Artificial Intelligence Crept into Lawmaking in 2026, Prompting Excitement—and Concern,” South Dakota Searchlight, 2026.04.03].
No, Senator Novstrup, your AI tools are not the Dewey Decimal System. The Dewey Decimal System tells us where to find books so we can read them ourselves. AI tools scoop up all the text in all the books they can find (and newspapers, blogs, tweets, online chats, advertisements, Russian propaganda…), do profound math you don’t understand, and then tell you, “Here’s what someone might say in response to what you just asked” so you don’t have to bother reading any relevant books yourself.
Your AI tools are not your friends and neighbors. Your friends and neighbors elected you. You serve them. You owe them respectful attention to their claims and concerns, respectful correction and instruction when they are ignorant of or mistaken about important public issues, and respectful exercise of the moral, intellectual, and rhetorical skills that they elected you to exert on their behalf. You and your constituents owe your chatbot nothing.
Your AI tools are not the newspaper or the radio. The newspaper, the radio, and the rest of the press offer information prepared by journalists, humans with a professional and moral commitment to telling the truth and educating the public to support democracy. AI tools don’t care about truth, education, the public, or democracy; they just run programs, spitting out strings of words satisfying statistical parameters.
AI Novstrup isn’t the only lazy legislator relying on artificial intelligence to speak for his constituents. Hult finds Skynet is infiltrating multiple districts with unreliable robot language:
Several lawmakers interviewed by Searchlight said they’d used some form of artificial intelligence to craft arguments or research talking points.
Others went beyond that. Rep. Kent Roe, R-Hayti, said he runs all his bill drafts through Grok, a generative AI tool developed by the Elon Musk-owned company xAI, to refine his ideas. Asking questions more than once, refining search queries and cross-checking the answers are key to getting trustworthy results, said Roe.
The more human interaction, he said, the better the outcome.
“You just recycle it 10 times, and if it comes up with 10 different answers, you know it’s not working,” said Roe, who also uses AI at his day job as an appraiser to speed up reporting on property values. “If you come up with nine out of 10 answers that are very similar, well, then you know it’s doing its job” [Hult, 2026.04.03].
But do you know, Rep. Roe? Or is Grok just giving you the same Musk-biased, reality-detached answer every time?
Even when we catch AI lying to us, it doesn’t learn and will lie again. So says Legislative Research Council director John McCullough:
McCullough’s not convinced any AI tool can perform trustworthy analysis of a bill’s potential conflicts with existing laws, and he doesn’t see a future where it replaces the human eyes of trained council staff.
“By design, we publish the law,” McCullough said. “Implicit in our mission or duty is to be the caretaker of the law. So you have to have a centralized drafting agency that is attempting to make the law be consistent throughout.”
Which is not to say that AI is strictly off-limits at the council in every context. Most legal research tools, including those used by the council, now incorporate AI to help lawyers find what they’re after more quickly.
South Dakota has the smallest state legislative staff in the U.S., McCullough said, so “if we can use technology to help us, then we’ll use technology to help us.”
But the help it can offer has limits, in McCullough’s experience. He recently asked a generative AI tool a question about “some obscure legislative issue” and got an answer that cited a South Dakota law that doesn’t exist. He told the AI tool as much, and it apologized and told him the law actually came from North Dakota.
That law didn’t exist, either.
“It was basically hallucinating,” McCullough said [Hult, 2026.04.03].
Hult’s article shows Representative John Hughes (R-13/Sioux Falls) and Representative Scott Odenbach (R-31/Spearfish) embracing artificial intelligence to write legislation. But Senator Liz Larson (D-10/Sioux Falls) says AI is eroding the quality of Legislative work:
Larson was troubled this year by what she saw as inadequate due diligence before AI conclusions appeared in testimony or in speeches.
“Those standards of rigor have gone down in the past year,” Larson said. “People aren’t even ashamed to say ‘I just looked this up.’ If something is on the Senate floor and up for final passage, you probably should’ve done the research before it got to that point” [Hult, 2026.04.03].
Senator Larson is seeing what researchers call “cognitive surrender”:
The reality is that chatbots like OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude still make regular mistakes. According to an October study by the BBC, even the most advanced AI chatbots gave wrong answers a whopping 45 percent of the time.
But many users don’t understand that reality. As detailed in a new paper, University of Pennsylvania postdoctoral researcher Steven Shaw and marketing professor Gideon Nave found that in a series of experiments, users tended to take the output of ChatGPT at face value even when it gave them the incorrect answer.
Across a series of experiments, participants were asked to answer a variety of reasoning and knowledge-based questions. Despite making the use of ChatGPT optional, over 50 percent of them chose to use the chatbot to answer the questions.
The researchers were testing a key theory: whether users would be willing to believe what the AI was telling them regardless of accuracy, in what they termed a “cognitive surrender” that effectively overrode their intuition and deliberation process.
In the most striking experiment, involving 359 participants, subjects followed AI’s correct advice 92.7 percent of the time — and a still-considerable 79.8 percent of the time when the AI gave them the wrong answer.
“While override rates were substantially higher on AI-Faulty than AI-Accurate trials, participants followed faulty AI recommendations on roughly four out of five chat-engaged trials,” the researchers wrote [Victor Tangerman, “Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong,” Futurism, 2026.03.28].
Novstrup, Roe, Hughes, and Odenbach are all running for reëlection. They all have challengers whom I hope (a) don’t have AI chatbots write their speeches and (b) will make a vociferous case that our elected officials, perhaps more than any other knowledge workers, have an obligation to exercise their own minds, speak in their own voices, and resist the temptation to surrender our intellect and identity to the machines.
If you use AI to do most of your work for you, you can be replaced by AI.
Indeed, why waste time and legislator pay on any men in the middle? Why not simply have the Governor prompt Gemini or ChatGPT: “Compose 100 bills that would (1) improve the quality of life in South Dakota, (2) be acceptable to a majority of South Dakotans, and (3) not violate any part of the South Dakota Constitution and the United States Constitution”? Let the LRC review the outputs for obvious hallucinations, then simply sign them all into law.
I have to agree: Sen. Novstrup’s understanding of AI is very 20th century — even boomer. AI does not just harvest information like our favorite librarian in her well-kept vertical file (how’s that for showing my age?). It is not a Google search that requires the reader to invest effort. AI is the whole research and production/synthesis process done for the button pusher.
Which begs the question: do the good people of the district need someone to push the button, or can they directly elect ChatGPT to Pierre?
I started noticing when Rounds was governor that he did not put much effort into the legislative session. That laziness is now prevalent among most SD legislators. The SD politicians put effort into what I call the “SEX and GUNS” bills because those earn the legislators contributions, but beyond that most SD politicians are just lazy.
There’s a great spot near Aberdeen that’s flat, has electricity, and would be perfect for a data center. Quick, before Bernie and AOC pass a law making it harder to do so.
You’d have to change the name, though. It’s way too Bruce Springsteenish now.
Frederick?
Mr. Novstrup, the elder, is an elected institution in Aberdeen City and cannot be defeated through normal means. He has a swell haircut, but is known to be dimwitted. Thus, it is good Mr. Novstrup uses the AI to assist him, for it results in better representation for Aberdeenians. Mr. H should laud his old nemesis in that Mr. Novstrup, the elder, even uses computers at such a level.
The fact that they use AI, which is cutting-edge technology, is initially surprising. Then you consider that AI does all the work and their only job is to half-heartedly ask, “Does it feel true?” Then it makes sense.