Todd Epp reports that South Dakota is among Midwestern states that “lag far behind in artificial intelligence use and efficiency, raising concerns the region may repeat the digital divide of the 1990s.” Epp’s chatbot makes that sound like a bad thing:
The findings highlight a widening digital divide across the Northern Plains. While AI is now a mainstream tool for work, study, and communication, the region’s lagging adoption means fewer productivity gains and risks repeating the broadband gap of the 1990s.
The reports found that Washington, D.C., leads the country in overall AI use, while Rhode Island ranks first in efficiency. States across the Midwest — especially South Dakota, Wyoming, and Kansas — trail well behind, according to studies published Sept. 4 by Digital Information World and Aug. 8 by Phrasly.ai.
South Dakota residents save an average of just 0.81 hours per month using AI tools, the lowest figure in the country, Phrasly.ai found. By comparison, Rhode Islanders save more than 32 hours monthly, nearly a full workweek, according to Digital Information World. South Dakotans also spend the least time per session, just over 10 minutes, compared to Delaware’s 17 minutes, the nation’s highest, Digital Information World reported [Todd Epp, “S.D., Northern Plains Trail Nation in AI Adoption,” Northern Plains News, 2025.09.11].
The source of Epp’s data, Phrasly.ai, is an AI company that wants us to buy its product. “Transform AI-generated content into 100% human text with Phrasly,” the company urges with a profoundly telling disregard for logical consistency (use a computer to generate human text?) and lack of self-awareness. “Save time on your writing, boost efficiency, and make your content indistinguishable from human writing.” You can get “Unlimited AI Humanizations, 15 AI Writer Generations/month, 2,500 words per process, Advanced AI Humanization,” and more, all for the low low price of $12.99 a month.
AI Humanization—yes, because you’re so bad at being human that you need a machine to be human for you. Shut up and let the machine talk for you.
I’d like to think South Dakota really is holding out on giving up its humanity to prediction machines. But check the methodology behind this report:
Insights were derived from internal user data collected over the past 30 days (June 16, 2025 – July 16, 2025), covering session length, engagement type, and productivity metrics, segmented by state. Population data was gathered using the latest census data (2024). Google Trends was used to assess AI detection queries during the same time period as above [“Written by Daniel Anderson, Senior Content Strategist at Phrasly AI”, “The AI Adoption Gap: Which States Are Leading and Lagging in the Use of Generative AI?” Phrasly.ai blog, 2025.08.08].
I put “Daniel Anderson” and his job title in scare quotes, because Google and DuckDuckGo searches for “‘Daniel Anderson’ Phrasly” produce no LinkedIn page, no online résumé, no interviews, nothing but Phrasly.ai blog posts, suggesting “Daniel Anderson” is just a “humanized” fiction slapped on top of an AI-generated sales pitch. Nor am I finding Phrasly among lists of most popular generative AI web tools. The “internal user data” on which “Daniel Anderson” bases this particular “precise narrative on language technology” appears to come from one bottom-tier company from one month when most South Dakotans go to the lake. It does not appear to capture data from leading generative-AI service companies.
This report by “Daniel Anderson” is similar to what I might say about how much news people read in the United States: “According to internal user data from Dakota Free Press, South Dakotans lead the nation in reading news, while Mississippi, Utah, and Connecticut lag in adoption of daily current-affairs content consumption, highlighting a widening civic divide.” But I’m a human committed to creative and factual writing, not an algorithm designed to boost clicks and sell product, so I won’t say that.
Related Reading: Big Tech’s own research shows that using AI may erode critical thinking:
Research from major tech companies on their own products often involves promoting them in some way. And indeed, some of the new studies emphasize new opportunities and use cases for generative AI tools. But the research also points to significant potential drawbacks, including hindering developing skills and a general overreliance on the tools. Researchers also suggest that users are putting too much trust in AI chatbots, which often provide inaccurate information. With such findings coming from the tech industry itself, some experts say, it may signal that major Silicon Valley companies are seriously considering potential adverse effects of their own AI on human cognition, at a time when there’s little government regulation.
“I think across all the papers we’ve been looking at, it does show that there’s less effortful cognitive processes,” said Briana Vecchione, a technical researcher at Data & Society, a nonprofit research organization in New York. Vecchione has been studying people’s interactions with the chatbots like ChatGPT and Claude, the latter made by the company Anthropic, and has observed a range of concerns among her study’s participants, including dependence and overreliance. Vecchione notes that some people take chatbot output at face value, without critically considering the text the algorithms produce. In some fields, the error risks could have significant consequences, experts say — for instance if those chatbots are used in medicine or health contexts [Ramin Skibba, “Are We Offloading Critical Thinking to Chatbots?” Undark, 2025.09.12].
(Pat Powers uncritically reposts Epp’s article; I take a moment to study it, critique it, and put it in context. But Pat’s always been some sort of bot.)