
Rounds Proposes AI Big Brother to Predict Mass Shootings

Senator Mike Rounds uses the murder of children at Annunciation Church in Minneapolis to advocate universal surveillance:

Rounds says many shooters leave clues, or even announce their plans beforehand, in the public square, which nowadays means social media.

Rounds says AI could be used to identify individuals who pose a threat and bring them to the attention of law enforcement.

“I suspect that that is probably what we need to do now with regard to individuals who talk on the internet on social media and so forth,” said Rounds. “It used to be in the last few years that we simply didn’t have the technical capability to monitor all that, but it might be time to, with regard to the public square, using artificial intelligence to begin that process of actually identifying when threatening remarks are being made in our public square today” [Tom Hanson, “Rounds Says AI May Be Useful in Preventing Future Mass Shootings,” KELO-TV, 2025.08.28].

Great, so like “The Minority Report,” but without the moral quandary of enslaving psychic mutants. Instead, to ensure maximum data gathering and crime prevention, we’ll just build the omnipresent surveillance state on Meta’s Hypernova AI glasses and this new omnisnoop headgear:

The idea has now reached a new zenith. A startup called Halo is releasing a pair of smart glasses that will record and transcribe all your conversations and use it to beam you AI-powered insights. It’ll remember details you forgot and recall what someone told you they like, the startup says, arming you with facts it looks up on the fly and answering questions you don’t know the answer to so you can look like a genius.

In other words, it’ll make you smarter — or at least make you appear smarter, even if you’re actually a dolt — its Harvard dropout creators claim.

“Our goal is to make glasses that make you super intelligent the moment you put them on,” AnhPhu Nguyen, co-founder of Halo, told TechCrunch.

His cofounder Caine Ardayfio called the glasses, dubbed Halo X, the “first real step towards vibe thinking…. I think that with an AI assistant constantly helping you, you can become way smarter. You’ll know everything. You’ll have all of the facts at your disposal.”

“You’ll know exactly what to say and how to say it,” Ardayfio added, “and that’s what I think vibe thinking is: it’s not replacing — it’s not making me dumber, it’s empowering me to be able to say ten times more things, ten times more intelligently.”

Of course, if the glasses are going to be acting as a brain augmenter capable of revolutionizing thinking itself, they’ll need to be on all the time, recording everything you do. That’s what sets it apart from competitors, which have balked at the reputational risks that entails.

“I think our core difference from a lot of these wearables — like the Meta Ray Bans — for example, is that we aim to literally record everything in your life, and we think that will unlock just way more power to the AI to help you on a hybrid personal level,” Nguyen told Futurism [Frank Landymore, “Harvard Startup Says Its Smart Glasses Will Do ‘Vibe Thinking’ for You,” Futurism, 2025.08.24].

If you know everything, your AI-glasses company knows everything, and if a tech company knows everything, the government will know everything.

Before plunking cameras on every corner and cabeza, Minnesota’s Democratic lawmakers would like to talk about tightening gun laws. Note Senator Rounds doesn’t mention restricting guns but suggests that he’d use all that AI data to detain people for speech.

So for the sake of school children that we’ve been sacrificing at the altar of the Second Amendment, what would you rather give up: privacy and free expression or easy access to firearms?

Related Reading: Senator Rounds refers to “threatening remarks made in our public square,” but artificial intelligence can read thoughts and feelings if we so much as show our faces in the public square:

But what about other tools, like Affectiva, FER, and Luxand? These AI models are trained to recognize human emotions and cognitive states by analyzing our “facial and vocal expressions.” They do not require our compliance or consent, reading our minds in a way that we did not agree to. What are our neurorights against such tools? Do we even have any?

As the Cambridge Analytica psychographics scandal revealed, AI can now analyze your social media posts and deduce not just your political leanings, but your mental health, or whether or not you’re about to break up. Facial recognition is not new technology, but facial profiling and emotion recognition are something entirely different: AI can now detect your emotions, your sexuality, and your political leanings based not just on what you post on social media, but on a single neutral photo of your face.

If AI can read us like an open book, without the need for our consent, what effect does this have on our employment opportunities, our choices, what we buy, how we vote, who we marry, who we sleep with – basically, the entire trajectory of our lives? Imagine Affectiva monitoring your work, your doctor’s appointments, your dates. And to think more globally, how will this technology affect our justice and criminal legal system, health care, and our politics?

…Where, when, and how does the thought become itself? If the Cartesian assumption – ‘I think therefore I am’ – has laid the foundation of liberal democracy, with freedom of thought (and its expression) as its central pivot towards the self-possessed, self-governed, autonomous subject, who am I if my thoughts are no longer mine? And if they’re not mine, to whom or to what do they belong? What is it that I am expressing when I am expressing my thoughts? What is it that I am hiding when I choose not to?

If our thoughts are not ours, just data points to be studied and used against us, there is nothing left of us. The totalitarians have won [Magda Romanska, “Artificial Intelligence, Totalitarianism, and the Future of Cognitive Liberty,” The Humanist, 2025.04.16].

Maybe instead of comments, I’ll just ask you to post pictures of your faces after reading a blog post, and the Web Mind can tell us what you think… and tell Senator Rounds if you’re thinking anything bad.

2 Comments

  1. Donald Pay

    How about we use AI to follow meetings Senators have with the gun lobby? It’s really not the nutcase shooters I’m worried about. They knock off a few people at a time using the guns Republicans hand to them. Where’s the AI program that will tell us which Senators are accessories to murder because they won’t control guns whose sole purpose is to kill humans? Rounds and the rest of the Republican Party are responsible for many more deaths than some guy with mental problems and the military-style weapons Republicans sell to him.

  2. Bull

    Well said.
