The arguments over how misled, bamboozled, or just plain dumb the voters were in the 2024 presidential election have long since passed the point of diminishing returns. We’ve all heard about voters having to look up whether Biden had dropped out on election day, about how voters who agreed with Democratic policies often voted for Trump anyway, and about how many voters just didn’t know what the hell was going on. But like I said in my What Is To Be Done series, in every election voters are wrong for a wide variety of reasons and often (hopefully less rather than more often) there’s not much you can do about it.
(By the same token, all those chucklefucks out there telling Democrats that to win those voters back they have to shit on trans people, or admit the cities they live in are hellholes, or — and this one is very popular among the dumbest people on God’s green earth — stop being so “elitist” are also missing the point. But you knew that.)
I have suggested that education — not adult education, though most grownups could certainly use it, but middle-school courses in civics and media literacy — would be helpful, at least after a while. (Whether we can survive until them young’uns are old enough to vote is another matter.)
But there’s something else that’s in the general category of education that I think can be of help. And the best thing about it is that it’s not, strictly speaking, a political remedy, though I expect its success would yield political dividends.
I don’t usually pimp other authors here, not just because I’m a narcissist but also for the understandable reason that my readers may see the excellent work I recommend and think, what am I doing reading this Edroso dipshit when I can do much better elsewhere. But I expect many if not most of you already know Rick Perlstein. His stuff is always prime, and his latest is especially good. It’s about the big problem with AI (generative AI, predictive AI, large language models, all that): Information corrosion, as he calls it. That is, it is not good at the thing it’s supposed to be great at — making logical inferences from data — and over several iterations it in fact gets worse rather than better. He uses as an example an AI analysis of his own work:
In the post, I offered an excruciating (to me) example: the first time I tried putting ChatGPT through its paces, not by asking it for a sonnet or a Keatsian ode, but “What does Rick Perlstein believe?” One of the things the confident listicle that came forth offered was actually the opposite of what Rick Perlstein believes: namely, the rank cliché that the biggest political problem America faces is “polarization.” No. Rick Perlstein actually believes the biggest political problem America faces is fascism, and that fighting it requires more polarization. And I should know. I’m Rick Perlstein!
Now, Perlstein is talking about AI that’s trying (or whose promoters are trying, or at least say they are trying) to come up with valid and meaningful conclusions. But we can see all around us how much worse it is (and how much more common it is) when AI is used to lie to us.
I’ve written a few times about it — about AI used to answer reporters’ questions with meaningless pseudo-answers they may not be alert enough to catch, and AI offered as a cheaper-but-just-as-good substitute for art. I’ve also written about it as a political tool, employed to create visual fantasies to match the verbal propaganda the candidate dishes out, as with these:
You will recognize these AI images as the same variety of six-finger shit that’s been mocked into the stratosphere over the past few years, not just for little inaccuracies like that but also for always looking just fake enough to trigger the suspicions (and often the revulsion) of all but the most credulous readers.
For me, the good news is this: what’s ugly and repellent about AI imagery seems to have remained ugly and repellent despite all the money and effort poured into making it more convincing. I mean every time I see a come-on that says BETTER THAN MIDJOURNEY or something like that, the example they show is something that any schoolchild could identify as AI — that is, as a fake.
The new AI Coca-Cola Christmas commercial that’s getting torched by talking heads is not a failure because it isn’t slick enough — these people can afford the slickest tech around, and in terms of production values it’s “good” — but because human beings immediately know what it is and instinctively disapprove.
This is a reaction that is visceral rather than sophisticated — and it gives me hope, not just that Americans will continue to reject this lurid fakery in their role as consumers, but that they’ll also come to reject it in their role as voters.
I’ll tell you why: Although I have heard and understood the complaints about Elon Musk and social media deceiving voters, and you know how I have complained about the Prestige Press and its role in carrying Tubby across the finish line, I really believe that a significant number of voters knew that all that stuff was bullshit — and many of them voted for Trump anyway, with the misinfo and disinfo serving as an excuse rather than as a reason. Maybe they really thought the fascist candidate would bring prices down; maybe they thought he was funny and liked seeing him on TV; maybe they’re just racists. Like I said, they have all kinds of reasons to be wrong.
If I thought voters were uniformly so stupid as to really believe that shit, I would have to conclude that they were irredeemable, and democracy finished. And I have heard a lot of people say as much! But I don’t think that’s so. I believe that plenty of Americans still know a hawk from a handsaw and shit from Shinola. In other words, they know what bullshit is. And (again, as I said in my award-winning-series-in-a-better-world-than-this about the late election), if a few things shift over the next year or two, so will the electoral tide, and I believe no amount of bamboozlement will prevent those voters from swinging back — even if the would-be masters of manipulation finally manage to get the number of fingers right.
Roy, I want to believe you, but I wonder if we shouldn’t be drawing a different conclusion from the parallel journey of force-fed AI and fake politics. Namely, that the proliferation of phony reality not only made it much easier for people to vote for Trump (“sure, he’s not what he says he is, but the price of eggs, etc.”) but also made garbage in, garbage out the norm, despite the fact that it’s ugly and repellent. No, people don’t *LIKE* AI and they don’t like Trump (most of them). But they just sort of passively shrug with an attitude of “er, whatever, this is how it is now.”
In other words, I don’t see people’s stupidity, racism/misogyny, or narcissistic self-interest as nearly as big a threat as their passivity. I fear people have become totally inured to “consuming” politics and see the candidates as products being hawked. That the product being sold is shit just feels like business as usual to them. They’ve lost their sense of themselves as active participants in a democracy.
I guess I should drop here the answer my neighbor, a public attorney for Jan-6 defendants, gave me when I asked how the work is going (the following is from memory, not word-for-word):
"My job is not easy; most of these people are morons."