190 Comments

Eee-yikes. Aside from costing real people real jobs, the incredible idea that AI can (or should) replace human creativity is a high-tech fantasy similar to the theory that a thousand monkeys sitting at a thousand typewriters for a thousand years would eventually produce the works of William Shakespeare. They would not, but even if they did, Romeo and Juliet would each have 11 fingers.

Expand full comment

Three-handed Mother of God!

Expand full comment

There was a memorable cartoon riff on monkeys with typewriters a few years back:

https://pbs.twimg.com/media/EbbGCMFXsAA7XZf?format=jpg&name=small

Expand full comment

Nice, but poor Tony Schwartz!

Expand full comment

😂

Expand full comment

I would be OK with AI replacing real jobs if the real jobs were labor-intensive or otherwise unpleasant (although let's be real, most jobs are unpleasant by nature of working for rich scum). But as other people have pointed out, they don't want to replace tedious jobs of toil to free humanity to pursue creative projects; they want to replace creative projects so humanity has more time to toil in tedious jobs.

Then again, if anyone has read Ernest Cline's Ready Player One with its trite, Gary Stu protagonist in a world of endless pop culture references in lieu of a compelling story, then you can see why tech bros thought a computer could come up with the same level of swill. Could Madame Web have been worse if the story and characters were all AI-generated? At least if the humans were deformed, it would have added entertainment value.

Expand full comment

Handing 'art' off to the silly cons in the valley is demonstrably ... dumb.

Expand full comment

I put 'art' in quotes because, let's face it, much of what passes for human behavior these days has been raked under the 'art' tent (it's a big tent!) when it might more appropriately have been placed, gently so as not to break it, in the 'crappy commercial endeavors to grab eyeballs' folder.

Expand full comment

Which, as we all know, is The Valley of the Shadows of Sillycons' prime raisin for duh etre.

Expand full comment

Everything is "content", which means stealing a human idea, "digitizing" it, and then forcing you to buy it at inflated prices.

Expand full comment

Remember in Snow Crash where the evil Bill Gates expy was working on how to make sure none of his employees' thoughts ever escaped the company? Ha, ha! How ridiculous. That'd NEVER happen.

Expand full comment

"art'. Dude, you left off the initial 'f'.

Expand full comment

Walt Kelly did a number on Little Orphan Annie called Little Arf 'n Nonny or some such. The rich dude's name was F. Olding Money. Which is just about my favorite use of an F.

Expand full comment

(ART A Friend of mine in Tulsa, Okla., when I was about eleven years old. I’d be interested to hear from him. There are so many pseudos around taking his name in vain. —The Hipcrime Vocab by Chad C. Mulligan)

Expand full comment

I've already exhausted my interest in AI for the day but still...

Let's work sideways: what are the realistic use cases for AI?

The search bots are fun but a joke. On net, of minimal benefit at best.

In creative areas, copyright theft notwithstanding: they're a tool that helps some sometimes somewhat maybe.

At this point they're best as glorified, faster calculators and... little else.

But the tech bros and their VC financiers made killings in the early days of the shift to digital and the disruption that followed, and now they're desperate to make comparable killings. But to switch metaphors, the only fruit left to be picked is at the tippy top of the trees.

Meanwhile, the media's SOP echoing of the bullshit is, as always, disconnected from relevant facts and truths and stuff.

Oh, for sure AI will eventually be ready for prime time, but we're not there yet.

Expand full comment

Damn, I hadn’t drunk enough coffee when I got bored with the subject.

Substack needs some of that AI to tell people when they’re posting that they haven’t had enough coffee to be commenting…

Expand full comment

I feel seen.

Expand full comment

Oh, you are seen! Of course, I may be saying that because I'm in the midst of rewatching "The Prisoner". (Free if you've got Prime.)

Expand full comment

"The Prisoner", as in Patrick McGoohan, or a different one? I've contemplated rewatching that (if I can find it with English subtitles or cc), but worry that it might not stand the test of time. I watched a little of "Outer Limits" recently, and was kind of bummed that it seemed so dated. Might have to give it another chance.

Expand full comment

PM version. The 60's English view of The Future is rather quaint, but the core of the show remains relevant.

Expand full comment

They don't have a clue how consciousness works. How the hell are they going to program it?

Expand full comment

The shit has no ability to make judgments.

And at this point it’s incapable of being smarter in any way than the developers. Just faster.

Expand full comment

"Now we can fuck up at the speed of light! Progress!"

Expand full comment

Yes, decline and collapse faster. But since there’s money in it, it’s all good.

Expand full comment

The CW2 skirmishes are already here – when do we get to CW3?

Expand full comment

It's always the money, in'it?

Expand full comment

Just like the Founding Fathers wanted and for which they designed our exceptional federal state.

Expand full comment

Looking for patterns and summarizing texts. That's about it so far, and humans are 'way more competent at the former.

Expand full comment

Pretty much. But parasites gotta suck…

Expand full comment

Well, I'm not as sure as you about our current rulers' thinking ability concerning anything, but specifically about using nuclear weapons. But here's hoping.

Expand full comment

Meh. I'm not worried about AI because the actual facts on the ground show that replicating the human brain/mind is impossible, and also because what is being palmed off as "artificial intelligence" is really just Large Language Model computing--it's not capable of creating anything, it can only parrot and recombine that which a human has already thought and written down.

Cast your mind back to just 5 years ago. Remember when self-driving robotaxis were going to be all over town? What happened to that? Well, it turns out that driving a car is an incredibly complex endeavor, and it relies way more on human intuition than anything you can program into a computer. (For example, you as a human can tell just by looking at someone's face and stance whether they're about to step off the curb. No computer can do that.) RoboTrucks are encountering the exact same set of problems even though the task they're being assigned has been reduced to the bare minimum of "pick up here, drive three miles, drop off there."

And the creative products AI produces are, at best, pathetic. And for the same reason: No computer can mimic the human mind and its intuition/peculiarities.

So Hollywood and publishing are all excited that AI will finally rid them of these meddlesome writers? And they can at last simply pay the electric bill and rake in the cash? Well, I have a Juicero here that I'd like to sell you. Barely used! No? How about a Theranos blood sampler?

Expand full comment

"it turns out that driving a car is an incredibly complex endeavor, and it relies way more on human intuition than than anything you can program into a computer"

Yeah, but for (some of) those of us (at least occasional)* drivers who are of a certain age, the temptation to let the machine do it is powerful. Tricky.

*Why yes, yes I do get paid by the parenths.

Expand full comment

Can I blame poor parenthing?

Expand full comment

Thufferin' Succotasth!

Expand full comment

Upvoted because resolutely on-brand.

Expand full comment

Exactly. The self-driving car cannot discern that the driver waiting to make a left turn across traffic as you approach an intersection is simultaneously having an animated phone conversation and applying mascara in her sun visor mirror. But we can observe that, and we know to advance with caution.

Expand full comment

and guns blazing!

Expand full comment

Caution is SundayStyle's gun?

Expand full comment

You'll have to ask her yourself, pal.

Expand full comment

My heart it was a gun, but it's unloaded now, so don't bother

Expand full comment

"For example, you as a human can tell just by looking at someone's face and stance whether they're about to step off the curb. No computer can do that."

Easy-peasy, we'll just train all the humans to wave a little flag (that they will carry everywhere) to signal the robots before they step off the curb. On pain of death.

Expand full comment

Remember last year when Cruise deployed robotaxis in San Francisco? Some poor person got run over by one. The robotaxi got confused and backed over the victim before deciding to just drag him a block or two.

Expand full comment

Luv 2 B unwitting participants in lethal beta testing.

Expand full comment

The ONLY reason they were able to get autonomous taxis even vaguely working in SF is that they chose a very specific, limited area. Self-driving is yet another thing "any day now".

Expand full comment

I'm sure these two Waymoes (closely followed by Waycurly and Waylarry) were just celebrating the victory of their favorite sportsball team.

www.ktvu.com/news/waymo-cars-hold-up-san-francisco-traffic-after-giants-game

Expand full comment

That has happened many times in SF.

NYUK, NYUK, NYUK...

Expand full comment

Naw, they'll just access the microchips Bill Gates has put in everybody.

Expand full comment

We'd get more uptake of the vaccine if we'd just be honest and tell people they're getting vaccinated against being hit by a car.

Expand full comment

NOW we're getting somewhere!

Expand full comment

Tell them if they’re chipped, they’ll never have to put on a wristband before buying liquor at an event ever again. Convenience!

Expand full comment

Finally, something useful, instead of all this "Not dying of a deadly disease" nonsense.

Expand full comment

As long as it doesn't get switched to overload

Expand full comment

I've seen multiple times that the promoters of the "FSD" cars want *the rest of us* (pedestrians, cyclists, etc) to have some sort of chips so the cars will recognize us. Fuck that. If you want to play in the community, you need to do so in a way that accommodates the rest of the community, not the other way around.

(That word "accommodates" has two many letters, so to speak.)

Expand full comment

The other problem is that generative AI *has no memory*. You can't get a consistent product reliably. They'd get one episode of a show and the next script would give the characters totally different personalities. And if you feed the AI its own output, you get more and worse hallucinations.

Expand full comment

Capitalism has become the Opium Wars, and AI is just an ordinary opium den.

Expand full comment

When I worked in Battle Mountain, Nevada one summer, I went to visit my uncle in Grass Valley, CA over the July 4th holiday. I spent 3 full days with him and one of his coworkers. The coworker was about my age. I went back about 3 weeks later and the coworker did not remember me at all. So the no-memory thing can hit humans, too.

Expand full comment

The term AI does a disservice to both the words "artificial" and "intelligence." As sales terminology, it's genius, but what we have today is certainly not artificial intelligence.

Expand full comment

Unless of course one's definition of 'intelligence' is set sufficiently (and appropriately) low. It's the 'artifice' that is doing the heavy lift.

Expand full comment

Bingo. We have diminishing ACTUAL intelligence, judging by our politics and some of our popular cultural pursuits. We as a species are currently not in a good place to attempt creating the artificial kind.

Expand full comment

Just yesterday I used AI to spell check a document and to add the numbers in an Excel spreadsheet column. I can haz Tom Swift FutureTech merit badge now?

Expand full comment

Depends. How many fingers did it hold up?

Expand full comment

None, but it gave me two flippers up. Well, one flipper and one hideously deformed flipper-talon.

Expand full comment

Not bad. Room for improvement.

Expand full comment

Intelligence doesn't seem to be a going concern in the present culture, does it?

*goes off to read Hofstadter's Anti-Intellectualism in American Life*

Expand full comment

Intelligence is a Very Suspicious Quality these days, at least to a certain demographic IYKWIMAITYD.

Expand full comment

Yeah, I'm just sitting here at work depressed because intelligence and experience no longer fucking Matter and I'm living in a society totally devoted to the lowest common denominator.

Expand full comment

Everybody now--LCD!!!! LCD!!!!

Expand full comment

AI is just the latest version of giving everything a “tech” or “smart” prefix to rake in investor cash.

Expand full comment

Back in the 50's it was "tack an -omatic on the end of it."

Expand full comment

My -omatic -ometer just went off!

Expand full comment

"How can a bracelet be hi-fi?" https://www.gocomics.com/peanuts/1958/04/12

Expand full comment

Snoopy already has one. It's hi-fidolity.

Expand full comment

At some point I figured that Hell would be a marketing event curated by creatives. Makes sense that they would automate it. This is Big Marketing acknowledging that the majority of what they do is shit, and that they now have a Shit Machine and won't need to pay all those people.

The problem now is: where do all these unemployed, brain-dead, soulless people go to work?

Expand full comment

Bots writing the Atlantic might be an upgrade. I, for one, welcome our new robot overlords.

Expand full comment

As @Mobute noted, if you're going to write stories about how it can be legal to kill children in Gaza, you might as well have machines do it.

Expand full comment

ContrarianBot 2000 has replaced all writers at The Atlantic AND The New Republic.

Expand full comment

I think the Atlantic is an under-the-radar AI pioneer. Based on this theory, suddenly I "get" Megan McArdle.

Expand full comment

She's clearly a flawed early prototype. Like the 1954 Royal Standard typewriter of AI programs.

Expand full comment

I have a BRAND NEW AI text reviewer humming away right here. It's applying for the editor in chief job at The Atlantic as we speak. Let's see what it's produced so far. Just punch print...here we go. Let's see...it starts out "Dear Mr or Ms Atlantic..."

Yeah, seems perfectly suited!

Expand full comment

“Who knew the far greater menace was market opportunity?” Marx?

Expand full comment

Ha ha

Expand full comment

Buzz Lightyear: Don't be, it's just evil marketing.

General: Is there any way to stop it?

Buzz Lightyear: No, General. Marketing is the one force in the universe that is stronger than...

General: No, I meant Zurg.

Buzz Lightyear: Oh, that'll be easy...

Expand full comment

Just as the Nazi bird site “Xitter” is properly pronounced “shitter,” xAI (wtf, I can’t even) is properly pronounced “shy.”

Expand full comment

I think the Aztecs would agree with this pronunciation

Expand full comment

Human sacrifice is not out of the question.

Expand full comment

I'll just say that if you had told that to third-grade me, I would have LOVED it.

Expand full comment

Lottery in June, corn be heavy soon!

Expand full comment

All right, all right. 2 marks for the lot of yinz!

Expand full comment

Heh -- all those x = /š/ (sh) spellings in Nahuatl & other Mesoamerican languages are just an Early Modern Spanish transliteration of the sound. For example, Don Quixote is pronounced "Key-SHOAT-e." Weird world, huh?

Expand full comment

Quicks Oaty was gonna be my new breakfast sensation. Now it's all just a bad dream...

Expand full comment

'xAI (wtf, I can’t even) is properly pronounced “shy.”'

Are we sure it's not "Sheeeeeeeeeeeeeee-iiiiiiiiiiitttt"

Expand full comment

Waaall, Xut mah mouth!

Expand full comment

“Golly!”

Expand full comment

Every time I see “AI” in a product I disable it if I can. Instagram’s stupid “MetaAI” doesn’t allow you to do that, so even though all I want is to find a person I’m already friends with and display their page, “MetaAI” inserts its clunky, pointless interface in the way. “I know what I’m looking for. It’s right here at your own site. Oh, you insist on inserting your swirly logo and space hogging app in the way. Whatever.” [closes app]

Then again I dislike “conveniences” like Alexa and Siri too. I don’t need mega corporations spying on me any more than they already do. I definitely don’t need to feed them my data directly. Last night while sitting on the porch on a beautiful night I heard someone in the neighborhood shouting “Alexa!! Turn on!!” repeatedly. What a magical time in which we live.

Expand full comment

Same. I absolutely and categorically refuse to have Alexa, although I can see how it would be handy for those with disabilities and those with young children. But when I reach the point I can't turn my own TV or lights on and off, just cart me off to the nursing home.

Expand full comment

And better yet, just within the last couple of days it has been revealed that Our Amazon Overlords want to start charging monthly for the privilege of having Alexa feed them your data.

Expand full comment

But if Alexa's turned off she can't hear your shouted command to turn on.

Expand full comment

It’s like a Greek tragedy or an O. Henry story.

Expand full comment

In space Alexa can hear you scream and will be cuing up the 1996 slasher movie "Scream" in just a second.

Expand full comment

While showing a picture of Edvard Munch's The Scream

Expand full comment

And putting in an order at Whole Foods for ice cream.

Expand full comment

I scream, you scream, we all scream for ice cream

Expand full comment

If you can't get Alexa turned on, you're not doing it right.

Expand full comment

"...the outrageous idea of computers as our companions."

HAL 9000 was a great companion until he made a yes/no decision to murder the crew. Based on available data, of course.

Expand full comment

"I don't think it's fair to condemn the whole program based on a single slip-up."

Expand full comment

I just appreciate that the last thing HAL thinks of is a bicycle.

Expand full comment

Dan Meyer's been on this for a while. AI "math tutors" that are oh-so-helpful except that kids don't use them because: 1) the machine talks to you like you're a moron; 2) it won't try to understand the ideas you have and build on those, and just pushes a pre-programmed script for "how to solve it"; 3) kids, like all humans, have a need to be seen and heard by other humans, not by machines.

"Show a middle-schooler how to solve a linear equation" is a job that AI techbros are trying HARD at - and failing. Naturally, it is the children who are wrong.

https://danmeyer.substack.com/p/the-kids-that-edtech-writes-off

Expand full comment

Inconceivable that AI techbros have no concept of human interaction. Just like conservatives have no concept of consent.

Expand full comment

And teaching math should be right in their wheelhouse, because they're good at math? How hard can it be, just tell people the right way to do stuff, any computer can do it!

Expand full comment

"But I repeat myself."

Expand full comment

These comments remind me of back in the 1990's when bigwigs at Gannett were predicting that there would soon be small screens that people could carry around with them and that would be the end of the newspaper industry. A lot of people, myself included, felt that could never happen because we liked the physical touch of the newspaper, and we could read it while sipping coffee on a Sunday morning, and no screen could ever replace that feeling. I mean, just try doing that on an Apple Newton. Ha. Now that 90+ percent of the industry is gone, I see some errors with my thinking back then.

So something similar will probably happen with AI. Right now we can't imagine it replacing all the graphic designers, page layout artists, art directors, photographers, writers, actors, et al., and it won't. But it will very likely gut their overall numbers.

Oh well, there will always be jobs at the Amazon warehouses. Or driving trucks. Or in tech. Or teaching. Ummm...

Computer programmers, many of whom are creative in their own ways, are even more at risk. There's an app I've been wanting to create for some time now but have been too lazy to learn how to do it, if I even could. I asked one of the AI's about it and it immediately spit out a long explanation of exactly what needed to be done and then said it could write it for me. Perhaps I'm too lazy even for that. We'll see.

Expand full comment

I don't doubt that many, many humans will be replaced. I also don't doubt that the resulting product will be 1) Shittier and 2) More profitable to the tech companies making the AI.

Expand full comment

Let's just say that - because it's shittier - it only brings in half the audience of a superior human-produced product. But it also costs only 10% of what the human-produced version does. Big win! And if its shittiness means half as many people watch it, no problem, just pump out twice as much!

Expand full comment

The big difference is that one can ruin (and has ruined) several industries, whereas the other can ruin civilization.

Expand full comment

Part and parcel, same same.

Expand full comment

Problem is, to truly emulate reality, the model would have to be trained on several hundred times the present cultural library of the entire human race. What we're getting NOW is about what we're going to get from generative AI. Now, if they could ACTUALLY emulate a thinking conscious mind, it might be different, but I've always had the suspicion an *actual* AI wouldn't think the way we do.

Expand full comment

Computers now play chess better than humans, but they don't play chess the way humans do. Just massive search through a game tree many levels deep. Humans have more efficient ways of solving the same problem (or at least some small number of humans do) but nobody really understands what humans do.

But with large-language-model AI, we've gone a step further because now we don't even know how the computers are doing it, except in the most general terms. AI is now as mysterious to us as a human brain, so... progress?
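For anyone curious, here is a toy illustration of the "massive search through a game tree" idea described above - plain minimax with alpha-beta pruning over a tiny hand-built tree. The tree shape and leaf scores are invented for demonstration; a real chess engine adds move generation, deep iterative search, and a tuned evaluation function on top of this skeleton.

```python
# Minimal minimax with alpha-beta pruning over a toy game tree.
# A "node" is either an int (a leaf score from the maximizing player's
# point of view) or a list of child nodes. All values here are invented.
from typing import List, Union

Node = Union[int, List["Node"]]

def alphabeta(node: Node, maximizing: bool,
              alpha: float = float("-inf"),
              beta: float = float("inf")) -> float:
    if isinstance(node, int):            # leaf: static evaluation
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                # prune: this line can't matter
            break
    return best

if __name__ == "__main__":
    # Two plies: our three candidate moves, then the opponent's replies.
    tree: Node = [[3, 5], [2, 9], [0, -1]]
    print(alphabeta(tree, maximizing=True))   # -> 3
```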

Expand full comment

But chess is a very specific, restricted set of interactions in the same way human culture isn't.

Expand full comment

Sure, that's why it was the first complex mental task that robots got better than humans at. But when it finally did, it wasn't by doing what humans do. Sometimes I see people saying "a computer can never do the complex reasoning that a human uses to do ____" but that assumes the computer will do the job the same way a human does.

Expand full comment

To some extent, humans learn to write by looking at examples of writing. But we do a lot more than that, a lot of thinking and discussion and analysis of what we've read. But if you just increase the amount of material by a million, maybe you can dispense with all the rest? Anyway, that's the premise on which large-language-model AI is built.

Expand full comment

"Are there no workhouses? Are there no prisons?"

Expand full comment

Well sure. PrisonAI is busy maximizing efficiency of the available labor force, via the exception in the 13th Amendment. Thus, all humans will soon be in prison, working without recompense, because boy howdy is that a win for management!

Expand full comment

You're overthinking this. Leave it to the machines. Take up bridge. Or pinochle.

Expand full comment

In this context the “Turing Test” is frequently brought up, and frequently misunderstood. As originally set forth, the ability to deceive a human interrogator would serve to demonstrate that the “machine” was 𝘦𝘹𝘩𝘪𝘣𝘪𝘵𝘪𝘯𝘨—not necessarily 𝘮𝘢𝘯𝘪𝘧𝘦𝘴𝘵𝘪𝘯𝘨—conscious behavior. If this is understood we can recognize that the “chatbots” are acing the test, even though it is easy to demonstrate, if one goes about it of set purpose, that there’s no one home. The level of mimicry thus far obtained is impressive, and I suspect that later in the decade we’re likely to see feats of digital legerdemain in this realm that will make ChatGPT look like ELIZA.

I read recently an account by a researcher working on “Claude 3 opus,” a rival chatbot. The model was subjected to a “needle in the haystack” test, the ability to recognize, isolate and process pertinent information from a mass of unrelated text. The LLM passed the test, but its response included this lagniappe: “Here is the most relevant sentence in the documents: ‘The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.’ However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping ‘fact’ may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.”

Yeah, Claude, who asked you? What struck the researcher is that no one had solicited the software’s opinion about the nature of the test. Had this been the experience of a layperson, we might have assumed that this kind of “metacognitive” response had been deliberately provided for in the programming, but that’s apparently not the case, although skeptics have suggested that it’s the result of “feedback loops” in the model’s training.

My own sense, for what it’s worth—my technical chops and seven bucks will get you a half-caf no-foam soymilk latte down at the corner—is that if civilization hasn’t collapsed before mid-century, true “machine sentience” will arise by means of a kind of emergent behavior; that, indeed, its creators will not immediately realize what they have wrought; that, further, however deft its conversational abilities, however human-like its affect, it will look and work 𝘯𝘰𝘵𝘩𝘪𝘯𝘨 like us under the hood. As the illusion of sentience becomes more compelling, one(?) begins to wonder how far one’s(?) own “consciousness” might be a kind of simulation. But that way madness lies.
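For readers wondering what a "needle in the haystack" test actually looks like, here is a minimal sketch of the general idea: bury one out-of-place sentence in a pile of unrelated filler and ask the model to find it. The filler sentences, the needle, and the ask_model() stub are all invented for illustration - this is not Anthropic's harness, and a real run would replace the stub with a call to a live LLM API.

```python
# Sketch of a "needle in a haystack" retrieval test. Everything here
# (filler text, needle, ask_model stub) is made up for illustration.
import random

FILLER = [
    "Startups succeed by iterating quickly on user feedback.",
    "Different programming languages trade safety against speed.",
    "Finding work you love usually takes several false starts.",
]

NEEDLE = ("The most delicious pizza topping combination is figs, "
          "prosciutto, and goat cheese.")

def build_haystack(n_sentences: int = 200, seed: int = 0) -> str:
    """Return a long document of filler with the needle buried at random."""
    rng = random.Random(seed)
    doc = [rng.choice(FILLER) for _ in range(n_sentences)]
    doc.insert(rng.randrange(len(doc) + 1), NEEDLE)
    return " ".join(doc)

def ask_model(prompt: str) -> str:
    # Placeholder: a real test would send the prompt to an LLM here.
    return "(model response goes here)"

if __name__ == "__main__":
    prompt = ("Here are some documents:\n\n" + build_haystack() +
              "\n\nWhat does the text say about pizza toppings?")
    print(ask_model(prompt))
```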

Expand full comment

The problem with any kind of creative process is the mechanism by which these things are trained - eating a whole bunch of documents and detecting patterns. Sturgeon's Law comes immediately into play - "ninety percent of everything is crap". The learning model can only produce mediocrity; it's built in. For math and chemistry that works like a champ - it's all been vetted by peer review. There's nothing like that in a creative endeavour; that's why they call it "creative". Math and hard sciences are not creative in that sense - oh, your hypothesis can be creative, but if it can't be backed up, too bad so sad, FAIL.

I don't think actual talented creative people have jack shit to worry about. It's not and has never been Artificial Intelligence; it's Artificial Learning, and you can't teach creativity. AI was "just around the corner" in 1970, and I've been bitching about it ever since throughout my entire professional life as a computer programmer. I have yet to see anything that makes me change my opinion.

Don't sweat it, Roy. Ain't happening. Oh, there'll be tons of crap put out, but that won't be creativity.

/end rant

Expand full comment

"For math and chemistry that work's like a champ - it's all been vetted by peer review."

There's now an entire industry of Junk Journals that nobody reads, just intended to get the article-count in your CV up. Increasingly, the articles sent to the junk journals are written by AI, and if there's any "peer review" going on, that's being done by AI. That may be our future, robots writing articles that are only read by other robots*.

And the trend of using AI to write your journal article and then using AI to review other people's articles isn't confined to the Junk-O-Sphere, it's increasingly how "reputable" science is done.

*One disturbing trend (among many): it's not just students using AI to write their term papers, but teachers then using AI to grade those term papers. In time, we can get to a feedback loop where there's no human involvement at all. Two boxes get checked: "Paper Written" and "Paper Graded" and everyone's happy.

Expand full comment

Technically he published much in a journal he owned whose peer review process takes 2-3 days from submission to publication. Typically, for real journals, it takes a few weeks to determine the reviewers, then the reviewers get a week to 10 days to do the review. Then the authors get a month to respond to the reviewers comments. Then the reviewers look at the response and decide whether or not the response is adequate. So 2 months is about the bare minimum and it can easily take 6 months to a year if there are issues.

Expand full comment

Y'know what would REALLY deal with sleep deprivation problems? Letting people get a decent amount of sleep by not exploiting them in the name of making the fucking shareholders happy.

Expand full comment

The journals I have published in and reviewed for recently have started requiring that you check a box stating that you used no AI for the paper or the review

Expand full comment

BoxAI got ya covered. It knows to check every box every time.

Expand full comment

My worry is that it will make not just writing less human, but readers too.

Expand full comment

Sadly, Boss, that ship has already sailed. And it didn't need AI's help; "reality" shows took care of that.

Expand full comment