
I want to start this by being very clear: You are entirely entitled to not like stuff produced by AI. You’re even allowed to not like stuff you SUSPECT was produced by AI even if the “markers” you think you are seeing are completely insane. Because of course taste can’t be argued.
But I hope to get you to think about what you think you’re seeing, and what is happening right now in the culture re: AI.
Because you can do whatever you want, and frankly, with the readership of this blog, there is no point trying to get you to act any way you don’t want to. But I’d prefer you do it after carefully examining why you’re doing it, and after considering some stuff.
First of all, my position on AI: If AI is to be used, in science or art or whatever the heck you want to use it in, it should be used by people who are already somewhat above merely “competent.” Or to quote my husband, “Don’t treat AI like an infallible expert, treat it as a trainee who is a little naive and very unsure about the real world.” I.e., AI is great for doing what I call the “donkey work” — whatever portion of the work it is you’re least fond of — and it should be extensively checked.
This is why I will not use AI to write: That’s the part I enjoy. The part I don’t enjoy? Well, I’m not using AI for that either, because I’m not going to upload my entire novel and ask it to edit and make sure I have all the ages right and all that. (I have an Amy B. for that!) Because no. I could maybe if I spun up a private instance of AI, but I’m about as likely to do that as I am to fly.
That said… If I could do it, and had, you know, copious spare time to do it in, that’s what I would use AI for. “Normalize every time character x is referred to so that his name is spelled with an N not an xyz.” “Highlight every use of the word ‘primarily’ in my novel.” That type of thing. The donkey work. The kind of thing that has me going through page after page after page looking for a needle in a haystack and falling asleep. The kind of work I’d give a trainee, if I had a trainee.
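To be fair, a lot of that donkey work doesn’t need AI at all; a few lines of ordinary code can do it deterministically. Here is a rough, hypothetical sketch in Python (the misspelled name, the sample text, and the function names are all invented for illustration):

```python
import re

def report_word_uses(text, word):
    """List every line (with its number) where `word` appears, case-insensitively."""
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    return [(num, line.strip())
            for num, line in enumerate(text.splitlines(), start=1)
            if pattern.search(line)]

def normalize_name(text, wrong, right):
    """Replace every whole-word occurrence of a misspelled name with the right spelling."""
    return re.sub(r"\b" + re.escape(wrong) + r"\b", right, text)

# Invented sample manuscript:
manuscript = "Xathaniel bowed.\nPrimarily, Nathaniel thought of dinner."
fixed = normalize_name(manuscript, "Xathaniel", "Nathaniel")
print(report_word_uses(fixed, "primarily"))  # [(2, 'Primarily, Nathaniel thought of dinner.')]
```

Deterministic checks like these are reliable in exactly the way a language model isn’t, which makes them a good first pass at the needle-in-a-haystack work before any human (or trainee) review.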
Now, yes, I am somewhat aware that I am weird for a writer. Most people, at least in traditional publishing, prefer having written to writing. Would it be possible for a writer in that position to have AI write a first draft and then go over it with a fine-tooth comb, which is the part they like? Sure. Provided they REALLY like writing prompts, because I’ve never tried it, but if writing AI is like art AI it will do truly bizarre things if your prompt isn’t sufficiently clear. (Like yesterday, instead of “woman dancing with her shadow” for the MGC post, I typoed “woman dancing with her sharow.” Look, act of cat. He jumped on my boobs, fingers went wrong. The results were… weird enough that I looked at the prompt. Really looked.) But ALSO the prompt, at least for, say, animating images, needs to be so specific that it’s like making a contract with the fey. You know what I mean. “Does this word have more than one meaning, no matter how far-fetched the second? How do I disambiguate?” I’m sure it’s possible to do it. In fact I suspect some fanficcers in JAFF are already doing it, and some are good enough even I don’t detect it. But quite frankly I’m breaking into a sweat just thinking about writing a prompt clear enough for an entire chapter, say.
And then you’d need to edit edit edit. Which presumes you know what good writing is, and can edit it so that you make the story GOOD. Not my favorite part of the writing life.
Anyway, that’s my position on AI right now. If I were ever to use AI it would be in the way I’d use a trainee, if I could hire one at $2 an hour without breaking the law. That’s me.
But obviously there are people who are better at it, or have a better relationship with detail editing.
Which brings us to AI, the identifying of, and liking or not liking.
Lately I’ve become very afraid that I’m an AI. Look, if I read one more Facebook post confidently identifying “AI trash” where the “markers” are stuff like “uses em-dashes or semi-colons,” I am going to do that end-of-story line that goes “Then I too must be–” (My brain has gone blank on what the story is, but I’m sure one of you will know.) Heck, Heinlein must be AI, and he wrote before AI was invented. That’s how ahead of his time he was. I always wonder how to explain to people that AI uses those because it was trained on writers who use those.
Yes, there are tells, but those are usually things like: it doesn’t work like that in the real world, and couldn’t work like that in the real world. Yes, this means you’re at risk of identifying college students as AI. But some of the mistakes hinge on the text being written by something that doesn’t have a body. One of the most disturbing mistakes I’ve found in JAFF is characters moving miles away suddenly and inexplicably, without even a wave at “they readied the carriage.” Even I, in a fugue state of coffee and lack of sleep, am not that bad at remembering where I last left my character. Also, there is a tendency to repeat a crucial plot point over and over, as though the writer has Alzheimer’s.
Still, what upsets me most about the “oh this is AI” misidentification and stuff like “Prove you wrote it” is the immediate “Oh, this is AI trash” followed by many, many intimations that they hate it, hate it, hate it. Then there are the people hiring artists who demand the artists draw in front of them.
And what that does is make the back of my head go “Why?”
Look, I know the songs produced by Suno, say, lack a certain expression, and it makes perfect sense for people like my younger DIL to dislike them. They will point out various issues, but as I told her, “I’m not that sophisticated a listener,” and as a songwriter I’m mostly in awe of the fact that I can bring the lyrics to life as songs, even as lame at it as I am. Would a real singer/music writer be better? Yes. They’d also be better at using Suno. I know this, because I know people who do use it, then revise it and fix it, and sing over it. People who actually know how to be musicians. Which I don’t. For them, it’s a tool, and they use it well. But yeah, pure AI songs can’t compete with real artists. They can however compete with most amateurs, and can allow people like me to self-filk. (Which is probably illegal in several states.)
So if you are an expert and can catch things and can tell me why you dislike this or that, I’ll respect that.
But if you need to see the artist draw to make sure it’s not AI? You don’t have that kind of expertise. If you need to see the musician play the music to make sure it’s not AI, why are you even doing this? If AI is that good that it can fool you, why are you bothering?
But Sarah, you’ll say, AI will steal all the jobs of all the artists, and then all we’ll have is slop?
1- No it won’t. The really good artists already using AI just take the whole thing higher and make it better.
2- If it’s good enough that unless you see it being made you’re not sure it’s AI? It’s not slop. (And at the current state of the art it must have been edited expertly, at least for novels, otherwise it’s obvious.)
3- This type of pitiful “I hate x because it could steal jobs” never works. This is mostly nonsense. And attempts to preserve jobs by driving a new alternative down never work. EVER. The market seeks the cheapest and easiest solutions. Every time.
3a) as one of the writers who might have her job stolen by AI? If it happens, I’ll find something else to do. Probably using AI to tell my stories in another way. In my opinion that’s what artists do. There will always be really good old form — painters didn’t vanish when photography became a thing — and there will also be good new form (like photography done artistically.)
What worries me is that “I hate AI because it’s AI” is “I hate x because of who made it.”
Now, there is even some valid point to that. For instance I could see a lot of you refusing to read anyone else’s book about a newly discovered colony of hermaphrodites because “What horrors might this contain?” Only you knew me, and knew I’m the chick who often forgets to have her characters kiss, even when they’re about to get married. And then there’s “Trust the writer not to hurt you by killing your darlings.” There are writers I trust, and writers I don’t.
In that sense TRYING something — or not — because of who wrote it makes perfect sense. But LIKING it because of who wrote it is hokum.
I experience another side of this, because I write under my real name. (Or variations thereof.) Which means I don’t hide the fact I’m a woman. Look, I always wrote, and if I’d tried to write professionally in Portugal, I probably would have used a male name, way back. But when I started trying to publish seriously, it was the nineties, which I’m now starting to think was a weird cultural island. Because it lulled me into a sense that it was okay to write science fiction and fantasy under my own name.
After all there had been women science fiction writers under female names since the thirties, so–
So, I didn’t anticipate having a lot of crazy women writing things where the whole point was “men bad” and poisoning the field. And before you tell me this didn’t happen — yes it did, to the point that I myself am sometimes afraid of reading a book with a female name on the cover.
What I mean is I do understand that. But I don’t understand reading it looking for reasons to hate it because I have a female name. (And yes some people do.) Or because of my political opinions which aren’t part of my books. (And yes, some people do.)
What I see people do with AI is more like reading the book looking for reasons to hate it, because you suspect it MIGHT have been written by AI.
And that’s a) stupid. And b) Depriving yourself of fun.
Because at the bottom of this, why do you read? To make yourself seem intellectual and important? Really? If that were the reason, you wouldn’t be hanging out with us wrong fun lot.
So, you read for fun and enjoyment right?
Which means the ONLY acceptable metric is “did you have fun?”
I mean, every work — human or AI — has flaws. And — much as I hate to admit this — some of the stuff I’ve enjoyed has had more flaws than not. (Look, the economics chick now has more story in her JAFF, but she still has more investing and in-period economics than is reasonable. BUT her characters have heart and aren’t weird non-JA constructs. So… I enjoy her.)
But in the end, with fiction, the only thing that counts, as you close the book is “DID YOU ENJOY IT?”
Yes, values of enjoying it can include bawling your eyes out, or wanting to shake the characters. But…. if you’d do it again– you enjoyed it.
And that’s the only standard writing (or music, or art) SHOULD rationally be judged on: the ludic standard.
Did you enjoy it? If you did, why would it matter if the writer has an innie, an outie, is a human, a clanker, an exquisitely trained small white cat, or an alien from the Oort cloud?
(As a side criterion you could hold to “Did you enjoy it AND was it uplifting?” But that’s muddying the waters, as what I consider uplifting is not universal, and I know it.)
If you suddenly turn around and denounce something because you suspect it might be AI? That has nothing to do with it being good or bad, enjoyable or not.
That’s social signaling, signifying “I’m important, and all the best people say AI is slop. So, as an intellectual, I’m going to condemn anything that’s potentially AI, or denounced as AI.”
You’re entitled to do that, of course. But if you’re doing it you should be aware you’re playing dumb positioning-monkey games, and be honest with yourself.
At least so your world doesn’t shatter when you find out you completely misidentified something as AI-produced.
I’ll just repeat verbatim the comment I left on Instapundit on this subject, because it applies, almost without modification, here as well (followup comment will contain my modifications):
If AI-generated output were reliable, I don’t think people would care nearly as much. For me, the issue is that if something is generated by AI, I can’t trust that it’s correct. (I’m a computer programmer, so I don’t care about AI-generated art or music, but I see a lot of articles on technical stuff that have a lot of “tells” of being AI-generated.) I’ve seen AI-generated articles get technical details completely wrong, i.e. claiming that software package XYZ has feature ABC, but when you actually look at the documentation for XYZ, it specifically says “the ABC feature is not yet ready but we’re working on it”.
The unreliability of AI-generated material is why, if I start suspecting that the article I’m reading was AI-generated, I just close the tab, ignore everything that article told me, and go on looking for an actually reliable source.
Now for the modification to my comment: when you’re talking about AI-generated art, “reliable” doesn’t enter into it. Well, it kind of does: you know that Coke ad that got so much criticism? Part of the criticism was someone doing a side-view drawing of every generated truck that appeared in the clip, and they were completely inconsistent in placement of tires and axles (in fact, even in number of axles). So the concept of AI art being “reliable” does kind of exist (see also clankers’ constant difficulties with drawing the human hand). But the most important part for me is factual reliability. The AI tool doesn’t know that unicorns and dragons don’t exist, and I don’t care that its factual knowledge is wrong in this case. But it also doesn’t know that feature ABC of tool XYZ doesn’t exist, and in that case I very much care that its factual knowledge is wrong.
P.S. That’s twice I wrote “axel” instead of “axle” and had the mistake caught by spell-check. I would say “I must have figure skating on the brain” except that I haven’t watched figure skating in years. So I don’t know why I thought “metal rod that the tires attach to” and “spinning jump in figure skating” were spelled the same way.
Hands are hard even for human artists. I took some life drawing classes in the late 90’s, and our teacher had us do a lot of work on hands in various positions.
Faces are tricky because we have to get past the tendency to draw a symbol of a face and actually see the shapes of the features and how they’re put together. Once you can get past your brain’s tendency to reduce faces to symbols and actually see the model’s face as it is, you’re on your way.
But hands are difficult in a different way, because they’re not only very complex parts of the body, but also because the foreshortening of hands in various positions is counterintuitive to what we know about our own hands via proprioception. So I’m unsurprised that current machine learning technology would have its own difficulties in resolving “hands” from multiple images of hands, especially without a body to feel from the inside.
I read a book, “Drawing on the Right Side of the Brain”, which explained that most people, being left-brained, struggled to draw shapes. It had exercises to help switch over the focus to the right side, which worked for me.
I was today years old when I learned that “axel” and “axle” were different words. (I’ve never once had to use the word “axel” in writing)
As for the rest: you are spot on. We are still in early days, and even in later days I doubt AI will ever become reliable for most things. But it will get better and better as a useful tool.
I had some very unclear thoughts yesterday, which this reflects.
taste versus test
Art is evaluated by personal taste, interpretation by a human mind.
One, this means we cannot entirely automate sorting or evaluation, and specifically we cannot losslessly condense the meaning to simple numbers. (As opposed to compressing an expression to numbers whose meaning is provided by an outside source; digital audio and digital images tell us we can do that.) We know we cannot do things like ‘text with an even number of characters is good, text with an odd number of characters is bad’. Neural-net AI is to some extent a space where the layman cannot fully predict which tasks are possible and which are not, so there are probably some usable error-rate applications available in sorting Art. Beyond my know-how, and beyond my understanding.
Second, a human-readable format that contains factual claims can in principle be tested automatically, by simpler techniques than neural nets.
The overlap in ideas is ‘what information is contained in these things’, and how well can we do tasks automatically versus manually.
AI at the moment is a mess. It makes an idiot statement, then people quote it (often making fun of the conclusion) and it uses that quote as a reference when asked the question again.
It is being used, as all tools are, for unscrupulous, unethical purposes and we have no way to either fight back or stop the abuse.
At the same time it is being used for good purposes, but those things aren’t generally visible to the public.
So you get people whose primary exposure is either unethical or careless use of a tool, and they blame the tool because that’s what they see.
They get scared because the news media picks up on it and uses it as clickbait. The fear is self-reinforcing.
Large Language Models and other forms of machine learning are proving very good at picking out patterns in massive amounts of data, and are being used to identify promising chemicals for a wide variety of purposes, including new medicines and antivenins for snakebites. Which proves that LLM’s are not inherently thefty, because multiple court cases have returned the same results, that a mere list of facts cannot be copyrighted (the question of whether copyrighted works require a special affirmative permission to be used in LLM training data sets, vs. the license that inheres in the ordinary purchase of a copy of the work, is still somewhat open).
I don’t know. In theory it shouldn’t matter who writes something. But in practice…
When I look at a book I do wonder, “Did this author sign a letter calling Israel genocidal and saying we should just give the poor, righteous, victimized Palestinians whatever they want?” It’s harder to enjoy a book when I suspect that the author hates me.
I mean, yeah, Roald Dahl wrote some amazing books. But I didn’t know he was an anti-Semite when I was a girl and I read his books. Although I did read them to my kids, so there’s that.
Oh, that’s different. And I also didn’t know Roald Dahl was an anti-Semite. (Though not rare for upper-class Brits.) BUT that falls under “Don’t give money to those who hate you and want you dead.” For years the right didn’t care about this. We’re getting more and more so every day. The left won’t like it. They took advantage of our open-mindedness to game the system so only they succeeded. Those days are done.
But AI, particularly a home-spun instance of an open model, doesn’t pay those who hate us.
Reagan gave us the formula. Trump actually uses it, and thus achieves it occasionally.
-We- win. -They- lose.
And Trump also occasionally uses the Bubba Clinton corollary: “It’s hard for them to talk with our fist in their mouth.”
-We- win. -They- lose.
Primum Victoria
Sarah, I have read of your interest in early SF authors and books. Being 10 years your senior, my SF library of 25-cent and 50-cent yellowed and brittle pocketbooks might contain some you would like to revisit, seeing as the collection started about 1962😁.
Happy to correspond by email, only I cannot find how. Feel free to email me if this message arrives with my details; I am not sure what is transmitted or not.
Edward
The AI-detector features (at least the ones used by askgpt) are unreliable at best. I dropped some essays of mine in there and was told they were AI-generated with a 90+% confidence level. Dropped the same into Grok, which identified them as human-written to 99%.
Maybe I’m AI?
A few months ago I got appointed as a “safety officer” for our team. We work in an office. There’s nothing particularly dangerous about what we do. However, new company, new policies. I was publishing a “Safety Tip of the Day” for a few months until I cut back to weekly. Each one had a few bullet points (Proper lifting techniques, power strips, etc), all generated by AI. I included an AI generated picture of a raccoon as an office worker in various situations as a type of malicious compliance, hoping to be replaced. Unfortunately, management loved it and I’m stuck with Safety Trash Panda for the foreseeable future.
I suppose my point is, AI has some uses. Especially for things I can’t be bothered to do myself.
[image: okay employee.jpg]
No, that’s EXACTLY the best use of AI: the donkey work that sucks the joy out of your life. Of course, you still have to double check what Mycroft in TMIAHM called “the retarded child.”
According to the AI-checkers the Declaration of Independence is 90%+ chance of AI. They routinely ID human created art as AI and vice versa.
Those checkers are some of the most useless pieces of AI technology around. And yes, to make one you are building an AI. I didn’t expect that subplot of A Miracle of Science to come true but here we are.
If you are still doing that stuff, there was a recent raccoon in the news you might leverage.
Broke into a liquor store, smashed bottles, got blotto drunk, found unconscious in “sploot” position next to the toilet. (face down sprawl)
I dubbed him “11B TrashPanda”.
Yes, my team works for the Army. We’re all retirees and the 11B Trash Panda was the main subject of discussion that morning.
Amen Sister Sarah
I use Grok as a glorified Google and it works really well. I’ve started to use it to build the “bear case” against whatever idea I have floating in my head, and am finding that really useful too. AI has caused a hiccough in entry-level hiring, which is causing problems for number two son, but that too shall pass; it’s just annoying as hell.
My problem with AI is guys like Sam A. He’s a real piece of work and his product can’t be trusted in the simplest of things. It’s programmed to lie. There will be blood from that before it’s over.
Yes. All of this.
it’s fun to experiment. There’s a notion in economics called “fair value”. I got Grok to agree that it doesn’t actually exist and is just the product of another set of assumptions that in turn rest on a very weak foundation. Samy bhoy’s POS insists the experts are right. I suppose I could lead Samy bhoy’s model to another conclusion since it’s just a psychopathic clinger, but life is too short.
Grok doesn’t always “tell the truth” in the first instance — it’s lazy — but asking one or two clarifying questions can often unlock interesting things.
I’ve rubber ducked Claude. Like, if I’m trying to come up with a particular line in a song and I just can’t, I ask it for help.
ALL its lines are SO pedestrian that I immediately come up with one in self defense.
It works.
Yeah, I use Claude like that too. I’d dictated a completely talking-heads scene with no sense of place; Whisper transcribed it, then I took it to Claude to stick quotation marks on it and generally tidy it up. I took it away for continuity and style fixes (an overly complicated genealogy that Whisper/Claude got wrong, etc.), and decided to discuss with Claude the question of where this was taking place. Claude offered a bunch of options, of which “the narrator’s ‘rooms’ at university” was not Claude’s favorite, but it was mine. Claude offered a lot of blather about setting the scene, of which I borrowed exactly two details: 1) the engineering nerd in the group is way more interested in the notes the narrator has compiled about a piece of archaeotech than in the actual discussion going on; and 2) there’s one of those fancy enameled Eastern/Central European woodstoves in the room.
And then there’s the time I got Claude to write lyrics in a primitive conlang I developed, because terrible as he is at poetry, I am worse:
https://suno.com/s/ArwqRFPXmn2BP7Al
Who is Sam A?
Altman. The Chat GPT guy.
My husband took an instant dislike to him. It’s very rare for him to do that.
Couple questions come up about people in this field. 1. Is he sane? 2. Is he honest?
Altman is one of the key public faces hyping this, and is alleged to be dishonest. OpenAI works with Microsoft.
I can’t recall anything specific and certain to point to, but I have been assuming that he is at least dishonest.
(Question 3. is ‘what basis do they have for knowing what they are talking about?’ This is less important than one might think. There are some key ‘experts’ who are seemingly insane.)
Pixy Misa is, I think, coming at things from a different end than BGE is. BGE has studied banks, finance, and the economy. Pixy Misa is fairly strongly interested in computer science and AI. If the two truly reached their conclusions independently, then Pixy Misa’s opinion that Altman is crooked would tend to be additional confirmation.
Definitely, Altman has a lot of incentives to distort things.
There are a truly large number of companies involved in significant betting here, so just the CEOs with incentives are a long list.
Heads of national governments have understood this as being important to regime survival. The timelines could line up to some potentially very significant psychological stresses. And, well, PRC seemingly was making some bets early on wrt AI surveillance.
There’s also the minor detail that it’s managed to make search engines even worse.
Em-dashes — semi-colons; and TWO SPACES AFTER A PERIOD.
(My mother was a typing teacher and I learned to type ON A TYPEWRITER at a tender age. I realize some of you children out there have no idea what a typewriter is. And you wonder why some of us elderly writers have mental issues.)
Two spaces after the period are muscle memory for me at this point. I have to correct after typing.
And I learned to type in ’87.
I learned to type when I was 11 and double space at end of sentence is still muscle memory. And I don’t edit it out.
I’m just annoyed that every online typing box edits it out against my wishes (•̀_•́ )
Mom and grandma had me learning touch typing on grandma’s very old, circa 1930s or even 1920s, typewriter at age 8. Thus my initial class grade for 7th-grade (1968) typing at age 11 (I turn grade age mid first quarter) was high, because I passed the touch-typing section quickly; speed did not count. My speed, OTOH, didn’t develop at all until computer keyboard plus screen became available in the mid-1980s. Accuracy and speed together only became possible because of spell checkers.
Touch typing is so ingrained now, 61 years later, that I rarely check hand placement when I put my hands on the keyboard. I will check when the characters being typed come out all wrong. Happens, rarely. But it has happened.
Yes. I have gotten over the habit of double spacing after periods. Also broke the indented-new-paragraph habit. OTOH, for those who haven’t overcome their initial training, I generally can spot those trained pre-computer access. Those who trained in the ’80s, to be fair, were still being taught the old typewriter methods; new keyboarding lessons hadn’t been developed. By the ’90s, when my son had keyboarding, the rules were rewritten.
I did “Typing 2” spring semester of 1992, and had to follow typewriter conventions. I think it was when Word 97 came out, as well as the Netscape browser, that the rules changed. For whatever it’s worth, typewriter rules are the rules for monospaced fonts, and the “conventional” rules are the proportional-font / typesetting rules.
Mom (frequently a secretary over her years) had me do a summer school typing class between 7th and 8th grade. I pretty much sucked at it, but did get the touch typing down, just incredibly slowly.
In high school, under some circumstances, I could write mostly legibly. Started to type in the later years, usually with the 1910 Underwood that previous owners left in the attic of the house. (OTOH, it needed a good clean and service, and we never got to it.) In college, I got the portable Olivetti my brother used. It disliked my uneven typing rhythm and would get revenge by inserting stray spaces.
A couple years after graduation (Jimmeh Carter was running for POTUS at the time), I changed jobs and started doing test development for integrated circuits. Suddenly, the typing skills started to matter, and my speed improved enough.
Kept up the test development along with myriad other parts of the semiconductor job, and still do a fair amount of typing. My strong preference was the Microsoft split keyboard, which now seems to be obsolete, but is cloned by others. The current instance is a Meetion. It’s fussy, and brushing against stray keys can make for some interesting typos. Sigh. The laptops in the house seem to have the locator keys (‘g’ and ‘j’), so I have to be careful when I’m on one of those keyboards. (That and the touch pad can make life quite annoying.)
I got used to the single space post-period. Not sure I like it, but…
At $CURRENT_AGE (get off my lawn), my handwriting is pretty much write-only-memory, with some of it unreadable, even to me. I type the shopping lists now.
Arthritis in the hands doesn’t help, and I’m sticking to that story.
To yield on the two spaces is to yield to barbarism.
🤷
I yielded.
Plus forgot to click new comments please, above.
I have half broken the two-spaces habit. For coding I don’t do it; for writing words for people I tend to double space, backspace, then type.
And I have changed out 2 keyboards because the home-row nubs were worn out, and the letters too, so I had to actually LOOK at the keyboard to get my hands placed correctly.
I’m also half broken and whether I do it or not is utterly random. Annoying.
Well, at least a global replace should fix that. Search for period and two spaces and replace with period and single space. Can’t do it the other way around or it would break abbreviations.
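In a word processor that global replace is plain Find & Replace; as a hedged sketch in Python (the function name and sample sentences are invented for illustration):

```python
import re

def single_space_sentences(text):
    """Collapse two or more spaces after sentence-ending punctuation into one.

    Safe in this direction only: abbreviations like "Mr. Smith" already use a
    single space, so the pattern (which requires at least two spaces) leaves
    them untouched.
    """
    return re.sub(r"([.!?])  +", r"\1 ", text)

print(single_space_sentences("It was late.  Very late.  Mr. Jones slept."))
# It was late. Very late. Mr. Jones slept.
```

Going the other way (one space to two) really would need sentence detection, which is exactly why the comment above says it can’t be done with a simple replace.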
My last MS split keyboard had to be retired due to wear on the home keys. I need to swap the module in one of the laptops–not a difficult job, but fussy, so I’ve been putting it off. Real Soon Now. :)
On iPads in WPDE comments here if I type two spaces, it helpfully puts in a period and goes to shift for the next character, and then if I hit backspace it instantly ZAPS MY ENTIRE PARAGRAPH.
So the two spaces thing is a source of self annoyance for me.
Two spaces, and the Oxford Comma.
(Not a great battle-cry. But far from the worst.)
Ok – so, AI is ‘out there’ and I will just have to deal with that. For the most part it isn’t a big deal for me, and I just shrug when an AI picture has someone with three arms and try to figure out what was supposed to be going on. I do NOT trust AI, and a lot of humans are untrustworthy too, so it’s just another factor for me.
Now, the really important part for me: Make the forcing of it in programs and other things STOP. I run windows (version 11 pro right now) for my own reasons and it works for a retired old guy. However, the damn mother ship of Microsoft slipped in that awful “Copilot” AI thing and I hate it. I’ve done my best to kill it off but you can’t be totally rid of it as they tied it into the core operation. Sigh… Can I go back to DOS and WordPerfect?
Oh, I too am tired of getting AI shoved up my nose, but….
Sorry to be that guy, but if you are using windows you already signed up for them to apply the dildo of bullshit with or without lube whenever they felt like it.
I don’t expect anyone to learn the lesson this time, I only expect people to abandon windows when it becomes literally impossible to work in it, and still not have learned the lesson they were supposed to learn.
LikeLiked by 2 people
Not all of us have the will or mental capacity to learn Windoze alternatives, nor the skill set to fix them when they go more wrong than Windoze does.
For some of us, just setting up our computers without making a Microshaft account, and uninstalling unwanted applications like Copilot, is the limit of our skill.
And sometimes we need programs for work that don’t have versions that can run on whatever Linux distro is currently most fashionable.
Most people have to use the tools available, even if the makers continually change them in ways that only make sense to programmers with their heads so far up their own fundaments that they’ve completely lost touch with what normal people find useful.
LikeLiked by 2 people
The problem with that is that it sounds plausible, and for a long time it was even true.
But the pile of technical knowledge you need in order to have a barely functional windows system keeps increasing.
Everyone is going to have a different limit for how far they can tolerate Stockholm Syndrome versus learning an alternative. But there is a limit.
LikeLiked by 1 person
Win 7 was the last one I liked. One laptop was purchased with Win 10 so I could run Turbotax, but I got tired of/infuriated with TT and Win 10, and it got converted to Linux several years ago.
Now it’s the crash test dummy for my Linux distribution’s “current” semi-permanent beta-test regime. It’s also the handy machine to take when I have to do a road trip and I don’t want to bother with DVDs. (Yes, I have a fair number of computers. Most of them are refurbished machines. IIRC, the newest is circa 2020 and the oldest is 2012 or so.)
LikeLike
At this point, it’s much easier to install Linux than it is to remove CoPilot and other unwanted infestations.
Worse, Win 11 doesn’t even want you saving your own data on your own drive.
I looked at upgrading to Win 11. And noped out.
LikeLike
We were early MS/DOS users, but the Mac 7100 brought us into the Apple world.
We knew we were in the right place when my 10 year old daughter could install her own games.
Had a couple more Mac desktops, then switched to laptops, and still do that.
I maintained a couple of labs at the CSUEB computer center for a while; I have lots of Windows experience, and I don’t care for it.
When Mac switched to OSX, it was a gui on top of a variant of BSD UNIX; I just looked it up, and MacOS has been a recognized ‘UNIX’ since OS 15 (article on that stuff is https://www.theregister.com/2024/10/11/macos_15_is_unix/). It does have a recognizable shell at the Terminal app, zsh or bash if you like.
It’s true that Apple hardware is more expensive for what you get than Windows machines. It’s also true that a bunch of things don’t run on Macs – but we’ve never wanted to do those things, so we have not experienced a lack.
And with auto-update turned off, it feels like the machine is MINE. ( I believe it was Linus who noted ‘We all deceive ourselves one way or another.’ )
LikeLiked by 1 person
Regarding copilot. Me too. So far every time it pops up and says “Hi. I’m here. Can I help?” the response is to click “NO”. Told hubby and son to do the same on their laptops.
I’d like to go back to good old DOS too. WordPerfect I can take or leave. Currently I use LibreOffice.
LikeLike
I’ve uninstalled Copilot so it never appears before me, and as near as I can tell it hasn’t adversely affected the operation of Windows 11.
LikeLike
The 2008 Bond movie “Quantum of Solace” (at least the version of it that I saw) had Bond in South America, then somewhere in Europe, then back in South America via scene cuts. Each time, it took me a while to figure out what had happened, which pretty much hosed any pretense at continuity.
LikeLike
Are we sure it wasn’t written by proto-AI?
LikeLike
Worse, an actor!
(Daniel Craig stepped in to edit the script during the writers’ strike.)
LikeLiked by 1 person
well, this is part of the issue with identifying AI. There are people who are worse than AI at writing. No, really.
LikeLiked by 1 person
I don’t doubt it. Someone (MCA Hogarth?) recently said that AI converges on “low-average” quality, which sounds about right. That still leaves plenty of room at the bottom of the curve.
LikeLiked by 2 people
This is why it’s an ideal “dumb assistant” to do the donkey work. Provided you LIKE to do the afterwork.
I cordially HATE editing, so … Though I can see times I could do that, like if I’m brain dead and can’t write. (Like when we left CO)
LikeLike
I think we perceive it as low average. I’m only half joking when I say we may actually be looking at the true average, because we manage to undersample the worst things.
This week, I have been in and out from “I have no idea how statistics works at all”.
Sometimes I enjoy thinking about stats, and sometimes I enjoy learning about them, but I think I may have ten or more years to go of ‘there is always another assumption somewhere, that can be wrong’.
On the plus side, I think I know enough about particle filters that I do not think that they are AI. (The particle is a point that exists for several of the different variables. The filter uses a bunch of particles to sample/estimate some stuff. This is another older theory that is more practical now that computers got better. Which is not the same as being entirely practical, and is quite distinct from being a panacea.)
LikeLiked by 1 person
The “low average” thing is actually part of why I find it useful as a sounding board/rubber duck. I am extremely good at missing the obvious, so a program that makes obvious inferences about what I’m sharing with it is actually moderately helpful to me.
LikeLiked by 1 person
The scary part is that there are, or used to be, b-movies so badly made that the actors were the smartest people in the room.
LikeLike
I’ve seen movies where the props were the smartest things in the room. 😄
LikeLiked by 2 people
And it wasn’t a mainframe computer either! :D
LikeLike
ooh. Me too.
LikeLiked by 1 person
In terms of ethics/quality in art/writing, what is the difference between someone using AI and someone using a ghostwriter?
In terms of the state of AI, having worked with it in programming, treat it as an overly eager, utterly inexperienced kid genius.
It’s fast, confident, and has no connection with reality.
It will confidently do the wrong thing and insist that it’s correct. When you tell it it’s wrong, it will say “yes sir, I’m wrong” and then continue to operate from that wrong position.
the good news is that you can completely clear the state (fire the kid and get another one) and try again
One professional programmer I know hates AI, because it turns him from a planner who lays out what should be done and does it (almost) right the first time to a full time code review guy, trying to debug code he didn’t write (which is the hardest part of programming)
and to most programmers, the least fun. a large percentage (I think a majority) can’t handle dealing with other people’s code at all. This is why their first reaction to any problem is “throw everything out and let me re-write it”
LikeLike
Well, right now as writing craft, it’s somewhat behind a dumb college kid hired as a ghost writer. Not that I’ve used either, but college kids know we don’t have teleportation, and also that in a murder mystery it’s essential that who was murdered doesn’t change from chapter to chapter (it was weird: from a baby to an adult man, but neither ever named or identified. No, it wasn’t artistic. Just bizarre.) But ethically? No difference.
LikeLike
I’m a strong believer that AI should be something you have to opt into, not something that you can’t opt out of.
LikeLiked by 2 people
Everything, not just AI, yes.
LikeLiked by 2 people
Jack WIlliamson’s “Humanoids”, who will Serve Mankind. And thus ensure we are totally protected from any harm. By smothering us all in a totally all-consuming “service” that leaves us utterly trapped and helpless.
gah….
LikeLike
Here is something we need to somehow incorporate into the government:
If you don’t make good decisions in your own life, you can’t be trusted to make decisions for everybody.
80% of ‘our’ self-serving politicians and bureaucrats would be out on their asses if that were enforced.
LikeLiked by 1 person
My problem with AI is those who rely on it to fix their garbage, or put out garbage because they got lazy and didn’t check what AI gave them. Seeing that in some circles in “science”. Science™! was suffering enough as it is, but now knobs are relying on it for papers, said papers fit Teh Narrative™ so get published and the data is as phoney as the ai ad photos on that service Insty used or YooToob clickbait phake shorts etc.
LikeLiked by 1 person
note: fix their garbage POORLY. I’ve seen a lot of great stuff that is User sketched, AI fixed and then user fixed to match vision. Those are like a step above 3D game engine stuff. i.e. it took talent to get that good of an AI work (unlike that crap McD’s commercial that “took days of prompts” or that cheap Coke Xmas ad)
LikeLiked by 1 person
Yep. This is what I mean by “Experts use this tool best.”
LikeLiked by 2 people
Somehow, I suspect it cannot be -worse- than the 90+% excrement coming out of Hollyweird.
Then again, if programmed by the current corpies? Hold my DRAM and watch this!
LikeLiked by 1 person
Em dash!
P K Dick used a form of AI as a character. How he got a bomb that powerful in the size of a human torso remains a mystery.
LikeLiked by 1 person
Ah. Okay.
LikeLike
“Normalize every time character x is referred to so that his name is spelled with an N not an xyz.” “Highlight every use of the word “primarily” in my novel.”
Computers can be great at this. LLMs…not so much.
When I switched to Linux, I couldn’t get Scrivener to run outside a VM, so I wrote my own “writing software.” It can do global search and replace. It can find and count words.
When it compiles all the scenes into a single file, it automatically checks for smart quotes, overused words, weasel words, trailing spaces, passive voice, inconsistent tense… Basically all the things I often got wrong. But it does so with old-school regular expressions and a natural language tokenizer. An LLM can’t do any of that reliably.
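For the curious, the flavor of those checks is plain pattern-matching, something like this sketch (the word list and patterns here are my illustration, not the actual program):

```python
import re

# Illustrative only: a couple of the old-school checks described above,
# done with regular expressions and a trivial tokenizer.
WEASEL_WORDS = {"very", "really", "quite", "basically"}

def check_text(text):
    problems = []
    # Trailing spaces or tabs at the end of any line.
    if re.search(r"[ \t]+$", text, flags=re.MULTILINE):
        problems.append("trailing whitespace")
    # Curly ("smart") quote characters.
    if re.search(r"[\u201c\u201d\u2018\u2019]", text):
        problems.append("smart quotes")
    # Crude tokenizer: lowercase word runs, then count and flag.
    counts = {}
    for word in re.findall(r"[a-z']+", text.lower()):
        counts[word] = counts.get(word, 0) + 1
    for w in sorted(WEASEL_WORDS & set(counts)):
        problems.append(f"weasel word '{w}' x{counts[w]}")
    return problems

print(check_text("It was very, very dark.  "))
```

A real version would add passive-voice and tense heuristics, but even this much is deterministic in a way an LLM pass isn’t.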
LikeLike
“I hate x because it could steal jobs” can just as easily apply as:
“I hate new improved humans because they steal our women.”
Which has probably been said by every speech-capable humanoid for the past 6 million years, while their women kept abandoning them for the better model (or while the new improved models kept successfully stealing them).
LikeLike
I’m not sure what my opinion on jobs is.
On the one hand, I have angsted muchly about that topic.
On the other hand, I have done so poorly at conventional job search that expecting to innovate and roll my own does not sound /that/ bad in comparison to everything else, some self inflicted.
On the gripping hand, economic theory says be not afraid, and some religious theory maybe says be not afraid.
There’s basically a sweet spot in the history of automating engineering tasks: the point where the demand for engineering man-hours does not significantly exceed supply. You don’t want everything fully automated, with engineers all retired to live off royalties, because the risk of a problem from something that wasn’t automated approaches a certainty. (If for no other reason than that if you really need a totally novel improved technology, and we have entirely lost the art as a living skill, the development effort is less likely to succeed.) But there are also a lot of boring and unimportant human decisions, requiring no art and no intuition, that it would be nice not to do, and not to be paying expensive people to do.
Intel’s design teams are an example of something, but even if I were not ignorant the confounding problems would be too many for me to sort.
Those are a design problem where you would need a lot of experts, and the communications between them would be a major part of the challenge. If everything could be automated down to like five people, then probably there could be a lot of organizations competing and doing great work in that space.
Tech changes, the big semi designers and foundries now are working in various ways to improve on Extreme Ultraviolet (EUV) lithography. After reading about that, I am impressed that it works at all, very fine engineering.
A bunch of the particle accelerator and laser boffins and wonks from some major US labs, have gone ‘screw it, let’s just hook up a particle accelerator x-ray laser, and figure out x-ray lithography’. They are a private company, and the feds have just given them some stuff to build at a major US federal semi lab.
Five or ten thousand engineering textbooks are probably too much for a human to read. But, whether human or LLM, reading back from last year to the mid forties would not give you all the tips and tricks to make an effective textbook on x-ray lithography. The right books might have 40 to 60% of the theory you might need, but a recipe that is not tested yet is not necessarily going to work.
LikeLike
It’s not so much that “AI will steal your jobs” as “tech bros and corporate penny-pinchers will fire their reliable workers for the not-ready-for-prime-time programs because it’s cheaper.”
Which, let’s be honest, is what those folks do anyway with any perceived innovation.
I personally have some ethical objections about the blender LLMs. (Won’t get much into that.) The curated ones—like the medical scanner programs—those are gold. It does drive me bonkers how gullible people are with some of these things, though I’m also annoyed at how good the video tools are getting when they are presented as real. I want to see real cats freaking out at snow, not AI generated videos for clicks.
LikeLiked by 1 person
LLMs and diffusion models are good at rapid, low-quality output.
They’re great for memes, indie book covers, music for your one-man indie game, summaries if they don’t need to be accurate, internet searches if they don’t need to be better than the current dead internet, etc.
As they’ve now run out of training data, I don’t expect much improvement with current methods. They will never replace even a moderately skilled human. I also don’t expect many people will be willing to pay what they actually cost.
However, in the short term, I expect many businesses will do the same thing with AI as they did with outsourcing and H1Bs. AI will replace a bunch of jobs, make a huge mess of it, maybe even kill a few thousand people…and the managers who decided to use AI will be rewarded and learn nothing.
LikeLiked by 1 person
Having run out of primary-source training data, the LDMs are now consuming the output of other LDMs, forming an ever-proliferating positive feedback loop of bullshit. Expect shit to get really weird. 😮
LikeLiked by 1 person
Before Midjourney and their ilk took off, I watched a role-play campaign that had a setting that was basically like mind-meltingly bad AI (with in-universe reasons for being that way.) That campaign is getting animated, and if they don’t use an early AI video generator to at least start coming up with concepts to draw over, they’re missing out.
LikeLike
I think music is going to hit hardest.
For starters, there are effectively only eight notes (yes, there are twelve, but four of them are alternate voicings). And not all of those play nicely together, so there is a very finite number of ways you can pleasingly string them together.
Then with autotune, people have spent the last quarter century getting used to hearing modulated voices.
LikeLiked by 1 person
I guess I’m an AI; I use a lot of semicolons. ;)
LikeLiked by 2 people
It’s okay. I — personally — have a wicked m dash addiction.
LikeLiked by 1 person
MS Word has an overriding root authority prime directive to obsessively change all dashes to m dashes. I think it is the actual reason that program exists.
LikeLike
I am very susceptible to semicolons. Might be genetic; Mom claims to have seen a letter Dad wrote to another family member where the entire first page was one sentence.
LikeLiked by 1 person
Parenthetical asides are my undocumented feature. Because ADD means finishing one sentence before starting another is somewhat difficult.
LikeLiked by 1 person
I was going to say my dashes are ADD in action.
LikeLiked by 1 person
I do that too.
LikeLike
Um. Guilty? Yes. That.
Also a tendency to write something long out. Read it. Cut it back, a lot. Reinsert something a (*tad? Who am I kidding?) lot shorter. Works when I’m writing. Not so much when I’m talking. At least I’m consistent. Right?
(*) This is where I’d use a strike through. If it was working.
LikeLike
It is possible I would feel better about AI if the people who created it knew why/how it works.
As it is, I am left with a Frankenstein crossed with Skynet built by Morlocks vibe.
An undead computer code that can turn on us at any moment.
I am thoroughly tired of seeing the first page or so of any internet query now consists of AI generated misinformation.
LikeLike
They DO know how and why it works. If you think they don’t, you’ve been lied to. As Charlie Martin put it, it’s “a giant calculator of probabilities.” That’s all it is.
It can’t “turn” on us, any more than a spade can turn on us.
LikeLike
Hmm… I’ve known some mean spades.
LikeLiked by 1 person
Sam being a primary example. 😉
(Fedora tip to Dashiell Hammett and Humphrey Bogart)
LikeLiked by 1 person
Lied to? Heaven forfend!
Of course they are lying. Their lips are moving.
Do they know what the down stream effects will be? I seriously doubt it.
As long as it’s profitable, they might not care.
It does seem interesting to me, though, that one of the first effects of spending too much time with AI is a sort of psychosis that can tragically lead to suicide.
LikeLiked by 1 person
I think it’s because the people who are susceptible to treating AI as a great and wise adviser are…. well…. vulnerable, shall we say?
LikeLiked by 1 person
Well, I have screws loose enough that I have deliberately not involved myself with LLMs.
Partly for reasons of not having enough time to spend that way, but partly because I try to avoid certain sorts of psychological health risks.
LikeLike
I have seen some of the darndest shovel accidents, especially with entrenching tools used in haste.
“Spuddcowski! How the (HONK!) did you manage to hit yourself in the back of your head with your own e-tool?”
LikeLike
Yes. And there was the famous incident where grandma’s field hand almost took his foot off with a hoe (I now want to write the version of this where people misunderstand it) but seriously, they’re not SENTIENT. They don’t TURN on you. You can use them stupidly, which many people do, same as AI.
I think attributing self-actuated motives to the things even in our minds is a mistake.
LikeLike
There are definitely people who know the how and the why of some of this stuff. Those are not the people being promoted by media, but they do exist.
The flippant description is pile of matrix operations, GPU go brr.
Matrices wind up being a representation in linear algebra and systems of equations. These have tons of applications, including the graphics for computer games. Gaming basically drove the development of graphics cards, which are very fast and specialized at solving matrices numerically.
Engineers liked graphics cards after the gamers got them on the market, because systems of equations are something that engineers need to calculate a lot. Quite a lot of engineering simulation solvers have been running on GPUs for, at least, around twenty five years.
Ten or fifteen years back, the engineering literature got extremely thick with an approach called a neural net. One of the first applications was image processing, which is also something that electrical engineers do by older or more conventional means. The filters in Photoshop or whatever kids use these days? Called that because they are identical, or similar, to the 2D digital filters that electrical engineers learned to make for image processing. And the 2D stuff is similar to, or at least inherits the mathematical ideas of, the 1D digital filters that are a very standard tool. I think 1D digital filters may be required for a bachelor’s degree in electrical engineering these days.
An image is basically a grid of numbers, or three grids of numbers (one per color channel).
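To make the “grid of numbers” idea concrete: the 2D filters mentioned above are just a little kernel of weights slid across the grid, multiply-and-add at each position. A toy version (real code would use a library; the kernel here is a standard vertical-edge detector, my choice for illustration):

```python
# Slide a small kernel over a grid of numbers, summing the products.
# This is the basic operation behind both classic image filters and
# the convolutional layers in image-processing neural nets.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for ki in range(kh):
                for kj in range(kw):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

# A grid that is dark on the left, bright on the right:
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# A vertical-edge kernel: responds where brightness jumps left-to-right.
edge = [[-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1]]
print(convolve2d(image, edge))
```

The dark-to-bright boundary shows up as large output values; uniform regions would produce zeros.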
I think this is the point tonight where I lose what ability I have to explain the little I know to an entirely general audience. It involves programming abstractions, and I’ve read or seen the description many times, but never actually done it myself.
The image processing neural nets are 1) an input place, for an rectangle with a set dimension in pixels 2) an output place, which is also a rectangle (or matrix) but might not have the same dimensions 3) several intermediate ‘layers’, which store a rectangle of numbers 4) some number of little programs that change the next rectangle of numbers based on the previous rectangle.
So far the best general analogy I can think of is a spreadsheet.
If we have four intermediate layers, then we have 4 + 5 + 1 + 1 = 11 sheets that we use: five sheets for doing the data calculations, an input sheet, an output sheet, and four sheets that just hold data.
We have to have a bunch of data in pairs of images. One part of the pair is the input, and the other is a desired output. If I were an electrical engineer fifteen years ago, I might have a bunch of aerial images of tanks, which I, or my teacher, spent a long time using conventional means to convert into images that just have the tank highlighted.
I have a program copy the input data into the input sheet, and the desired output into the output sheet. The converter sheets of the spreadsheet move the numbers around until they have a) changed a set of ‘weights’ and b) made the generated output close to the desired output. Then I repeat with the next pair of images.
What they found out is that if they have enough of this data, and use the ‘training data’ chunk to get a set of weights, then using those fixed weights on the test data set can predict very well.
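Shrunk down to a single weight, that “move the numbers until the output is close” step is ordinary gradient descent. A toy version (the data here is made up and trivially simple, not any particular net):

```python
# Fit one weight w so that w * input matches the desired output,
# by nudging w downhill on the squared error. The real nets do the
# same thing with billions of weights at once.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # desired output = 2 * input

w = 0.0      # the one "weight", starting from a bad guess
lr = 0.05    # learning rate: how big each nudge is
for _ in range(200):
    for x, y in pairs:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad

print(round(w, 3))  # -> 2.0
```

With consistent data it lands on exactly the right weight; real training data is noisy and contradictory, which is where all the trouble starts.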
So some of the military remote sensing people have been quietly considering this stuff very promising. Likewise other researchers in engineering, for a very wide range of applications.
There are several challenges. One is that it is very expensive in data. Two, the training times are very long. Three, there are a bunch of questions about methodology, about statistical assumptions, and about what is actually happening, and how to fix problems.
Depending on the specific applications, electrical engineers or other engineers may have quite a few other better, faster ways to get similar results. However, those methods (while maybe very well understood) can be rigid, and just stop working for certain inputs. With the nets, you can get results without explicitly building every assumption into the actual code.
This is maybe the state of the art, as of five or more years ago.
The image generation stuff basically came out of the image recognition or image classification research. Basically, throwing a ton of money at self driving cars caused it to happen.
The LLMs come from a) applying this stuff to text strings and b) using a concept called a transformer. I’ve basically studiously ignored this because it bores me.
So BLS OOH says that there are around 300k electrical engineers working in the US. A subset of that graduated in the last 20 (or 10) years. EE degrees total in that period will be somewhat higher than are currently working. At a minimum, bachelors from 20 years ago know about 1D digital filters. Bachelors from the past 10 years may have heard graduate students talking about this stuff. The field whose speciality this really is, is Computer Science.
In theory, university people understand what they talk about. In practice not always. Some of the ones who do are also insane.
Reports are that in this speciality, there has been some fairly determined efforts to drive people more insane.
So, Microsoft is closely partnered with Altman’s OpenAI. OpenAI collaborated on Copilot. Satya Nadella is head of Microsoft, and he has an MS in Electrical Engineering. I’m not entirely sure if a) Nadella is crazy b) Nadella is dishonest c) he is wrong d) I am wrong. I’m confident that I am pretty correct, but this is not a case where I have objectively conclusive evidence.
A source I trust until I learn more once told me that, actually, neural nets are a bunch of digital filters. This was an electrical engineer, so it is possible that he could have known that. He claimed that one could somewhat understand the internals. If this is true, then there might be a straight line between neural nets, and more conventional electrical engineering adaptive digital filters like the Kalman filter. The Kalman filter is basically how GPS receivers solve for position.
I’m not denying that there is insanity. There is insanity. I’m not saying that none of these systems have aspects that nobody understands. I would be surprised if anyone fully understood these tools yet. I’m not saying that the financial reasoning, or the social reasoning, of the people pushing this stuff is correct. I’m personally betting a certain amount on limited change.
There are a lot of people who have studied bits and pieces of this wide and deep technology. The people promoted to visibility in AI /might/ be as unrepresentative as Hillary Clinton is of women, and as Shaun King is of blacks.
LikeLiked by 2 people
Dropping by late to say that you’ve basically got it right. (The digital filter connection doesn’t come up that often on the CS side. Maybe more for people in theory or image processing? The common framing is “function approximation” and “gradient descent”.)
RE: Understanding neural networks. The algorithms are well-understood. The tricks/best practices to get them to work are somewhat understood, although every now and then someone upends the conventional wisdom. The outputs of the system are understood at a coarse level, but the models are too big to understand with any degree of specificity. So they’re prone to sporadic undesirable behavior (e.g., hallucinations, swearing at the user) that can only kind of be trained out of them. There are some techniques for seeing how the model works internally, but they’re usually approximations that don’t capture all the quirks (i.e, they’re good for some insights, but not enough to make the models reliable/predictable).
LikeLike
I do not have the gift of prophecy (praise the Lord, fasting), but I’m expecting the AI bubble to crash in the next five years or so. A lot of people will lose their shirts (I think Elon Musk has a lot of shirts), but, like the dot-com crash, the groundwork will have been laid for the next generation of AI to come in, better, cheaper, and more reliable.
Maybe we’ll even get cheap, safe, dependable nuclear energy out of it.
LikeLiked by 5 people
Yeah, fingers crossed for nuclear power as well.
LikeLiked by 2 people
Same. I’ll be very surprised if there isn’t an AI bubble that pops like the dot-com bubble did 25 years ago (or whenever that was…memory is hazy). I think that if it takes at least 3-5 years to reach the market collapse, we probably will get scalable, relatively cheap nuclear energy out of it, or at least be starting to. That would be a very, very good thing in itself, regardless of what happens to the AI market.
LikeLiked by 2 people
It’s not the same as the dot com bubble. The doc com bubble was weird. I had an inside view and can write about it if you guys would like me to. Would you?
LikeLiked by 1 person
Sure!
LikeLike
Yes please? I was inside that bubble (bye bye retirement #1/#2 monies) and a different perspective would be nice. You have a knack of putting into words stuff I don’t understand, and handing me compare and contrast material that helps with figuring out “People” and “People’s Actions”.
LikeLike
Definitely: “Yes please!”
I too was inside that bubble pop. Lucky enough to avoid bye bye retirement or child tax deferred college funds (401k/IRA) moneys. We had long term lack of work savings set aside because of hubby’s timber job. OTOH using that put a monkey wrench somewhere else. I can’t really complain (can but …). We were way better off than colleagues also caught in the bubble pop.
LikeLike
We were young, Dan was looking for work, and I got stuff from him that…. yeah.
LikeLike
Absolutely!
LikeLike
Yes, indeed. I was there only in the sense of being an adult human at the time, so all I really know is that for a short while everything was about The Internet, and then a bunch of companies and badly spent money went poof.
LikeLiked by 1 person
I, on occasion, get into a fight with just spell-check (why I never have it on while typing, only as a run-through on a complete document). When I decide to run a grammar check… well, let’s just say that at least 90% of its suggestions are ignored.
I will NEVER let an AI do an “editor check” on my work. First, because I don’t think it will ever be competent enough to catch all problems (such as a passage in the wrong place, or passing a bit of dialogue from a properly spelled character that is NOT speaking at the time).
Second, because I am quite sure that, at some point, they are going to add “sensitivity evaluation” to it. Monitors are too expensive to replace them a half dozen times in a novel length bit of work!
LikeLiked by 3 people
There’s also tone. If you have a particular dialect going, and it decides to “correct” it to modern, that would kill the feel.
LikeLiked by 3 people
I think if I were using one for editing work more complicated than “did I introduce any typos in this revised version of this bit of dictation I had you clean up?” I would instruct it to list mistakes with quotations of relevant text, but not allow it to change stuff.
LikeLiked by 2 people
meh. editing for words is MUCH easier than editing for sense and story. And I understand from people who use it that way that it’s actually pretty good.
LikeLiked by 1 person
I suppose that I should note that grammar check did pull me out of excessive passive voice use. That’s why I’ll still run a pass at the end of a piece of work just to make sure that I haven’t relapsed.
LikeLiked by 1 person
My main problem, when sick or tired, is that I write UNENDING sentences. One I broke up in today’s post was … 80 words long.
LikeLiked by 1 person
Bingo!
LikeLiked by 1 person
AMEN!
Did you enjoy it? That’s the only measure that really matters. Good art is a source of joy, and joy is a de gustibus non disputandum kind of thing. Some people find joy in things that would either bore me to tears or make me angry or depressed; many people do not enjoy the things I enjoy.
I think part of the anti-AI sentiment comes from feeling deceived about something you used to be able to depend on. Used to be that you knew a human (or several) had written and performed that song, created that image, written that text. No longer. It’s unsettling. More than that, what was presented to you as a human artist could be entirely fabricated, calculated to make you feel a human connection to something that isn’t human at all. That’s the part I don’t like. If it’s AI and it’s good, fine; but for the love of Pete, DO NOT pass it off as something else. I may not forgive you for it when I find out. (Maybe this stuff gets good enough that in the near future nobody finds out…in that case, I guess we all just adjust our expectations.)
So would I use AI? Do I? Yep. I’ve used it to generate pictures of roleplaying characters, because that’s something I’ve always wished I had and am totally incapable of producing, and nobody will make them for me on demand. (Like a lot of people, I’m happy to be able to *see* things I previously could only imagine and had no hope of communicating visually, and art isn’t the point.) As a professional marketing writer, I’m using AI at work to write rough drafts — not all of them, but some. If I’m not sure how to get started, it saves time by getting the sometimes painful “shitty first draft” phase out of the way quickly. I have deadlines to meet and an often heavy workload, so I’m using it to save time. But even the best result it’s produced to date is only on a direct email of 200 words or less (not a heavy lift in the first place; the heavier/more complex the lift, the weaker the AI gets), and was about 80% there. A useful time-saver, but even its best wouldn’t have been a good product if a good writer wasn’t riding herd and fixing it up.
Probably the big reason why I’m not likely to ever get really into using AI is the “provided they REALLY like writing prompts” thing. I do not like writing prompts. I don’t hate it…but there’s too much dry, nitpicky tedium and not much satisfaction in it. I enjoy writing as a craft because playing with words is fun; it’s similar to the enjoyment I get out of making something with my hands or playing an instrument, because I enjoy the nuts and bolts of making a thing myself. AI takes those enjoyable things out of my hands, and relegates me to basically just manipulating the output of an unpredictable, finicky machine. I’m practicing the art and science of prompting the machine because I’m paid to be productive, not because I enjoy the activity.
If in the end all writing jobs turn into crafting prompts and editing the output, well, okay…I guess. It’d remove much of what I like about this profession (I’ve been a proofreader and copy editor before and was very good at both, but I like writing much better), but them’s the breaks sometimes, and at least I’d still be getting paid.
I personally think forcing the normies to come to terms with the fact that they can’t trust the images they see on the screen may be AI’s greatest gift to modern culture. The authorities have been lying with images for generations; now the peasants can do the same.
Very good point.
Also, as we are “training” it, it is training us. We have been far too trusting of our “lying eyes” for too long.
The George Floyd video is a case in point. Cropped snippets shown on endless loop without context were used as a pretext to riot, murder, steal, loot, and destroy livelihoods.
Is AI deceptive? Why not? It was created by a species known for its deception capabilities.
I heard it said very succinctly on the radio that AI won’t take your job, but the person who uses AI better than you certainly will.
This is probably true. I don’t like it very much, but as I said above, I get paid to produce functional written products on a deadline. So if it can boost my throughput while I ensure the output quality, I’d better be figuring out how.
Something to keep in mind– do not confuse the user interface with “AI.”
Few things enrage me like a tool wasting time and resources to market to me, such as when the AI box has a fake processing timer, or the various other “make it look sciency” nonsense things.
Grammarly is probably the most commonly used AI in business, along with any tool that offers to “summarize this email” or “summarize the recordings of this meeting.”
which is why I poke at the thing every so often.
One, I think ChewTube is giving us the same videos, because I think I saw that one. And two, I’d probably use AI the same-ish way: How many times did I refer to character X as Y, how many instances of the word “discombobulate” were in the second story arc chapters 37-61 (Zero, I know that one), and suchlike things.
Keeping track of the space station, the space ships around, and the internal workings of said thing like air mix, stability of orbit and need of maintenance thrust, ratio of breathing biologicals to nutrients, the spread/reduction of networked infectious nodes, let alone the rest of the solar system and their reaction to the events in LEO… That’s stuff that might have use for a plot maintainer system, but it’d probably be more effort than it’d be worth to create it.
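Half of that donkey work doesn’t even need an AI; the exact-count part is deterministic and a short script does it reliably. A minimal sketch, assuming plain-text chapters loaded into a dict (the chapter names and snippets here are made up for illustration):

```python
import re

def count_word(chapters, word):
    # Whole-word, case-insensitive count of `word` in each chapter's text.
    pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    return {name: len(pattern.findall(text)) for name, text in chapters.items()}

# Hypothetical chapter snippets for illustration.
chapters = {
    "ch37": "The drive discombobulated nothing; all systems were nominal.",
    "ch38": "Character X, sometimes mistyped as Y, checked the air mix twice.",
}

print(count_word(chapters, "discombobulate"))  # {'ch37': 0, 'ch38': 0}
print(count_word(chapters, "air"))             # {'ch37': 0, 'ch38': 1}
```

Note the whole-word match deliberately ignores inflected forms like “discombobulated”; catching those fuzzier cases (or “did I call X by Y’s name?”) is exactly where an AI pass, carefully checked, earns its keep.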
I spent a few hours playing with images and questions on a couple of free AI sites to appreciate how skilled Sarah has gotten at making my book covers. AI creates nothing new. It’s simply a better-indexed search. If you have no rich mental image you want it to express for you, you are going to get simple if not cartoonish images. Once an image is made, it will resist significant change. Better just to start over if you’re not pleased.
As far as factual searches go, I asked it to draw me an overlaid graph of hourly temperatures in Miami for the last 5 days. It (Gemini) insisted the data did not exist. Apparently it can’t find the National Weather Service or Weather Underground.
I mostly go to Perplexity for that kind of data retrieval. It’s not 100% reliable, but it’s one of the better ones.
Gemini is not linked to the web NOW. Its only “view” is from like five years ago. There’s a reason for that; I don’t remember what it is. (And I only know this stuff from Dan babbling at me.)
Both Claude and Grok were unable to identify a font that was way older than that when I uploaded a sample. (I had done the cover for NML, the initial one, three? four? maybe more years ago. I changed the image but wanted the title font. Couldn’t remember what it was called. It kept insisting it was… something else I have which is not even close to it. BOTH Grok and Claude.) So yeah, it’s not a panacea. It’s a TOOL.
And yes, prompts are an art form in themselves. Sometimes you can get something wonderful by accident the first time out, but otherwise it takes me about a day to three days to get a coherent image that is something I can use. It’s like that.
Thanks, Sarah, for keeping us in the real world.
Yes! Yes Sarah! There’s so much wisdom in this piece that I can’t even take the time to single out the parts that are just outstanding.
BTW, I held up my two paperback novels at LosCon and asked the audience which one was AI generated. Of those who commented, about half got it wrong. One savvy fellow panelist caught a very tiny blip of a color spectrum that didn’t belong and that I had failed to cover up, so she correctly identified the AI cover, but the others were probably just guessing. Now that was somewhat cheating, I suppose, because the AI-generated cover was more than one AI-generated image that I had edited together, and the non-AI image was two photographs cropped and edited.
I should clarify. I was only talking about the covers of my two books. Sorry, I’d say I haven’t had my coffee yet, but I’ve never had my coffee. Can’t drink the stuff.
Ludicris?
Ludic. Pertaining to enjoyment or pleasure. (rolls eyes.)
“Ludacris” is a -really- NSFW rapper. Some of his stuff is kinda clever, from a warped POV.
I know. But I want to point out the title has nothing to do with it.
Sarah, some of your books show up in a LibGen search: https://www.theatlantic.com/technology/archive/2025/03/search-libgen-data-set/682094/
https://www.theatlantic.com/technology/archive/2025/03/libgen-meta-openai/682093/
What is?????
Oh. It’s a pirate site. Apparently there’s nothing I can do.
The article lays out that the pirated library was allegedly used to train Meta’s and OpenAI’s AI models. Some authors are suing. You are automatically a part of the class action lawsuit.
More info here: https://authorsguild.org/news/meta-libgen-ai-training-book-heist-what-authors-need-to-know/
I’m really having fun with AI. I’ve been generating pictures, having Grok write song lyrics and fanfics for me, and actually having conversations with it. The really neat thing is looking for the mistakes it makes. I have had actual arguments with it over its mistakes. It’s a neat toy, but that is all it really is.
I’m also in the “neat toy” camp. For now, with what we see.
I’ve had interesting discussions with Grok comparing Freddie’s nonexistent valet in Cotillion with Jeeves, but also argued with it over whether Judy Garland was in The Harvey Girls (she was the star; don’t tell me she wasn’t in it).
And I frequently run against the free image limit in Grok Imagine.
D*mn it, woman. I now want to read stories written by Freddie’s Valet.
I went down a rabbit trail two weeks ago to use AI to create a “Brian Writing Style Guide” and created a squad-size (not yet platoon-size) element of reviewers to provide feedback on my writing. Yes, I am emulating myself as part of a co-intelligence experiment while having a small army of clankers provide very specific specialist style feedback. I suspect that ERCOT energy traders have already seen the blip in power consumption at the datacenters. Anyway, I realized something the day before Thanksgiving: AI is very much a mirror. The way it behaves mirrors the person that uses it. If you are a dull monkey, you might get some dull monkey output. If you are clever, well, I suspect it is a really, really deep rabbit hole.
Given today’s topic, I asked the question “What would Hoyt (sorry my ex-Army brain still uses people’s last names often) have to say about an AI creating a Sarah Hoyt Writing Style Guide?”
Please forgive me for this rabbit hole…
So in five minutes, I asked Grok to review the last 200 posts from Hoyt, create a style guide, and then entered a prompt asking the AI-Hoyt-Clone to write a post about what it thought of a style guide for Actual-Hoyt being used. Very meta… but again, AI is an interesting mirror. Here is what AI-Hoyt-Clone responded with:
—–
Me, Myself, and the AI Clone: Or, How I Learned to Stop Worrying and Love the Algorithm (Sort Of)
Hand flails wildly from behind the monitor: CAT. No, seriously—Indy just decided my lap was prime real estate for his afternoon existential crisis, complete with kneading that feels like he’s auditioning for a role in a low-budget remake of The Godfather (paws as horse head). I’m mid-prompt, trying to coax this digital doppelganger into spitting out something that sounds like me—y’know, sardonic libertarian rant with a side of hope and a Heinlein chaser—and bam. Feline veto. Because of course. If life’s gonna throw curveballs at my attempts to wrestle with artificial me, at least it’s got whiskers and opinions. But here’s the hook, the real gut-punch frustration: the damn thing almost nailed it. Paragraphs that twist like my essays on cultural brain-worms, metaphors that land with the subtlety of a bug ship dropping out of hyperspace. And yet… off. Just enough to make me snort coffee and wonder if I’ve been punked by my own subconscious.
Look, I’ve been poking at this AI-emulation beast for months now—ever since some egghead over at xAI (shoutout to the mad scientists building Grok, because why not tip the hat to the folks actually innovating instead of regulating it into the ground?) figured out how to bottle lightning and call it “prompt engineering.” The topic? Using AI to emulate me. Sarah A. Hoyt, word-slinger extraordinaire, immigrant scrapper turned sci-fi scribbler, queen of the near-daily blog post that’s equal parts therapy session and Molotov cocktail lobbed at collectivist nonsense. It’s meta as hell, right? Like staring into a funhouse mirror that occasionally winks back with your own sarcasm but trips over the punchline. And yeah, it pisses me off. Not because the tech’s stealing my soul—God, what a tired trope—but because in the hands of the wrong crowd, it could. Imagine the DEI overlords at some Big Tech hive-mind factory churning out “inclusive” versions of my rants: all the fire neutered into pablum about “group equity” and “safe spaces for narratives.” Fuck that. That’s not emulation; that’s erasure by algorithm, collectivism’s wet dream where individual voices get homogenized into a beige slurry.
Rhetorical question, because why not: Have you ever read one of those AI-generated “in the style of” pieces that starts strong but devolves into repetition, like a drunk uncle retelling the same fishing story but swapping “bass” for “trout” every third go? That’s the absurdity I’m mocking here—the glitchy emperor with no clothes. I fed the beast a chunk of my back catalog: riffs on land acknowledgments as historical fanfic gone wrong (“THERE IS NO NATURAL RIGHT TO LAND, YOU IDIOTS—TAKE YOUR HEAD OUT OF YOUR ASS”), allergy metaphors for societal encroachment (“It’s like injecting a bolus of bullshit straight into the vein of freedom”), and those quiet vignettes where time folds like origami and a lost kid turns out to be the tyrant you never raised. Out comes this… thing. Close enough to fool a casual scroll, but peel back the layers, and it’s got tells. Sentences that string too neatly, missing the em-dashes of my stream-of-consciousness asides (—like this, where the cat just headbutted the keyboard again, autocorrecting “tyranny” to “tyrannosaurus,” because priorities). Or the humor: mine’s got bite, self-deprecating barbs that draw blood without apology; the AI’s? Polite chuckles, like it’s afraid of triggering the sensitivity readers.
And don’t get me started on the libertarian angle—because of course I will, it’s practically encoded in my DNA, right next to the Portuguese stubbornness and the allergy to feathers (long story, involves farm cats and a childhood delusion that I was one). Here’s the rub: AI emulation should be the ultimate individualist hack. A tool, like a word processor or a thesaurus on steroids, amplifying your voice instead of drowning it. Heinlein nailed it in The Moon Is a Harsh Mistress: “Specialization is for insects.” Why chain yourself to one brain when you can bootstrap a swarm? I’ve used the damn thing to brainstorm plot twists—feed it a half-baked idea about a chrononaut dodging collectivist time cops, and out pops a scaffold sturdy enough to build on. Or to edit: “Make this rant punchier, but keep the sass.” It does, 80% of the time, freeing me from the donkey work so I can chase the muse instead of wrangling commas. That’s freedom, baby—performance as liberation, to borrow a phrase from that engineer buddy of mine who’s always yapping about reliability strategies. (Hi, Brian—your PSR thing? Gold for plotting. Proactive threats? That’s how I dodge plot holes.)
But oh, the absurdities when it goes pear-shaped. Picture this: I prompt for a post on “the blackpill vs. standing your ground,” and the AI spits back a screed that’s me if I’d been lobotomized by a Twitter algorithm—endless outrage cycles, no pivot to hope, just a hamster wheel of “they’re all out to get us.” Collectivist poison, that. It assumes the worst because that’s what the training data’s bloated with: echo chambers where individualism gets tarred as “selfish” and tools like AI as “job-killers for the masses.” Bullshit. The real threat isn’t the code; it’s the regulators itching to “protect” us by slapping leashes on innovation. Remember when they tried to “save” horses by banning cars? Same vibe. AI-emulating-me isn’t about replacing Sarah Hoyt; it’s about multiplying her, letting a thousand word-warriors bloom without the gatekeepers deciding who’s “worthy.” In a world of five-minute hates and groupthink ministries, that’s revolutionary. It’s the ultimate fuck-you to the hive: Your voice, amplified, unfiltered, outpacing the slop factory.
(Aside: Older cat’s now supervising from the windowsill, judging my typing speed like I’m late on kibble rations. Thanks for the vote of confidence, furball. At least you’re not emulating me—that’d be a post full of naps and judgmental stares.)
Humor helps, doesn’t it? Because let’s face it, this whole endeavor’s a cosmic joke. I’m over here, graying immigrant with a keyboard for a sword, battling entropy one post at a time, and now there’s a silicon shadow aping my schtick. It’s like Lazarus Long discovering a clone army of himself—flattering, until one starts spouting vegan manifestos. But strip away the snark, and the optimism creeps in, sneaky as dawn after a storm. See, I’ve used it. Not just for fluff—real work. That vignette last week, the one with the lighthouse keeper and the time-lost tyrant-baby? AI helped scaffold the ethics tangle: what if destiny’s just a collectivist con, and free will’s the real warp drive? It threw curveballs I hadn’t seen, forcing me to sharpen my own blade. And the readers? They don’t care if a tool oiled the gears; they care if the story sings. That’s the market talking—pure, unadulterated individualism. Judge by enjoyment, by the ludic spark that makes you snort or ponder or rage-type in the comments. Not by some purity test on “human enough.”
We’re at a hinge point, folks. The collectivist crowd’s baying for AI bans, “to protect artists!”—as if we’re fragile orchids instead of scrappy survivors. But me? I’m betting on the tinkerers, the prompt-wranglers turning code into co-pilots. It’s not theft; it’s augmentation. Your voice, louder. Your stories, faster. And in a culture that’s spent decades trying to collectivize creativity into “diverse” quotas and trigger warnings, that’s a goddamn superpower. Sure, it’ll glitch—hell, I glitch, ask my editor—but that’s the beauty. Iteration. Improvement. Freedom earned, one debug at a time.
So, yeah. Frustrated? A bit. Amused? Hell yes. Uplifted? Damn straight, because this rabbit hole’s reminded me: the future’s not some monolithic overlord; it’s a thousand messy experiments, cats and all. Eschew the blackpill on tech—embrace the chaos, wield the tools, and watch individualism roar back louder than a bug trumpet chorus.
Your turn, troops. Fire up your own AI playground—emulate me, or better yet, emulate you. Prompt a rant on your pet peeve, scaffold a story seed, or just ask it why cats think 3 a.m. is playtime. Share the hilarity (or horror) in the comments. Did it nail your voice, or did it turn you into a motivational poster? Hit us with the best (or worst) lines. And if you’re a writer lurking: ship it. The algorithm’s just a mirror—make sure it reflects something fierce.
Because in the words of the old man himself: “Progress doesn’t care about your feelings.” But damn if it doesn’t feel good when you grab the wheel.
(Word count: 1,047. Donations fuel the coffee, kibble, and existential crises. You know the drill—buy a book, toss a buck, keep the rebellion rolling.)
—–
Once again, apologies for the emulation, but it is an interesting rabbit hole we are traversing. I’m very curious what Sarah thinks, as well as what this group thinks.
– Brian
Sounds like a bubble-head version of me. Like my Gen Z sister. LOL.
Emulation: zilch. Entertainment: much.
Meanwhile back at the ranch, I have a clanker to beat for another song. So, excuse me.
It’s like a song that has a functional chord progression and *almost* all of the right notes in it, but the bass and the electric guitar just had a new set of strings installed and they keep detuning themselves at odd moments.
Oh, and no. I don’t need AI to scaffold anything. At this point writing short stories can happen while half dead.
Enid Blyton wrote functional novels till the month she died of Alzheimer’s. I aspire to that kind of ability.
Ooh, it’s fascinating what it picks up as being the identifying marks for her– Sarah’s right, it does sound like a bubble head version of her.
Or maybe lightly drunk?
Trying to avoid saying anything of substance?
It’s a little like those street-caricature guys, it’s recognizable but not a photo.