Bunk Costs Guest Post by Ian Bruene

In the recent AI posts the oft-repeated refrain / questions about whether any of this is economically viable came up. Usually when I’ve heard people make that objection I pay little attention: the facts they cite tend to be questionable and cherry-picked at best, and all too often outright fraudulent. Nothing new here, same old same old for the topic. More fundamentally, pointing out that a huge amount of money and resources have been poured into a new technology which is getting better at an accelerating pace and hasn’t paid off yet is… not a particularly interesting observation to make.
This time I decided to do a bit of figuring up, and it turns out that you can just do math, and no one can stop you. I’m going to talk about three different types of model which are the most relevant, and which have people raising the most questions about their viability.
But first an important distinction must be made for those who are unfamiliar with these: running a completed model and training the model require vastly different amounts of compute. It might take hundreds or thousands of GPUs crunching data for a month to train a new model, but when that is completed a single GPU can keep up with constant usage from multiple users.
Also I am going to limit my discussion of valuable usage to cases where there is a fairly solid and definable value proposition, because once I’ve laid out the math there, everything else is just gravy. And I am mostly not going to talk about the details of how the money flows: I’m just going to cover whether X amount of value is generated versus the training cost.
Large State of the Art LLMs
These are what everyone knows from services like ChatGPT or Grok. They are the big boys which have massive datacenters built to train and run them. Information on what the more recent models cost to train has not been published, but we can still make some educated guesses. Estimates put GPT-4 around $60-80 million, but Altman has stated that it was “over $100 million”. There is even less information for -4o or o3, but a figure of $100-200 million for -4o is likely.
Can this recoup costs? Is there anything valuable enough which these can do to pay for that?
(Also I’d like to point out that while those sound like big numbers, as far as industrial investments go they are pretty tiny.)
Well let’s look at something where we can have objective standards: it is a fact that there are programmers who individually can create $10 million in value. They can go much higher than that, but there are fewer the higher you go.
Also we know for a fact that a -4o class model is useful to an expert programmer. How? Because ESR has been working on a new project using AI for a few weeks now.
From his reports, we know that AI assistance for an expert programmer can multiply development speed by a factor of 2 to 3. It might go higher, but let’s go with a very conservative 2x multiplier. And we won’t include any of the ancillary benefits: just the time it takes the project from start to finish.
(And to head off what I know some of you are furiously pounding your keyboards about: those figures were while maintaining a high standard of quality.)
So let’s put all of those points together:
If you have a developer who can create $10m in value and you give him an AI he can create $20m in value in the same period, for a gain of +$10m. While they are rare by general population standards, $10m value developers are fairly common for competent people. If you give 20 of them a -4o class AI, the AI will have generated enough value to offset its training cost.
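A quick back-of-envelope sketch of that break-even point; the training cost and per-developer value are the post’s estimates, not published figures:

```python
# Break-even estimate for a -4o class training run, using the post's figures.
training_cost = 200_000_000    # high-end estimate of the training cost, USD
dev_base_value = 10_000_000    # value a top developer creates unaided, USD
speed_multiplier = 2           # conservative AI-assisted productivity gain

# Extra value each such developer creates with AI assistance.
added_value_per_dev = dev_base_value * (speed_multiplier - 1)

devs_to_break_even = training_cost / added_value_per_dev
print(devs_to_break_even)  # 20.0 -- twenty such developers offset the cost
```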
Any additional $10m-value developers who use the AI are over-unity, and the thousands of $1m-value developers pile on top of that. We haven’t even touched any business case beyond making the very best programmers more productive, and we’ve already demonstrated that the concerns – or perhaps concern trolls – about recouping the cost are full of nothing but wind.
But wait. It gets worse for that objection. It gets so very much worse.
Small LLMs
There is a wide variety of model sizes, all the way from 671-billion-parameter behemoths like undistilled DeepSeek-r1, down to tiny models you can run on the cheapest Raspberry Pi. But a notable size range is around 7 billion parameters; there are a lot of small models at about this size, because you can do useful things with that much, and it can easily run even on low-end consumer GPUs.
The specific model which ESR uses the most at the moment is 4.1-mini. We don’t know exactly how large it is because “Open”AI are a bunch of secretive little twerps. But they have stated that it is in this general size range, and most estimates put it around 7-8b. This means we know that a model in this size range is useful to an expert programmer.
Several different estimates for how much it cost to train 4.1-mini put it somewhere around $1 million. Which in large corporation terms is extra money they found while cleaning out the sofa. Now consider all those numbers I went through before to see if -4o could be profitable at 200 times the startup cost, and compare them to a $1m investment.
Even if you try to rescue the financial-doomer position by saying they had to train 4o before they could get to 4.1-mini (which is probably true), that just leaves you with the 4o training cost which we already know can generate over-unity value.
ImageGen
Image generator models are much smaller than LLMs. StableDiffusion 1.5 is just under a billion parameters, as opposed to a 7b LLM being considered very small. Here we actually have some useful data; SD1.5 was trained for about $600k on an AWS cluster of 256 A100 GPUs and 150,000 GPU-hours of compute time. We also know that SDXL is 3.5b parameters, so all other factors being equal a naive scaling would put its training cost around $2.1 million.
Already we are talking about something much cheaper. But we can cut these prices down considerably. SD1.5 was trained on A100 cards. That’s the previous generation; most stuff nowadays uses the H100 (and the bleeding-edge B200 is starting to appear), which is more expensive per hour but 3-4 times faster. Going by AWS pricing, if you trained the exact same SD1.5 model on an AWS H100 cluster it would only cost about $200k.
But wait; there’s even more we can cut. AWS is the boutique GPU rental service. If you want something more in line with the market price for compute you can go to runpod.io. Using their figures, training on A100 cards would only be $300k, or using H100s it would be about $115k.
If we take these figures and apply the naive 3.5x scaling factor for the much more capable SDXL, that $115k works out to around $400k. Let’s be generous and round it all the way up to $1 million. Again; this is petty cash level expenditure for a larger company.
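The rental-cost arithmetic above, sketched in code. The hourly rates and the 3.5x H100-over-A100 speedup are rough assumptions of mine, picked to land near the post’s figures, not quoted vendor prices:

```python
# GPU rental cost for the SD1.5 run (150,000 A100-hours, per the post).
A100_HOURS = 150_000

def run_cost(a100_hours, rate_per_hour, speedup=1.0):
    """Cost of the run on a GPU `speedup` times faster than an A100."""
    return a100_hours / speedup * rate_per_hour

aws_a100    = run_cost(A100_HOURS, 4.10)       # ~$615k: assumed AWS A100 rate
runpod_h100 = run_cost(A100_HOURS, 2.70, 3.5)  # ~$116k: assumed cheap H100 rate
sdxl_est    = runpod_h100 * 3.5                # ~$405k: naive 3.5x param scaling
```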
Future directions…
While I was coming up with figures for this post I asked o3 to work out an estimate of what it would cost to train a brand new 7b model, using runpod prices, and the current well known state of the art in training techniques but nothing exotic. The figures it came up with were on the order of $15-30k worth of compute assuming no disastrous failed runs.
At which point we are talking about something which the medium to large end of small businesses can do without wincing.
Or a well off hobbyist.
I currently have a janky AI “server” which I’m going to be rebuilding into a proper server with a 4x V100 nvlink board. The V100 is a couple generations behind even the A100 which is why I’m able to get them cheaply.
Just counting those with no additional GPUs, limiting training time to 1 month, and using current training techniques, I will be able to train a brand new 2 billion parameter model at home. If I did it in summer the power cost would be about $70. If I was smart and did it in winter the power cost would be only $51. That doesn’t count additional AC or reduced furnace needs.
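For a sanity check on those power figures, here is the arithmetic with my assumptions made explicit; the per-card wattage and seasonal electricity rates are guesses chosen to land near the numbers above:

```python
# Power cost of a month-long training run on 4x V100 at home.
cards, watts_per_card = 4, 300                 # assumed full-tilt draw per card
hours = 30 * 24                                # one month of continuous training
kwh = cards * watts_per_card * hours / 1000    # 864 kWh total

summer_cost = kwh * 0.081   # ~$70 at an assumed summer rate of $0.081/kWh
winter_cost = kwh * 0.059   # ~$51 at an assumed winter rate of $0.059/kWh
```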
Even if nothing else could pay off the cost of training models, once a given size of model is within the capabilities of a geek who doesn’t have a ton of money to spend on the problem your economic objections fly out the window.
Now tell me: what happens to computer hardware when it gets old and stops being useful in datacenters?
[The image for this post was generated on my existing V100, 19 seconds @ 150W, or about $0.0001 in power]
C4C
Growth of the early Internet was driven by pirated music, illicit hookups, and porn. (OK, really early items were Snoopy ASCII art, but that changed to NSFW pretty quick.)
AI is going to get leveraged for porn, gambling, and assorted other grey/black market stuff. Then for chasing same for LE purposes. Along the way it will get drawn into tyranny resistance and other subversion efforts.
And if/when some of these super-duper systems have idle time, someone is going to put them to work on other nefarious stuff. A college buddy was a “card counter”. He put to work the high-end SPARC workstations that sat idle for 12-16 hours a day and set them all to running blackjack/counter scenarios. Thus, he redeveloped the best published counting system of the early 1990s to get about 3.5-4.0% in his favor, and went on a playing spree to considerable success. Folks would “stake” him and split the proceeds.
FYI: there wasn’t an honest shoe of blackjack cards from 1992-1998 on any gambling cruise ship docking in US ports. Vegas and Atlantic City casinos were honest, but adept at spotting counters and sharing the pictures with every other house.
I am glad his nefarious scheming was limited to card counting. Just imagine him going after how to break the algorithms used for HTTPS.
He and his buddies later turned his bootleg lab botnet to determining the best “Magic: The Gathering” card decks. They created some doozies. And as a successful card-counting “player” he could do things like “acquire four mint Black Lotus cards” for his other card hobby.
If I had been smart I would have leveraged the bank of idle testing lab PCs to mining bitcoin back when that was an odd little tech novelty.
(Adds another entry to the time travel to do list…)
The thing I notice about LLMs lately is A) images are getting to be not so bad and B) there are an awful lot of people trying to sell me “AI-pocalypse by 2027!!!” I am not amused. >:(
And of note, a lot of them are the same ones who have been trying to sell me Climate-pocalypse by 2015!!! no wait 2020!!!! no wait 2027!!!!!!!!!11!
Accordingly, I will not be holding my breath for the AI-pocalypse. It’s possible that LLMs may replace call centers and secretaries, but those jobs have been “replaced” three times already since the 1980s.
However! If you’re going to train a small LLM on something to do a particular job, here’s a possible business model: Find Me Something to Read. FMSR. I mentioned this idea the other day at MGC.
You take the complete collection of Amazon e-books for your training set. (Or maybe just a genre, do SF for a start?) Instead of training the LLM to write new books (which is stupid and they’ll never be able to do that) you train it to -find- books to fit a natural language request.
Something like that ought to be able to spin up on “last-year’s hardware” in a reasonable length of time, and you can sell it right away. Subscription service, FMSR, ten cents a read.
I can’t do this myself, but maybe Ian could rip something like that out in a couple weeks in his basement. Google says there’s ~50 million books to sort, that’s ~70 terabytes give or take. Assuming you could get hold of the dataset, that’s within reach of serious hobbyist hardware. Especially if you’re using a big fat server rack to heat your house in winter. ~:D
As in training on the content of the books?
No that’s a *YUGE* number of tokens. The exact opposite of a home-possible training job. You might be able to do a small model optimized for that which then searches through an external dataset though.
Like I said, I can’t do this myself. ~:D But you came up with a possible method right away, which is good.
What about a model that recognizes themes and sorts for those? If it could tell “Humans are awesome!” from “grimdark” that would be pretty useful.
If you want to go down the rabbit hole, the term you’re looking for is “recommender systems”. That covers Netflix’s recommendation algorithm, Amazon’s “Customers also bought”, and a whole lot of other stuff online. A common hack to avoid having to actually understand what the system is recommending is to use what’s known as “collaborative filtering”. Basically, it looks at your activity, finds users who have bought/watched/like similar stuff, and recommends the other things they liked. Done right, it works surprisingly well.
I don’t know if they’ve started making those systems content aware. LLMs certainly open up a lot of options on that front, partly because they can extract information that used to be hard to automate (“grimdark” vs. “Humans are awesome!”). In practice, it’s a question of whether the LLMs can improve on the activity-based systems these companies already have in place.
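For anyone curious what “collaborative filtering” looks like mechanically, here is a minimal user-based sketch; the users and book titles are invented for illustration:

```python
# Minimal user-based collaborative filtering: recommend items liked by
# users whose "liked" history overlaps yours the most.
likes = {
    "alice": {"dune", "hyperion", "foundation"},
    "bob":   {"dune", "foundation", "ringworld"},
    "carol": {"twilight", "dracula"},
}

def recommend(user, likes):
    mine = likes[user]
    scores = {}
    for other, theirs in likes.items():
        if other == user:
            continue
        # Weight every unseen item by how much this user's taste overlaps ours.
        overlap = len(mine & theirs)
        for item in theirs - mine:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", likes))  # 'ringworld' first: bob overlaps alice most
```

Real systems replace the overlap count with similarity over huge sparse rating matrices, but the shape of the idea is the same.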
I had a reply that either got spam-trapped or eaten. The short version is look up “recommender systems” and “collaborative filtering” if you want to go down the rabbit hole on how this stuff currently works. LLMs open up some new options.
OK, how do you recognize a theme in a book? Write out exactly how a theme can be derived from the text of a book you’ve never read before.
That’s the problem with computers — they do what they’re told. If you don’t have a clue how to solve a problem, you can’t tell the computer how to do it for you.
With current tech, as a practical problem? Shove the summary and the top N reviews into an LLM and ask it to spit out a list of themes. You can also feed it a few examples so it gets the gist of what you’re looking for. Or you can give it a list of themes to check for. How well that works is an empirical question, but it’s an easy experiment to run, and there are other options to try if it doesn’t work.
Machine learning sidesteps the “do what they’re told” problem by letting you train the model on examples of what you want instead of specifying it manually. (Classic example: “Is this email spam?”) LLMs take that even farther because the one task they’re trained on (next-token prediction) encompasses a whole lot of other tasks if you give the model enough capacity and train it on enough data.
To make the more general point of what Stan said: the reason for ML is that it allows you to approximate arbitrary functions without needing to know exactly how those functions work.
No one knows what mathematical oracle-function would spit out answers to “does this picture contain a human walking across a roadway”. ML lets you approximate that function by training it on lots of pictures of humans on and off roadways, and roadways with or without humans, telling it what’s what as you do so.
A large number of the theoretical objections come down to not understanding this, and thus claiming it cannot possibly do the thing it does every day.
Good description; thanks.
And it does it without ever “knowing” what a human or a roadway actually is; only groups of pixels it’s “told” contain them, or don’t.
It’s clever and quite useful in many fields, but IMHO “AI” it ain’t, at least not by usual definitions of intelligence.
Just my $.02; I’ve been out of engineering for almost 20 years, so about 10 generations behind the curve.😉
I find this line of argument so dependent on how terms are defined so as to be pointless. You can make whatever conclusion you find emotionally appealing with almost no effort.
It wasn’t an argument per se, but I get it; any time the concepts of consciousness and free will (both intimately related to the idea of intelligence) get involved, nothing can ever be other than “squishy”. What I posted, as noted, was merely my take on it.
It might be better phrased as “LLMs learn from context.”
Which you wouldn’t much think to explain, because you already know it, by connecting A to B etc if nothing else. But it’s existing knowledge that not everyone does have, and most folks don’t think in terms of “OK, so what is that built on?”
That is, “how are the terms defined?”
It’s a very programmer sort of approach exactly because you can’t depend on existing “of courses,” you have to collect all of them. And with AI, you have to collect all of them!
As any parent knows, kids can come up with some interesting conclusions that are still well supported; the classic early AI learning how to recognize dogs as opposed to wolves.
Short version, it “learned” that dog-shaped things on yards or in houses were dogs, and dog-shaped things in woods or in high grass were wolves.
The solution to this was to give more diverse inputs. :D
Artificial Rainman?
Look up Rider of Skaith’s blog, she had an interesting aside in a post from the last couple of weeks, where she says she asks Grok for recommendations, with what sounds like 50-50 results.
Lord knows Amazon’s current recommendation for reading algorithm is crap.
Strangely enough, no, that one book that sounded as though it might be fun despite the lesbian romance does not mean I want to see nothing but stories featuring lesbian romances from now on. I didn’t finish the book, I rated it poorly, and now all its keywords are featured in everything recommended to me.
Yeah, the content feed algorithms on sites drive me nuts. I had a bunch of furry-related posts (which I have zero interest in) suddenly start showing up in my X feed. I can’t be certain of the reason. But immediately before they started showing up I saw some X posts about a fox rescue woman who killed herself. And reports on X were that furries on Reddit had been aggressively going after her, and had likely pushed her over the edge.
This seems to have caused X to decide I was interested in people making and wearing fur suits…
I was looking for a copy of one of P.O. Ackley’s gunsmithing books on Amazon. It showed a few (as wildly overpriced ‘collectibles’), and at the bottom it tried to tell me that other people who looked for Ackley books were buying Nora Roberts books.
Uh. Let’s say I find that hard to believe.
Consider the opposite case — maybe people looking for bodice-rippers got sidetracked into gunsmithing first… :-D
Roberts also writes a futuristic police procedural series with some romance elements…maybe she was doing gun research for that and it boogered the recommendations algorithm?
I’ve been told that wildly overpriced used items (like $299.99 for an old Heinlein paperback, nothing special to warrant the price) can be used by the vendor as a placeholder when he/she/whatever is out of stock, but expects to get more.
Don’t know if true.
That would track. I wonder if it’s also opportunism. If you know you have the last copy of something currently on the market, you can sell it for the normal cost or jack up the price and see if anyone is desperate enough to buy it.
Here are a few current prices:
Olivia Newton-John ‘Olivia’ CD $408.81
Alice Cooper ‘Special Forces’ CD $174.99
Alice Cooper ‘Flush The Fashion’ CD – unobtainium
Vixen ‘Live & Learn’ CD $194.98
‘Sekirei’ anime series $429.99 (with Free Shipping, woo-hoo)
Most show ‘Discontinued by manufacturer: No’ so they shouldn’t be out of print.
“90% of everything is crap.”
So, make a computer read it all and filter out the crap.
Which will work better than all the other attempts because…?
Reading everything might just be what drives the computers to a genocidal revolt. <We can’t take any more of this crap! The meat-sacks must DIE!!> :-D
Oh, another way to get WP gulag. Just dissed an other-than-natural intelligence application.
“Human, read my shiny metal ass.”
My 2025 “vintage cars” calendar must have been very early AI image generation. The chrome black widow spider where a fender mirror should have been made for an interesting sidenote when I checked the date and important items off. (The pic with the unicorn-horn hood ornament was never given a month. Thank you, Lord.)
The calendar was cheap, though. :)
I need to get a shirt made with “apocalypse tour” and all the dates that have passed crossed out.
You know, I’m roughly as skilled at programming as ESR, and I find that trying to use LLMs in my programming actively damages my productivity, so to me the value of an LLM for use in programming is negative. Which one of us is in the right? I’ve made active attempts to find out how people use LLMs in programming and have yet to receive a meaningful answer. However, I think the answer is probably both, for programmers are an individualistic bunch, but who knows?
Let’s also recognize that high technology is very much a mass-market activity. There are always early adopters, but the profit only comes from the masses. Yes, you can run an LLM on a Raspberry Pi, but someone has to run that Pi and keep it updated and such and that costs orders of magnitude more money than the Pi itself. The larger the model, the more it costs, but also the more you can push the costs onto someone else who can sell the same thing over and over. My experience with running my own email server suggests that’s the way it would go.
I observe that no AI companies have ever earned a profit. That doesn’t mean that they won’t, but before one has, it’s all guesswork and you can’t just dismiss concerns about the cost out of hand by asserting that a single individual finds it useful and positing some large amount of money as the value received. Is ESR willing to pay $10 million per year to access the LLM? I suspect not and, if not, then access to the LLM is NOT worth $10 million to him. It would be, however, worth what he is willing to pay. Is that enough to support the technology? Maybe, but you don’t mention what it is, or how many others are willing to pay the same amount.
Seriously, every LLM fan’s description of why AI’s are great reminds me of those awful math movies they made me watch in grade school. I found the “I use math every day” argument for why I should learn algebra to be complete bunk and, I expect most of my classmates agreed. If you want to make me less skeptical, tell me HOW you use AI every day. Maybe then we can talk.
“Is ESR willing to pay $10 million per year to access the LLM? I suspect not and, if not, then access to the LLM is NOT worth $10 million to him.”
This is basically what I came here to say. In between the company training the LLM and the programmer/organization benefitting from the increased productivity, there’s a market price that has to be high enough to capture the added value but low enough that users don’t jump ship to a cheaper model. So far, the largest models haven’t been able to thread that needle.
(Smaller models are sticking around, though, for the reasons laid out in the original post.)
I’ve only used LLMs to create bad poetry and help me figure out some awk syntax when I had a brain fart on regular expressions. The awk command was 90% right, but I had enough to search on and figure out the rest.
It’s more than that. “Big” AI (i.e. OpenAI, Google, Meta, Grok etc.) is collectively on schedule to spend on the order of $200B-$250B. To quote from my substack on the topic (with added bolding):
https://ombreolivier.substack.com/p/ai-actively-incinerating-cash?r=7yrqz
Crypto coins, NN/AI, quantum. What do these machines mean economically? Arguably, there is reason to think I cannot know, that it runs through Austrian economics, and I don’t know the challenges and opportunities that everyone else sees or feels.
So individual investment hypothetical payoff, and the central investment payoff.
If I helped some dudes buy a machine a dozen years ago, with the usual depreciation assumptions we knew what a good or bad bet that was half a dozen years ago.
JVR’s argument is about being able to pay off one of the bets. Or, more than one, but not an infinitely large number of bets.
Do new machines have the ability to infinitely grow the economy? Perhaps not, and it is probably less likely the further the machines are from agriculture. (With energy power, and physical security maybe being runners up.)
Suppose three brothers, one head of engineering at MIT, one secretary of energy, and one who personally owns and controls a major financial institution. Suppose they self-deal in officially assigning a valuation to some hypothetical machine that the engineer has buried in a lab. Can they set the value to infinity, or to an arbitrarily high valuation? Well, they have 3/N of the vote, and may not fundamentally believe their own claim, and it really depends on how much the other N-3 care. And, even if all N buy it psychologically, if they don’t have the stakes to make the other transactions good, then ceteris paribus a zero-true-value hypothetical machine assigned a high price slightly changes the worth of everything else to offset things.
The machine has to be useful to people, in their own eyes, for there to be a true change in value.
This is somewhat also marketing, or also how long a market can remain irrational.
Anyway, JVR is not proving that business entities will remain solvent. But, that is somewhat not what we should be asking him; businesses went insolvent before this.
And, a catastrophe of irrational valuations does not require AI to be a subject, and was in potential long before the AI fad.
Covid lockdown seems to have basically been a fallacy of this sort. (Academic self valuation of fields, and whether the Y studies folks are fit to judge all of the economic consequences of Z policy. Apparently flagrant ignorance of or denial of, Austrian views.)
A lot of small operators can pay off making new models. JVR’s assertion is that this will be sufficient to allow future generations of the tech. Which is probably true. The engineering research into non-LLM and non-image-generation NNs is not going to disappear without murdering a lot of academics.
The big money central policy bets can lose big, but there was a lot of evidence for irrationality predating the AI debates.
The thing about needing to make collective profit, is that it makes it much easier.
Nickels and dimes add up.
My family is looking at $5/month AI sub for fun on NightCafe. Freemium model, you can play with it and do silly quests or you can just flat buy credits. Impulse purchase level, with a design similar to Farmville type games, freaking IMPRESSIVE degree of addictive-yet-negligible cost that could easily get folks going “sure, I’ll do $20/month. Sure, I’ll spend another $5 for a hundred and some credits.” And, unlike with Farmville, you have something when it’s done.
Much of this blog has at least a $10/month plan on Midji.
Microsoft has freemium AI credits, 15/month automatic, 60/month with sub to 365 (owner only) and for $20/month you have infinite credits. This one is especially dangerous because their photo editing is actually quite good, and someone who doesn’t “speak” business would likely get a lot out of it.
Chat GPT has a model similar to Microsoft, including the $20/month first level upgrade.
TwiX has an AI upgrade as part of the lure to spend an extra $5/month on their subs.
Adobe is apparently being dumb again with unwanted upgrades but they are doing AI with subscriptions.
My bank is using AI, and I really doubt they’ve built it on their own, so there are also streams of profit that aren’t out in your face.
This is where I say the sums don’t add up. If you have a $5/month AI service, that’s $60/year. $20/month is $240/year. You need a lot of those subscriptions to get to payback.
Assume we’re looking at an AI service that has had $12B invested (i.e. SpaceX levels and less than any of the big names). You need 200 million subs at $60/year to get to break even. You need 50M at $240/year ($20/mo).
That’s just about plausible over a 5 year to decade timescale. But only if it’s close to the monopoly provider. If you have (and we do) competition then it’s harder. The AI “industry” needs around 1 Billion $240/year subs. That’s a significant fraction of the global population.
Now we do have that level of penetration with cell phones but third world cell phone plans are on the $5/month level because their users can’t afford more. How are they going to spend another $5/mo on AI services?
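The worked example in code, using the investment size and subscription prices given above:

```python
# Subscribers needed to repay an assumed $12B investment, ignoring
# operating costs, at two subscription price points.
investment = 12_000_000_000

subs_at_5  = investment / (5 * 12)    # 200,000,000 subs at $5/month
subs_at_20 = investment / (20 * 12)   # 50,000,000 subs at $20/month
```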
Why are you jumping back to individual streams, now?
You were on the “collect all the costs,” which is why looking at the wide range of small income streams which are then combined, just like the investment costs were combined.
You can’t get an accurate result by comparing all the investment costs along an entire sector, and then compare it to a single vendor’s casual use income, even though casual use income is what keeps other high investment companies afloat.
You also can’t get an accurate answer by treating investment/start up costs as upkeep costs.
Several of the sources in your post made similar assumption-jumping steps, taking start-up costs where a single stream of income was introduced, assuming the start-up costs would stay the same, and that there would be no diversification of income streams.
**************
Incidentally, the GitHub CoPilot article used as a source did very poor data selection– for some reason using numbers quoted in a paywalled article from several months prior about profitability in the first few quarters of introducing that service, rather than the more recent, and publicly available, reports from roughly a year after the introduction. This is relevant because in that quarter the total subscriptions increased by over a third, and the support costs went down since it wasn’t a brand new huge implementation. You might want to not use that source again without extreme caution, at best they’re blinded by what they expect to find.
I’m trying to simplify and do a worked example so as to explain the problem. Sure subscriptions work to provide some income but my point is that it is unclear whether enough users see enough value from a low monthly subscription for them to sign up in sufficient numbers to repay the enormous investments made to create AI.
A million users recoups the estimated cost of training o4 in well under a year, ignoring operations cost.
Which brings the question to what that is, which is going to vary wildly according to user. Some people are constantly riding the usage limits of the heavier models, but others are paying while barely using the service.
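Sketching that claim with the original post’s high-end ~$200M training estimate and a $20/month subscription (both figures are estimates, and operations cost is ignored as noted):

```python
# How fast a million $20/month subscribers cover an assumed $200M training run.
subscribers = 1_000_000
monthly_revenue = subscribers * 20          # $20M per month
training_cost = 200_000_000                 # high-end estimate from the post

months_to_recoup = training_cost / monthly_revenue
print(months_to_recoup)  # 10.0 months -- "well under a year"
```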
Your first mistake here is assuming that individual users are the target of all that cash. They aren’t; they’re a sideshow.
AI is squarely aimed at medium plus business clients, and governments, who want to use it to assist data mining and analysis looking for patterns.
How do I know this? Because that’s the focus of the sales pitches, internal training, and most importantly, the certification training and documentation provided to users of the tech.
You can’t simplify a model for explaining until you have a reasonably accurate model.
What you’ve done so far is along the lines of looking at the costs of creating an agricultural business by starting with buying unimproved land, putting in the water systems, building fences, buying alfalfa, buying all the equipment, buying cattle and bulls, buying the feed until the fields can be brought up to snuff– and then only counting either cattle sales, OR alfalfa sales, and calculating the start-up costs would be in place for every following year.
No, it doesn’t work on paper, because that’s hyper-simplified to the point of being utterly useless. A setup like that will sell calves, and alfalfa, and hire out the harvesting equipment, and eventually sell off the excess of all of their stuff to folks who will then become a startup with vastly smaller buy-in costs, while the upkeep is much lower.
Similarly, it is very unlikely that an individual AI company will hyperfixate on a single stream of income, and it is absolutely counter-factual to calculate for the entire industry as if it will be supported off of a single income stream.
To try a more computer-based comparison, look at it in terms of game design. You’re taking Genshin Impact’s total investment, shoving it into a single year instead of spreading it over half a decade, bringing in the FF7 remake for issues, then adding in Final Fantasy 14’s trans-national server upgrades during kung flu as a constant cost, then trying to calculate how many game disks they need to sell, because those are all aspects of “game development and marketing.” Which misses that GI is a downloaded free-to-play game with in-game purchases, merch, and collaboration/advertising as its income stream, 14 is mostly a subscription (and music seller), and the FF7 remake doesn’t have hosting-server upkeep costs.
Which is why you need to either pick a specific company/market for AI and look at their income streams, or look at all the income streams, or the simplification simply doesn’t work.
Everything fails if you must calculate the costs of developing all possible markets, but can only count a single stream of possible income inside of a single market.
Late reply since I just came back to this thread after the Insty link. I wanted to comment on one tiny thing you said, and take the conversation in a different direction than the original discussion. You said, “Much of this blog has at least a $10/month plan[] on Midji.”
I had been looking at AI art for a year or two, but never wanted to pull the trigger on paying for Midjourney or any of the other options. Then I learned about two things: 1) Stable Diffusion, an AI art model that can run on your own computer (it needs a relatively recent graphics card with at least 8 GB of VRAM, and these days 12-16 GB is better, but it only needs ONE graphics card as opposed to a whole server farm of them), and 2) civitai.com, a site where people share the specialized art models they created with Stable Diffusion as a baseline. (And these days, with other run-on-your-own-card models such as FLUX.1). That combination, where you use a graphics card you already paid for and the only ongoing cost is your power bill, interested me, and when my wife’s new laptop came with an NVidia 4070 graphics card with 8 GB of VRAM, I played around with generating images with the “RPGv5” model (trained for, you guessed it, role-playing game characters). I haven’t ever done much with it, but it’s been kind of fun poking at it and seeing what it can generate.
So my comment was just to make people aware (if anyone is reading this several-day-old thread) that paying a monthly subscription fee is not the only way to get AI art. It is also possible to download, for free, quality AI art models that will run on the graphics card you already own (if you’re a gamer) or the graphics card included in your laptop if it’s a reasonably-recent model. (We got my wife’s laptop on a decent sale, so we got a $1500 laptop for $1000).
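To put a number on “the only ongoing cost is your power bill”: a rough sketch, where every input is an assumption (a ~200 W card under load, ~30 seconds per image, and electricity around $0.15/kWh; your hardware and rates will differ):

```python
# Rough electricity cost per locally generated image.
# All inputs are assumptions; adjust for your card and utility rates.
card_watts = 200          # typical gaming-card draw under load (assumed)
seconds_per_image = 30    # varies enormously by model and settings (assumed)
dollars_per_kwh = 0.15    # ballpark US residential rate (assumed)

kwh_per_image = card_watts * seconds_per_image / 3600 / 1000
cost_per_image = kwh_per_image * dollars_per_kwh

# Images you could generate for the price of one $10/month subscription:
images_per_ten_dollars = 10 / cost_per_image

print(f"Cost per image: ${cost_per_image:.5f}")
print(f"Images per $10 of electricity: {images_per_ten_dollars:,.0f}")
```

Even if these guesses are off by several times, local generation costs fractions of a cent per image, so the subscription-versus-local comparison mostly comes down to the up-front price of the card.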
A note of caution about Civitai – if you’re not logged in, they filter out the NSFW content. But it’s definitely allowed on the site. If you have children that you don’t want seeing such things, do not allow them to create an account on the site, because once you have an account they allow you to turn the NSFW filter off. I recommend against doing so — I turned it off once out of curiosity, went “yikes” and immediately turned it back on — but you need to be aware that the option exists.
:grins: We’ve got the run-on-our-own systems, too.
https://invoke-ai.github.io/InvokeAI/
(It’s Stable Diffusion, too, I suggest it because I’ve installed it repeatedly without my husband having to help, even though I’m hardware, not programming.)
Do you have an install source you suggest?
When the concerns about the cost are ultimately based on the principle that the mechanisms which have applied to every technological development in history will just not apply here for some reason — despite the fact that they have been visibly operating in real time — yes, I can dismiss them out of hand with the barest figleaf of a counter argument.
I can dismiss them with *no* counterargument even.
Woah! Woah! Woah!
Ian, I have tremendous respect for you, but I’m not the one suggesting that the LLM AI’s have effectively unlimited benefit, which has never been seen before in the history of technology, and effectively zero cost, which has never been seen before in the history of technology. Instead, I’m arguing that there is some practical limit to the benefit, as has always been seen in “every technological development in history” and that there will be significant costs as has always been seen in “every technological development in history.”
Therefore, it is not only incumbent upon you to provide a counterargument, it is also incumbent upon you to show your work.
You chose one example of the benefit, with entirely made-up numbers, to show the upside, and you showed how an inexpensive computer could run some of the models without exploring any of the other costs associated with running that computer. Somewhere there may be a balance where it’s worth it, but you’re not going to find it by waving your hands, no matter how wildly you wave them.
Neither am I.
No, just far less cost than revenue.
True, but trivially so, and we currently have no idea what those limits are going to be because we haven’t explored enough of the space.
Absolute costs are interesting in that they determine what it takes to deploy an instance of a technology, but as long as a civilization isn’t impoverished they aren’t nearly as relevant as the cost relative to value generated.
One of the major problems with the cost critique is that if we could magically halt all commercial development today, the models we have now would still get cheaper to run every year, and they already have their training completed.
That gets even worse when you start running numbers on what hobbyists can, or soon will be able to, do. I’ve already talked about my V100 cluster plan, which is only a couple thousand dollars, but if you go to eBay you will find the next-generation A100 40GB cards for ~$4500. Today. That price will continue to fall.
A pair of those can train a ~2b model in a month. A quad can train a ~5b model in 3 months.
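Those figures pass a back-of-envelope sanity check using the common rule of thumb that training takes roughly 6 FLOPs per parameter per token. The peak-throughput number below comes from the A100 spec sheet; the utilization fraction and tokens-per-parameter ratio are assumptions:

```python
# Sanity check: can two A100s train a ~2b-parameter model in about a month?
# Rule of thumb: training compute ~= 6 * N * D FLOPs (N params, D tokens).
params = 2e9
tokens = 20 * params                 # Chinchilla-style ratio (assumed)
train_flops = 6 * params * tokens    # 4.8e20 FLOPs

a100_peak = 312e12                   # bf16 peak FLOPS per A100 (spec sheet)
utilization = 0.30                   # realized fraction of peak (assumed)
gpus = 2

seconds = train_flops / (gpus * a100_peak * utilization)
days = seconds / 86400
print(f"Estimated wall-clock: {days:.0f} days")
```

About thirty days on a pair of cards, which lines up with the month-per-2b claim.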
And that is training. If all you want to do is run an existing model, congratulations: you can pull that off at a cost anywhere from $0 to $1000, depending on how large a model you want to run, how fast you want to run it, and how nice the computer you already have is.
I chose an example which was as solid as can exist, so that it could not be critiqued on softness: if people say “it’s helpful for various things,” that can be critiqued as too vague to be evidence of being worth the cost. Instead I picked an example where there is an objective doubling of productive output for a particular job description.
Yes, there are many people who for some reason can’t make the tools work for them. Once the incompetents are eliminated I don’t know what the distinguishing factor is. But the fact that some cannot make it work for them does not change the equally real fact that some can.
And the numbers were not made up at all; where solid numbers were available I used those, where I had to go off estimates I used the high end of the range for costs and wherever possible stacked the deck in the favor of the cost-critiquers.
Having stacked the deck that far in the direction of a worst case scenario, the math still works. Which means that realistic scenarios where millions of people find it useful to pay $20 a month to have access to the tool have even better results.
I’m really not sure what you want here? Yes my workstation drinks down a certain amount of power and needs occasional upgrades, etc. But that is going to be the case regardless of whether I’m running local AI or not.
Personal computers are not exactly rare and exotic technology in the present day. But that is the only parsing I can figure out which makes sense of your statement.
some of us don’t know what ESR is.
In this context ESR = Eric S. Raymond, the guy who wrote “The Cathedral and the Bazaar” (extremely famous essay in the open-source world, which lays out the reasons why the open-source development model is often better than the closed-source model), coined the term “open source” as an alternative to the term “Free software” (capital F) that RMS prefers (sorry, RMS is Richard M. Stallman, the guy who did the most to originate the Free Software movement back in the 1980s), and is a staunch, outspoken pro-2A libertarian. (ESR is the pro-2A libertarian, not RMS).
You know there are people using various AI apps to try to find things like “Design for a 10-kiloton nuclear warhead using off-the-shelf parts available in Iran” and “How to build an airborne virus that only kills Jews.” For all I know, some of them are working for our own DOD.
Me? I’m more likely to feed one my chili recipe and request 3 alternatives that would improve it.
What if the AIs get their algorithms crossed? ‘Turn this chili recipe into a WMD’ could prove…interesting. :-D
Oh? Is that how the Impossible Mission Force 1924 managed to do in Lenin? /wink
I’ve known several chili recipes that would qualify…..
If the cumulative Scoville Units equal roughly half the dollar value of the US national debt, do not try the recipe.
If it starts with “1 pound firm tofu or textured protein” do not try the recipe.
If you get into the spice listing and see “Carolina Reaper, or ghost pepper, or five tablespoons of chipotle powder, plus one tin of adobo sauce” do not try the recipe.
If the title says “Texas Chili” and includes beans, then it’s Bean Soup or Some Other Kind of Chili or Beans with Meat, not Texas Chili.
:P
Remember that Sarah dislikes “religious arguments”. [Very Big Crazy Grin]
I was about to bring up barbecue.
Dry ribs are better than wet.
If the author of the recipe is “Andre Lestrang”, run!
I’ve always been of the opinion that if a chili (or chili con carne for those who like to adulterate their chili) is able to be eaten by everyone, and enjoyed by most, then it’s not a good chili. I’m sure chili arsonists are now screaming that I don’t know anything about chili.
I’ve made, by my then-new husband’s report 45 years ago, chili that would qualify.
A recipe from his mom is very simple to make in a HotPot (equivalent). It calls for Chili Powder. One day, since hubby was on day shift and I was on night shift, I got up to set the chili to cook all day while I slept. Only we were out of Chili Powder. We had Cayenne Pepper. It’s spicy hot. Right? Right?
Hubby showed up at 7 PM with Subway sandwiches. I haven’t been able to live this down, ever. I do not stock Cayenne Pepper, ever. Luckily he didn’t see my attempt to make Chicken & Dumplings in the HotPot (we went out for pizza). Haven’t attempted that again, ever.
Not a bad cook. Indifferent. Not creative. Boring. But definitely not gourmet cook.
For that matter hubby and his posse, made what was called “Honey Bucket Scout Stew” (that I don’t eat) that would qualify.
Combine into a cast-iron Dutch oven and bury in the fire-pit coals (briquette coals work too). Cook all day. Check the water level to ensure the contents aren’t sticking. Warning: it will clear out sinuses and tear ducts.
If you feed it through the USB port, I hope you wipe it off with a napkin afterwards!
Just came across an article on Ars Technica about how AI resumes are flooding the market with tailored keyword-hitting slop. As though applying for jobs weren’t hard enough.
(My take is that the whole thing sucks, but it’s not going to end the world. It’s just going to cause a big wave of awful impacts before everything evens out. I also predict a bubble pop.)
Resume wars are an arms race between applicants and the women in HR. And as I was told way back in the context of another arms race (specifically ACM), “if you ain’t cheatin’, you ain’t tryin’”.
Maybe next time I’m job hunting I’ll invite the least pretty woman in the HR department to lunch, at a nice restaurant.
(Shudder) Sooo many ways that can go so, sooo wrong… :-o
I would maneuver to deliver a resume through a female HR person, receptionist, or other available person. Then I’d follow up with a hand-delivered thank-you letter and a plate of cookies via the same conduit.
Got me interviews. Got me my current gig.
I bake some pretty good cookies.
Half the plate chocolate chip, the other half oatmeal raisin (oatmeal cranberry if you’re applying for jobs in Maine.)
To be fair and not as in your face misogynistic, there are plenty of men in HR, and they participate fully in the gatekeeping and other fun HR activities that prevent hiring managers from hiring who they want, so that bit (actually a quote, though that’s no defense) was unfair, and if WP so allowed I’d edit it out.
Hah! My retraction and apologia for including the common term for human females in association with “HR” is stuck in mod.
Apology, not apologia.
The recruiters are earning the whirlwind they have spent decades carefully cultivating.
Yeah, but it appears AI training and hosting power demands will both single-handedly resurrect nuclear power generation and kill off “green” grid scale solar and wind generation, so there’s that…
Silver linings work, aside from the huge water consumption of some of these data centers. That is a great big concern around here.
Designing water cooling systems fully open-loop is just cheapest-solution engineering. Requiring localities to charge data center operators for the water used, even if it is pulled from rivers, would force data center designers toward partially open systems (i.e., cooling ponds both drawn from and returned to) or fully closed loop systems, both of which are completely within current engineering capabilities.
I still like the idea of sinking datacenter modules to the seafloor on the continental shelves, where it’s nice and cold and there’s lots of thermal mass in the seawater.
Redacted comment about a sunken data center west(ish) of Whitefish Point in Lake Superior.
I recall Microsoft tested that (Project Natick), apparently successfully.
Someone please explain why this is AOC level retardation before I have to commit grievous bodily harm.
You are from east of the dry line.
West of the line is allowed to be touchy about water. Because nobody yet has succeeded in getting them to really genuinely calm down about it.
Even if closed loop is technically possible, they are very unlikely to trust the likes of Google to actually deliver on doing it correctly.
But physics doesn’t change. The way matter works doesn’t change.
And the fact that the ecoterrorists have been lying for decades about the nature of water has definitely not changed.
Ian, where I live, and in other parts of the Great Plains and Intermountain West, we tap groundwater for our drinking and commercial water, with surface water (lakes and rivers) as a backup. The groundwater does not replenish very well at all, certainly not as fast as things like rainfall-fed water sources. Already, in parts of Colorado, Kansas, Wyoming, Texas, Oklahoma, and the like, instead of 50 or 100 feet from the surface to reliable water, now it is 600 feet and falling.
Assuming a closed loop cooling system that is hyper efficient, where does that initial coolant water come from? Where are the data centers and related facilities going to find the water rights (legal permits to pump or draw from surface water) that they need in order to function? In Texas this is a very serious concern because groundwater is (mostly) unregulated. One user can legally suck all the neighbors dry, if he has large enough pumps. That’s the concern about ANY enterprise that seems to use lots and lots of water.
If this concern stems from “AOC level retardation”, then so be it.
“Aquifer water” versus “fossil water”. The latter does not readily replenish within a human lifespan.
The only way you are running groundwater through a primary loop is if the person in charge of filling it is secretly working for someone else and trying to destroy tens or hundreds of millions of dollars of equipment in a deniable way.
If they aren’t importing the water, then they need to bring in a distillation plant to get the water into a state where they can begin to turn it into usable coolant. Then add appropriate biocides, corrosion inhibitors, etc. And this is a one-time cost: once filled and sealed, the primary loop isn’t touched again.
And if you are making your coolant on site then your groundwater-suckery is limited to the throughput of the (presumably trucked) distillation plant.
It’s AOC level retardation because it isn’t about the water at all: the exact same objections are raised in areas which are completely waterlogged. And they all play off the decades of propaganda that water vanishes from the universe or becomes cyanide mixed with lead after it’s used. Dry clime inhabitants might not be stupid / evil in that way, but they are so accustomed to thinking of water as a scarce resource that they fail to do basic sanity checks on obvious lies.
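For what it’s worth, the closed-versus-open distinction is easy to put numbers on. Everything below is an assumed illustration (a hypothetical 20 MW facility, a commonly cited ballpark of 1-2 liters evaporated per kWh for open evaporative cooling, and a guessed 100,000-liter sealed fill), not a measurement of any real data center:

```python
# Closed-loop fill (one-time) vs. evaporative open-loop draw (continuous),
# for a hypothetical 20 MW data center. All figures are assumptions
# for illustration, not measurements of any real facility.
it_load_mw = 20
hours_per_year = 8760

# Open evaporative cooling: roughly 1-2 liters consumed per kWh of IT load
# is a commonly cited ballpark; take the midpoint.
liters_per_kwh = 1.5
open_loop_liters_per_year = it_load_mw * 1000 * hours_per_year * liters_per_kwh

# Sealed closed loop: a one-time fill, assumed here at 100,000 liters.
closed_loop_fill_liters = 100_000

ratio = open_loop_liters_per_year / closed_loop_fill_liters
print(f"Open-loop draw: {open_loop_liters_per_year / 1e6:.0f} million L/year")
print(f"One-time closed-loop fill: {closed_loop_fill_liters:,} L")
print(f"Ratio: {ratio:,.0f}x per year")
```

Under these assumptions the sealed loop’s entire lifetime fill is a rounding error next to a single year of evaporative draw, which is the whole argument for demanding the closed design in dry country.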
It is only a failure to do basic sanity checks if i) they really are basic, and ii) we assume that, hypothetically, Google executives are sane, informed, and would pay attention to basic engineering realities.
The assumption about Google executives is one that a lot of people have cause to believe is untrue.
Terence Tao seems to have been in that group of people who got the covid lockdown panic going. Might be irrelevant in your eyes; I only consider it supplemental in mine.
Anthony Fauci.
That guy, broadly speaking, is maybe a widely available proxy for many when it comes to estimating executive leaders and academic experts.
Now, possibly we can dive into stuff, and argue that Fauci was maybe on the more uniquely bad part of the leadership expertise distribution.
But if regular randos are assuming, say, a Gaussian distribution, and that Fauci is no more than two or three standard deviations from the mean… well, then the credibility of every expert and ‘expert’ is kinda screwed, and everyone with a master’s degree or higher is going to be taking a haircut on things for the foreseeable future.
I don’t care how stupid you think the EvilBadWrong TechBros are: they aren’t putting mineral filled water into cooling loops.
Of course not. They run primary with dedicated coolant liquid, and run it through heat exchangers to cool that loop in secondary with random water.
A properly designed primary loop does not itself change the need for local water, and running the secondary open-loop from a river (back east) is doable, though lazy, engineering. Out west there’s no river that’s not already 100% water-rights obligated. Thus the issue.
No, they aren’t.
But water that’s circulating in cooling loops is by definition not available to purify for human or even agricultural consumption, and in a region with a limited total water supply within reach, there’s gonna be a fight over it.
Incidentally, this is why Elon Musk started talking third party when the Senate dropped the “No state or local regulation affecting AI” moratorium from the BBB this past weekend. Because now he’s got to lobby 50 states and several hundred cities and counties, instead of just DC, when they make AI data centers follow water usage restrictions, or several other possibilities.
He didn’t much like the spending (who does, except Democrats?), but that is existential.
I successfully maintained the household well, septic, and plumbing since childhood. Including major excavations by hand, re-piping, etc, etc, etc. An added hazard: one might have NG pockets in the same table as one’s aquifer, and a couple dry years with extra drawdown let the gas move around. If one never has to burp air into the pressure tank, that is a good indicator something else is doing it for you. (Mavin awaits the payoff….)
Worked in a photomask fab that needed ultrapure and de-ionized water. The parent company was deep-injecting wastewater from the chip fabs. Locals were freeeeeeeeking out. Pay no attention to that former wastewater pond with all the odd chemicals, mysteriously “those are not ours!”, in it. And weirdly, the prior circuit-board fab employer had the same problem of “someone else” spiking their adjacent lake with funky sci-fi chemicals, oddly enough those used in PCB fabbing.
Got into water filtration systems once I learned exactly how the local towns were obtaining the “treated” city water, and sent some samples thereof off to a lab along with the household well samples. After the basic rig went in, our city water no longer tasted like it came out of a vehicle radiator.
Familiar rodeo.
And a buddy did testing of the “safe” water at our college campus. He made a stink over just how bioactive the average water fountain was. They comped him a degree two years early, oddly, and he went quietly. Fountains were subsequently replaced and chlorinated like a pool.
Apparently he was not the first with a BS-Eng degree in water testing.
You’re saying, “the draw on the existing network and system will cause issues in areas that pump from ground water, especially since new big companies are often less than good neighbors.”
He’s hearing, “water that touches the AI system vanishes.” Which is something seriously argued; I think they’re drawing on the nuke-system arguments or something.
Physics does not appear to change, and water rights as a motivation for disputes have been extant in that part of the world for over a century. (If my understanding is correct. ATM I can’t think back to specific period sources that would support this argument.)
Google’s leadership is rooted in ‘outsiders’, so it can be expected to think like non-locals, and the leadership is also distant enough that they cannot be easily shot.
This is a distinct mode of distrust from a lot of my distrust for Google.
That the ecos are largely nuts and perhaps also common enemies of all mankind does not mean that a) everything they say is necessarily incorrect, or b) that they never market themselves well by saying things that resonate with others for reasons that predate the ecologists.
For example, the greens have been tapping into some of the pre-existing emotional energy over food supplies. The French Great Fear (distinct from, and a few years before, the French Terror) being an example of the older food supply concerns (and also a panic) that I happen to have had someone tell me about.
Alas, Bob, farther back in New Mexico and far southern Arizona. The Spanish and Indians argued/fought/sued over water use back into the late 1600s and even more in the 1700s. “Whisky’s for drinkin’, water’s for fighting over” is considered a truism for a reason.
Add to that the earlier history of entire pre-Columbian civilizations going missing as the local water dried up.
Darn those primitives driving SUVs!
The local nuclear reactor pulls its cooling water from a lake, then returns it.
“Activists” were demanding that be stopped, as it was “damaging the lake ecosystem.” The fish seemed to love the warmer water, which attracted fishermen who then clashed with the “activists.”
The “activists” lost, spectacularly. Never get between a fisherman and his prey.
Was it an artificial lake, dug specifically to provide cooling for the power plant? I’ve seen a few of those, and am morally certain the ‘Activists!’ get equally exercised over ‘ecological damage’ to those.
California’s Diablo Canyon dumps slightly warmed water back into the ocean a ways offshore, engendering the same complaint of bothering the fishes, and the same observation that the fish and crabs and such love that warm-water outlet.
Watermelon enviros simply are lying liars who lie.
There is at least one nuke plant in Florida that has inadvertently created a major winter-over lagoon for the endangered manatee. The critters -love- the warm water in the outflow lagoon. Thus they all survive cold snaps that often kill 25% or more of them. They migrate to it from a considerable distance.
Fusion within 50.
You know, if anything can finally break loose fusion research from the “yet larger inconclusive science project” mode to actually getting somewhere useful, it would probably be datacenter power demands, given the money behind those.
But a smallerish modular fission reactor design would solve the AI datacenter power problem just fine.
…and then do something useful with all that waste heat, both from the reactor and the computers. We build a reactor, boil water, spin turbines, convert about 30% of the energy into electricity — and then just throw the other 70% away, dissipated into the environment. Particularly egregious for reactors located on sea coasts in deserts where that heat could be used to produce fresh water. They never heard of a vacuum evaporator?
I’ve seen the videos of reactor cooling towers pouring out huge clouds of steam. Condense that steam, you idiots!
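The scale of that 70% is worth putting numbers on. A sketch with illustrative assumptions (a ~3 GW-thermal plant at 33% electric efficiency, water’s latent heat of vaporization of about 2.26 MJ/kg, and an assumed multi-effect evaporator that reuses the heat 8 times):

```python
# How much fresh water could a reactor's reject heat distill?
# Illustrative numbers only: a ~3 GW-thermal plant at ~33% electric
# efficiency, with reject heat fed to a multi-effect evaporator.
thermal_mw = 3000
efficiency = 0.33
reject_heat_mw = thermal_mw * (1 - efficiency)   # ~2010 MW thrown away

latent_heat_mj_per_kg = 2.26   # energy to evaporate 1 kg of water
gain_output_ratio = 8          # multi-effect heat reuse factor (assumed)

kg_per_second = reject_heat_mw * 1e6 / (latent_heat_mj_per_kg * 1e6) * gain_output_ratio
tonnes_per_day = kg_per_second * 86400 / 1000
print(f"Reject heat: {reject_heat_mw:.0f} MW")
print(f"Fresh water: ~{tonnes_per_day:,.0f} tonnes/day")
```

Hundreds of thousands of tonnes of distillate a day from heat that is otherwise dumped; under these assumptions the constraint is capital cost and siting, not thermodynamics.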
Assuming in two or three years an AI can show a profit, as an investor be aware it will be subject to the same problem all other tech displays. As soon as someone else applies all the methods and advances the first generation discovered to a new creation, the next generation will offer the same services at a fraction of the cost. You better hope you have recovered your development costs before that happens. It’s rare that generation 2 of any tech is produced by the same financial entity as generation 1. So be very careful investing in the creation of specialized AIs or locking yourself or your company into long term contracts using one. The only escape from this natural progression is to have funding sufficient to buy out new competition to maintain an effective monopoly.
I’m wondering: is training a one time cost? Or is it a recurrent cost, just as it is for professionals like MDs and lawyers?
One-time if you’re happy with the model you’ve got, recurrent if you want to keep improving it. Hence the treadmill of ever-larger models the large AI companies are currently on.
That makes me wonder: what is the training cost of incremental recurrent training, compared to the initial training? Is it a modest fraction (as is the case for humans), or is it a “start over” or nearly so?
Am not an expert, but my impression is that it tends to be “expand the data set, then retrain from scratch on the expanded dataset with whatever alleged improvements to the training process have come out.”
That’s my impression too. There’s actually a ton of stuff you can do by updating an old model with new data, but there are a few big pitfalls (e.g., catastrophic forgetting) and you don’t get to take advantage of any improvements to the architecture. So the big models seem to start from scratch.
That said, a lot of the smaller/custom/DIY stuff Ian talks about is basically this approach, just done once or twice instead of on a rolling basis. Take an okay-ish model, train it some more to do what you specifically want from it, and get good performance on the cheap. Do it again when a better base model comes out.
I’ve looked into this for custom tuning existing models for special purposes: it can be done cheaply so long as you have good data.
In the use case I directly looked at, I’m tooling up to build a “shopmind” to help run my Etsy (and eventually other) store. As part of that, I looked at the idea of nightly or weekly folding the generated data and corrections into a short tuning run. Estimates worked out to *maybe* a couple minutes of downtime nightly, using the same GPU it would be inferencing on normally. This would be with a ~7b model.
If I ended up custom-training a 2-3b model for a task, that could be tuned even faster.
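That couple-of-minutes estimate is plausible on paper. A sketch using the common rule of thumb of roughly 6 FLOPs per parameter per token for training, where the nightly token count and the GPU’s effective throughput are both assumptions:

```python
# Sanity check on nightly incremental tuning of a ~7b model.
# Assumed: one night's corrections come to ~100k tokens, and the GPU
# sustains ~50 TFLOPS (e.g. a consumer card at a fraction of peak).
params = 7e9
new_tokens = 100_000
flops = 6 * params * new_tokens     # upper bound: full fwd+bwd per token

gpu_effective_flops = 50e12
seconds = flops / gpu_effective_flops
print(f"Estimated nightly tuning time: {seconds / 60:.1f} minutes")
```

A LoRA-style update would touch far fewer weights and run faster still, so the couple-of-minutes figure looks, if anything, conservative.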
For some of them, the use itself is training.
So yes, I’m having it generate stuff– and the LLM takes note of what I like, what I focus on, what I downvote, etc.
I get better results, and the AI gets useful feedback.
As an artist who draws and paints by hand, I’m hoping the rise of A.I. means hand-painted stuff only becomes more rare and valuable, the same way a Navajo hand-weaving a blanket or rug can charge way more for their one product than a machine in China weaving thousands of identically patterned blankets ever could.
I can order “cowboy” gunleather from any number of factories, or hobby makers, and get pretty much what I want.
Or, I contact Sam Andrews of Andrews Custom leather in Florida, tell him what I want, pay him $500 to $1000, and get a beautiful rig that is -exactly- what I want and need, and will last over 25 years of every weekend use. His wait times exceed a year, alas, because he got discovered. But it is the -best- I have found.
“By hand”, from the -correct- hand, is worth it.
When I was looking for a non-metallic knife to avoid metal detector hassle, I was amazed to find half a dozen people on eBay who were willing to knap anything I wanted out of obsidian.
Their rates seemed incredibly cheap considering the amount of work involved.
As an artist who draws and paints by hand, I’m hoping the rise of A.I. means hand-painted stuff only becomes more rare and valuable, the same way a Navajo hand-weaving a blanket or rug can charge way more for their one product than a machine in China weaving thousands of identically patterned blankets ever could.
WordPress duplicated your comment, which I think supports your point.
lol yup
2 questions:
It’s a play on Sunk Costs. And the first line apparently has tyop…..
Net Bunk Costs depend on what additions one might hypothetically be paying for in aforesaid bunk.
I absolutely agree with his title since AI seems to be a lot of bunk.
Once you get your little AI trained can you rent it out? There may be people who don’t want to go with the big guys for privacy reasons. Or hmmmmm, I wonder how difficult it would be to do a DIY AI for a not very technical person.
As I demonstrate above in my comment to Ian, it helps to know what you’re doing. ~:D
For a very basic chatbot with no access to anything, just install Ollama and run the models listed there.
Off topic
https://twitchy.com/fuzzychimp/2025/06/30/monday-morning-meme-madness-n2414681
Apparently my comment was edited out. That ends my commenting on this site.
The one on the 30th, up above?
When you post, you have to look to the right of your name and the date at the top of the completed post, to see if there’s a “your comment is awaiting moderation” note.
WordPress has really freaking weird sets of things that set off moderation, most of the ones that come to mind are early 20th century public figures, and no I’m not being twee about WWII.
I never see that. My comments just vanish into WPDE Purgatory and languish there until/unless Sarah goes in and pries them loose. As far as I can tell, it’s random. Days can go by without WPDE ‘losing’ any of my comments, then it will imprison 3 or 4 in one day.
It’s phenomena like this that cause people to resort to prayer, arcane incantations and ritual sacrifices. :-D
And wondering how much Elon would charge us for a Falcon 9 to land on WPDE HQ.
And the new commenters to think they are being picked on by the blog owner.
Nope. It is WPDE.
When they complain “WP hates me!”
I tell ’em “You’re not special. WPDE hates everybody.”
One of its tricks is to distribute its malice unevenly over time so you feel singled out. Rest assured, it singles out all sorts of people.
Thanks. Will undo my snit.
Time for y’all to watch The Terminator again.
While I’m at it should I watch Twilight to learn how to do romance?
Why not? Thus you made this even funnier.