“Artificial Intelligence” is fine

Hey, look! The first post of 2026, and it almost got posted within the first week. That’s a good start! I’ve been thinking about some of the talking points I wanted to put in here for a while now, and while I haven’t really “drafted” anything yet, I wanted to get it out here, so here’s what I’ve got.

First, we all know that “AI” as it’s being excruciatingly marketed and shoved into everything is a big pile of shit. Most “AI” integrations aren’t anything more than chatbots or algorithms or what would have been basic automations a few years ago, but now we call it “AI” so venture capitalists will throw all their money at it. The “AI” built into Microsoft Word is just the spelling checker turned up to 1000%, such that instead of just noticing you’ve typed something that isn’t a word, it sees that you’ve typed a word and starts suggesting what words normally come after it. Hell, that’s what Clippit (the actual name of the paperclip) was doing back in Office 97. (“It looks like you’re writing a letter!”) It’s not “new,” it’s not “intelligent,” and honestly, it’s not even all that helpful.

Next, we have what people are actually doing with these things. We have people “writing music” and filling Spotify and YouTube with it, people “making art” that’s terrible, full of figures with the wrong number of fingers and legs that bend the wrong way, and people “writing” text that doesn’t make any sense at all. It’s all trash, where people with no desire to “make something” but the desire to “have made something” can babble out some prompts and copy/paste the output onto the platform of their choice and say they did it. They don’t get any of the satisfaction of creation, but that’s not what they wanted, anyway. When someone wants to make art, they can just do it. Maybe it’ll be bad (and it always is your first time, because creation is a skill that takes practice and training) but that’s ok, because it’s still yours. Hell, I think you’d get more satisfaction commissioning someone to make art for you than you would just getting a bot to do it, because then at least you can think “they made this for me.”

And then we have the heinous shit people are doing, most recently getting Grok to make CSAM and post it on Twitter. Like, just literally being like “hey chatbot, take this picture of a child and make them nude” and the bot’s all “here you go!” As if the internet wasn’t enough of a pedo hellscape (that exact phrase being used in a report about Roblox) but now one of the largest social media platforms owned by the richest person in the world is helping!

But ok, my gimmick here is that everything is “fine.” That everything that’s critically acclaimed actually is just whatever, and everything that sucks also has something wonderful and worthwhile. So what’s the positive to all of this garbage?

Honestly, at the moment, not much. I mean, there’s the abstract positive that a lot of really stupid people with way too much money are spending all that money on nothing and we all continue to not like it and not buy it. That’s at least worth some entertainment value.

There’s also the fact that some of this nonsense is actually almost useful. Right now, all the air in the room is taken up by LLMs: basically prediction engines that can look at a “token” (a snippet of text, a piece of an image, or the like), draw on some knowledge of what tokens are usually found around it, and reproduce something that sounds like natural language, or looks like a picture, or whatever. That’s where we get all this “AI” slop filling everything, but for tasks where reproducing what came before is all you need, like writing simple code (or interpreting existing code), it can help people make those little tools we wish we had, so we can get back to the stuff that’s actually productive.
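If the “prediction engine” description sounds abstract, here’s a toy sketch of the core idea: count which word tends to follow which, then generate text by repeatedly picking a likely next word. (This is a deliberately tiny bigram model, nothing like the neural networks real LLMs use, and the sample corpus is made up for illustration.)

```python
# Toy next-token predictor: for each word, count which words follow it,
# then generate by repeatedly emitting the most common follower.
# Real LLMs use huge neural networks and sample probabilistically;
# this is just the flavor of "predict what usually comes next."
from collections import Counter, defaultdict

def train(text):
    """Build a table mapping each word to a count of the words that follow it."""
    follows = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=8):
    """Produce text by repeatedly picking the most common next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: we've never seen anything follow this word
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "it looks like you are writing a letter it looks like you are writing code"
model = train(corpus)
print(generate(model, "it"))
```

Feed it enough text and it will parrot back plausible-sounding sequences, which is exactly why the output sounds fluent without the model “knowing” anything.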

Nobody really talks about machine learning or neural networks anymore (because being able to deepfake your dead grandma is apparently more marketable) but things like speech recognition and handwriting recognition (which were impossibly futuristic at one point, then cutting edge, and now for the most part just kinda work) are based on that stuff. (My understanding is that these are generally just algorithms, and not really “AI,” but those algorithms are trained using a lot of the same types of systems that train ChatGPT and other LLMs.)

And then there’s using the same sort of training on things like financial data, which could potentially change the way things like forensic accounting or actuarial work are done. (Of course, that can also lead to things like the model that was supposed to be trained to “spot cancer” from images of moles and instead “learned” that moles with rulers in the picture were cancerous, so the principle of “garbage in, garbage out” still applies.)

Basically, there are a lot of solutions out there just looking for problems. (And right now, companies like OpenAI are betting trillions of dollars that the problem you have is that you really want a chatbot to tell you to kill yourself or post crap to social media to be read by other chatbots so that even more chatbots can post comments about Obama.)

It could go the way of cryptocurrency, where a lot of money and time will be spent telling us it’s “the future” when ultimately it goes nowhere except for a small community of diehards and a very large network of grifters. (Really, “blockchain” in terms of the technical aspects of a trustless public ledger is a neat concept. There still isn’t a use for it that wouldn’t be better off just using a database, but it’s still a neat concept.)

Or it could go the way of something like digital cameras, where some people play around with it but people who really care don’t use it, until suddenly it just sort of becomes the way it’s done. (“AI generation” will hopefully never become the “way it’s done” but there must be some middle ground between “I made this by hand” and “a chatbot made this for me.”)

So yeah, “Artificial Intelligence” is stupid. If someone wants you to use it, they’re probably also stupid. If you’re thinking “I don’t know; I think there might be something to this…” you might not be completely stupid. Basically, if you don’t buy it, and don’t use it to replace actual human thought or creativity, then it’s fine.

Naturally, no content on this site is generated by any sort of AI. You can tell, because it’s stupid, but in a sincere and human way!
