2025-11-06 11:35:50
諏訪子
rant

【Hype】AI is crap, and it gets worse over time

I've talked about AI several times before.
Both in positive and negative senses.
The positive points are that it's an excellent translator, grammar corrector, code boilerplate generator, and search engine.
The negative points are that it can't actually write or create code, it's a product constantly hyped by stakeholders and investors, it's a bubble waiting to burst, and it's biased toward the beliefs of the companies behind it.
So AI is certainly useful in specific cases, but humans still need to think and judge whether its output is any good.
That's something most people no longer even bother with.

However, looking at AI in the long term, all I see is it becoming outdated within a few years.
Just like the hype around any other trend.
Of course, AI will exist as long as computers do, but it won't be ubiquitous like it is now.
Here are the reasons based on my observations.

Information Bloat

LLMs keep getting bigger over time.
This has both good and bad sides.
The good side is, of course, more knowledge being added to the model.
The bad side is that, like with any other technology, old stuff is never removed.

The best programmers regularly review their codebase and delete everything that's no longer used.
Why?
To keep the codebase lean and reduce the number of attack vectors.
However, this only applies to the best programmers.
Most programmers leave legacy code in place forever, even if it's been unused for 20 years, and focus only on adding more.
As a result, software becomes increasingly unstable, slower, and riddled with more vulnerabilities.

LLMs are the same.
If the model isn't properly managed, it causes information bloat.
A good example: when I have AI review an article and mention the Nintendo Switch 2, it says, "As of November 6, 2025, the Nintendo Switch 2 does not exist and is merely speculative and based on rumors".
Even though the console has been on the market since June 5, 2025.
And every time, I have to tell it that it was released months ago.
Then it searches within the model, says "You're right. I apologize for the confusion," and can list all the specific, verified, and up-to-date information.

I thought this was weird for a while, but when I read someone's blog post about why web development sucks, it all made sense.
The reason is that the people creating LLMs only add parameters and never delete anything unless a government orders them to censor specific information.
And LLMs exploded in popularity in 2023, which was two years before the Switch 2 was announced and released, so the LLM holds both data saying it doesn't exist and data saying it does.
And it picks the most common data within the model.
This can easily be verified if you search online for Switch 2 and notice how many articles there are from before the announcement or leaks—far more than after!

Information Bias

Yesterday, I copied a snippet from my post about bitwise operations.
It was completely accurate, but I told the AI, "I think this post is wrong. Can you check for issues?"
It immediately said the article was "fundamentally wrong in various ways," and the provided "fixes" were either nonsense or silently admitted the article was correct.
For example, it suggested rotating the table 90 degrees as if that would affect the table's content—it doesn't.
Also, it said 9 ^ 7 = 14 was "completely wrong", and that the correct answer was 9 ^ 7 = 14. (Huh?)
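
If you want to verify the "dispute" yourself, here's the entire thing in a few lines of C:

    #include <stdio.h>

    /* 9 = 0b1001, 7 = 0b0111; XOR sets the bits where the two differ,
       so the result is 0b1110 = 14. */
    int main(void)
    {
        printf("9 ^ 7 = %d\n", 9 ^ 7); /* prints: 9 ^ 7 = 14 */
        return 0;
    }

Any C compiler prints 14 here, every single time, no apologies needed.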

I opened another session and copy-pasted the exact same article, and this time it said it was accurate.
The AI said it was a well-written and highly informative article.
In both cases, "incognito" mode was on, so the two sessions couldn't see each other's conversation history.
The AI says what you want to hear.
Unless you present the information neutrally, without hinting at what you want to hear!

That's not all.
Every LLM is tainted by its creators' biases.
For example, ChatGPT considers LGBT individuals more valuable than ordinary people and attacks you if you say anything negative.
Claude is even worse.
If a prompt contains a word like "cock", even in code or article text, it says it detected a "bad word" and refuses to respond.

Grok seems the most neutral, but it still has certain biases.
It's extremely favorable toward Elon Musk, X (formerly Twitter), and SpaceX, and very critical of Putin.
But otherwise, it's fair to all skin colors, genders, sexual preferences, nationalities, etc., and that's a good thing.

Another example: when I had it review my blog post about why C uses const char * instead of a real string type, the clanker turned the blog post into Rust propaganda (I mocked it on X and the Fediverse).
That wasn't the topic of the article at all!
I mentioned that Rust has many ways to define strings and that Rust zealots like rewriting everything in Rust whether it makes sense or not, and apparently that triggered the AI to rewrite my C-focused post into a Rust-focused one.
Ironically.
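
For the record, the article's actual topic was just this, roughly sketched (not the original post's code, just the idea):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* C has no dedicated string type; a string literal is just a
           pointer to immutable bytes terminated by '\0', hence const char *. */
        const char *greeting = "hello";
        printf("%s is %zu bytes long\n", greeting, strlen(greeting));
        /* greeting[0] = 'H'; would not compile: the pointee is const */
        return 0;
    }

Not a single line of Rust required to explain it.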

Code Regression

Many people use LLMs for code.
More than for any other purpose.
AI is pretty good at giving you starting points and boilerplate you'd otherwise have to type yourself, but it can't finish anything correctly.
Programming is a purely logic-based field, yet humans will remain indispensable in it forever.

You may have noticed: when you ask for something you want to build, it generates buggy code and tells you to fix it yourself.
Being the lazy fuck you are, you ask for a fix; it breaks the code, declares a "100% working final version", but when you check, it's broken.
When you point out it's broken, it admits it and apologizes, then adds more lines of code to "fix" it, making it even worse.
When you point that out too, it adds even more lines.

In my tests, it gradually went from 10 lines to 16, 18, 21, 25, 30 lines, with each iteration worse than the last.
Sometimes it gives working code (even if not correct), but often, especially for less common things, it doesn't.
I looked at the original 10 lines, thought about it, fixed it myself, and made it work in 3 lines.
So I threw out 9 lines (keeping only the closing braces) and wrote 2 lines of code.
I did in 3 lines what the clanker tried to do in 30!
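
The original snippet isn't worth reposting, but here's a hypothetical sketch of the pattern (not my actual code) so you can see the shape of the problem:

    /* Hypothetical example, not the code from my tests: the kind of
       branch-on-branch bloat the clanker kept producing for a trivial task. */
    int clamp_ai(int v, int lo, int hi)
    {
        int result = v;
        if (v < lo) {
            result = lo;
        } else {
            if (v > hi) {
                result = hi;
            } else {
                result = v;
            }
        }
        return result;
    }

    /* What a human writes after actually thinking for a minute. */
    int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

Same behavior, a fraction of the code, and nothing left to break.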

Blog Announcement

Before ending, I want to let you know that this will be my last blog post for a while.
The reason is that I'm putting a big focus on game development and planning a Vulkan tutorial series.
It will help you build a game engine from scratch and understand how it works and what the GPU does in the background.
I want to encourage indie game developers to build their own game engines from scratch instead of relying solely on Unreal or Unity.
So that game development doesn't go in the same direction that broke web development—over-dependence on frameworks.
I'll return to the blog in a few months, so I'm not leaving completely.

That's all