You may have noticed that I've been less and less active online.
I've had a lot going on personally, some of which you may already be aware of, so that's certainly played a big part, but much of this drive towards AI utilisation has also been causing me to dissociate.
Social media is pretty much flooded with accounts shilling AI and exclaiming how if you're not using it, you'll fall behind, won't be employable, everyone's using it and so on.
Particularly on Reddit, I've had to block a substantial number of these odd shill accounts that pop up every time the subject of AI comes up. It's very tiring and demoralising.
On top of that, I've seen programmers I'd had a lot of respect for actively using it too.
Someone even opened an issue on one of my projects using their LLM agent.
It's really hard to get away from it all. So lately I'm often just switching off after work, either working on my own little things for myself or doing other things out in the world instead.
And then there's the rise of vibe-coding, people using LLMs to write the code for them. The result often lacks any personality: verbose comments and outrageously long documentation full of little emojis, making it hard to take any of it seriously.
Quite often that verbosity isn't even there for other humans working with the code, but for other LLM instances, as it's more information than you or I would ever need.
It makes me quite emotional to think about the direction things are going. Code was, as far as I'm concerned, an art in itself. Every programmer has their own style they develop. Their personality and the history of their work can come through in the code, including at times, yes, amusing comments exclaiming great pain and frustration.
The solutions programmers came up with, often against deadlines, could be beautiful or wildly impressive, things you just wouldn't otherwise think of.
It makes me sad to think how much of this we're going to lose.
It was all so human. As someone who preserves a lot of this work, it would be a shame to preserve art that's just produced by a machine rather than a human being. I just don't see the point.
It really makes me feel like the skill I've been developing for over 20 years now has less value than it did before. Yet when I look at and play with the technology, I'm left wondering what others see in it.
Like anything, I'll usually check something out myself before forming an opinion on it, and I've been toying with them on and off since they kicked off really.
I've used Gemini and DeepSeek to produce a basic script now and then (such as something for ffmpeg, urgh). But for me that's usually about the limit of its abilities before you start encountering slop you have to manually fix yourself, at which point it's just a waste of time.
Hell, I quite recently stumbled into a problem and on a whim decided to ask Gemini to help me; it got stuck in a loop. One conversation suggested a solution, and opening another conversation with that solution then gave me right back what I'd started with.
And then we get into the ethical issues. It all feels quite disgusting.
All of this is being sold as a commercial solution, via subscriptions, to pull information that people put out on the web either for free, for credit or under some explicit license.
Here you're getting that information without citation or credit in most cases. Imagine you wrote an algorithm for a specific problem, and it gets slurped up and spat out by an LLM to some stranger without credit or citation. Imagine that stranger then claiming they came up with the algorithm thanks to their AI agent.
It feels like LLMs have made licensing and copyright completely moot. At least for you and me.
Claude Code seems to be the big LLM everyone is going for lately when it comes to programming. Throwing all our eggs into one basket always works out great, right?
Ethics aside, over a month ago now I paid for a subscription (sigh) to Claude Code, just for that month, and gave it a shot with one of my projects.
It's a codebase I'm maintaining but not actually familiar with, as I didn't write it myself, so in theory it made sense. I thought: a while back we had a problem that was an absolute pain in the ass to track down; could Claude have done it faster?
I let Claude produce everything it should've needed to understand the codebase (which looked okay) and then provided it with an outline of the problem we had. It was actually more information than I'd started with when I investigated the issue the first time.
After a little while Claude very confidently came back with a "fix". What it did certainly made sense in the context of what I'd said, but it wasn't what was actually causing the specific issue I'd described to it.
Further to this, there was a solution that could've helped me track down the problem at the time; Claude didn't suggest this, or even mention anything else for that matter. As far as it was concerned, the solution it implemented would fix the problem.
Of course, if I did not know any better, I would've tested this (or submitted it like an absolutely useless donkey), and then tried again with Claude until we actually got to the solution.
Crucially, the experience of having solved it myself originally paid off better than expecting Claude to be any help. I'd put changes in place so we could catch such issues going forward, and that's knowledge I can keep in mind for the future.
Following this, I performed another experiment: I passed it a GitHub issue we had and asked it to fix it. The issue was very fleshed out and written up well, so I'd expected this to be a piece of cake.
Claude managed to roughly find some areas in the code related to the specific feature the user was dealing with, and once again attempted a fix, but again it didn't work.
At this point I'd run out of tokens. Hey, did I mention there's a limited number of tokens? They get gobbled up pretty quickly when you're dealing with a large codebase like this one.
So, I'm not sold on it.
Let me end with this quote from Anthropic's (Claude) CEO, Dario Amodei, from here.
[...] We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. [...]
He is surely educated enough to know why this isn't the case. But this comes from an individual who wants to sell you his technology, and as far as I'm concerned, it's clearly intended to muddy the water with disinformation about the technology.
It's a headline. It's hype.
LLMs are statistical prediction engines. You're not going to produce consciousness from training a model on Stack Overflow answers.
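To illustrate what I mean by "statistical prediction", here's a deliberately crude sketch: a bigram model that picks the next word purely by how often it followed the previous one in its training text. Real LLMs are vastly more sophisticated (neural networks over token embeddings, not a lookup table), but the underlying principle of predicting the statistically likely next token is the same; there's no understanding in there.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text LLMs are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another: the crudest possible
# "statistical prediction engine".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat", simply because it follows "the" most often
```

No matter how fluent the output looks once you scale this idea up, it's still frequency-driven prediction all the way down.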
If you're actually interested in the subject of artificial consciousness, I highly recommend following the work of Steve Grand.
Anyway, that's where I'm at. This devaluation of our work, and the theft, have left me a little broken.