🕤︎ - 2026-01-01 - ♺ 1

Looking at #feedbase here, it looks like I forgot to write about the 8th year of Feedbase last year.
Let's forget about that and write about the 9th year then. I'm still using Feedbase as a convenient means to keep up with RSS feeds ranging from news and podcasts to YouTube channels, status information, and what have you.
Rachel by the Bay created a "Feed Reader Behaviour Project" a while back, which nudged me to add a last_fetch timestamp to the groups, and to skip fetching a group if less than an hour had passed since the last fetch.
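The skip logic is simple enough to sketch. Here is a minimal illustration in Python (Feedbase's actual language and storage are not shown here, so the dict-based `group`, `should_fetch`, and `fetch_group` names are all hypothetical stand-ins):

```python
import time

FETCH_INTERVAL = 3600  # one hour, in seconds


def should_fetch(group, now=None):
    """Return True if the group's feeds are due for fetching.

    `group` is assumed to be a dict with a `last_fetch` Unix
    timestamp (None if the group has never been fetched).
    """
    now = time.time() if now is None else now
    last = group.get("last_fetch")
    if last is None:
        return True  # never fetched before
    return now - last >= FETCH_INTERVAL


def fetch_group(group, fetch):
    """Fetch the group's feeds unless it was fetched within the hour."""
    if not should_fetch(group):
        return False  # skipped: fetched less than an hour ago
    fetch(group)
    group["last_fetch"] = time.time()
    return True
```

The timestamp is only updated after a successful fetch, so a failed run gets retried on the next pass.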
This summer I also switched the website from the Spock library to twain, as that was easier when upgrading to Debian 13 (trixie), and I had already switched my other websites to twain.
Oh, and I have added a way to "properly" rename groups, where the original name stays around - but is filled with artificial articles all telling you to subscribe to the new name.
The number of commits added up to 34 in 2025, up from 21 in 2024, and down from 51 in 2023. I still haven't cleaned up the code and released it for public consumption.
The development in connections over the years:
|            | 2021 | 2022 | 2023 | 2024 | 2025 |
| Unique IPs |  479 |  506 | 1358 | 2859 | 3093 |
| IPv4       |  324 |  369 | 1126 | 1753 | 2291 |
| IPv6       |  155 |  137 |  232 | 1106 |  802 |
| Only once  |  138 |  225 |  483 | 1224 | 1224 |
Quite spooky that the number of IPs only connecting once was exactly the same in 2024 and 2025!
Interestingly, I have not been the one connecting the most often since 2022 - I am down to 7th in 2025! The 6 most connecting IPs have connected more than a thousand times in the past year.
The total number of articles is nearing the 10 million mark.
🕝︎ - 2025-12-29 - 🟊 2 - ♺ 1

Once upon a time I read the HTML specifications whenever a new one was published.
Heck, I also used to try all the menus in WordPerfect (4.2), just to know what they did. Yes, computers were quite boring before the internet became widely accessible.
But I haven't been keeping up, and as I have learned that every element with a hyphen in its name is valid, I thought it would be a good time to look through the list of elements defined in HTML these days.
Luckily such a list appears in the left column on MDN if you visit an element and your browser window is wide enough. Here are the ones I hadn't noticed:
Quite a lot - and I even skipped the ones I noticed last year, details and datalist, date and datetime-local!
🕜︎ - 2025-12-01 - 🟊 3 - ♺ 10

Our area in Novonesis has just posted 4 job openings:
- Site Reliability Engineer - I think this is what we used to call system administrator; hands on a bunch of machines running Ubuntu in a dedicated server room, take care of a backup robot with a thousand tapes, petabytes of distributed storage, an HPC cluster with thousands of cores, etc. etc.
- MLOps Specialist - help putting machine learning into the hands of the scientists in research and development.
- Platform Engineer - be part of the team running our on-premise Data Science Platform, which allows everybody in research and development to store, work with, find, and analyze data. Python, databases, Jupyter notebooks, and automation.
- Lab Operations Specialist - networks, VLANs, firewalls and making things work across lab equipment vendors' odd choices of operating systems and setups.
If you like Linux, are good at any of these things, and are curious and interested - apply! I can recommend not using AI for your application, you'll be standing out ;-)
🕓︎ - 2025-11-30 - 🟊 3 - ♺ 4

When I bend nntp creatively, sometimes there is a slight impedance mismatch between what a newsreader assumes and what I need.
Case in point: in traditional news (and mail), articles don't change. So keeping them in memory is a nice way to improve user experience. But for my ActivityPub server, Illuminant, articles do (slightly) change - for instance when a Like (star) or Repeat (boost) activity arrives. I could have implemented those arrivals by superseding the article in question, but that would make Likes and Repeats quite noisy (similar to getting a notification for each in, say, Mastodon), and I like that they are not grabbing a lot of attention.
Instead articles just have a header that shows the current Likes and Repeats, so when my newsreader, Gnus, keeps the article in memory, it is effectively hiding any updates.
So I went on a quest to figure out how to disable this; here is the solution I arrived at:
- In the Topic Parameters of the fediverse topic, I added
(gnus-keep-backlog nil)
- In ~/.gnus I do this:
(defun asjo-gnus-flush-original-article-buffer (&optional _arg)
  "To force reloading from server (if gnus-keep-backlog is nil)."
  (gnus-flush-original-article-buffer))
(advice-add 'gnus-summary-show-article :before #'asjo-gnus-flush-original-article-buffer)
Gnus defaults to keeping the latest 20 articles in a backlog, so I disable that.
It also keeps the latest original article around in a buffer, so even with the backlog disabled, Gnus won't reload the same article over nntp again, as it just gets it from the original article buffer - I didn't find a way to turn this off, so instead I flush it just before showing an article. Less elegant, but it works.
🕛︎ - 2025-11-14 - 🟊 1

If you run a webserver you have probably got hits from OpenAI's crawlers.
They nicely announce themselves in the User-Agent header, and include a link to a page: https://openai.com/gptbot - from which I quote here:
OpenAI uses the following robots.txt tags to enable webmasters to manage how their sites and content work with AI. [...] Disallowing GPTBot indicates a site’s content should not be used in training generative AI foundation models.
For my little calendar project, Sundial, the robots.txt has since day 0 said:
User-agent: *
Disallow: /
Because, as you can imagine, there are a lot of pages in a digital calendar.
So it was surprising to see the OpenAI crawler requesting page after page from all manner of years, a lot of them negative numbers, even.
The robots.txt clearly says that crawlers should not request anything!
5 days ago I extended the robots.txt to explicitly tell the OpenAI crawlers to not crawl the calendar, in case * wasn't explicit enough for OpenAI.
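For reference, the extended file looks roughly like this (GPTBot is the user-agent token OpenAI documents on the page linked above; my exact file may list more of their crawlers):

```
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /
```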
They are still at it:
74.7.227.55 - - [14/Nov/2025:23:23:42 +0100] "GET /-1000/03/ HTTP/1.1" 200 6031 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.3; +https://openai.com/gptbot)"
So they are basically lying to people's faces, when they say that robots.txt allows webmasters to manage how OpenAI crawls their websites.
(Yes, I realize they are trying to weasel-word their way around what robots.txt means, by writing that they interpret disallowing crawling as an indication that the content should not be used for training their models - but why crawl an infinite number of pages you won't be using!?!)
It is quite the shit show.
On the positive side, I have apparently unintentionally created an AI crawler tarpit.
🕛︎ - 2025-11-02 - 🟊 4 - ♺ 2
The thing I value most in a fellow programmer or developer is a rather ill-defined and loose concept of "good taste".
"Good taste" is a very subjective thing which encompasses a lot of stuff - it can be all kinds of things, from naming things consistently, choosing a reasonable level of abstraction, and using tools in ways where they work well, all the way to preferring spaces over tabs and having a favourite text editor¹.
One side effect of using AI to program, which I think I have observed recently, happens during code review.
Sometimes, when doing code review, I will see some solution that I don't like or understand, and I will ask why it was done in that way, and not in [my preferred] way.
Usually this sparks a discussion: the author will "defend" their choice of solution, perhaps pointing out some good reasons that I have overlooked (as a reviewer, you are usually not as deep into the code and problem space as the author, so this happens more often than you'd think), or some drawback of the alternative I am proposing that I didn't think of.
It is like the IKEA effect, where if you have put something together, however shabbily, you tend to get more attached to it.
I, as a reviewer, learn about how they are thinking, what made them get to the solution they implemented, and hopefully my questions give them some different perspectives on what might be worthwhile to consider.
Regardless of whether we agree on keeping the current solution or changing it, we both learn something about the code, the solution, and each other's way of thinking. Either of us, or both, expand our idea a little bit about what our shared concept of "good taste" is.
However, recently, I think I have observed this not happening. Instead of engaging in a discussion of the solution, the author will just say "Ok, I will change it", and ask their AI agent to rewrite the code in the way I suggested.
Since I am always right and always know better than everybody else, that's of course always great.
Except... is it, now? I don't learn anything about why they made those "wrong" choices, and they don't learn why they should have done differently in the first place. And if my suggestion was bad, for some reason or another, my feedback doesn't get challenged and, perhaps, rejected. Sometimes a third solution might have emerged from our discussion.
When you think about a solution and implement it, you feel some sort of responsibility for - and maybe even pride in - your choices. If you just had an AI minion do it for you, and it can redo it easily, however big the change might be, you don't care.
But somebody has to care.
Somebody has to care about solving the problem in a reasonable way, about making things that last, about quality.
Somebody has to have good taste.
AI doesn't.
"Ah, but that's where I as a human manager of AI agents come into the picture," I hear someone answer. Yes, perhaps, right now, or in the beginning, but will you keep doing that? Will your taste improve? Or will you become lazy and just accept what slop the AI spits out with less and less discrimination, as time goes by? Will you hone your skills and become better at what you do? I don't think so. The feedback loop isn't there, the gradual improvement of our common understanding of good taste is lost.
I don't want to be a manager of AI agents, and I think you don't want to either. Not if you want to develop good taste, learn, and become better at what you do.
Yes, a wise man once said that laziness is one of the great virtues of programmers - I think I'll add that it doesn't mean mental laziness, but rather laziness expressing itself in avoiding spending time doing repetitive and unchallenging tasks.
¹ While I prefer Emacs, it doesn't matter which one. (Although I will deduct points for having an IDE as your favourite, of course.)
🕗︎ - 2025-09-12
A couple of weeks ago I went to a Smashing Pumpkins concert.
This evening I have put Siamese Dream on the jukebox.
What a great album.
