koldfront

show-paren-local-mode #emacs

🕚︎ - 2026-02-07 - ♺ 1

Today I found the solution to a small annoyance I have had for a while: in general while programming I like Emacs' 'show-paren-mode', but when I use jabber.el, I don't. So I was toggling it on and off, as it's a global setting - until I learned that the solution is:

(add-hook 'jabber-chat-mode-hook (lambda ()
                                   (show-paren-local-mode -1)))

'show-paren-local-mode' has been there since Emacs 28.1, released in 2022. I'm running 31. Hah!

Blocked by IT #security #theatre

🕦︎ - 2026-01-23 - 🟊 1 - ♺ 2
Screenshot of a blocked asjo.org in a browser at work

Today I learned that on my employer's network my personal domain asjo.org is blocked by Microsoft Defender SmartScreen.

asjo.org contains my public collection of photographs, an outdated list of my music collection, and a couple of links.

On illuminant.asjo.org I have my 1 person ActivityPub server on the fediverse.

I have a hard time imagining that any of those are dangerous enough to warrant blocking by my "organization".

It does seem that some people, especially in Asia, use asjo.org as the From address when sending spam; perhaps that's the trigger.

At some point I also had some sort of weird DNS DoS attack against my nameservers serving asjo.org, so I had to move the domain to a hosting provider who is better at handling that sort of thing than I was. Maybe that's more likely to be the trigger?

Puzzling nonetheless.

But great for my bad boy reputation at work, I'm the one with the blocked domain!

Feedbase year 9 #feedbase #rss

🕤︎ - 2026-01-01 - ♺ 1
Feedbase

Looking at #feedbase here, it looks like I forgot to write about the 8th year of Feedbase last year.

Let's forget about that and write about the 9th year then. I'm still using Feedbase as a convenient means to keep up with RSS feeds, ranging from news and podcasts to YouTube channels, status information, and what have you.

Rachel by the Bay created a "Feed Reader Behaviour Project" a while back, which nudged me to add a last_fetch timestamp to the groups, and skip fetching a group if it was less than an hour since last time.
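The throttling logic can be sketched like this. Feedbase itself isn't published, so the function names and the in-memory store here are hypothetical; the real code keeps a last_fetch timestamp per group, presumably in its database:

```python
from datetime import datetime, timedelta

FETCH_INTERVAL = timedelta(hours=1)
last_fetch: dict[str, datetime] = {}  # group name -> time of last fetch

def should_fetch(group: str, now: datetime) -> bool:
    """Skip a group if it was fetched less than an hour ago."""
    then = last_fetch.get(group)
    return then is None or now - then >= FETCH_INTERVAL

def record_fetch(group: str, now: datetime) -> None:
    """Remember when the group was last fetched."""
    last_fetch[group] = now
```

A never-fetched group is always eligible; after a fetch is recorded, the group is skipped until a full hour has passed.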

This summer I also switched the website from the Spock library to twain, as that was easier when upgrading to Debian 13 (trixie), and I had already switched my other websites to twain.

Oh, and I have added a way to "properly" rename groups, where the original name stays around - but is filled with artificial articles, all telling you to subscribe to the new name.

The number of commits added up to 34 in 2025, up from 21 in 2024, and down from 51 in 2023. I still haven't cleaned up the code and released it for public consumption.

The development in connections over the years:

            2021  2022  2023  2024  2025
Unique IPs   479   506  1358  2859  3093
IPv4         324   369  1126  1753  2291
IPv6         155   137   232  1106   802
Only once    138   225   483  1224  1224

Quite spooky that the number of IPs only connecting once was exactly the same in 2024 and 2025!

Interestingly, I have not been the one connecting the most often since 2022 - I am down to 7th in 2025! The 6 most connecting IPs have connected more than a thousand times in the past year.

The total number of articles is nearing the 10 million mark.

HTML at the end of 2025

🕝︎ - 2025-12-29 - 🟊 2 - ♺ 1
Screenshot of WordPerfect 4.2 startup - yes, I downloaded WordPerfect 4.2 and ran it in DOSBox to make this image

Once upon a time I read the HTML specifications whenever a new one was published.

Heck, I also used to try all the menus in WordPerfect (4.2), just to know what they did. Yes, computers were quite boring before the internet became widely accessible.

But I haven't been keeping up, and as I learn that every element with a hyphen in the name is valid, I thought it would be a good time to look through the list of elements defined in HTML these days.

Luckily such a list appears in the left column on MDN if you visit an element and your browser window is wide enough. Here are the ones I hadn't noticed:

Quite a lot - and I even skipped the ones I noticed last year, details and datalist, date and datetime-local!

Jobs at Novonesis in Denmark (Lyngby) #novonesis #biotech #job

🕜︎ - 2025-12-01 - 🟊 3 - ♺ 10
Novonesis logo

Our area in Novonesis has just posted 4 job openings:

  • Site Reliability Engineer - I think this is what we used to call system administrator; hands on a bunch of machines running Ubuntu in a dedicated server room, take care of a backup robot with a thousand tapes, petabytes of distributed storage, an HPC cluster with thousands of cores, etc. etc.
  • MLOps Specialist - help putting machine learning into the hands of the scientists in research and development.
  • Platform Engineer - be part of the team running our on-premise Data Science Platform, which allows everybody in research and development to store, work with, find, and analyze data. Python, databases, Jupyter notebooks, and automation.
  • Lab Operations Specialist - networks, VLANs, firewalls and making things work across lab equipment vendors' odd choices of operating systems and setups.

If you like Linux, are good at any of these things, and are curious and interested - apply! I can recommend not using AI for your application - you'll be standing out ;-)

Disabling newsreader short term caching for creative nntp uses #illuminant #gnus #nntp

🕓︎ - 2025-11-30 - 🟊 3 - ♺ 4
Partial screenshot showing like and boost icons in a Gnus summary buffer

When I bend nntp creatively, sometimes there is a slight impedance mismatch between what a newsreader assumes and what I need.

Case in point: in traditional news (and mail), articles don't change. So keeping them in memory is a nice way to improve user experience. But for my ActivityPub server, Illuminant, articles do (slightly) change - for instance when a Like (star) or Repeat (boost) activity arrives. I could have implemented those arrivals by superseding the article in question, but that would make Likes and Repeats quite noisy (similar to getting a notification for each in, say, Mastodon), and I like that they are not grabbing a lot of attention.

Instead articles just have a header that shows the current Likes and Repeats, so when my newsreader, Gnus, keeps the article in memory, it is effectively hiding any updates.

So I went on a quest to figure out how to disable this; here is the solution I arrived at:

  • In the Topic Parameters of the fediverse topic, I added (gnus-keep-backlog nil)
  • In ~/.gnus I do this:
    (defun asjo-gnus-flush-original-article-buffer (&optional arg)
      "To force reloading from server (if gnus-keep-backlog is nil)."
      (gnus-flush-original-article-buffer))
    (advice-add 'gnus-summary-show-article :before #'asjo-gnus-flush-original-article-buffer)

Gnus defaults to keeping the latest 20 articles in a backlog, so I disable that.

It also keeps the latest original article around in a buffer, so even with the backlog disabled, Gnus won't reload the same article over nntp - it just gets it from the original article buffer. I didn't find a way to turn this off, so instead I flush that buffer just before showing an article. Less elegant, but it works.

OpenAI crawling websites #ai

🕛︎ - 2025-11-14 - 🟊 1
OpenAI logo with goatse hands grabbing it

If you run a webserver you have probably got hits from OpenAI's crawlers.

They nicely announce themselves in the User-Agent header, and include a link to a page: https://openai.com/gptbot - from which I quote here:

OpenAI uses the following robots.txt tags to enable webmasters to manage how their sites and content work with AI. [...] Disallowing GPTBot indicates a site’s content should not be used in training generative AI foundation models.

For my little calendar project, Sundial, the robots.txt has since day 0 said:

User-agent: *
Disallow: /

Because, as you can imagine, there are a lot of pages in a digital calendar.
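For reference, Python's standard urllib.robotparser agrees that this robots.txt forbids everything, for every user agent, GPTBot included (the example.org URL is just a stand-in for a page on Sundial):

```python
from urllib.robotparser import RobotFileParser

# Parse the exact two-line robots.txt quoted above.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Any compliant crawler, GPTBot included, is told to stay away:
print(rp.can_fetch("GPTBot", "https://example.org/-1000/03/"))  # False
```

There is no ambiguity in the wildcard: `User-agent: *` applies to every crawler that does not have a more specific entry of its own.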

So it was surprising to see the OpenAI crawler requesting page after page from all manner of years, a lot of them negative numbers, even.

The robots.txt clearly says that crawlers should not request anything!

5 days ago I extended the robots.txt to explicitly tell the OpenAI crawlers not to crawl the calendar, in case * wasn't explicit enough for OpenAI.

They are still at it:

74.7.227.55 - - [14/Nov/2025:23:23:42 +0100] "GET /-1000/03/ HTTP/1.1" 200 6031 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.3; +https://openai.com/gptbot)"

So they are basically lying to people's faces when they say that robots.txt allows webmasters to manage how OpenAI crawls their websites.

(Yes, I realize they are trying to weasel-word their way around what robots.txt means, by writing that they interpret disallowing crawling as an indication that the content should not be used for training their models - but why crawl an infinite number of pages you won't be using!?!)

It is quite the shit show.

On the positive side, I have apparently unintentionally created an AI crawler tarpit.
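The tarpit effect follows from the calendar's structure: every month page links to its neighbours, so the page space has no edges - a crawler that ignores robots.txt can walk backwards into negative years forever. Sundial's real implementation isn't shown here; this is just a toy sketch of why the crawl never terminates:

```python
def month_links(year: int, month: int):
    """Return the neighbouring (year, month) pairs a calendar page links to."""
    prev = (year, month - 1) if month > 1 else (year - 1, 12)
    nxt = (year, month + 1) if month < 12 else (year + 1, 1)
    return [prev, nxt]

# A crawler that keeps following the "previous month" link never runs out:
page = (2025, 11)
for _ in range(3):
    page = month_links(*page)[0]
print(page)  # (2025, 8)
```

Every page it fetches offers two more valid pages, so the frontier never empties - the defining property of a tarpit.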

Lille Langebro

Today

Jules Verne (198).

Tomorrow

Mia Farrow (81).

Tuesday

Deep Blue wins against Kasparov (30).

Thursday

Charles Darwin (217).

Abraham Lincoln (217).

First colour transmission in Danish television (57).

Friday

DASK (68).

Robbie Williams (52).

Saturday

Valentine's Day.

I ❤️ Free Software Day.

2026-02-16

Anne Frank (100).

BBS (48).

2026-02-18

Yoko Ono (93).

2026-02-20

Python (35).

Mir space station (30).

2026-02-21

International Mother Language Day (27).

2026-02-22

streetkids.dk (26).

2026-02-23

mozilla.org announced (28).

Ørsted satellite in orbit (27).

2026-02-26

seistrup.dk (27).