Lanz Mining

In early 2025 a friend of mine was watching talk shows on German public broadcast media. He wondered why some people are always there, making the same points over and over again, and it annoyed him quite a bit. We chatted about this for a while, and I tried to find public data on the invitation frequencies of guests on ARD/ZDF. I didn't find much. The RND (Redaktionsnetzwerk Deutschland, "Editorial Network Germany") has published very rough statistics on the topic, but that's commercial media, so I didn't expect any publicly available data. So I did a little bit of scraping using Obsidian's Web Clipper (to avoid the accusation of automated scraping, which matters under German law). In the end I got some data and thought: why not take this as an excuse to practise data wrangling and visualisation? The project has since been living at lanz-mining.arrrrrmin.dev.

First time holding talks

In the end this was a very fun project, and I even got to present it at GPN23, which ran under the slogan Hidden Patterns, and at the days of digital freedom (TDF4) in Tübingen. GPN23 was the first time I presented something beyond studies or office scope. That was a very frightening experience for me, but the people in the chaos scene were very friendly and helpful. Experienced people like @leyrer publish extremely helpful material that prepares you for something you cannot know in advance. The support I got from his workshop material was huge; without it I would not have held this talk. Another learning: show love to the experienced people who share their knowledge. Amongst other things, these are the most important takeaways for me: the 10-20-30 rule, duplication is fine, take non-sparkling water to the stage, hand over keywords properly in presenter mode (the right and bottom edges are often cut away), and practise, practise, practise! Anyways, here is the GPN23 recording if you're interested.

Moving to Framework

After the GPN23 talk I met @stk. He's very experienced with linked data and helps with the Wikidata project. We got chatting because he had been doing experiments in parallel to my work. His idea was to find a good process by which the Wikidata community can create the data in a linked data fashion. This is exactly what I'd like to have, because first it's curated, and second it's a structure that doesn't depend on my code's interpretation of the data. That means a reliable data source I can build my data visualisation on top of. Ideally it's a SPARQL query, and all the data is available in the frontend. That in turn reminded me of Observable Framework, where you can build dashboards and data apps. I had wanted to try it for a while. So using Framework and Observable's Plot library, I rebuilt all the visualisations. Now I'm waiting for good SPARQL queries, and as long as data keeps coming, it'll remain a flexible application with all the fine things we need to build a talkshow data dashboard.
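The SPARQL-to-frontend idea could be sketched roughly like this. The endpoint is the real Wikidata Query Service, but the query body and the function names are my own illustrative assumptions, since the community's data model for talkshow episodes is still open; the flattening step follows the standard W3C SPARQL JSON results format:

```javascript
// Sketch, not the project's actual code: fetch rows from the Wikidata Query
// Service and flatten them into plain objects that Observable Plot can consume.
const endpoint = "https://query.wikidata.org/sparql";

// Placeholder query -- the real triple patterns depend on how the
// Wikidata community ends up modelling talkshow episodes and guests.
const query = `
SELECT ?guestLabel ?date WHERE {
  ?episode wdt:P31 wd:Q1261214 .  # illustrative property/item IDs
}
LIMIT 10`;

// Flatten the W3C SPARQL 1.1 JSON results format
// ({ head: { vars }, results: { bindings } }) into an array of plain objects,
// one per result row, with null for unbound variables.
function parseBindings(json) {
  return json.results.bindings.map((binding) =>
    Object.fromEntries(
      json.head.vars.map((v) => [v, binding[v] ? binding[v].value : null])
    )
  );
}

// Run the query against the endpoint and return flattened rows.
async function fetchRows() {
  const url = `${endpoint}?query=${encodeURIComponent(query)}`;
  const response = await fetch(url, {
    headers: { Accept: "application/sparql-results+json" },
  });
  return parseBindings(await response.json());
}
```

Rows in that shape drop straight into a Plot mark like `Plot.barY(rows, { x: "guestLabel" })`, which is what makes a query-driven dashboard so pleasant: the frontend never needs to know where the data came from.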