Notes from PyGrunn 2025: The View from the Stage (and the Audience)
Earlier this year, I had the pleasure of attending and speaking at the Python conference in Groningen, known as PyGrunn. It wasn't my first time in Groningen, nor my first time at the venue where the event took place. I felt quite comfortable driving there, especially since my route took me over the Afsluitdijk, the dam that ate part of the sea. To be honest, though, I wasn't fully relaxed: it had been a while since I last spoke at a conference, and it was the first time I had to do it in English. Well, for those who want to see my nervous performance, here's the recording:
Ideas Behind My Talk
As one could assume, I have a strong interest in streaming and real-time data processing, so my talk fell within that domain. The deeper idea behind it, however, was to demonstrate that there is a streaming life beyond the Java heavyweights. Don't get me wrong: I'm not telling anyone to stop using Kafka and its friends because they're too complex. What I want to say is: try to look beyond the internet hype and choose tools that match the level of complexity you're actually dealing with. Sometimes (in reality, very often) a message bus is just a message bus; it doesn't need to be the most complex or most reliable solution on the market. Likewise, you don't always need to start with bleeding-edge technologies: your idea can often be implemented as a POC using much simpler tools within a short timeframe.
These thoughts led me to the idea of a simple, Python-friendly toolbox: Python itself, Redis, Apache Superset, and so on. Combined with modern infrastructure patterns and orchestration tools like Kubernetes, this stack can deliver good scalability and performance without you having to manage your applications by hand.
The Talk
The talk itself was accompanied by a demo project and was designed to help people experience the simplicity of the ideas behind it. I chose FastStream as a stream processing framework. The package is still young, but I like the ideas and the vision behind it. It works with multiple streaming solutions, including Redis Streams (which I used for the demo) and, of course, Apache Kafka. The maintainers are committed to implementing windowing and stateful operations, which would be a great addition. Even more impressive is their plan to implement multistream support — connecting to different streaming tools and combining data. I’m excited to keep an eye on this project and see how their roadmap unfolds.
Inspiration
While at the conference, I also attended a few talks and left with a huge dose of inspiration. The first talk I attended was given by Ansgar Grüne (you can watch it here). It was about semantic vectors, which isn’t something I’m particularly familiar with. It served as a great reminder that there’s always something new to learn and room to grow beyond your current expertise.
After that, I moved to the room where my talk was scheduled later and attended an unplanned talk by Aivars Kalvans. Aivars’ presentation highlighted an important nuance: high-level abstractions, like Django ORM, often do us a disservice and can negatively impact the performance of our applications. Understanding how your persistent storage works is crucial when building data-intensive systems.
Finally, before heading home, I decided to catch a talk by Ivor Bosloper on geospatial data processing. It might come as a surprise, but I hold a bachelor’s degree in geodesy and even spent some time working in the field, later converting measurements into digital formats. Well, that was more than 15 years ago, but this topic still triggered some nostalgia. It was fascinating to see how modern tools handle connecting geospatial data from different sources to ultimately produce something useful, like a map. After the presentation, I had a short chat with Ivor and was genuinely impressed by the work they’re doing.
Debrief and Looking Forward
I really enjoyed the atmosphere of the conference, and I think the organizers did an excellent job. I'm already considering submitting something interesting for next year and, hopefully, getting accepted. 😉
Meanwhile, I’m preparing for PyData Amsterdam, where I’ll be talking about some of the more boring details of Apache Kafka. So, if you find yourself bored enough around September 26th, you’ll know where to find me.