Recent & Upcoming Talks

Letting AI Move: Robotics Demos Powered by Python (PyConDE Darmstadt 2026)

Artificial intelligence can be difficult to explain, especially to people outside of tech. We often rely on slides, diagrams, or on-screen demos, but the message does not always stick. For people encountering AI for the first time—such as students or non-technical audiences—AI terminology and concepts can remain abstract and disconnected from real-world experience.

When Ideas Fizzle Out: How AI Use Cases Finally Go Live (DSAG Technologietage 2026)

Joule Studio, Document AI, AI Core – the tools and products to implement AI successfully on SAP BTP are already in place. So what is still holding the SAP community back? There is usually no shortage of suitable ideas for AI use cases – but implementation lags behind. Often, several AI initiatives run in parallel within a company; what is missing is a central AI backlog for collecting, prioritizing, and systematically realizing use cases.

Bringing NLP to Production (PyConDE Berlin 2023)

Natural Language Processing models are fun to train but can be difficult to deploy. Their size, together with the libraries and auxiliary files they require, can be challenging, especially in a microservice environment. When services should be built as lightweight and slim as possible, large (language) models can lead to a lot of problems.
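One common mitigation for the problem the abstract describes (a minimal, hypothetical sketch, not the talk's actual solution): load the heavy model lazily and exactly once, so a slim service pays the loading cost only on the first request rather than at startup or on every call.

```python
from functools import lru_cache
import time

def _expensive_load() -> dict:
    """Stand-in for loading a large NLP model from disk (hypothetical)."""
    time.sleep(0.01)  # simulate a slow model load
    return {"name": "big-nlp-model"}

@lru_cache(maxsize=1)
def get_model() -> dict:
    # Cached: the expensive load happens only on the first call;
    # later calls return the same model object.
    return _expensive_load()

def predict(text: str) -> int:
    model = get_model()  # reuses the cached model after the first request
    # Placeholder "inference": count tokens by whitespace splitting.
    return len(text.split())
```

The same pattern works inside a web framework's request handler: the first request triggers the load, and all subsequent requests share the in-memory model.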

XAI meets Natural Language Processing (PyConDE Berlin 2022)

As people become more aware of AI systems and their impact, AI ethics and transparency grow more and more relevant. Explainable AI (XAI) is a not-so-new umbrella term for methods and techniques that make the predictions of AI systems more understandable. Which data points form the basis for model fitting? How is the model trained, and on which premises and assumptions? Which decisions and which parameters lead to the optimized outcome? And, most importantly, which model weights and decision paths result in which predictions?
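As a minimal illustration of that last question (not taken from the talk itself): for a linear model, each feature's contribution to a single prediction is simply weight × feature value, which is the core idea behind linear attribution in many XAI methods. The feature names and weights below are made up for the example.

```python
# Hypothetical linear model: weights and bias are invented for illustration.
weights = {"word_count": 0.8, "avg_sentence_len": -0.3, "rare_terms": 1.5}
bias = 0.1

def predict_with_attribution(features: dict) -> tuple[float, dict]:
    """Return the raw score and each feature's contribution to it."""
    # Each feature's contribution is weight * value.
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

score, contribs = predict_with_attribution(
    {"word_count": 2.0, "avg_sentence_len": 1.0, "rare_terms": 0.5}
)
# score = 0.1 + 1.6 - 0.3 + 0.75 = 2.15
print(score, contribs)
```

For non-linear models the same question is much harder, which is exactly where dedicated XAI techniques and libraries come in.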