Workshop

When not to trust your GenAI Agent

In this session we will explore examples of how LLMs can make mistakes, such as hallucinations or reasoning errors. We will look at the underlying reasons why these mistakes occur and what you can do to prevent or minimize them. We will touch on simple fixes such as prompt engineering and model selection, but also take a closer look at context engineering for RAG solutions and hallucination detection routines.

Speaker(s):