It's coming, and you aren't ready: your first generative AI chatbot incident. GenAI chatbots, built on LLMs, are revolutionizing customer engagement by providing real-time, automated 24/7 chat support. But when your company's virtual agent starts responding inappropriately to requests and handing out customer PII to anyone who asks nicely, who are they going to call? You. You've seen the cool prompt injection attack demos and may even be vaguely aware of preventions like LLM guardrails, but are you ready to investigate and respond when those preventions inevitably fail? Would you even know where to start?

It's time to connect traditional investigation and response procedures with the exciting new world of GenAI chatbots. In this talk, you'll learn how to investigate and respond to the unique threats targeting these systems. You'll discover new methods for isolating attacks, gathering information, and getting to the root cause of an incident using AI defense tooling and LLM guardrails. You'll come away from this talk with a playbook for investigating and responding to this new class of GenAI incidents, along with the preparation steps you'll need to take before your company's chatbot responses start going viral for all the wrong reasons.

By: Allyn Stott | Senior Staff Engineer, Airbnb

Full Abstract Available: https://ift.tt/YnoUATF
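To make the guardrail idea concrete, here is a minimal sketch of the kind of filtering layer the abstract alludes to: screening user input for prompt-injection phrasing and redacting PII-shaped strings from model output before it reaches the customer. This is a hypothetical, regex-based illustration, not the speaker's method or any vendor's API; the pattern lists, function names, and logging approach are all assumptions, and production guardrail tooling is far more sophisticated.

```python
import re
import logging

# Hypothetical guardrail sketch. Real products use classifiers and
# context-aware detection; simple regexes are only illustrative.

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
]

PII_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
}

log = logging.getLogger("chatbot.guardrail")


def screen_input(user_message: str) -> bool:
    """Return True if the message looks like a prompt-injection attempt.

    Flagged messages are logged so responders have an investigation trail.
    """
    lowered = user_message.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            log.warning("possible prompt injection: %r", user_message)
            return True
    return False


def redact_output(model_reply: str) -> str:
    """Replace PII-shaped substrings in the model's reply before sending it."""
    for label, pattern in PII_PATTERNS.items():
        model_reply = re.sub(pattern, f"[REDACTED {label.upper()}]", model_reply)
    return model_reply
```

Even a thin layer like this produces the two artifacts an incident responder needs most: a block/allow decision at the boundary, and a log entry tying the suspicious input to a session for root-cause analysis.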
Source: https://www.youtube.com/watch?v=QfUdKtkBRjA