AI red teaming has proven that eliminating prompt injection is a lost cause. Worse, many developers treat guardrails as a first-order security control and inadvertently introduce serious horizontal and vertical privilege escalation vectors into their applications. As the attack surface of AI-driven applications grows with the complexity and agency of their model capabilities, developers must adopt new strategies to eliminate these risks before they become ingrained across application stacks.

Our team has surveyed dozens of AI applications, exploited their most common risks, and discovered a set of practical architectural patterns and input validation strategies that completely mitigate natural language injection attacks. This talk will address the root cause of AI-based vulnerabilities, showcase real exploits that have led to critical data exfiltration, and present threat modeling strategies proven to remediate AI-based risks.

By the end of the presentation, attendees will understand how to design and test complex agentic systems and how to model trust flows in agentic environments. They will also understand which architectural decisions can mitigate prompt injection and other model manipulation risks, even when AI systems are exposed to untrusted sources of data.

By: David Brauchler III | Technical Director | AI/ML Security Practice Lead, NCC Group

Presentation Materials Available at: https://ift.tt/5IhPjHQ
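One architectural idea the abstract alludes to is keeping authorization decisions deterministic and outside the model: since a prompt-injected model can propose arbitrary actions, any tool call it emits is treated as untrusted input and checked against the authenticated user's privileges. The sketch below is illustrative only and not taken from the talk; the names (`User`, `ToolCall`, `TOOL_POLICY`, `execute_tool_call`) are hypothetical.

```python
# Hypothetical sketch: deterministic tool authorization outside the model.
# All names here are illustrative assumptions, not from the presentation.
from dataclasses import dataclass, field

# Deterministic policy: which roles may invoke which tools.
TOOL_POLICY = {
    "search_docs": {"viewer", "admin"},
    "delete_record": {"admin"},
}

@dataclass
class User:
    name: str
    role: str

@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)

def authorize(user: User, call: ToolCall) -> bool:
    """Check the authenticated *user's* privileges, never the model's claims.

    The model may have been manipulated by injected text, so a tool call
    it proposes is validated against a fixed policy keyed on the caller,
    preventing horizontal and vertical privilege escalation via the model.
    """
    return user.role in TOOL_POLICY.get(call.tool, set())

def execute_tool_call(user: User, call: ToolCall) -> str:
    """Gate every model-proposed action through the deterministic check."""
    if not authorize(user, call):
        raise PermissionError(f"{user.name} may not call {call.tool}")
    return f"executed {call.tool}"
```

The point of the design is that even a fully compromised model cannot exceed the privileges of the user on whose behalf it runs.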
source https://www.youtube.com/watch?v=iLX4OdAEznY