Tuesday, 21 April 2026

SecTor 2025 | Is Your Data Canadian Yet?

As governments, regulators, and customers sharpen their focus on data sovereignty, the gap between marketing promises and technical realities continues to grow. In this engaging discussion, we'll explore the evolving landscape of digital sovereignty – from China's isolationist model to the EU's regulatory rigor, and Canada's ongoing search for its place on the map. We'll unpack what sovereignty actually means (and doesn't), examine the risks of "sovereign-washing," and share practical guidance on how to communicate clearly as a vendor and remain diligent as a customer. Whether you're building, buying, or just trying to keep up, this talk offers grounded insight into the realities of sovereignty – and how to strike the right balance between business goals, regulatory pressure, and risk management. By: Kevin Fox | Customer Cybersecurity Advocate, Aiven Jamie Arlen | CISO, Aiven.io Presentation Materials Available at: https://ift.tt/kNzXyFO

source https://www.youtube.com/watch?v=R0PM_gjSg7k

SecTor 2025 | Interactive Network Visualization of Data Poisoning Attacks

What if we could not only visualize poisoned training data, but also interact with it? As data poisoning becomes a growing threat to the integrity of machine learning systems, understanding its effects requires more than static visualizations. This talk introduces GraphLeak, an open-source, interactive web tool designed to visualize how poisoned training data alters network structure. We will explore how adversarial data manipulation impacts graph-based representations. Building on network science concepts, this session will go deeper: not just showing how poisoning affects structure, but allowing users to directly interact with poisoned vs. clean datasets in real time. We'll walk through how the app ingests CSV or JSON data, builds networks, and renders them via layouts. The presentation of this tool emphasizes accessibility through making data poisoning tangible and transparent, allowing security practitioners and non-experts to understand how data poisoning attacks distort model behavior. By making threats visible, we make the defenses of these threats more approachable, democratizing insight into machine learning vulnerabilities and supporting the development of more robust, transparent systems. By: Maria Khodak | Security Engineer Presentation Materials Available at: https://ift.tt/Si7joR5

source https://www.youtube.com/watch?v=VSOmZKHbbew

Black Hat Stories | Or Yair, Security Research Team Lead at SafeBreach

In this episode of Black Hat Stories, we sit down with Or Yair, the Security Research Team Lead at SafeBreach. With multiple years of experience attending Black Hat — including presenting at Black Hat Europe 2025 — Or shares his unique perspective on vulnerability research, curiosity, and the real purpose of the Black Hat community. 🎥 Watch the full story: https://youtu.be/rNtuyrXPIc0?si=zgkZJsWfJQWImoM3 🔗 Visit our site: https://blackhat.com/ 📧 Subscribe to our free newsletter: https://ift.tt/JrKGTo9 #BlackHatStories #BlackHat #cybersecurity

source https://www.youtube.com/shorts/_4VR_GnbLbo

Monday, 20 April 2026

SecTor 2025 | EDR Bypass Testing: A Systematic Approach to Validating Endpoint Defenses

Endpoint Detection and Response (EDR) solutions have become a cornerstone of modern cybersecurity strategies. However, their very success has made them prime targets for attackers who now routinely incorporate EDR evasion and bypass techniques into their toolsets, as evidenced by recent cybercrime leaks. This escalating threat necessitates a shift from reactive defense to proactive, systematic validation of EDR capabilities. This presentation will detail the comprehensive EDR bypass tracking and testing program developed and implemented at eSentire. We will explore the common EDR attack surfaces (user-mode components, kernel callbacks, tamper protections like PPL) and general bypass methodologies. The core of the talk will introduce our systematic approach, including the EDR Bypass Matrix—an internal framework for tracking techniques and test results across a group of supported EDR products. We will showcase our custom testing methodology, automation infrastructure (including a Sandbox Manager application), and provide concrete examples of bypasses, along with their variants and mitigation strategies. The session aims to equip attendees with insights into building robust EDR testing programs and fostering a more resilient security posture. By: Jacob Gajek | Principal Security Researcher, eSentire Ryan Hasmatali | Software Developer, eSentire Presentation Materials Available at: https://ift.tt/p4it8lg

source https://www.youtube.com/watch?v=59GFy4-gIcQ

SecTor 2025 | Tracing Adversary Steps through Cyber-Physical Attack Lifecycle

Cyber operations are increasingly being militarized, with cyber commands being moved under national Ministries/Departments of Defense or simply military forces. In this new setting, cyber-physical security is destined to become a potent weapon. But is the mostly civilian defense ready to deal with such a capable adversary? Ten years ago, at BH USA 2015, I presented a cyber-physical attack lifecycle, the first and to date the only attack lifecycle which specifically describes the steps the attacker needs to take to architect and practically implement an attack that leads to a desired physical impact. After the initial release and highly positive feedback, I further refined the attack lifecycle and extensively verified it on several complex cyber-physical systems such as traffic lights and moving bridge systems. The truth is that, to date, mostly state-associated types of users benefited from the framework, while the civilian sector is still struggling to find pragmatic approaches to cyber-physical risk assessments and adversary emulation exercises. Vendors similarly lack a structured approach to assess their solutions for both exploitability and post-exploitability. This talk will present the finalized version of the cyber-physical attack lifecycle, with two attack stages, and illustrate its utility with the example of designing a targeted attack on a Real-Time Locating System (RTLS), a class of localization solutions used for, e.g., medical patients' location tracking, safety geofencing, contact tracing, and more. Starting from a vulnerability in a communication protocol and ending with fooling the solution operators, the talk will demonstrate numerous nontrivial hurdles the attacker needs to overcome to reach the desired outcome. Spoiler: math and geometry are involved. The talk will conclude with a close examination of how rapid advancements in AI technologies are expected to streamline the process of designing high-precision cyber-physical attacks by automating previously manual or highly laborious tasks and partially replacing the need for SME inputs. Last but not least, the talk touches upon the relevant threat landscape in Canada to date. By: Marina Krotofil | Cyber Security Engineer, Critical Infrastructures, mk|security Presentation Materials Available at: https://ift.tt/hT1FI0k

source https://www.youtube.com/watch?v=12-iW20pBuI

Sunday, 19 April 2026

SecTor 2025 | Unmasking a North Korean IT Farm

This session exposes a real-world covert remote-control system developed by a North Korean IT worker operating undetected within a legitimate organization. The forensic investigation revealed a sophisticated ecosystem that leveraged Address Resolution Protocol (ARP)-based payload delivery, WebSockets for stealthy command and control, and Zoom for covert persistence and remote access. Through technical analysis and a live attack demo, we'll unpack how the attacker: -Built an advanced C2 infrastructure using WebSockets to control infected machines. -Used ARP packets as a payload transport mechanism, embedding commands inside network traffic to execute commands without traditional TCP/IP communication. -Weaponized Zoom as a Remote Access Trojan (RAT), launching meetings without user interaction and auto-approving remote-control access via HID injection techniques. -Covertly executed commands through a Python script, allowing keystroke and mouse movement emulation, bypassing endpoint logging. -Enabled remote execution through a command client, which persistently reconnected to the C2 when the user was active. By reverse-engineering the threat actor's toolkit, the investigation uncovered previously undocumented techniques for network protocol abuse and application-layer persistence. In this session, we'll not only highlight how these tactics were deployed but also how defenders can detect and disrupt them before they escalate into full-scale espionage. Attendees will leave with a deeper understanding of offensive tradecraft and practical strategies for detection, threat hunting, and forensic response. By: Avi Sambira | Director, Client Leadership, Sygnia Full Presentation Materials Available at: https://ift.tt/oNFdtXM

source https://www.youtube.com/watch?v=wUQJ5pjZDgo

SecTor 2025 | How Adversaries Beat User-Mode Protection Engines for Over a Decade

Following the largest global IT outage in history in July 2024, many took to the public stage advocating to prohibit endpoint security vendors from deploying kernel-based components, even prompting regulators to weigh in. That launched an effort to evaluate the impact of the proposed design shift, as many endpoint-oriented security solutions, from different malware analysis tools to various commercial products (like AVs, EDRs and sandboxes), already include user mode-based engines. The research started with examining open-source projects and publications such as SysWhispers and FireWalker, and continued by analyzing and reverse-engineering malware families of all types in the wild, including infamous names like Emotet, SmokeLoader, HijackLoader, FormBook, DarkGate, Hive ransomware and Winnti, among others. Over 55 different data sources were ingested, all in all, mapping the entire threat landscape and tracking the evolution of adversaries for more than a decade. Curating the ultimate collection on the subject yielded in-depth understanding and insights into attackers' tradecraft and made it clear that this is the most prolific post-exploitation technique yet, surpassing even code injection methods. This session will explore all 27 unique methods which security researchers and malware authors have developed to beat user mode-based protection engines, cataloged under 3 main tactics: Hook Evasion, Argument Forgery and Engine Disarming. The trade-offs of the various methods will be highlighted as well. In addition, the session will include detection schemes, focusing on runtime and forensic indicators, to aid malware researchers, incident responders, threat hunters and detection engineers tackling these issues. By: Omri Misgav | Security Researcher, Independent Presentation Materials Available at: https://ift.tt/1m24De7

source https://www.youtube.com/watch?v=ox2lq9vsC8Q

Saturday, 18 April 2026

SecTor 2025 | Investigate & Respond to Attacks on GenAI Chatbots

It's coming, and you aren't ready—your first generative AI chatbot incident. GenAI chatbots, leveraging LLMs, are revolutionizing customer engagement by providing real-time, automated 24/7 chat support. But when your company's virtual agent starts responding inappropriately to requests and handing out customer PII to anyone who asks nicely, who are they going to call? You. You've seen the cool prompt injection attack demos and may even be vaguely aware of preventions like LLM guardrails; but are you ready to investigate and respond when those preventions inevitably fail? Would you even know where to start? It's time to connect traditional investigation and response procedures with the exciting new world of GenAI chatbots. In this talk, you'll learn how to investigate and respond to the unique threats targeting these systems. You'll discover new methods for isolating attacks, gathering information, and getting to the root cause of an incident using AI defense tooling and LLM guardrails. You'll come away from this talk with a playbook for investigating and responding to this new class of GenAI incidents and the preparation steps you'll need to take before your company's chatbot responses start going viral—for the wrong reasons. By: Allyn Stott | Senior Staff Engineer, Airbnb Presentation Materials Available at: https://ift.tt/jPmDCT9

source https://www.youtube.com/watch?v=Iah5epX_3AY

SecTor 2025 | From Days to Hours: Accelerating Cyber Threat Response with AI Agents

Identifying and responding to emerging threats before they escalate into widespread attacks is one of the hardest challenges in cybersecurity today. Threats often surface first in informal channels, long before official advisories are published. By the time traditional detection systems catch up, it's often too late. In this session, we will present a collaborative AI-agent framework built to act as a threat intelligence and threat hunting accelerator. The system ingests and semantically processes large volumes of structured and unstructured data - including CISA alerts, CVE databases, vendor reports, EXA and Perplexity search results, and social media signals. Using a custom LLM-based clustering engine, the system groups early threat signals by topic, CVE, and campaign, allowing for real-time insight into what's emerging across the security landscape. Each agent in the framework plays a specialized role: surfacing relevant threats, analyzing and prioritizing them based on relevance and severity, extracting TTPs and IOCs, and generating hunting queries. We'll walk through the system design, share implementation insights (including hallucination control, prompt chaining and evaluation), and showcase how this setup enables teams to reduce the time between "first appearance" and "first action" to hours or even minutes. Attendees will leave with a deep understanding of how LLM-based agents can be used as proactive actors in cyber threat intelligence and response workflows. By: Yuval Zacharia | Director R&D, Security Research & AI, Hunters Presentation Materials Available at: https://ift.tt/0ulbgK9

source https://www.youtube.com/watch?v=Q1-9IABavgw

SecTor 2025 | What Happens When Your Digital Voice Clone Goes Rogue

"Speak for Me" was envisioned as a Windows accessibility feature designed to replicate a user's voice with just a few samples, storing it locally as an AI model trained on the user's voice. This innovative feature aimed to enhance the existing Text-To-Speech interface, offering capabilities such as creating a virtual microphone for seamless use in conferencing apps like Microsoft Teams. Our team performed an internal security audit of this feature, revealing that it is very hard to protect. The potential attacks spanned across multiple vectors. Ultimately, our audit led to this feature being released with Custom Neural Voices (CNV) Azure service only. In this session, we will walk you through the various attack scenarios and vulnerabilities found, showcasing the difficulties of protecting AI based user voices on client devices. We will start our presentation with a number of critical vulnerabilities discovered in the project. These include classical remote code execution on the victims' machines, but more interestingly, either directly stealing the model itself, or abusing the cloud infrastructure to obtain a model of arbitrary persona. Both client and web side of the app had multiple defensive mechanisms such as consent voice recording, model encryption, watermarking embedded into voice samples and others that were supposed to prevent the infrastructure from being abused to produce deepfakes by bad actors. All of these could easily be bypassed and ultimately, the attacker could gain the ability to impersonate a victim with relatively low effort. This project will serve as a case study to demonstrate the challenges and vulnerabilities of AI security on devices, particularly on generic Windows platforms that were not designed to protect highly sensitive AI models. We will examine the current state of the Windows security ecosystem and its relevance to AI model security. By: Andrey Markovytch | Senior Security Researcher, Microsoft Presentation Materials Available at: https://ift.tt/096XJKz

source https://www.youtube.com/watch?v=49odcoAoqYw

Thursday, 16 April 2026

Black Hat Stories | David Oswald, Cyber Security Professor at Durham University

In this episode of Black Hat Stories, David Oswald, Professor in Cyber Security at Durham University, shares why Black Hat is essential for academics at every level. With a background spanning research and real-world security challenges, David has attended Black Hat multiple times and sees it as a unique bridge between academia and industry. Unlike traditional academic conferences, Black Hat offers practical, hands-on insights that bring fresh perspective to research and teaching. Hear David's perspective on how Black Hat connects theory with real-world application and why it's a must-attend for anyone in security and academia. 🎟️Join us at Black Hat USA: https://ift.tt/64pvRUJ 🔗 Visit our site: https://blackhat.com/ 📧 Subscribe to our free newsletter: https://ift.tt/5sULWKB #BlackHatStories #BHEU #BlackHat #cybersecurity

source https://www.youtube.com/watch?v=U6ZV6m4hOaQ