Friday, 20 February 2026

Black Hat USA 2025 | Exploiting DNS for Stealthy User Tracking

Who needs AI when raw statistics can do the job just as well—if not better? Every Domain Name System (DNS) query leaves a trail, and with the right statistical techniques, you can uncover user behaviors, fingerprint devices, and even track individuals across networks. This session dives into how simple yet powerful methods like frequency analysis, correlation metrics, and anomaly detection can turn DNS traffic into a goldmine of intel. We dissected over 1.5 billion DNS requests from 30,000 iOS and Android devices over a 30-day period, and the results are eye-opening. Within just minutes of observing DNS traffic, devices begin to reveal their unique fingerprints. Given only a few hours, accurate identification becomes a certainty. But here's where it gets even more interesting—iOS devices flood the network with repetitive DNS requests, hitting the same domains over and over, while Android devices operate nearly 10x more efficiently, generating far less noise. This difference isn't just a curiosity—it's the key to our findings. With as little as 20% of DNS traffic for both iOS and Android, device tracking becomes shockingly precise. Our research shows that simple statistical techniques are more than enough to achieve highly accurate tracking—no need for AI or complex models. This paves the way for real-world applications, especially in resource-constrained environments like routers, and, in general, in embedded systems. The combination of simplicity, accuracy, and scalability makes the technique a great candidate for large-scale deployments. Of course, where there's a method, there's a defense. We'll also explore countermeasures to mitigate these vulnerabilities. To this end, DNSSEC and other secure protocols offer some level of protection—though as we'll demonstrate, true privacy is much harder to achieve than most expect. By: Bela Genge | Senior Security Researcher, Bitdefender Ioan Padurean | Junior Security Researcher, Bitdefender Dan Macovei | Director of Product Management Presentation Materials Available at: https://ift.tt/5XLF28r

source https://www.youtube.com/watch?v=xQy1YcLK1Ak

Black Hat USA 2025 | From Prompts to Pwns: Exploiting and Securing AI Agents

The flexibility and power of large language models (LLMs) are now well understood, driving their integration into a wide array of real-world applications. Early use cases, such as retrieval-augmented generation (RAG), followed rigid, predictable workflows where models interacted with external systems in tightly controlled sequences. While these systems were easier to optimize and secure, they often resulted in inflexible, single-purpose tools. In contrast, modern agentic systems leverage expanded input modalities, such as speech and vision, and use more sophisticated inference strategies, such as dynamic chain-of-thought reasoning. These advancements allow them to act independently on users' behalf to automate increasingly complex workflows, often involving sensitive data and systems. As their utility increases, so too does their attack surface: more usability means broader access to data, greater ability to execute actions, and significantly more opportunity for exploitation. In this talk, we will explore the emerging security challenges posed by agentic AI systems. We demonstrate the implications of this significant shift through internal assessments and proof-of-concept exploits developed by our AI Red Team, targeting a range of agentic applications, from popular open-source tools to enterprise systems. These exploits all leverage the same core finding: that LLMs are uniquely vulnerable to malicious input, and exposure to such input can have a significant impact on the trust of downstream actions. In short, we lay out what can go wrong when agentic systems vulnerable to adversarial inputs are deployed within enterprise environments. We conclude by discussing how NVIDIA addresses the security of emerging agentic workflows, and our principles for designing agent interactions in ways that mitigate risk, emphasizing a security-first foundation for safe and scalable adoption. By: Rebecca Lynch | Offensive Security Researcher, NVIDIA Rich Harang | Principal Security Architect, NVIDIA Presentation Materials Available at: https://ift.tt/FjcC9HR

source https://www.youtube.com/watch?v=zipgr080EQU

Thursday, 19 February 2026

Black Hat Europe 2025 Highlights | Record‑Breaking 4,500+ Attendees

Setting a new attendance record with more than 25% growth, Black Hat Europe 2025 brought together more than 4,500 security professionals from across the globe, showcasing the research, insights, and innovations shaping the future of cybersecurity. This year’s event delivered: ✔️Cutting‑edge content from top researchers and practitioners ✔️Hands‑on learning through labs, workshops, and demos ✔️A high‑energy Business Hall featuring the world’s leading security organizations From breakthrough briefings to unmatched networking opportunities, Black Hat Europe 2025 set the stage for the next evolution of cyber defense. Upcoming Black Hat events: https://ift.tt/CBkqYPK Become a sponsor: https://ift.tt/QPgoqcV #BlackHatEurope #BHEU #Cybersecurity #InfoSec #BlackHat

source https://www.youtube.com/watch?v=tfptvW07N-E

Wednesday, 18 February 2026

Black Hat USA 2025 | Locknote: Conclusions & Key Takeaways from Black Hat USA 2025

Join Black Hat USA Review Board Members for a compelling discussion on the most pressing issues facing the InfoSec community today. This distinguished panel will analyze key conference takeaways and provide valuable insights on how emerging trends will shape future security strategies. Don't miss this opportunity to hear candid perspectives from some of cybersecurity's most influential voices. By: Heather Adkins | Security Engineering Daniel Cuthbert | Global Head of Security Research Aanchal Gupta | Chief Security Officer, Adobe Jason Haddix | CEO, Hacker & Trainer, Arcanum Information Security Jeff Moss | Founder, Black Hat and DEF CON Full Session Details Available at: https://ift.tt/Ne6kX0d

source https://www.youtube.com/watch?v=DmXlafnjn0M

Tuesday, 17 February 2026

Black Hat USA 2025 | Advanced Active Directory to Entra ID Lateral Movement Techniques

Is there a security boundary between Active Directory and Entra ID in a hybrid environment? The answer to this question, while still somewhat unclear, has changed over the past few years as there has been more hardening of how much "the cloud" trusts data from on-premises. The reason for this is that many threat actors, including APTs, have been making use of known lateral movement techniques to compromise the cloud from AD. In this talk, we will take a deep dive together into Entra ID and hybrid AD trust internals. We will introduce several new lateral movement techniques that allow us to bypass authentication, MFA and stealthily exfiltrate data using on-premises AD as a starting point, even in environments where the classical techniques didn't work. All these techniques are new, not really vulnerabilities, but part of the design. Several of them have been remediated with recent hardening efforts by Microsoft. Very few of them leave useful logs behind when abused. As you would expect, none of these "features" are documented. Join me for a wild ride into Entra ID internals, undocumented authentication flows and tenant compromise from on-premises AD. By: Dirk-jan Mollema | Security Researcher, Outsider Security Presentation Materials Available at: https://ift.tt/X4g86EP

source https://www.youtube.com/watch?v=rzfAutv6sB8

Friday, 13 February 2026

Black Hat USA 2025 | Keynote: Threat Modeling and Constitutional Law

The legal system is terrible at threat modeling. It trusts the wrong insiders, overreacts to outsider threats, and is stodgy and sclerotic when circumstances shift. In this talk, Jennifer Granick examines constitutional law doctrines' longstanding mistakes in threat modeling—mistakes that civil libertarians have warned about for years. These missteps make it particularly difficult to for Congress, the Courts, and the public to navigate the evolving legal and political landscape ushered in by the Trump Administration. By: Jennifer Granick | Surveillance and Cybersecurity Counsel, ACLU Full Session Details Available at: https://ift.tt/y7hK3XE

source https://www.youtube.com/watch?v=H0bM5q5TtC0

Wednesday, 21 January 2026

Your Traffic Doesn't Lie: Unmasking Supply Chain Attacks via Application Behaviour

Supply chain compromises like the 2020 SolarWinds breach have shown how devastating and stealthy these attacks can be. Despite advances in provenance checks (i.e., SLSA), SBOMs, and vendor vetting, organizations still struggle to detect compromises that come in via trusted apps. In this talk, we unveil BEAM (Behavioral Evaluation of Application Metrics), an open source tool that contains a novel technique for detecting supply chain attacks purely from web traffic—no endpoint agents, no code instrumentation, just insights from the network data you're probably already collecting. We trained BEAM using over 40 billion HTTP/HTTPS transactions across thousands of global organizations. By applying LLMs to map user agents to specific apps, extracting 65 behavioral signals, and building application-specific baselines, BEAM detects deviations with over 95% accuracy—and up to 99% for highly predictable applications. It's fast, automated, and doesn't rely on vendor cooperation or manual tuning. We'll walk through how BEAM works under the hood: from enriching noisy traffic data to behavioral modeling and surfacing anomalies that reveal active compromises. Alongside prebuilt models for eight popular applications, we'll also show how organizations can build custom models for internal apps, enabling scalable monitoring for both off-the-shelf and bespoke software. This approach is new, highly effective, and purpose-built for threats that continue to bypass traditional defenses. By focusing on how applications behave—not just who built them or where they came from—BEAM gives defenders a powerful new signal against a threat that's been challenging to defend against. This session includes a live demo and practical takeaways for defenders, researchers, and security engineers alike. By: Colin Estep | Principal Engineer, Netskope Dagmawi Mulugeta | Staff Threat Research Engineer, Netskope Presentations Materials Available at: https://ift.tt/pAB1ezW

source https://www.youtube.com/watch?v=UGB5W-yJCrQ

Wednesday, 7 January 2026

Weaponizing Apple AI for Offensive Operations

Apple's on device AI frameworks CoreML, Vision, AVFoundation enable powerful automation and advanced media processing. However, these same capabilities introduce a stealthy attack surface that allows for payload execution, covert data exchange, and fully AI assisted command and control operations. This talk introduces MLArc, a CoreML based C2 framework that abuses Apple AI processing pipeline for payload embedding, execution, and real time attacker controlled communication. By leveraging machine learning models, image processing APIs, and macOS native AI features, attackers can establish a fully functional AI assisted C2 without relying on traditional execution mechanisms or external dependencies. Beyond MLArc as a standalone C2, this talk explores how Apple's AI frameworks can be weaponized to enhance existing C2s like Mythic, providing stealthy AI assisted payload delivery, execution, and persistence. This includes the below list of Apple AI framework used for embedding Apfell Payload. CoreML - Embedding and executing encrypted shellcode inside AI models. Vision - Concealing payloads/encryption keys inside AI processed images and retrieving them dynamically to bypass detection. AVFoundation - Hiding and extracting payloads within high frequency AI enhanced audio files using steganographic techniques. This research marks the first public disclosure of Apple AI assisted payload execution and AI driven C2 on macOS, revealing a new class of offensive tradecraft that weaponizes Apple AI pipelines for adversarial operations. I will demonstrate MLArc in action, showing how Apple's AI stack can be abused to establish fileless, stealthy C2 channels that evade traditional security measures. This talk is highly technical, delivering new research and attack techniques that impact macOS security, Apple AI exploitation, and red team tradecraft. By: Hariharan Shanmugam | Lead Red Teamer Full Session Details Available at: https://ift.tt/d6lYhvI

source https://www.youtube.com/watch?v=UooCY59nQSQ

Tuesday, 6 January 2026

From Spoofing to Tunneling: New Red Team's Networking Techniques for Initial Access and Evasion

Gaining initial access to an intranet is one of the most challenging parts of red teaming. If an attack chain is intercepted by an incident response team, the entire operation must be restarted. In this talk, we introduce a technique for gaining initial access to an intranet that does not involve phishing, exploiting public-facing applications, or having a valid account. Instead, we leverage the use of stateless tunnels, such as GRE and VxLAN, which are widely used by companies like Cloudflare and Amazon. This technique affects not only Cloudflare's customers but also other companies. Additionally, we will share evasion techniques that take advantage of company intranets that do not implement source IP filtering, preventing IR teams from intercepting the full attack chain. Red teamers could confidently perform password spraying within an internal network without worrying about losing a compromised foothold. Also, we will reveal a nightmare of VxLAN in Linux Kernel and RouterOS. This affects many companies, including ISPs. This feature is enabled by default and allows anyone to hijack the entire tunnel, granting intranet access, even if the VxLAN is configured on a private IP interface through an encrypted tunnel. What's worse, RouterOS users cannot disable this feature. This problem can be triggered simply by following the basic VxLAN official tutorial. Furthermore, if the tunnel runs routing protocols like BGP or OSPF, it can lead to the hijacking of internal IPs, which could result in domain compromises. We will demonstrate the attack vectors that red teamers can exploit after hijacking a tunnel or compromising a router by manipulating the routing protocols. Lastly, we will conclude the presentation by showing how companies can mitigate these vulnerabilities. Red teamers can use these techniques and tools to scan targets and access company intranets. This approach opens new avenues for further research. By: Shu-Hao Tung | Threat Researcher, Trend Micro Presentation Materials Available at: https://ift.tt/2ldANpQ

source https://www.youtube.com/watch?v=terPgwzk3dc

Monday, 5 January 2026

Clustered Points of Failure - Attacking Windows Server Failover Clusters

Windows Server Failover Cluster (WSFC) implementations represent a critical yet underexamined attack surface in enterprise environments. This research exposes how WSFC's architectural design inadvertently creates exploitable abuse paths and presents novel attack methodologies demonstrating how the compromise of a single cluster node can lead to complete cluster takeover, lateral movement across clustered infrastructure, and ultimately, domain compromise. This Briefing will present previously undiscovered techniques for extracting and leveraging cluster credentials, manipulating Kerberos authentication, and exploiting excessive permissions granted to cluster objects. This "set it and forget it" high-availability infrastructure represents a significant blind spot for organizations. You will leave with a better understanding of WSFC's internal security architecture, strategies for enumerating and abusing these new attack paths, and concrete defensive guidance for protecting organizations from these new abuses. By: Garrett Foster | Senior Security Researcher, SpecterOps, Inc.

source https://www.youtube.com/watch?v=FSRmPwfMYs0

Friday, 2 January 2026

Out Of Control: How KCFG and KCET Redefine Control Flow Integrity in the Windows Kernel

Virtual Secure Mode, or VSM, on Windows marked the most significant leap in security innovation in quite some time, allowing the hypervisor to provide unprecedented protection to the Windows OS. With VSM features like Credential Guard, preventing in-memory credential theft and Hypervisor-Protected Code Integrity (HVCI), protecting against unsigned kernel-mode code, VSM has significantly reshaped the way many offensive security practitioners and threat actors alike think about tradecraft. In the exploitation world, similar shifts have occurred with both Control Flow Guard (CFG) and Intel Control Flow Enforcement Technology (CET) being readily available in user-mode. However, we don't hear or read much about their kernel-mode counter parts, KCFG and KCET. Why is this if CFG and CET are both relatively well-established exploit mitigations in user-mode? At the time when CFG in user-mode was first released, kernel mode was the highest security boundary available on Windows – therefore making the implementation of CFG, or any CFI mitigation in kernel mode, impossible. However, since we now have a higher security boundary on Windows, thanks to the hypervisor, it is now possible to robustly implement CFG and CET in the Windows kernel! This talk will cover what kernel mode CFI would look like without the presence of a hypervisor; why KCFG and KCET rely on VTL 1; how these mitigations differ from their user-mode counterparts; known limitations which exist today, including the recent deprecation of the next iteration of CFG known as eXtended Control Flow Guard (XFG); and the future of kernel-mode exploitation on Windows in the presence of KCFG and KCET. By: Connor McGarr | Software Engineer, Prelude Security Presentation Materials Available at: https://ift.tt/5jlwhHR

source https://www.youtube.com/watch?v=LflYlvJ4vSU