For more than five years, firewall vendors have been under a persistent, cyclical struggle against a well-resourced and relentless China-based adversary that has expended considerable resources developing custom exploits and bespoke malware expressly for the purpose of compromising enterprise firewalls in customer environments. In this first-of-its-kind presentation, I will walk attendees through the complete history of the campaign, detailing the full scope of attacks and the countermeasures one firewall vendor developed to derail the threat actors. The presentation will provide rich detail into the exploit development targeting specific firewalls, how the exploits were deployed and leveraged to compromise customers, and characteristics of the malware deployed inside the firewall's operating system as a result of these attacks. Fundamental to this presentation is the fact that the adversary behind this campaign has not targeted only one firewall vendor: Most of the large network security providers in the industry have been targeted multiple times, using many of the same tactics and tools. So this serves not merely as a warning to the entire security industry, but as an urgent call to the companies that make up this industry to collectively combat this ongoing problem. Because at the end of the day, we all face the same threat, and we cannot hope to withstand the tempo and volume of these attacks alone. We must work together. By: Andrew Brandt | Hacker Presentation Materials Available at: https://ift.tt/5oFN68L
source https://www.youtube.com/watch?v=z4COrX9YHcU
The Cyber Stream
Latest News for Cyber Security & Technology
Friday, 13 March 2026
Thursday, 12 March 2026
Black Hat USA 2025 | Clue-Driven Reverse Engineering by LLM in Real-World Malware Analysis
IDA Pro feat. MCP (Model Context Protocol) is truly amazing! Through interactive chat windows, LLM can automatically complete reverse engineering tasks and even assist in generating malware analysis reports. At first glance, this technology seems to offer malware analysts the ability to "clock out early." But is this truly the case? Not quite! Malware analysis is not a CTF competition, the adversaries certainly won't reveal the correct answer. In the absence of ground truth, analysts must meticulously trace every step performed by the LLM, deeply understanding why the LLM reached a particular conclusion. Moreover, LLMs' generative nature tends to prioritize producing outputs whenever possible, even when lacking sufficient information, resulting in reasonable yet incorrect answers. In complex programs with highly interdependent functions, incorrect answers can snowball into catastrophic mistakes, ultimately leading to entirely inaccurate reverse engineering results. Therefore, blindly relying on LLM output is unreliable. Analysts often need to spend even more time verifying and correcting these outputs to ensure accuracy and reliability. To address these challenges in LLMs in automated malware analysis, we propose a clue-driven reverse engineering framework. By generating high-quality clues, such as API information and magic constants, in decompiled code. Then, devising analysis strategies based on these clues, our framework effectively reduces the errors generated by LLMs in uncertain situations and significantly improves the accuracy and stability of the results. Additionally, we designed validation mechanisms by integrating entropy-based evaluation methods with attention tracking technology to ensure that LLM outputs are based on reliable clues, preventing the further propagation of errors. This study demonstrates the potential of combining clue generation, clue-driven analysis strategies, and stabilization mechanisms to deliver novel, efficient technical solutions for malware analysis. By: Tien-Chih Lin | Research Team Lead, CyCraft Technology Wei Chieh Chao | Senior Cybersecurity Researcher, CyCraft Technology Zhao-Min Chen | Cybersecurity Researcher, CyCraft Technology Presentation Materials Available at: https://ift.tt/Lm5WafA
source https://www.youtube.com/watch?v=Ofo2RRaqVwU
source https://www.youtube.com/watch?v=Ofo2RRaqVwU
Black Hat USA 2025 | Hack to the Future: Owning AI-Powered Tools with Old School Vulns
Harder, Better, Faster, Stronger isn't just the title of a Daft Punk song; it's also what developers hope to get out of the current wave of generative AI. As developers work to shove AI into everything and optimize every aspect of their workflow, the hard-won security lessons of the past are discarded in favor of shiny new objects, with devastating consequences. AI-powered developer tools and agents are meant to add efficiency and speed, but can also add attack surface and amplify vulnerabilities, creating issues where there weren't any previously. These tools often erode security boundaries, contain excess functionality, or are deployed with elevated permissions, a seemingly happy trade for developers looking to optimize. However, this trade creates real-world consequences for organizations and development teams who may have no idea how vulnerable the tools they use are or how exposed they may be. In this presentation, we demonstrate the impact of the regression away from common security practices with vulnerabilities we identified in developer productivity tools used by millions of developers across the globe. We spotlight specific trends and themes from the current wave of generative AI-based development and cover these attack categories, allowing others to quickly focus on addressing what matters most. We also cover generative AI-based quirks in operations and architecture that will continue to lead to security issues in the future. If you missed what it was like to hack in the early days when everything was insecure, now's your chance to go back in time! By: Nathan Hamiel | Senior Director of Research, Kudelski Security Nils Amiet | Lead Prototyping Engineer, Kudelski Security Full Presentation Materials Available at: https://ift.tt/Hy4cTJA
source https://www.youtube.com/watch?v=oaU6a8nuyT8
source https://www.youtube.com/watch?v=oaU6a8nuyT8
Black Hat USA 2025 | How to Secure Unique Ecosystem Shipping 1 Billion+ Cores?
Security research has historically been focused on securing well-known, widely replicated ecosystems—where problems and solutions are shared across the industry. But what happens when you build something no one else has? How do you secure an architecture that's both proprietary and deployed at billion-core scale? In 2016, NVIDIA began transitioning its internal Falcon microprocessor—used as a logic controller in nearly all GPU products—to a RISC-V-based architecture. Today, each chipset includes 10 to 40 RISC-V cores, and in 2024, NVIDIA surpassed 1 billion RISC-V cores shipped. This success came with unique security challenges—ones that existing models couldn't solve. To address them, we developed a custom software and hardware security architecture from scratch. This includes a purpose-built Separation Kernel software, novel RISC-V ISA extensions like Pointer Masking and IOPMP (later ratified), and unique secure boot and attestation mechanisms. But how do you future-proof a proprietary ecosystem against tomorrow's threats? In this talk, we'll share what we learned—and what's next. From hardware-assisted memory safety (HWASAN, MTE) to control-flow integrity (CFI) and CHERI-like models, we'll explore how NVIDIA is preparing not only its RISC-V ecosystem for the evolving threat landscape. If you care about real-world security at an unprecedented scale, this is a journey you won't want to miss. By: Adam Zabrocki | Director of Offensive Security, NVIDIA Marko Mitic | System Software Manager, NVIDIA Presentation Materials Available at: https://ift.tt/uCXUP7Z
source https://www.youtube.com/watch?v=JmAXnQJZbWg
source https://www.youtube.com/watch?v=JmAXnQJZbWg
Tuesday, 10 March 2026
Black Hat USA 2025 | Vulnerability Haruspicy: Picking Out Risk Signals from Scoring System Entrails
Vulnerability scoring is supposed to bring order to the chaos of risk management, but in practice, it can feel more like reading tarot cards or poking at entrails than applying science. CVSS performs monkey math to force fractal bell curves, EPSS tries to predict exploitation with statistical black magicks, and SSVC ditches math entirely in favor of structured gut feelings. Meanwhile, defenders mix and match shortcuts — KEV lists, vendor advisories, and lived experience — to separate the truly urgent from the merely annoying. But are we actually making better risk decisions, or just using these frameworks to justify what we were going to do anyway? This talk will dig into the strengths, weaknesses, and absurdities of CVSS, EPSS, and SSVC, comparing them to the reality of how security teams actually handle vulnerabilities. This talk will explore where these models help, where they mislead, and whether any of them are meaningfully better than rolling a D20 saving throw vs exploitation. Expect debate, disagreements, and plenty of astrology jokes. By: Tod Beardsley | VP of Security Research, runZero Presentation Materials Available at: https://ift.tt/bnu5d0o
source https://www.youtube.com/watch?v=CW0Awo7pN5M
source https://www.youtube.com/watch?v=CW0Awo7pN5M
Black Hat USA 2025 | How Tree-of-AST Redefines the Boundaries of Dataflow Analysis
In recent years, vulnerability discovery has largely relied on static analysis tools with predefined pattern matching and taint analysis. These traditional methods are not as efficient for complex codebases that span multiple files and utilize atypical input processing techniques. While successful for common vulnerability patterns, they frequently miss sophisticated attack vectors that operate across multiple functions, and sometimes multiple files. In this talk, we will be covering Tree-of-AST, a new framework that combines large language models with abstract syntax tree analysis to address the limitations above. This approach leverages a unique Locate-Trace-Vote (LTV) methodology that enables autonomous tracking of data flows within large-scale projects, even in the absence of predefined source patterns. We will be sharing conclusive benchmark analysis showing that the Tree-of-AST method outperforms established tools by discovering previously undetected vulnerabilities. The study was done on widely-used open-source projects. Further, we demonstrate that our system autonomously generates working exploits with a success rate above the industry average for similar tools. We would wrap up the talk by examining practical defensive strategies developers could implement to protect their codebases from similar emerging techniques, and discuss how automatic exploitation capabilities reshape the modern digital security landscape. By: Sasha Zyuzin | Student, Bachelor's Degree, University of Maryland Ruikai Peng | Founder, Pwno Presentation Materials Available at: https://ift.tt/MBxqKGU
source https://www.youtube.com/watch?v=VNBEoLE_bGA
source https://www.youtube.com/watch?v=VNBEoLE_bGA
Sunday, 8 March 2026
Black Hat USA 2025 | Digital Dominoes: Scanning the Internet to Expose Systemic Cyber Risk
Policymakers and risk owners face significant challenges in managing systemic cyber risk, largely because few tools use empirical data to accurately identify and quantify it. But that data is essential to (1) identify vendors and technologies that require targeted measures, (2) track how systemic cyber threats evolve compared to non-cyber risk, and (3) assess the effectiveness of targeted interventions. Traditional approaches rely on backward-looking models or hypothetical scenarios—methods that can't keep pace with today's fast-moving, complex digital infrastructure. What's needed are real-time, data-driven insights that empower decision-makers to take meaningful action. We address this gap by leveraging internet-scale scanning to build a dynamic, empirical map of concentration risk—showing how systemic vulnerabilities spread across networks, technologies, and vendors. In a first-of-its-kind live demonstration, we will unveil a new risk visualization platform that highlights how risk concentrates within and across sectors, including those supporting critical national functions. Our findings challenge conventional wisdom. Many assumed sources of systemic risk have limited real-world impact, while some overlooked technologies (e.g., large industry-specific white label SaaS vendors) carry significant potential for cascading failures across society. Drawing from real-world examples in sectors such as financial services and manufacturing, we demonstrate how this platform—and the dynamic models behind it—can support more informed, data-driven policy interventions. Participants will leave with a clearer understanding of the systemic risk landscape, as well as actionable insights for developing smarter, more resilient national cyber strategies. Participants will be able to: - Define the Unseen: Understand systemic cyber risk in the real world—down to specific technologies, vendors, and interdependencies in the digital supply chain. - Track, Quantify, Predict: Monitor how cyber threats evolve, compare risk levels across sectors, and assess impact alongside traditional risk categories. - Test What Works: Evaluate potential policy interventions using dynamic, empirical models grounded in real infrastructure data—not theoretical scenarios. By: Morgan HervĂ©-Mignucci | Head of ERM Analytics, Coalition, Inc. Presentation Materials Available at: https://ift.tt/Rc1SdmN
source https://www.youtube.com/watch?v=sPyhJykSLUw
source https://www.youtube.com/watch?v=sPyhJykSLUw
Black Hat USA 2025 | Detecting Taint-Style Vulnerabilities in Microservice-Structured Web Apps
Microservice architecture has become increasingly popular for building scalable and maintainable applications. A microservice-structured web application (shortened to microservice application) enhances security by providing a loose-coupling design and enforcing the security isolation between different microservices. However, in this paper, our study shows microservice applications still suffer from taint-style vulnerability, one of the most serious vulnerabilities (e.g., code injection and arbitrary file write). We propose a novel security analysis approach, named MTD, that can effectively detect taint-style vulnerabilities in real-world, evolving-fast microservice applications. Our approach mainly consists of three phases. First, MTD identifies the entry points accessible to external malicious users by applying a gateway-centric analysis. Second, MTD utilizes a new data structure, i.e., service dependence graph, to bridge inter-service communication. Finally, MTD employs a distance-guided strategy for selective context-sensitive taint analysis to detect vulnerabilities. To validate the effectiveness of MTD, we applied it to 25 open-source microservice applications (each with over 1,000 stars on GitHub) and 5 industrial microservice applications from a world-leading fintech company, i.e., Alibaba Group. We found that MTD effectively vetted these applications, discovering 59 high-risk zero-day vulnerabilities. Among these, vulnerabilities in open-source applications resulted in the allocation of 31 CVE identifiers, including CVE-2024-22263 in the Spring Projects, which has a CVSS score of 9.8. In the industrial microservice applications, we discovered 20 vulnerabilities, including groovy code injection and arbitrary command execution. These vulnerabilities could compromise the entire web server, severely affecting the integrity of millions of users' private data and the security of company systems. MTD effectively detected these high-value vulnerabilities (worth $50,000 in bounties) and successfully safeguarded enterprise security. By: Fengyu Liu | Ph.D Student, Fudan University YouKun Shi | Postdoctoral Researcher, Hong Kong Polytechnic University Tian Chen | Master's Student, Fudan University Bocheng Xiang | Fudan University Junyao He | Senior Security Engineer, Alibaba Group Qi Li | Senior Security Engineer, Alibaba Group Guangliang Yang | Assistant Professor, Fudan University Yuan Zhang | Professor, Fudan University Min Yang | Professor, Fudan University Presentation Materials Available at: https://ift.tt/LWVH8UX
source https://www.youtube.com/watch?v=DhJphVrsof4
source https://www.youtube.com/watch?v=DhJphVrsof4
Saturday, 7 March 2026
Black Hat USA 2025 | Death by Noise: Abusing Alert Fatigue to Bypass the SOC (EDR Edition)
Many security incidents today don't occur due to a lack of alerts—they happen because the right ones are ignored. In this talk, we demonstrate how attackers can achieve their goals while triggering only medium and low severity alerts, which make up the majority of SOC alerts and are often overlooked or not thoroughly investigated. Instead of disabling EDRs or relying on highly complex techniques, attackers can blend into the noise. We walk through how adversaries adapt common TTPs across platforms to bypass SOC operations. By targeting endpoints and cloud workloads protected by CrowdStrike, SentinelOne, and Microsoft Defender for Endpoint, we show how default critical/high-severity alerts can be consistently downgraded to medium/low or suppressed — all while maintaining attack effectiveness. Our goal is to expose critical SOC blind spots in the ways SOC teams interpret, prioritize, and act on alerts. In many environments, even custom detections that could close critical gaps are deprioritized because they add to the overwhelming volume of low and medium severity alerts. Without rethinking how alerts are created, prioritized and investigated, defenders will continue missing threats. We'll discuss custom detections to detect these TTPs and automation is the key to scale the investigations. By: Rex Guo | CEO/Co-Founder, Culminate Inc. Khang Nguyen | Founding Security Researcher, Culminate Inc. Presentation Materials Available at: https://ift.tt/x1JvHfs
source https://www.youtube.com/watch?v=Xd4y4hkXprE
source https://www.youtube.com/watch?v=Xd4y4hkXprE
Black Hat USA | LLMs-Driven Automated YARA Rules Generation with Explainable File Features & DNAHash
Malware on the cloud is growing massively every day, and an automated rule generation solution is needed to improve operational efficiency. YARA is a widely used tool for creating malware signatures and detection rules, however, existing YARA-based automated rules generation solutions suffer from limitations in three key areas: rule quality, false positive rates, and the interpretability of features. These shortcomings restrict their effectiveness in real-world malicious threat detection scenarios. In this presentation, we will introduce LLMDYara, which is an automated rule generation solution that integrates expert knowledge with large language models. We first utilize expert knowledge to pre-extract string, function, and file DNAHash features. Subsequently, we design a function signature algorithm and an efficient querying similarity search mechanism to filter these features against a billion-scale white database, thereby enhancing feature quality. We then leverage large models for string feature evaluation and functional identification of function fragments, where the latter enhanced the interpretability of opcode features. Finally, we generated YARA rules through an ensemble decision based on selected features. Our newly introduced file DNAHash feature ensures rule usability even when other features have lower quality, further reducing false positives. Our automated rule generation solution has made efforts to address challenges such as reducing false positives, enhancing feature interpretability, and improving rule quality. Additionally, we will share our experiences in feature engineering and large language model fine-tuning, with the hope that these insights will help advance the application of large language models in the program analysis domain. By: Xiaochen Wang | Security Engineer, Alibaba Cloud Yiping Liu | Security Engineer, Alibaba Cloud Xiaoman Wang | Security Engineer, Alibaba Cloud Cong Cheng | Senior Security Engineer, Alibaba Cloud Presentation Materials Available at: https://ift.tt/nusEShv
source https://www.youtube.com/watch?v=0i8UhpUgw_0
source https://www.youtube.com/watch?v=0i8UhpUgw_0
Friday, 6 March 2026
Black Hat USA 2025 | Invoking Gemini for Workspace Agents with a Simple Google Calendar Invite
Invitation Is All You Need! Invoking Gemini for Workspace Agents with a Simple Google Calendar Invite Over the past two years, we have witnessed the emergence of a new class of attacks against LLM-powered systems known as Promptware. Promptware refers to prompts (in the form of text, images, or audio samples) engineered to exploit LLMs at inference time to perform malicious activities within the application context. While a growing body of research has already warned about a potential shift in the threat landscape posed to applications, Promptware has often been perceived as impractical and exotic due to the presumption that crafting such prompts requires specialized expertise in adversarial machine learning, a cluster of GPUs, and white-box access. This talk will shatter this misconception forever. In this talk, we introduce a new variant of Promptware called Targeted Promptware Attacks. In these attacks, an attacker invites a victim to a Google Calendar meeting whose subject contains an indirect prompt injection. By doing so, the attacker hijacks the application context, invokes its integrated agents, and exploits their permission to perform malicious activities. We demonstrate 15 different exploitations of agent hijacking targeting the three most widely used Gemini for Workspace assistants: the web interface (www.gemini.google.com), the mobile application (Gemini for Mobile), and Google Assistant (which is powered by Gemini), which runs with OS permissions on Android devices. We show that by sending a user an invitation for a meeting (or an email or sharing a Google Doc), attackers could hijack Gemini's agents and exploit their tools to: Generate toxic content, perform spamming and phishing, delete a victim's calendar events, remotely control a victim's home appliances (connected windows, boiler, and lights), video stream a victim via Zoom, exfiltrate emails and calendar events, geolocate a victim, and launch a worm that tarets Gemini for Workspace clients. Our demonstrations show that Promptware is capable to perform (1) inter-agent lateral movement (triggering malicious activity between different Gemini agents), and (2) inter-device lateral movement, escaping the boundaries of Gemini and leveraging applications installed on a victim's smartphone to perform malicious activities with physical outcomes (e.g., activating the boiler and lights or opening a window in a victim's apartment). Finally, we assess the risk posed to end users using a dedicated threat analysis and risk assessment framework we developed. Our findings indicate that 73% of the identified risks are classified as high-critical, requiring the deployment of immediate mitigations. By: Ben Nassi | Cybersecurity Expert, Technion Or Yair | Security Research Team Lead, SafeBreach Stav Cohen | PhD Student, Technion Full Session Details Available at: https://ift.tt/l4T3LqO
source https://www.youtube.com/watch?v=nmMUMzLxBkU
source https://www.youtube.com/watch?v=nmMUMzLxBkU
Subscribe to:
Comments (Atom)
-
Germany recalled its ambassador to Russia for a week of consultations in Berlin following an alleged hacker attack on Chancellor Olaf Scho...
-
Android’s May 2024 security update patches 38 vulnerabilities, including a critical bug in the System component. The post Android Update ...