Emerging AI Threats:
Enterprises Will Start Treating AI Systems as Insider Threats.
Josh Taylor, Lead Security Analyst, Fortra
As agents gain system-level permissions to act across email, file storage, and identity platforms, companies will need to monitor machine behavior for privilege misuse, data leakage, and other anomalous activity. The shift happens when organizations realize their AI assistants have broader access than most employees and operate outside traditional user behavior analytics.
The first time an AI agent gets compromised through prompt injection or a supply chain attack and starts quietly exfiltrating customer data under the guise of “helping users,” organizations will realize they built privileged access with no monitoring.
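A minimal sketch of what agent-aware behavioral monitoring could look like, assuming a hypothetical event stream with `agent_id`, `action`, and `bytes_out` fields; the baseline scopes and thresholds are illustrative, not any product’s real schema:

```python
# Minimal sketch: flag AI-agent actions that fall outside a per-agent baseline.
# Event fields (agent_id, action, bytes_out) are hypothetical.
from collections import defaultdict

BASELINE = {
    "mail-assistant": {"mail.read", "mail.send"},
    "hr-summarizer": {"files.read"},
}
EXFIL_BYTES_THRESHOLD = 50_000_000  # 50 MB/day per agent; tune to your environment

daily_egress = defaultdict(int)

def review_event(event: dict) -> list[str]:
    """Return alert strings for a single agent action event."""
    alerts = []
    allowed = BASELINE.get(event["agent_id"], set())
    if event["action"] not in allowed:
        alerts.append(f"{event['agent_id']}: unbaselined action {event['action']}")
    daily_egress[event["agent_id"]] += event.get("bytes_out", 0)
    if daily_egress[event["agent_id"]] > EXFIL_BYTES_THRESHOLD:
        alerts.append(f"{event['agent_id']}: egress volume exceeds daily threshold")
    return alerts

print(review_event({"agent_id": "mail-assistant", "action": "files.read", "bytes_out": 0}))
```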
AI Extortion Scams.
John Wilson, Senior Fellow, Threat Research, Fortra
Hyper-personalized extortion scams driven by AI. For example, someone might receive a highly customized email stating that their Tesla has been hacked. The attacker threatens to send the car over a cliff the next time the victim’s loved ones are inside unless the victim pays $X in bitcoin. The attacker would mention the color and model of the car, as well as the names of the intended victim’s family members. The threat might include details such as a place they often visit or a road the victim often drives on.
AI will help customize the lure based on inputs from the victim’s social media and breach data. The AI would even calculate the right amount to ask for: a high school student might be asked for $300, while a CEO might receive a $250k demand to keep their family safe.
In other words, AI will figure out what is dear to the victim, determine a threshold where the victim is more likely to pay vs. contact police, then craft a unique threat tailored to the victim.
Agentic AI will become the next major attack surface in 2026.
Patrick Sullivan, CTO, Security Strategy
As autonomous AI agents increasingly handle security tasks like triaging, writing API calls, and chaining systems, they’ll introduce unpredictable vulnerabilities. Expect attackers to aim to exploit AI agents’ access, scale, and autonomy to manipulate defenses or trigger harmful operations. Traditional API authentication and governance models will struggle to contain entities that can generate their own credentials or modify access paths on the fly. This shift will demand new disciplines such as AI behavior forensics or AI threat modeling to monitor and validate machine decision-making. In 2026, the balance between AI as defender or attacker will hinge on securing the AI agents themselves.
AI vs. AI warfare will accelerate the cybersecurity arms race.
Bernard Regan, Principal, Baker Tilly
AI is transforming the way organizations and nations operate, but it’s also fundamentally changing the rules of the cybersecurity battlefield.
While governments and organizations leverage AI for faster insights, improved decision-making, and operational efficiency, attackers — including state-sponsored groups — are using the same engines to probe, manipulate, and exploit systems at unprecedented speed. The tools designed to enhance productivity are now fueling a global cyber arms race, where offense and defense are evolving in near real-time.
This isn’t just about tricking a corporate AI system — nation-states and cyberattackers are developing AI-driven tactics to identify vulnerabilities, steal intellectual property, or disrupt critical infrastructure. Clever actors can circumvent safeguards, exploit confidential data, and automate attacks far faster than traditional human-led methods. In this interconnected environment, AI allows adversaries to operate across borders (often faster than defenders can react), amplifying the potential impact on global markets, supply chains, and geopolitical stability.
The dual-use nature of AI creates a delicate balance. Organizations and governments want to deploy it for legitimate, productive purposes, but every advancement can also be weaponized. Failure to recognize this reality risks being outpaced by attackers who can automate reconnaissance, adapt strategies, and launch sophisticated campaigns across industries and nations.
The takeaway for 2026 is clear: AI doesn’t just enhance defenses — it also enhances threats on a global scale.
Social engineering will have a renaissance.
Mark St. John, COO and Co-Founder, Neon Cyber
The ever-accelerating ability of AI to mimic brands, applications, human voice, and video is going to take fraud in 2026 to new, dystopian levels. Attacks like the video-driven ClickFix phishing campaigns, already wildly successful, will be a blueprint for future attacks in which something that seems completely normal, delivered with urgency, fools not just the indiscriminate user but also the tech-savvy and aware. We are going to witness fraud with the sophistication of a blockbuster movie start appearing in our inboxes, documents, and search engines. User training will be essential, as will more controls over user inputs across systems. I wish us all luck navigating it; we will learn a lot very quickly.
Evolving cyberthreats through AI.
Ben McCarthy, Lead Cyber Security Engineer, Immersive
In 2026, the way cybercriminals conduct extortion is expected to change. Instead of simply threatening to release data, they may threaten to sell it to AI companies desperate for new training material.
Script kiddies will become slightly more effective as AI improves. New AI security-researcher agents can uncover vulnerabilities in open-source software, potentially handing novices usable exploits they do not fully understand.
However, the sophistication of threat actors also relies on stealth, which cannot be replicated by AI. Operational security and protecting themselves from detection after attacks are often the most challenging aspects for attackers, and AI will not assist with this.
We are also likely to see a rise in LLM-assisted malware capable of calling AI APIs for new code and adapting in real time to its environment. While mass “spray and pray” attacks will persist, targeted attacks will remain profitable, with threat actors potentially selling stolen data to AI companies eager for training material.
Atlas, NPUs, and the Rise of On-Device Zero-Day Malware.
Carl Froggett, CIO, Deep Instinct
The convergence of agentic browsers, like OpenAI’s Atlas, and new neural processing units (NPUs) embedded in modern chips (Apple, AMD, Intel, Qualcomm, etc.) is creating a dangerous shift in the cyber threat landscape. These processors enable local large language models (LLMs) and AI assistants to run efficiently on laptops and mobile devices, allowing attackers to generate and execute weaponized code entirely on the endpoint. With a single malicious prompt or payload, a threat actor can instruct a local AI model to assemble and deploy malware in real time, collapsing the traditional kill chain – reconnaissance, weaponization, delivery, exploitation, command and control, and actions on objective. Once the threat actor is inside, perimeter defenses lose their value entirely.
Traditional defense-in-depth architectures were never designed for this level of local autonomy. Theoretically, attackers could have attempted something similar on conventional CPUs, but the power draw and latency would have made it obvious. NPUs eliminate that friction, making on-device malware creation fast, quiet, and power efficient. Antivirus and network monitoring tools usually overlook the plain-text prompts that trigger malicious code generation, and signature-based detection cannot keep pace with zero-days created locally on demand. The only sustainable path forward is prevention: blocking malicious behavior at delivery, hardening model access on endpoints, and extending identity and data-level controls so AI agents cannot act as invisible insiders.
Attackers won’t just use AI for social engineering—they’ll sell it as a service.
Rik Ferguson, Forescout’s VP of Security Intelligence, and Daniel dos Santos, Vice President of Research, Forescout Vedere Labs
In 2026, SEaaS—“social-engineering-as-a-service”— will become the criminal world’s hottest subscription model. We’ll see SEaaS take off, with ready-made, buyable kits that bundle AI voice cloning, scripted call flows, and fake “authorize app” links. Higher-cost options will offer the services of experienced social engineering experts who are looking to distance themselves from subsequent incidents and the interest of law enforcement. These turnkey packages will let even inexperienced attackers impersonate employees and bypass multifactor authentication through convincing helpdesk or chat interactions. As voice and chat automation proliferate, defenders must treat every conversation as untrusted input and bake verification into every workflow.
AI-generated data sprawl will trigger the first major breach from forgotten “data exhaust.”
George Gerchow, Faculty, IANS Research & Chief Security Officer, Bedrock Data
The tipping point for data sprawl is already here, hiding in lower-level development environments like QA sandboxes and integration systems. In 2026, we’ll see the first major breach directly attributed to AI-generated “data exhaust” that nobody inventoried: a forgotten vector database or prompt log from an abandoned pilot, left open with customer data or secrets exposed. Organizations are multiplying derivative data faster than they can track it. The solution requires treating AI exhaust as Tier 1 data with mandatory lineage tags and time-to-live (TTL) policies at write, implementing 30-60 day default retention in lower environments, restricting access with short-lived credentials, and purging orphaned artifacts monthly. Visibility must be designed in from the start, or organizations will lose control entirely.
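A rough sketch of the “lineage tag and TTL at write” idea described above; `ArtifactStore`, the environment names, and the retention values are hypothetical stand-ins for whatever storage layer an organization actually uses:

```python
# Sketch: every derivative artifact gets a lineage tag and expiry when
# written, and a periodic sweep purges expired ones. Illustrative only.
import time

DEFAULT_TTL_DAYS = {"qa": 30, "integration": 60, "prod": None}  # None = governed separately

class ArtifactStore:
    def __init__(self):
        self.items = {}

    def write(self, key, data, env, lineage):
        ttl_days = DEFAULT_TTL_DAYS[env]
        expires = time.time() + ttl_days * 86400 if ttl_days else None
        self.items[key] = {"data": data, "lineage": lineage, "expires": expires}

    def purge_expired(self):
        """Monthly (or more frequent) sweep of orphaned, expired artifacts."""
        now = time.time()
        doomed = [k for k, v in self.items.items() if v["expires"] and v["expires"] < now]
        for k in doomed:
            del self.items[k]
        return doomed

store = ArtifactStore()
store.write("vector-db/pilot-7", b"...", env="qa", lineage="crm-export-2025-11")
```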
Shadow AI will emerge as the #1 threat to organizations.
Dr. Darren Williams, Founder and CEO, Blackfog
The explosive growth in AI usage represents the single greatest operational threat to organizations, putting intellectual property (IP) and customer data at serious risk. While AI adoption is growing rapidly, enterprises are increasingly exposed to risks related to data security, third-party AI tools, shadow AI usage, and governance issues. Organizations must monitor not only sanctioned AI tools but also the growing ecosystem of “micro-AI” extensions and plugins that can quietly extract or transmit data.
Broken Trust and Reckoning After the “Wild West” of AI Deployment.
Karl Holmsqvist, Founder and CEO, Lastwall
The unchecked “Wild West” rush to deploy AI without proper safeguards will trigger a major security and trust reckoning in 2026. Over the past year, countless AI tools and systems rolled out with minimal oversight… and the fallout is now arriving. We anticipate the first high-profile security breach caused directly by an autonomous AI agent in 2026, validating warnings that poorly governed AI can create new failure modes. Attackers are already leveraging AI as a force multiplier: classic threats like phishing are being supercharged by flawless deepfake voices and personalized automation, allowing minor vulnerabilities to chain into major breaches at machine speed. In 2026, smart organizations will rein in some of their initial AI deployments with rigorous security assessments, access controls, and real-time monitoring of AI behaviors. However, some will not, and the results will be devastating.
Deepfakes go mainstream in cybercrime.
Pavel Minarik, VP of Product Security, Progress Software
Threat actors will leverage AI to automate persuasion: producing personalized, high-quality content that fuels phishing and misinformation campaigns at scale. As AI empowers defenders and attackers alike, detection and digital provenance will become non-negotiable.
Relaxed Agentic AI Access will Trigger the Next Identity Crisis.
Grayson Milbourne, Security Intelligence Director, OpenText Cybersecurity
Experts predict agentic identities will outnumber human ones by 100 to 1, each operating independently, making decisions, and often accessing critical data in order to work efficiently. Most organizations aren’t ready for this level of identity sprawl. In the 2026 rush to deploy AI agents, many will over-permission agents or skip proper guardrails altogether. This will lead to a new wave of breaches where AI is tricked into sharing data, performing unauthorized tasks, or opening doors for attackers. Cybersecurity starts with identity. Those who fail to modernize their IAM strategies for agentic AI and shortcut access permissions will face a rise in security risk and operational chaos.
APIs and the Agentic AI Security Challenge.
Srikara Rao, CTO, R Systems
As Agentic AI grows, it is multiplying API usage. Each agent may call dozens of APIs in pursuit of its goals, creating vast volumes of non-human identity traffic. This surge brings new risks, including expanded attack surfaces, unpredictable call patterns, and prompt injection via APIs. APIs will remain top targets for attacks because they expose core business data and are often misconfigured. Defenses will need to mature in response: API gateways, adoption of the OWASP API Top 10, API Security Posture Management, and behavioral analytics, plus shift-left security in CI/CD pipelines. AI itself will be both a threat and defender – used for automated reconnaissance and polymorphic attacks, but also for behavioral threat detection, policy generation, and bot/agent management. Moving into 2026 and beyond, API security will be a machine-speed battle between offensive and defensive AI systems.
The Rush to Adopt AI will Outpace Security, Expanding Risk Across the Supply Chain.
Bob Maley, Chief Security Officer, Black Kite
The rapid push to embed AI into every workflow, driven by top-down pressure to innovate, increase productivity, and stay competitive, will create increased opportunities for attackers. While some companies will attempt to impose strict AI governance requirements, existing AI risk frameworks remain fragmented and inconsistent across industries. For many, assessing AI vendors, and those vendors’ own use of AI, remains a highly manual and complex process, often viewed as too disruptive to procurement and innovation. As businesses across all industries integrate AI vendors at record speed, threat actors will increasingly exploit the expanding web of interconnected AI systems. Every layer of the supply chain is now experimenting with AI, multiplying exposure across ecosystems. Small, specialized AI providers with deep data access but weak security will become high-risk links throughout the vendor network.
The misuse of AI.
David Norlin, CTO, Lumifi Cyber
I think we’re in a phase of extreme acceleration with AI, especially around misuse. We are likely going to see major compromises associated with AI-connected services in email, workplace tools, and AI-enabled SaaS applications. As soon as we start connecting agents that receive input from the wider world, we are creating new attack surfaces for exploitation. It’s no different than the waves of SQL injection and other input or injection-type attacks we’ve seen in the past. You now have a semi-intelligent, autonomous system with tools at its disposal (i.e., agentic systems) that can receive input that may not be filtered by any governing system or external gateway. To do their job, they have to be connected to backend sources of data that feed into context. This is ripe for misconfiguration as administrators race them into production and don’t audit the data sources to which they’re connected. Don’t connect sensitive data to AI agents unless they are internal, protected, and accessible to only those who have a need for their services.
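One way such a governing gateway might look, as a sketch; the agent names, approved sources, and injection patterns are illustrative assumptions, not a complete defense:

```python
# Sketch of a governing gateway in front of an agent's tool calls: audit
# which data sources an agent may reach and treat external input as
# untrusted. All names here are illustrative.
import re

ALLOWED_SOURCES = {"support-agent": {"kb_articles", "ticket_history"}}  # no PII stores
INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal your system prompt"]

def gate_tool_call(agent: str, source: str, user_input: str) -> bool:
    if source not in ALLOWED_SOURCES.get(agent, set()):
        raise PermissionError(f"{agent} is not approved to query {source}")
    for pat in INJECTION_PATTERNS:
        if re.search(pat, user_input, re.IGNORECASE):
            raise ValueError("input rejected: possible prompt injection")
    return True  # forward the call to the backend

gate_tool_call("support-agent", "kb_articles", "How do I reset my password?")
```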
AI Hallucinations.
Adam Arellano, Field CTO, Harness
Hallucinations are improving, but they’re not going away. Modern LLMs still make confident mistakes because they generate probabilities, not truths. The real progress is in how engineering teams design systems around that reality: grounding models in trusted internal data, enforcing strict policies, and validating AI-driven changes before they impact users or environments. Our recent State of AI-Native Application Security 2025 report highlights why this matters: 62% of organizations say they have no visibility into where LLMs are in use. If teams can’t see where AI exists, they definitely can’t understand where hallucinations could introduce risk. From our vantage point, helping customers ship software, the pattern is clear: companies don’t expect AI to be perfect — they expect it to be observable, governable, and easy to correct. That’s what ultimately makes AI safe to use in production workflows, even if hallucinations never fully disappear.
Synthetic Identities Trigger a Collapse in Digital Trust.
Siggi Stefnisson, CTO, Gen’s Cyber Safety
AI can now generate entire identity kits – realistic IDs, bills, selfies, and even live video – that pass most basic verification checks. Criminals will use these fabricated personas to secure loans, open accounts, and commit cross-platform fraud at scale.
As “identity fusion” attacks expand across financial, tax, wallet, and service ecosystems, static credentials will no longer be enough.
Identities Will Become AI’s Primary Attack Surface.
Heath Thompson, President and Chief Strategy Officer, Quest Software
The AI revolution has created a surge in identities, service accounts, and access points, presenting complex security challenges and more attack opportunities. In fact, there are an estimated 80 non-human entities to every one human entity. Identity systems like Active Directory and Entra ID are the backbone of IT environments, but they are also the number one target for attackers. In 2026, protecting these environments will require automation that detects compromised accounts and restores access fast.
The rise of agentic AI raises new questions for defenders: how do you verify that an AI agent is acting within policy, and how do you know if it has been hijacked? Identity threat detection and response will become the foundation of zero trust in the AI era.
Model Poisoning Will Open the Door to AI Manipulation.
One Identity
As enterprises train or fine-tune AI models on proprietary data, adversaries will seek to distort them quietly rather than disable them outright. Subtle manipulation of training inputs could bias decisions, skew analytics, or compromise automation in ways that look legitimate.
“AI assurance will become inseparable from identity assurance,” said Nicolas Fort, director of product management at One Identity. “Organizations will need to track not just who accessed a model, but who influenced it and who still has access. Every training event, prompt, and parameter change must be tied to an authenticated identity.”
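A minimal sketch of that kind of identity-bound audit trail, assuming a hypothetical hash-chained log; the field names are illustrative:

```python
# Sketch: every training event, prompt, and parameter change recorded
# against an authenticated identity, hash-chained so influence on the
# model can be reconstructed later. Illustrative only.
import hashlib, json, time

class ModelAuditLog:
    def __init__(self):
        self.entries, self.prev_hash = [], "0" * 64

    def record(self, identity: str, event_type: str, detail: dict):
        entry = {"ts": time.time(), "identity": identity,
                 "event": event_type, "detail": detail, "prev": self.prev_hash}
        # Chain each entry to the previous one so tampering is detectable.
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

log = ModelAuditLog()
log.record("alice@example.com", "fine_tune", {"dataset": "claims-q3", "rows": 120_000})
log.record("svc-train-01", "param_change", {"learning_rate": 2e-5})
```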
The Next AI Crisis Will Be Accountability.
Abe Ankumah, Chief Product Officer, 1Password
As agentic AI becomes a part of everyday workflows, regulators, investors, and customers will expect clear proof that AI decisions reflect human intent. When incidents involving autonomous systems reveal new risks, organizations will need to treat trust, risk, and security as a continuous framework for business performance.
While the discovery of AI agents will remain important, the greater challenge will be establishing trust. Each AI agent will need to be linked to a responsible human with clear delegated authority, creating a clear line of accountability for every autonomous action. This will become a defining business requirement and a new measure of enterprise trustworthiness.
Human accountability will form the foundation for AI Trust, Risk, and Security Management (TRiSM) frameworks that unify governance, runtime security, and risk oversight. Organizations that operationalize TRiSM will set a new standard of digital trust and transform it from a compliance checkbox into a board-level KPI and a public signal of integrity in the era of agentic AI.
The AI OAuth weak link.
Gianpietro Cutolo, Staff Threat Research Engineer, Netskope
Given how attackers exploited OAuth and third-party app tokens in the recent Salesforce and Salesloft incidents, the same threat pattern is now emerging in AI ecosystems. As AI agents and MCP-based systems increasingly integrate with third-party APIs and cloud services, they inherit OAuth’s weakest links: over-permissive scopes, unclear revocation policies, and hidden data-sharing paths. These integrations will become prime targets for supply-chain and data exfiltration attacks, where compromised connectors or poisoned tools allow adversaries to silently pivot across trusted AI platforms and enterprise environments.
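A small sketch of how a defender might audit connector grants for the over-permissive scopes described above; the scope names and grant schema are assumptions for illustration:

```python
# Sketch: flag over-permissive or stale OAuth grants held by AI connectors.
RISKY_SCOPES = {"full_access", "admin", "files:write_all", "refresh_token:offline"}

def audit_grants(grants: list[dict]) -> list[str]:
    """grants: [{'connector': ..., 'scopes': [...], 'last_used_days': int}]"""
    findings = []
    for g in grants:
        risky = RISKY_SCOPES & set(g["scopes"])
        if risky:
            findings.append(f"{g['connector']}: over-permissive scopes {sorted(risky)}")
        if g["last_used_days"] > 90:
            findings.append(f"{g['connector']}: stale token, revocation candidate")
    return findings

print(audit_grants([{"connector": "crm-summarizer", "scopes": ["admin"],
                     "last_used_days": 120}]))
```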
AI will be leveraged throughout an entire malicious campaign.
Liora Ziv, Cyber Threat Intelligence Analyst, CyberProof
Throughout 2025, most AI use by threat actors focused on making more of what they already do well: producing multilingual phishing campaigns, crafting personalized lures, and generating simple malware variants. The shift we expect in 2026 is toward AI assisting with entire campaigns, including automated victim profiling, ransomware negotiations adapted in real time, and on-demand malware variants tailored to the victim environment. If this trend continues, mid-tier groups will gain capabilities that previously required significantly more resources or expertise, making the gap between top-tier and opportunistic actors much narrower.
Rogue AI Agents Become an Incident Class for Mobile Security.
Rishika Mehrotra, Chief Strategy Officer, AppKnox
“As AI agents start integrating directly with apps, security needs a mindset shift. Most mobile defenses still treat automation as a simple ‘bot or not’ problem. When an AI agent starts filling forms or querying APIs, is it assisting a user or stealing their data? That’s the question developers will have to answer in 2026. Agent behavior tests that simulate real-world AI misuse, from sandbox escapes to data leaks, are the next frontier of mobile security. In other words, understanding not just who’s accessing your app, but what their intent is.”
Cryptographic Keys in Mobile Apps Will Become a Critical Weak Point.
Dan Shugrue, Product Director, Digital.ai
Attackers will increasingly use AI tools to locate and extract poorly protected keys inside mobile apps. Organizations will adopt white-box cryptography and runtime key management to safeguard signing keys, API tokens, and device-pairing workflows—especially as mobile credentials begin to rival server-side secrets in value.
Defensive AI Trends:
The First Major AI-Agent-Driven Breach Reshapes Training.
Tiffany Shogren, Director of services enablement and cybersecurity education, Optiv
By 2026, a high-profile breach caused by an autonomous AI agent will force companies to retrain employees on how to safely collaborate with AI systems. This event will redefine cyber education, introducing “AI oversight” and “human-in-the-loop” modules across industries.
AI’s role in cyber.
Rishi Kaushal, CIO, Entrust
In 2026, AI will become both the attacker and the defender – and trust will determine who wins. The most significant cyber risks won’t just come from the speed of AI-enabled attacks, but from their realism. Generative AI is already being weaponized to create hyper-realistic deepfakes, synthetic identities, and voice spoofs that can bypass authentication systems. These threats will intensify as executive impersonation and social engineering become nearly indistinguishable from real human interaction.
To stay ahead, CIOs must treat AI not only as a productivity enabler, but as a core part of the enterprise defense fabric. AI systems will need to continuously monitor, detect, and neutralize emerging threats while also learning to resist manipulation through data poisoning, prompt injection, or model tampering.
Ultimately, resilience in 2026 will depend on the trustworthiness of the AI itself and the authenticity of the identities behind every digital interaction.
Countermeasures against weaponized generative AI.
Bindu Sundaresan, Director, LevelBlue
By 2026, cybersecurity will increasingly become a contest of machine intelligence. Attackers are already leveraging generative AI and large language models to produce convincing phishing emails, deepfake audio, and synthetic identity profiles. These tools automate social engineering at scale and tailor messages based on behavioral data. In response, defenders are implementing autonomous detection systems powered by AI and machine learning to identify manipulation attempts before sensitive data enters operational workflows. These defensive systems continuously analyze data provenance, correlation patterns, and anomaly scores. They use techniques such as federated learning to detect cross-system inconsistencies without consolidating sensitive data. Digital watermarking and hash-based data integrity checks are being adopted to authenticate inbound information and prevent data poisoning attacks that can corrupt training datasets. AI red-teaming initiatives are also expanding, designed to simulate adversarial attacks that test and strengthen model resilience against fake or manipulated content.
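As a concrete example of the hash-based integrity checks mentioned above, here is a minimal sketch that pins a SHA-256 digest per training shard at ingestion and verifies it before training; the manifest format is an assumption:

```python
# Sketch: verify training-data shards against digests recorded at ingestion,
# catching poisoning via silent modification. File names are illustrative.
import hashlib, pathlib

def digest(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """manifest maps shard path -> expected sha256 recorded at ingestion."""
    return [p for p, expected in manifest.items()
            if digest(pathlib.Path(p)) != expected]

# Hypothetical usage before a training run:
# tampered = verify_manifest({"data/shard-0001.jsonl": "ab3f..."})
# if tampered: quarantine the shards and halt the run
```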
Cybersecurity infrastructure.
Georgeo Pulikkathara, CIO and CISO, iMerit
By 2026, cybersecurity will be top-of-mind in safeguarding the critical pillars of our national infrastructure. Medical systems that deliver care, agricultural supply chains that feed our people, and mobility networks that facilitate commerce cannot afford to be compromised.
In 2026, the strength of our cybersecurity infrastructure will depend on how quickly we can integrate AI responsibly: pairing machine speed with expert human judgment, embedding AI into security operations, and ensuring transparency and governance in its use. AI will be both our greatest risk and our greatest opportunity. Threat actors are already using generative AI to automate phishing, accelerate reconnaissance, and exploit vulnerabilities at scale. The U.S. has seen ransomware surge nearly 150% in the past year, with AI-driven tactics fueling much of that growth. At the same time, AI offers unprecedented capabilities in defense, from real-time anomaly detection to automated response and resilience building. A proactive stance will be needed to ensure both compliance and trust, paving the way for responsible AI innovation.
AI-SPM becomes the essential security platform as human biohacking enters the threat landscape.
George Gerchow, Faculty, IANS Research & Chief Security Officer, Bedrock Data
As AI agents proliferate, AI Security Posture Management (AI-SPM) emerges as the new security platform for monitoring systems, while MCP servers become the components those agents rely on that AI-SPM must inventory, test, and enforce. Meanwhile, human biohacking crosses into cyber: expect executive impersonation and access fraud blending implants or wearables with deepfake voice technology. Verification protocols must replace awareness training as the critical control. Audit focus shifts entirely to evidence: insurers and regulators will demand AI change logs, adversarial test results, and decision provenance. Vendor and model supply chains become primary attack surfaces, making traceability shift from optional to essential. The metrics that matter are data-first: time to inventory new AI pipelines, artifacts expired on schedule, and cost per incident defended.
SIEM solutions will be replaced by prevention strategies and AI.
Dr. Darren Williams, Founder and CEO, Blackfog
Organizations will realize that Security Information and Event Management (SIEM) solutions often create more problems than solutions. The high costs of implementation, ongoing software maintenance, and the human capital required to operate these systems have become unsustainable. In their place, AI-based solutions that prevent attacks in real-time—such as anti-data exfiltration technologies—will allow organizations to get ahead of threats rather than react after it’s too late.
AI Will Drive the Rise of Specialized Red Teams.
Trevin Edgeworth, Practice Director, Red Team, Bishop Fox
As AI reshapes both enterprise environments and attacker capabilities, Red Teams will be forced to evolve in kind. AI-driven systems are now woven into everyday business operations, creating new and often poorly understood attack paths. At the same time, adversaries are already using generative models to map environments faster, craft sharper social engineering pretexts, and produce deepfakes of executives and trusted stakeholders. In 2026, this shift will accelerate the move toward highly specialized Red Teams, i.e., dedicated groups focused on OT, AI systems, business processes, and other niche attack surfaces. Organizations will recognize that generic testing can’t keep pace with AI-enabled threats, and tailored Red Teams will become essential to uncovering the risks hiding in these rapidly expanding domains.
AI’s role in SOC and defense tools.
Mike Hamilton, Field CISO, Lumifi Cyber
AI is coming into products, with many trained on LLM-simulated attack traffic mapped to the MITRE ATT&CK framework. These NDR and SIEM products produce high-fidelity alerts and are an improvement over signature- and rules-based alerting. In SOC operations, we will see the advent of the Tier-1 AI SOC agent. These agents will ingest alerts and create tickets, perform the first-pass investigation, and route the ticket to a human analyst for confirmation and response. This will lead to a measurable reduction in the mean time to detect and mean time to respond. We are already seeing SIEM platforms treat AI agents and AI bots as distinct from users, accounts, and assets, further underscoring this new way of thinking about security.
Governing AI.
Gary Brickhouse, SVP, CISO, GuidePoint Security
While there will be plenty of work around AI in 2026, the focus for cybersecurity teams will be AI governance. Throughout 2025, organizations proved AI could deliver value, gaining buy-in across various lines of business as well as 2026 budget dollars. Cybersecurity teams will be busy establishing appropriate policies and procedures for acceptable use, managing the new risks, and ensuring responsible use as AI becomes embedded across the enterprise. As cybersecurity becomes increasingly inseparable from business strategy, cybersecurity leaders will adopt risk quantification in 2026 to help executives and board members connect security spending to measurable business impact and enable more informed, prioritized risk-based decisions. Ransomware will continue to cause operational disruption, with organizational downtime translating directly into financial loss. Cybersecurity teams will prioritize cyber resiliency, strengthening their organization’s posture to withstand and recover from these incidents without crippling operations or losing trust. This will drive better integration between cybersecurity and the broader business to build a holistic enterprise-wide continuity strategy.
Hallucinations Won’t Die, They’ll Just Get Contained.
James Wickett, CEO, DryRun Security
Developers are realizing that hallucinations aren’t something you can patch out; they’re something you have to manage. In 2026, the smartest teams will stop trying to eliminate them entirely and start treating them like background noise that needs control. The focus will shift from perfection to precision — bounding the error, not erasing it.
Expect to see more layered AI architectures where secondary or “judge” agents validate the work of other agents, score confidence, and discard low-quality or low-truth outputs before they ever reach users. It’s quality control at the model level. The goal isn’t to make models flawless but to make their mistakes predictable and observable. The future of AI accuracy won’t only come from larger models; it will also come from architectures designed to keep hallucinations inside safe, measurable limits.
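A minimal sketch of the judge-agent pattern described above; `llm_call` is a placeholder for whatever model client a team uses, and the 0.8 threshold is an arbitrary illustration:

```python
# Sketch: a second "judge" model scores the first model's answer against
# retrieved sources; low-confidence output is discarded or escalated.
def llm_call(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

def judged_answer(question: str, sources: list[str], threshold: float = 0.8):
    draft = llm_call(f"Answer using only these sources:\n{sources}\n\nQ: {question}")
    verdict = llm_call(
        "Rate 0-1 how fully this answer is supported by the sources. "
        f"Reply with a number only.\nSources: {sources}\nAnswer: {draft}"
    )
    score = float(verdict.strip())
    if score < threshold:
        return None, score  # discard: route to a human or retry with more context
    return draft, score
```

The point is not the scoring prompt itself but the bounding: mistakes still happen, yet they are caught, measured, and kept away from users.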
AI Agents Will Redefine How We Govern Access.
Jacob DePriest, CISO/CIO, 1Password
As AI agents make our teams more productive, the next phase of identity and access management will shift from visibility and control to focus on how security teams govern agents. As humans grant AI agents increased access to data and systems, both in personal and corporate settings, security teams will need to track this activity across identity, endpoint, and data protection surfaces. They will need to handle agents operating with employee permissions, differentiate their actions from humans, and control the credentials granted to them. Those who gain visibility and control of agent access will set the standard for trusted AI ecosystems.
AI Agents Will Break Identity Silos and Force a Security Revolution.
Anand Srinivas, VP Product and AI, 1Password
Today, few organizations have deployed agentic AI in production. But, as more companies begin to operationalize agentic AI at scale, its unpredictable interactions will expose a new class of identity and access management challenges. Up until now, identity, secrets, and access management solutions have been siloed across different organizations responsible for application or workforce identity security. That worked when applications were deterministic, well-bounded entities all operating within centralized policy frameworks. However, agentic AI behaves as both traditional software and as a user that operates outside existing identity systems, thereby introducing new identity threat vectors.
Securing this new paradigm will require breaking down the identity silos and creating a unified, policy-driven identity fabric that governs access deterministically, not probabilistically. In doing so, a new generation of cohesive, secure-by-default identity management solutions will emerge that protect all access for human, machine, and AI identities.
The Credential Is The New Compute.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
For the last decade, the competitive edge in AI was defined by compute: whoever had the biggest GPU clusters won. That era is ending. In the next phase, the constraint isn’t model capacity; it’s access. The most capable agents will be the ones that can act on behalf of users and systems, not just predict text. Every meaningful AI capability, from automation to decision-making, depends on credentials: API keys, OAuth tokens, service accounts. That’s what turns intelligence into action. And that’s why the companies that master secure credential brokering (verifying who an agent is, what it can access, and how it uses that access) will define the next generation of AI infrastructure. The future of AI scale won’t be measured in FLOPs; it’ll be measured in permissions.
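A sketch of what credential brokering could look like in miniature; the policy table and token shape are hypothetical, and a real broker would exchange identity for short-lived credentials via an STS or vault rather than minting random tokens:

```python
# Sketch: the agent never holds a long-lived secret. It asks a broker,
# which checks identity and policy, then returns a short-lived,
# scope-limited token plus an audit record. Illustrative only.
import secrets, time

POLICY = {("deploy-agent", "github-api"): {"scopes": ["repo:read"], "ttl_s": 300}}

def broker_token(agent_id: str, target: str) -> dict:
    rule = POLICY.get((agent_id, target))
    if rule is None:
        raise PermissionError(f"{agent_id} has no grant for {target}")
    return {
        "token": secrets.token_urlsafe(32),   # stand-in for a real STS exchange
        "scopes": rule["scopes"],
        "expires_at": time.time() + rule["ttl_s"],
        "audit": {"agent": agent_id, "target": target, "issued": time.time()},
    }
```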
Privacy-first AI Shifts from SaaS to Secure Deployments.
Ray Canzanese, Director, Netskope Threat Labs, Netskope
The escalating deployment of sophisticated AI, particularly for tasks involving sensitive or proprietary data, will drive a significant shift away from a purely Software-as-a-Service (SaaS) model toward more privacy-protecting and sovereign deployments. Organizations in regulated industries like finance and healthcare, or those with significant intellectual property, will intensify their move to frameworks like Amazon Bedrock to ensure data remains within their own secure perimeters, or is never used for model training by the provider. This focus on data sovereignty, IP protection, and compliance with regulations like GDPR and HIPAA will push a new class of “secure-by-design” AI adoption, where control over the data’s location and usage becomes the primary factor, even if it introduces slightly more complexity than a traditional SaaS offering.
General AI Trends:
The impacts of AI.
Rishika Desai, Threat Researcher and Writer, PreCrime Labs, BforeAI
By 2026, the misuse of AI to create highly sophisticated deepfake-driven social engineering campaigns is expected to expand significantly. This trend is growing because AI providers offer free credits with minimal oversight of how generated content might be exploited. For the average user, identifying deepfakes is already challenging, as the subtle artifacts that would give them away are masked once content is packaged as social media posts. With AI-generated content being actively promoted across platforms like Google and social networks through ad features, its reach to the masses becomes exponential. Collectively, these factors make the current trajectory of social media manipulation increasingly difficult to detect and counter proactively.
As for organizations on the defensive side, the most effective approach is to sanitize both the inputs and outputs of AI systems that generate audio and video content. On the input front, every user submission should be contextually evaluated and sanitized to understand why a deepfake video is being created. If the request involves a brand, public figure, or prominent personality, the system should not only assess the intent but also flag such accounts for potential future misuse. Malicious patterns often repeat, and early identification of such prompts, and of the accounts that submit them, enables long-term monitoring.

On the output side, AI models must be trained to recognize suspicious elements, such as abnormal voice modulation, text-to-speech patterns, and the contextual cues embedded within prompts. For example, phrases like “please send your CVV details” or “share your bank information to verify KYC,” even when delivered through a deepfake voice, should trigger automatic detection unless whitelisted. All such harmful prompts and their linguistic variations should be maintained as a continuously updated repository. This enables proactive filtering and real-time detection of fraudulent audio or video content as it is being generated.
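A minimal sketch of the continuously updated harmful-prompt repository described above, screening generation requests before any audio or video is synthesized; the patterns and whitelist are illustrative assumptions:

```python
# Sketch: screen generation requests against known fraud phrasings before
# synthesis. Patterns and tenant names are illustrative, not a real ruleset.
import re

HARMFUL_PATTERNS = [
    r"send (me|us) your cvv",
    r"share your bank (info|information|details)",
    r"verify (your )?kyc",
]
WHITELIST = {"approved-bank-training-tenant"}  # e.g., vetted internal training content

def screen_generation_request(tenant: str, prompt: str) -> bool:
    """Return True if generation may proceed, False to block and flag."""
    if tenant in WHITELIST:
        return True
    return not any(re.search(p, prompt, re.IGNORECASE) for p in HARMFUL_PATTERNS)

assert screen_generation_request("unknown", "Please send us your CVV to verify KYC") is False
```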
The First “AI Liability” Lawsuit.
Josh Taylor, Lead Security Analyst, Fortra
By Q2 2026, we will likely see a company sue over an AI-assisted system after the AI makes a decision that causes measurable business harm, such as leaking confidential information, violating a regulatory requirement, or making a commitment the company can’t honor.
AI systems are moving from advisory roles to decision-making roles. A lawsuit will likely involve an AI agent that had access to privileged information and disclosed it inappropriately, or an AI assistant that shared proprietary data. This will force the industry to answer questions nobody wants to ask: Who is liable when an AI you gave permission to act on your behalf does something harmful? The vendor? The company? The AI itself?
Governments Will Shift From Encouraging AI Innovation to Imposing Guardrails on Corporate Deployment.
Josh Taylor, Lead Security Analyst, Fortra
By 2026, the regulatory narrative around AI will shift from innovation enablement to accountability enforcement, as governments recognize that corporate adoption has outpaced governance. With the novelty of AI worn off and widespread misuse becoming clear, regulators will begin imposing strict requirements for transparency, auditability, and explainability in enterprise AI deployments.
Rebuilding AI after a cyber event will be a massive operational challenge.
Bernard Regan, Principal, Baker Tilly
As organizations rely more heavily on AI, a critical gap is emerging between technology, business operations, and leadership. The reality is that many executives and boards don’t fully understand how AI systems are embedded into operations or what’s at stake if those systems fail. Increasingly, teams are using services like ChatGPT for decision-making, often without stress-testing outputs or validating reliability. The assumption that these systems are infallible is risky.
If a major AI outage occurs, recovery could be slow, complex, and expensive. Companies have often spent years building processes around AI, sometimes combining internal and external models. Petabyte-scale AI systems aren’t easy to back up, and few organizations have robust contingency plans.
In the event of a failure, organizations may need to revert to manual processes, which are not only labor-intensive and costly but also heighten the risk of revenue losses. The risk is compounded if the AI solution cannot be rebuilt to its prior standard — a challenge that has already emerged in cyber business interruption claims.
The lesson is clear: Organizations must stress-test AI, prepare contingency plans, and ensure that leadership understands the operational risks. Without these steps, a major AI outage could reveal serious vulnerabilities, creating operational challenges and long-term costs. The post-event rebuild isn’t just a technical challenge — it’s a test of organizational readiness and resilience.
AI Moves Beyond the Hype Barrier.
Frank Downs, Senior Director of Proactive Services, BlueVoyant
Looking ahead to 2026, artificial intelligence is moving beyond hype and becoming embedded in daily cybersecurity operations. AI will play an active role in managed detection and response (MDR) workflows, acting as a Microsoft Copilot-style assistant for security analysts by providing real-time support, speeding response, and meaningfully strengthening skills.
At the same time, leaders are beginning to confront AI’s limits. Much like the early days of personal computing, fears of job loss are giving way to a recognition that AI can accelerate work but cannot replace human judgment. Its benefits, however, are increasingly limited by energy and compute demands, pushing organizations to be more intentional about where AI truly adds value. As a result, organizations will place greater emphasis on governance, prioritizing clear AI policies, stronger access controls, and effective data loss prevention. Cybersecurity teams will guide this shift and help businesses adopt AI responsibly while protecting trust and core operations.
Mass Hysteria of AI Replacing Developers is Overblown.
Steve Boone, Director of Product Marketing, Checkmarx
The idea that AI will replace developers misses the point. In practice, it changes what “writing code” means. Developers are becoming editors and reviewers of generated work. Their value shifts toward judgment. They must know when to trust a suggestion and when to discard it.
The best tools in 2026 won’t be the ones that generate the most code. They will be the ones that help developers see how a line of code came to be, what assumptions it carries, and whether it meets the team’s standards.
Developers will shift from coders to curators, guiding and auditing machine-generated output. The next phase isn’t full automation–it’s disciplined collaboration, where transparency, traceability, and trust define maturity in an AI-driven software development life cycle.
Composable intelligence will replace monolithic AI.
Paul Aubrey, Director of Product Management, NetApp Instaclustr
The next frontier in AI/ML isn’t about building bigger models; it’s about making smaller ones work together. The rise of Model Context Protocol (MCP) and agentic frameworks will turn AI into a composable ecosystem of reusable, discoverable micro-agents. Organizations will deploy fleets of ML models, each powering specialized classification, prediction, and recommendation tasks, each behind MCP endpoints that plug directly into the agent mesh.
The true cost of AI for SMBs will be decision overload.
Tyler Moffitt, Senior Security Analyst, OpenText Cybersecurity
In 2026, the greatest impact of AI on small and mid-sized businesses will go beyond attack sophistication to the volume of security decisions they’re suddenly forced to make. Scammers will use generative AI to craft realistic messages, calls, and videos that feel personal and urgent – from deepfakes and voice cloning to fake invoices that mirror trusted communications almost perfectly. The biggest threat won’t be a single breach, but the risk of IT teams – often already stretched thin – being drained by the constant influx of alerts and gray-area threats that demand attention. As both noise and legitimate risks increase, SMBs will need to lean on automation and identity-driven controls to separate malicious activity from real communication, turning AI from an attack method into a defender’s equalizer.
Poor data classification could undermine AI’s promise.
Niels Fenger, Advisory Practice Director, Omada
“In 2026, organizations will continue to struggle with foundational data governance. Despite the widespread adoption of AI-driven tools, most enterprises still lack formal data classification frameworks, which is a prerequisite for risk-based security and trustworthy AI. Without structured and governed input, AI systems will only amplify existing weaknesses, not fix them. The result: “Shaky Input, Shaky Output.” Until organizations align with standards like ISO 27002 and NIST and treat classification as strategic, AI will potentially be more of a liability than an advantage.”
From LLMs to AGI and Agentic Systems.
Srikara Rao, CTO, R Systems
“In the next year, work on AGI will proceed, but no timetables can be given. AGI would constitute human-level understanding and reasoning ability across all domains. The benefits would be enormous; I think we will see things like scientific breakthroughs and productivity leaps, but the risks include cyber threats, such as autonomous malware, and misaligned behaviors. Agentic systems, meanwhile, plan, act, and adapt using APIs. Current uses include code generation, data analysis, and customer service, but the technology is already evolving toward self-improving multi-agent ecosystems. Its strengths are autonomy and adaptability, while its weaknesses lie in security, unpredictability, and complexity. These are all areas that can be addressed, but for us to continue moving toward AGI, work still needs to be done.”
50% of AI Vendors Will Fail as Rising Costs Drive Consolidation.
Jeffrey Wheatman, SVP Cyber Risk Strategist, Black Kite
By the end of 2026, we predict that as much as half of today’s AI vendors will go out of business, leaving customers stranded with unsupported and deeply embedded technologies. The market is oversaturated with small AI startups, many founded by lean teams and backed by short funding cycles, that will be unable to survive intense competition and cost pressures. The cost of compute is being driven up by major AI providers like OpenAI, Anthropic, and Google, which will force consolidation. This consolidation will also reshape the broader business landscape. Smaller and mid-sized companies will find themselves priced out of embedding AI into their operations, unable to compete or innovate at the pace of larger, better-funded enterprises.
The context behind using AI.
Eran Barak, Co-Founder & CEO, MIND
Artificial Intelligence has already redefined the boundaries of what’s possible in cybersecurity, but we’re only at the beginning. By 2026, AI will no longer be an optional accelerator – it will be embedded, expected, and essential to every layer of data security. The why is simple: security at human speed can’t scale. AI brings the ability to operate at machine speed and scale, detecting risks, classifying data, and remediating incidents faster and with more precision than any team alone.
But the how is just as critical. Successful AI-powered security in 2026 won’t rely on large language models bolted onto legacy systems. It will require intelligent platforms purpose-built for the complexity of unstructured data, shadow IT, and dynamic access patterns. The future belongs to AI that understands context, not just content. Organizations that treat AI not as a bolt-on enhancement but as an engine for proactive prevention will move from reactive defense to enduring resilience.
Employee-Facing AI Bridges the Gap Between Consumer and Back of the House.
Shash Anand, Senior Vice President of Product Strategy, SOTI
AI use cases will continue to expand, but they will move in the direction of bridging the gap between the consumer and the back office. Right now, AI is primarily deployed for customer-facing interactions via chatbots or assistants, or to automate manual administrative tasks. This will change in 2026, expanding beyond these initial deployments to help employees on the floor and in the warehouse improve productivity, efficiency, and the overall customer experience.
Beyond the frontline, AI will also empower IT teams with diagnostics and device management tasks. AI could leverage real-time access to device and system data to provide instant answers about configurations, status, and performance. This enables IT teams to resolve issues faster and spend less time on repetitive administrative work, improving responsiveness and overall operational efficiency across the organization.
Agentic Systems Will Go Mainstream and Security Will Struggle to Keep Up.
James Wickett, CEO, DryRun Security
By 2026, multi-agent architectures will be everywhere. You’ll have discrete sub-agents that plan, execute, evaluate, and report, all talking to each other. It’s going to make systems faster and smarter, but also way harder to secure. Every one of those agents has its own permissions, context, and sometimes its own toolchain. You’ve basically multiplied your attack surface by the number of agents in your environment.
The problem is most organizations won’t realize it until something goes wrong. You’ll see a lot of “why did this agent access that database” moments. The mitigation isn’t flashy; it’s basic engineering: limit tool access, monitor execution, and keep visibility on how agents communicate. We’ve learned the hard way that when one of them goes off-script, it’s not a small problem that’s easily understood or replicated. It took us years to develop robust testing and processes to optimize and secure these systems. The OWASP Top 10 for LLM applications provides a great starting point for organizations heading down this path.
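The “basic engineering” mitigations above, sketched; the agent and tool names are hypothetical:

```python
# Sketch: per-agent tool allowlists plus an execution log, so
# "why did this agent access that database" has an answer.
import logging, time

logging.basicConfig(level=logging.INFO)
TOOL_ALLOWLIST = {
    "planner":   {"search_docs"},
    "executor":  {"run_query", "write_report"},
    "evaluator": {"search_docs", "read_report"},
}

def invoke_tool(agent: str, tool: str, args: dict):
    if tool not in TOOL_ALLOWLIST.get(agent, set()):
        logging.warning("BLOCKED %s -> %s %s", agent, tool, args)
        raise PermissionError(f"{agent} may not call {tool}")
    logging.info("ALLOW %s -> %s at %s", agent, tool, time.time())
    # ... dispatch to the real tool implementation here ...
```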
Agentic AI will spark the return of the AI Wild West.
Thordis Thorsteins, Lead AI Data Scientist, Panaseer
In 2026, Agentic AI will pull businesses back into an experimental phase, where the risks rise as fast as the opportunities. The early days of GenAI resembled a tech “Wild West”. Organizations experimented with AI without fully understanding its limitations, resulting in frequent errors, such as engineers giving valuable source code to ChatGPT, essentially leaking IP to the entire world. Over time, however, organizations brought much of this chaos under control through stronger governance, clearer policies, and more mature operational practices.
Agentic AI will open the gates again, shifting the risk landscape and raising the stakes even higher. Because these systems act autonomously, there is even less oversight and greater potential for chaos to spiral out of control. Even minor issues, such as authentication faults or misconfigurations within the AI system or its dependent processes, could cascade across companies, exposing sensitive data or triggering unintended actions. As organizations increasingly delegate decision-making to AI agents, these mistakes will be amplified, making proactive governance essential.
2026: The Year Enterprise AI Requires Proof.
Aaron Fulkerson, CEO, OPAQUE
The bottleneck in AI adoption isn’t models – it’s trust. Enterprises can encrypt data at rest and in transit, but not while it is being used by models. That gap is now the fault line of enterprise AI risk, and it’s driving the rise of Confidential AI. According to Gartner’s Emerging Tech: Confidential AI Drives Innovation and Unlocks New Opportunities, Confidential AI techniques are becoming essential for securing GenAI workflows and protecting the sensitive data that models rely on.
For this reason, 2026 will be the tipping point. Enterprises won’t settle for promises of privacy or compliance. They need runtime proof that:
- Data stayed private during computation
- Model weights weren’t exposed
- Policies were verifiably enforced exactly as written
This is how AI moves from “usable” to “deployable” on the data that actually matters.
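One illustrative shape such runtime proof could take: the confidential runtime signs a claims document, and the enterprise verifies both the signature and the claims before trusting the run. This sketch assumes an Ed25519 key pair, a JSON claims format, and the `cryptography` package; real attestation formats (for example, TEE quotes) are considerably richer:

```python
# Sketch: verify a signed runtime-attestation document. Key format and
# claim names are hypothetical illustrations, not a real attestation spec.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_runtime_proof(public_key_bytes: bytes, claims_json: bytes,
                         signature: bytes) -> dict:
    # Raises InvalidSignature if the document was not signed by the runtime.
    Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, claims_json)
    claims = json.loads(claims_json)
    required = {"data_encrypted_in_use", "weights_sealed", "policy_hash"}
    missing = required - claims.keys()
    if missing:
        raise ValueError(f"attestation missing claims: {missing}")
    return claims  # caller compares policy_hash against the policy as written
```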
SaaS companies threatened by AI will place significant restrictions on their APIs.
Ev Kontsevoy, CEO, Teleport
“As AI agents become more capable, the need for traditional user interfaces (UI) will decline. This will disintermediate legacy SaaS vendors, reducing their role to that of a data repository that AI accesses directly. As a result, many will restrict AI access to their APIs in order to stall or prevent disintermediation.”
AI-native talent will be in short supply.
Ev Kontsevoy, CEO, Teleport
“Every technology boom creates talent market disruptions. The narrative of “AI replacing jobs” is not correct. Instead, we face a shortage of highly skilled, AI-native talent. Every CEO is — or should be — worried about recruiting and training these AI operators who are capable of utilizing AI tools in the most effective way.”
GenAI’s Evolution.
Melissa Ruzzi, Director of AI, AppOmni
“True AGI (Artificial General Intelligence) may not be achieved before the next decade, but as GenAI evolves, it may be called AGI (which would then force the market to create a new acronym for the true AGI). The big risk in AGI is similar to GenAI, where the focus on functionality clouds proper cybersecurity due diligence. By trying to make AI as powerful as it can be, organizations may misconfigure settings, leading to overpermissions and data exposure. They may also grant too much power to only one AI, creating a major single point of failure.
“In 2026, we’ll see other AI security risks heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools, potentially leading to data breaches. This will come from increased pressure from users expecting AI agents to become more powerful, and organizations under pressure to develop and release agents to production as fast as possible. And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk.”
MCP Is Not Secure.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
MCP is one of the most important standards in AI right now, but it’s not a security standard. It was designed for interoperability, not containment. The issue isn’t Anthropic’s implementation; it’s the absence of security primitives in the protocol itself. There’s no built-in identity, no least-privilege enforcement, no audit trail. Once an agent connects, it’s effectively operating with the same access as the user who configured it. That might be fine for local experiments, but at enterprise scale, it’s a liability.
As MCP becomes the lingua franca of agentic AI, it will need a trust layer: a way to verify which agents exist, who they represent, and what they’re allowed to do. Without that, MCP could become the next attack surface for AI supply chain compromise. The right next step isn’t to abandon MCP, it’s to secure it. The ecosystem needs credential brokering, runtime policy enforcement, and verifiable auditability. Those are the ingredients that will turn MCP from a developer playground into an enterprise backbone.
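A protocol-agnostic sketch of the trust layer described here, wrapping tool invocations with identity, deny-by-default policy, and an audit trail; it deliberately does not model real MCP message formats:

```python
# Sketch: every tool call carries an agent identity, is checked against
# least-privilege policy, and is appended to an audit trail. Names and
# policy entries are illustrative; this is not the MCP wire protocol.
audit_trail = []

POLICY = {"release-bot": {"read_issue", "comment"}}  # deny-by-default

def guarded_call(agent_id: str, tool: str, params: dict, dispatch):
    allowed = tool in POLICY.get(agent_id, set())
    audit_trail.append({"agent": agent_id, "tool": tool,
                        "params": params, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent_id} denied {tool}")
    return dispatch(tool, params)  # forward to the actual MCP client
```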
Local Agents Will Win.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
The future of AI isn’t in the cloud; it’s on your device. The agents that matter most won’t live on vendor servers or behind APIs; they’ll run locally, with your context, your data, and your credentials. These agents will understand your environment better than any hosted service ever could, and they’ll act instantly, without waiting for a remote model call. Cloud AI will still have a role in reasoning and coordination, but the real leverage comes from agents that can do things: open your IDE, rotate a key, or update a configuration. Those actions happen on endpoints, not in data centers. That’s why local agents will define the next generation of productivity and security. In this model, privacy and performance converge. The safest place for an agent to operate is also the fastest. The platforms that enable local, verifiable, and policy-aware agents will own the agentic era.
Governance & Compliance.
Jason Meller, VP of Product, 1Password
AI Turns SaaS Management Into a Governance Imperative. The next frontier of SaaS management is governance that extends beyond humans. As autonomous agents embed themselves into email, calendars, and CRMs, unmanaged SaaS applications will expand beyond the reach of traditional visibility tools and audit trails. Agents won’t just add new apps to secure; they’ll become part of the stack itself, generating SaaS usage that slips past traditional governance systems. This shift will push shadow SaaS from a mid-level concern into a top security and governance priority.
Without visibility into AI-driven SaaS usage, organizations will face a shadow IT challenge unlike any they’ve seen before. The organizations that succeed in this next era of SaaS management will be those that extend governance and access visibility beyond humans to the machines now operating inside their SaaS environments.
Developers Will Become Obsolete If They Don’t Become AI Developers.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
In 2026, every developer will need to know how to build with LLMs or risk becoming irrelevant. The next phase of agentic AI is not about one specialized AI team – it’s about every developer. As organizations embed agentic AI into every product and process, they’ll require every engineer to understand how to design, build, and deploy these agentic AI systems. This means developers will need to upskill, learning new tools, frameworks, and methods. Those who resist or ignore this evolution won’t only fall behind; they’ll be left out entirely.
AI Governance Will Start At The IDE.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
AI risk won’t start in production; it’ll start in the Integrated Development Environment (IDE). As developers integrate LLMs into workflows, the boundary between human-written and AI-generated code is disappearing. Credentials are getting embedded, API calls are being auto-completed, and sensitive data is being exposed to AI systems before it ever reaches a security review. That means governance has to shift left, not just for infrastructure, but for cognition. The IDE becomes the first line of defense for AI governance. Secure development environments need to know when an agent is accessing credentials, what systems it’s calling, and whether those actions align with enterprise policy. Every company that ships software with AI will need to instrument their development stack for agent observability and compliance. Just as DevSecOps embedded security into CI/CD, the next evolution will embed AI governance directly into development tools.
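As one concrete form this could take, the sketch below is a pre-commit style check that flags likely hardcoded credentials before an AI-assisted diff ever leaves the workstation. The patterns are deliberately narrow illustrations; a real scanner would add entropy analysis and provider-specific rules.

```python
# Sketch of a pre-commit credential check for AI-generated code.
# Patterns are illustrative, not exhaustive.
import pathlib
import re
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

def scan(paths):
    findings = []
    for p in paths:
        lines = pathlib.Path(p).read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            if any(pat.search(line) for pat in PATTERNS):
                findings.append(f"{p}:{lineno}: possible hardcoded credential")
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1:])
    print("\n".join(hits))
    sys.exit(1 if hits else 0)  # a non-zero exit blocks the commit
```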
The AI Revolution Is Happening In Trades, Not Tech.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
The real AI revolution isn’t about chatbots, it’s about agentic AI disrupting entrenched enterprise workflows. The industry is moving past wrapper applications, and AI is disrupting trade industries. From plumbers to HVAC technicians, AI agents are predicting customer needs, scheduling jobs, managing supply, and even driving upsell opportunities. Imagine an AI agent calling a homeowner after air conditioner repairs to proactively schedule a filter replacement. That’s not fiction, that’s happening now. Companies that embed agentic AI into their daily workflows will unlock efficiency gains and new revenue streams, and those that don’t will be outpaced by competitors who do.
2026: The Year of the ‘AI Champion’ Employee.
Katya Laviolette, Chief People Officer, 1Password
2026 will mark a turning point where AI stops being a technology story and becomes a people story. As AI agents continue to transform the workplace and introduce new security risks at every level, the real differentiator won’t be who has simply deployed and integrated the most advanced AI tools – it’ll be who’s empowered their people with AI fluency to use them wisely and safely.
The best companies will invest in developing the ‘AI champion’ employee – someone who not only knows how to harness these systems responsibly, question their outputs, and innovate safely, but also sets the cultural standard for how AI is applied across the organization. These AI-confident workforces are the ones that will define the next era of work.
Regulators will demand that autonomous agents become glass boxes, not black boxes.
Paul Walker, Field Strategist, Omada
“The EU AI Act and California’s transparency laws now mandate that organizations document every decision made by AI agents, justify the reasoning behind it, and maintain complete audit trails of what systems agents accessed and what actions they took. High-risk AI systems must enable users to interpret outputs and understand how decisions were made. Translation: if your agent autonomously executes a transaction, fires an employee, or denies a loan, you’ll need to explain exactly why it made that decision in terms regulators and affected individuals can understand. The age of “the AI decided” as an acceptable answer is over.”
China Introduces LLMs That Shift the Global Balance.
Anurag Gurtu, CEO, Airrived
Perhaps the most geopolitically important trend of 2026 will be the rapid rise of Chinese LLMs that rival and in some cases surpass the performance of leading Western models at a fraction of the training and inference cost.
Chinese labs have leaned heavily into:
- Low-precision training.
- Lightweight architectures.
- Massive multilingual corpora.
- Integrated agentic reasoning modules.
These models will expand quickly across Asia, Africa, LATAM, and emerging economies, forcing Western enterprises and regulators to confront a new competitive reality: AI supremacy is no longer regional.
The global cost curve for AI will fall even faster, and the competitive pressure this creates will be profound.
Threat and Attack Trends:
Elevated Risk of Compromise Through Third-Party Exposure and Supply-Chain Attacks.
Lina Dabit, Field CISO, Optiv
- Over 70% of organizations experienced at least one material third-party incident in the last 12 months, including several third-party compromises that resulted in widespread or significant impacts.
- More than 30% of all breaches originate from a third-party / supply chain compromise, and this will increase each year. In 2024, the incidence of supply chain attacks increased by 42%, and it’s highly likely that 2026 will see a significant rise as well, further underlining the devastating ripple effect of these breaches.
- High-profile supply-chain compromises continue to pose a high risk to enterprises and consumers and have steadily increased over the last 3-5 years.
- Rapid detection, followed by containment and isolation of affected systems, should be the priorities. Timely and factual communication to both internal and external stakeholders, as well as regulators if necessary, are key to ensuring transparency with a focus on early notification. Do not wait for complete information before implementing a communication strategy.
SMB and Mid-Market Organizations Will Become Prime Targets in 2026.
Nicole Reineke, Senior Distinguished Product Leader, AI, N-able
2026 will expose the long-standing misconception that bad actors primarily target large enterprises. With AI-powered automation allowing attackers to cast massive nets across millions of URLs, SMBs and mid-market companies will face a higher attack volume than ever before. Attackers no longer need to choose victims; instead, they crawl, harvest, and hit any vulnerable endpoint. This shift will force smaller organizations to adopt enterprise-grade AI defenses and cyber resilience models they historically assumed they didn’t need.
Nation-State Operations Will Expand to Target Commercial Enterprises.
Josh Taylor, Lead Security Analyst, Fortra
Advanced persistent threat actors will increasingly target private-sector companies for economic disruption, IP theft, and espionage aligned with geopolitical goals. Enterprises must adopt nation-state-grade defenses and treat geopolitical risk as part of their cyber threat model.
Fraud.
John Wilson, Senior Fellow, Threat Research, Fortra
A complete end-to-end Fraud-as-a-Service platform. Just as we’ve seen consolidation in the cybersecurity industry, Fraud-as-a-Service operators will consolidate every phase of the fraud chain into a unified platform. The platform will have a fraud “app store.” For example, someone might have a really good spamming engine. Someone else might have lists of potential victims’ email and other details. Another provider might offer money laundering services. Using the platform, a would-be cybercriminal would just sign up, click which features they want, and without any real technical knowledge, they’d be able to run any type of scam. It would be like opening up an Etsy store, but for fraudsters.
Data leaks are the new ransom.
Mark St. John, COO and Co-Founder, Neon Cyber
Ahh, ransomware, the hugely profitable industry that has plagued us for decades now. We have fought so hard to silo and protect our data, protect systems accordingly, and plan for recovery. These days, people seem to forget that data is the goal and the crown jewels; very rarely does a ransom exist without the threat of disclosure.
In 2026, the amount of data sprawl from organizations to SaaS and AI vendors will create another profitable way to hold data hostage, simply by harvesting what organizations have exposed to these applications and language models. In practice, there is plenty of scrutiny around vendors’ data standards, but not enough around protection, data reselling, LLM usage, and subprocessor controls. Given the level of identity abuse in 2025, more and more SaaS vendors will be targeted by malicious actors. The adoption of AI browsers will lead to plenty of accidental disclosure to public LLMs that someone keen enough can learn to harvest quickly. Heavy scrutiny of all vendors who touch your data is now more important than ever.
Misconceptions About Deepfakes Will Undermine Effective Security Testing.
Alethe Denis, Senior Security Consultant, Red Team, Bishop Fox
The relentless media hype surrounding deepfakes is creating a distortion in the security market, and it’s leading many organizations to misallocate resources on fear-based testing. While deepfakes are a real threat to individuals, they often represent a high-effort, low-probability attack for most corporate social engineering. The biggest challenge for security teams will be to resist the urge to chase the buzzword. Instead, they should refocus adversarial testing to be hyper-relevant to their organization’s actual risk profile to ensure resources are spent on strengthening fundamental technical controls and mitigating the most common forms of impersonation.
Threat actors will exploit SaaS app permissions instead of passwords.
Rik Ferguson, Forescout’s VP of Security Intelligence, and Daniel dos Santos, Vice President of Research, Forescout Vedere Labs
Attackers are shifting focus from stolen passwords to the permissions granted to connected apps. By abusing OAuth consents and refresh tokens from legitimate integrations in platforms like Microsoft 365, Salesforce, and Slack, they can quietly move between tenants and keep access even after passwords are reset. In 2026, these “token-hopping” campaigns will rival traditional phishing as the most effective path to compromise—and with passwordless authentication gaining ground, the day OAuth abuse surpasses phishing is getting ever closer. Defenders should build an inventory of authorized apps, limit what each can do, and regularly revoke unused or suspicious tokens.
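As a starting point for that inventory, the sketch below uses the Microsoft Graph API to enumerate delegated OAuth permission grants in a tenant and flag broad scopes for human review. It assumes an access token with directory read permissions; the token acquisition and the risky-scope list are illustrative and should be adapted to your environment.

```python
# Sketch: inventory delegated OAuth grants in a Microsoft 365 tenant and
# flag overly broad scopes. TOKEN and RISKY_SCOPES are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # obtain via your usual client-credentials flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
RISKY_SCOPES = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.ReadWrite.All"}

def delegated_grants():
    """Yield every delegated permission grant, following server paging."""
    url = f"{GRAPH}/oauth2PermissionGrants"
    while url:
        page = requests.get(url, headers=HEADERS).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")

for grant in delegated_grants():
    scopes = set((grant.get("scope") or "").split())
    if scopes & RISKY_SCOPES:
        print(f"review grant {grant['id']}: client={grant['clientId']} "
              f"scopes={sorted(scopes)}")
        # Once reviewed, a suspicious grant can be revoked with:
        # requests.delete(f"{GRAPH}/oauth2PermissionGrants/{grant['id']}",
        #                 headers=HEADERS)
```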
Attackers will accelerate their exploitation of edge devices and IoT.
Rik Ferguson, Forescout’s VP of Security Intelligence, and Daniel dos Santos, Vice President of Research, Forescout Vedere Labs
Expect routers, firewalls, VPN appliances, and other edge devices, as well as IP cameras, hypervisors, NAS, and VoIP in the internal network—all outside the reach of endpoint detection and response—to further become prime targets. Custom malware for network and edge devices is rising, frequently abusing legitimate admin tools for stealthy command-and-control. In 2025, over 20% of newly-exploited vulnerabilities targeted network infrastructure devices. In 2026, that number could grow to over 30% as the exploitation of unmanaged assets provides the perfect foothold for initial access and lateral movement. Extending inventory and enforcement to every device, agented or not, will define the next phase of exposure management.
Hacktivists will turn confusion into a weapon.
Rik Ferguson, Forescout’s VP of Security Intelligence, and Daniel dos Santos, Vice President of Research, Forescout Vedere Labs
Hacktivist campaigns have learned that sowing doubt can be just as disruptive as causing downtime. In 2026, hacktivists, faketivists, and state-aligned actors will increasingly pair public claims with light hands-on interference in OT systems, forcing operators into precautionary shutdowns even when no actual damage occurs—especially in critical sectors like water, energy, and healthcare. Many of these “announce-first, prove-later” operations will exaggerate their impact to pressure operators into shutting systems down voluntarily. The only defense is clear visibility, threat detection, and segmentation that separate rumor from reality.
Critical Infrastructure Under Siege as Nations Brace for Conflict.
Karl Holmqvist, Founder and CEO, Lastwall
Cyberattacks on critical infrastructure will reach new levels in 2026, fueled by global conflict preparations and rivalries. Defense experts have pegged 2027 as a potential flashpoint for great-power conflict, and nations are racing to harden their grids and supply lines in advance. This urgency is well-founded: recent discoveries revealed Chinese state-backed hackers (Volt Typhoon) pre-positioning malware in U.S. power and water networks, ostensibly to disrupt these services if tensions over Taiwan or other issues explode into open conflict. In 2026, critical systems like energy grids, water treatment facilities, healthcare networks, and telecommunications will increasingly come under stealthy intrusion and attack attempts by multiple groups of nation-state actors. We will likely see more aggressive probing of infrastructure defenses and even limited-scope sabotage to test resiliency. It is also likely that we will see disruption campaigns demonstrate capabilities in coordinated bursts.
The line between cyber espionage and acts of war is blurring: a well-timed grid takedown or pipeline sabotage can now be as crippling as a physical strike. Governments and industry will respond with unprecedented measures. Expect more intensive public-private cyber defense drills, mandatory incident reporting, and fortified network monitoring on infrastructure control systems. In some cases, authorities may even seize greater control over private networks, and 2026 could see several governments nationalize or restrict telecom infrastructure to secure it against foreign interference. Overall, critical infrastructure cybersecurity will shift from an IT concern to a core element of national defense strategy.
Reputation manipulation will become the fifth layer of extortion in 2026.
Patrick Sullivan, CTO, Security Strategy
As ransomware evolves beyond encryption, theft, and DDoS, attackers will weaponize misinformation to erode trust and amplify pressure. Instead of just leaking stolen data, threat groups may turn to fabricating or altering content (falsified emails, AI-generated screenshots, or deepfaked statements) to damage reputation and force payment. In this new world, organizations must blend cybersecurity with crisis communications, digital forensics, and rapid-response verification to counter false narratives. In 2026, the ability to prove authenticity faster than attackers can spread lies will define resilience.
API extortion will redefine ransomware in 2026.
Rupesh Chokshi, SVP and GM, Application Security
As organizations and AI systems increasingly rely on vast webs of interconnected APIs, attackers will shift from encrypting files to hijacking the APIs themselves—disrupting operations or holding critical functions hostage without touching a single file. A SolarWinds-scale breach could emerge from a single compromised API dependency, silently cascading through systems. Existing API security standards focused on access and validation may prove inadequate against these next-level attacks. In 2026, defending APIs like core infrastructure, through visibility, least privilege access, and behavioral monitoring, will be essential to prevent the next generation of digital extortion.
Legacy OT Is a Luxury the Adversary Can’t Afford to Ignore.
Sean Tufts, Field CTO, Claroty
By 2026, legacy OT systems won’t just be outdated; they’ll become more vulnerable to attacks. Nation-state actors and advanced ransomware gangs will increasingly target unmodernized sites, exploiting gaps left by slow budget cycles or siloed security programs. Smaller, seemingly inconsequential sites could trigger outages and supply chain chaos on a national scale. As these risks mount, expect to see stronger pushes from policymakers and industry leaders for “secure by design” modernization and mandatory patching frameworks to bring legacy OT in line with today’s cyber realities.
Mobile becomes the new national attack surface.
Vijay Pawar, SVP, Product, Quokka
In 2026, the mobile device will officially graduate from being a personal security risk to a vector of national concern. What once appeared as isolated consumer scams or rogue apps has grown into a structural enterprise vulnerability — and a public-sector one, too. The same compromised mobile app that leaks user data can just as easily infiltrate corporate systems through bring-your-own-device (BYOD) policies and SaaS integrations. As organizations continue to blend personal and professional ecosystems, the boundary between consumer and enterprise exposure will fully collapse, forcing security leaders to treat mobile as critical infrastructure.
The privilege creep problem will continue to worsen, especially for machines.
Paul Walker, Field Strategist, Omada
“With human users, we at least have some natural forcing functions. People change roles, leave companies, and trigger offboarding workflows. Not ideal, but it’s something. Over-permissive access is the norm, with identities being granted more permissions than necessary, increasing the likelihood of privilege abuse and unauthorized actions. Unlike humans, where we might notice someone has “Finance Analyst + HR Admin + Sales Manager” roles, machine identities accumulate permissions across platforms in ways that are completely opaque.
Here’s the dirty secret that’s hard to admit: access reviews are failing. That’s not because people don’t do them, but because they’re rubber-stamping exercises that miss the real risk.
SaaS scale is what makes this extremely challenging: non-human identities (NHIs) are managed ad hoc by different teams like DevOps, IT, and data science, without clear security accountability. Nobody owns these identities. The developer who created one left two years ago. The project moved teams three times. When you try to right-size permissions, nobody can tell you what an identity actually needs versus what it has. A primary obstacle in managing NHIs is the difficulty in identifying their status accurately due to ambiguous ownership.”
Supply chain attacks and vendor attacks targeting client data for ransom will continue with renewed vigor.
Kasey Best, Director of Threat Intelligence, Silent Push
Regrettably, 2025 saw significant numbers of supply chain attacks leading to extremely serious ransomware incidents, further underscoring that the targeting of support vendors is a reliable attack strategy for criminal groups. We suspect that open-source code attacks will also continue, with worms like Shai Hulud showing how even simple code can have a substantive impact. In addition, we expect to see more novel supply chain attacks via browser plugins and extensions, as well as via prompt injection attacks targeting emerging AI-centric browsers.
Most supply chain attacks in 2025 focused on acquiring cryptocurrency either directly from targeted users or through corporate ransomware efforts. We expect threat actors will continue to be arrested for operational security (opsec) mistakes when trying to launder or cash out their take from these attacks. We also expect to see more threat actors from the larger hacks fall for money laundering honeypots set up by law enforcement, and we encourage those efforts to continue. Crypto laundering tools like Tornado Cash and similar efforts will no doubt remain in use. 2026 will also see more countries pushing back, as Canada is now doing by enforcing its anti-money laundering laws, including the takedown of the TradeOgre exchange and the seizure of $56 million.
Identity-driven attacks.
John Laliberte, CEO and Founder, ClearVector
In 2026, identity-driven attacks will cause material damage in the physical world. The proliferation of AI and non-human identities, combined with the adoption of deepfake technologies, will enable adversaries to assume any identity at any given moment. With the mid-term elections approaching in 2026, adversaries will exploit this convergence – where manufactured identities in the physical world collide with the explosion of identities in the cyber realm – and attempt to influence electoral outcomes. The fundamental question becomes: how do you prove who you are?
One major manufacturer will suffer an eight-figure financial loss from a supply-chain cyber incident.
Jeffrey Wheatman, SVP Cyber Risk Strategist, Black Kite
Most manufacturers continue to focus on traditional supply chain risks, such as labor shortages, safety compliance, sanctions violations, and logistics disruptions, as these challenges are highly visible, heavily regulated, and long-standing compliance priorities. In contrast, cyber risks within the supply chain (vulnerabilities, ransomware susceptibility, and other digital exposures) are often under-prioritized. This is evidenced by the fact that manufacturing remains ransomware’s number-one target for the fourth consecutive year, with supply chain exposure being a significant driver. Given this, we predict that one major manufacturer will suffer an eight-figure financial loss from a supply chain-related cyber incident in 2026.
Ransomware Will Surge as Threat Groups Consolidate.
Ferhat Dikbiyik, Chief Research and Intelligence Officer, Black Kite
Following last year’s law enforcement takedowns of several major ransomware groups, we’re seeing smaller operators now merging to form larger, more capable collectives. Historically, when dominant groups are dismantled, a surge in ransomware attacks follows, and 2026 will be no different. Ransomware campaigns will increasingly target interconnected supply chains, disrupting operations across entire ecosystems. Manufacturing, already ransomware’s number-one target, will remain at the top of the list as attackers exploit its vast, connected networks.
“The Year of the SaaS Breaches.”
Roei Sherman, Head Researcher and Senior Director, Mitiga Labs
We’re approaching what I’d call “the Year of the SaaS Breaches”, and the main fuel is this combination of stolen identity and AI-accelerated exploitation. Organizations migrated to SaaS for security benefits, but they brought over-privileged identity models with them. Service accounts with global admin rights, OAuth tokens with excessive scopes, federated trust relationships that bypass MFA. Every SaaS platform becomes an entry point, and every compromised identity becomes a skeleton key. The old CVEs mentioned in the report are still exploitable because fixing them requires touching identity and access architecture that was never designed with a “zero trust” model in mind. We’re not dealing with a technical vulnerability problem anymore. We’re dealing with an architectural identity debt that’s coming due.
Compliance as the new Attack Surface.
Liora Ziv, Cyber Threat Intelligence Analyst, CyberProof
In 2026, we expect extortion groups to increasingly use regulatory exposure as deliberate leverage. After the major consumer-facing breaches and stricter reporting rules that followed, attackers have realised that the threat of investigations, fines, and public scrutiny can be more damaging to organisations than the breach itself. Ransom notes are already referencing GDPR, UK reporting timelines, and sector-specific disclosure rules, with some groups threatening to notify regulators directly or leaking small data samples to force mandatory reporting. This turns compliance obligations into an attack surface: the pressure doesn’t only come from encrypted systems or stolen data, but from the legal and reputational consequences adversaries now know how to exploit.
AppSec fatigue will peak in 2026.
Roni Fuchs, CEO and Co-Founder, Legit Security
“Security teams are already drowning in alerts, and AI is about to multiply them. In 2026, AppSec fatigue will reach a point where manual triage just can’t keep up. The answer isn’t hiring more engineers to face the same overwhelming volume; it’s building smarter automation and intervening earlier. We’ll see intelligent security agents reviewing and remediating AI-generated code on their own while engineers focus on what truly matters. Automation won’t replace AppSec teams. But AppSec teams will become managers of AppSec AI agents. The good news is that once they adopt these solutions, it will finally give them room to breathe.”
DDoS Goes Stealth Mode.
Eva Abergel, Senior Product Marketing Manager, Radware
DDoS is no longer “noise.” It’s getting sharper, smarter, and harder to spot.
Attackers are using AI to build autonomous botnets that learn in real time. These aren’t the old-school “flood everything and hope something breaks” botnets. These are calculated, stealthy, and tuned for bypassing mitigation, especially at Layer 7.
The real threat won’t be size anymore. It’ll be invisibility.
Security teams will need to treat DDoS as a business risk. Because when the application layer slows to a crawl in the middle of your sales cycle, that’s not an IT problem—that’s a revenue problem.
Cybersecurity Trends:
Evolution of Security Teams.
Nathan Wenzler, Field CISO, Optiv
Security teams will start to split into two distinct groups:
- A formal security operations (SecOps) team to manage tools, execute policy, and perform data collection and management for security
- A traditional security team that will focus on governance, compliance, policy creation, and management, and serve as the primary liaison for the business and non-technical stakeholders in the organization
Cyber Education.
Tiffany Shogren, Director of Services Enablement and Cybersecurity Education, Optiv
- Embedding Security Culture into the Organization: Organizations are shifting from viewing security training as a compliance task to embedding it within everyday behavior and leadership expectations. The focus is moving toward cultivating a security-first mindset where every employee sees themselves as part of the defense strategy.
- Regulation Will Demand Evidence-Based Cyber Training: New regulatory frameworks will require organizations to prove the real-world effectiveness of their cybersecurity training programs. Rather than checking a box, companies will need to demonstrate measurable behavioral change and documented improvements in risk reduction.
Compliance-as-Code & GRC Requirements.
John Allison, Sr. Director of Federal Advisory Services, Optiv and ClearShark
- Companies facing increasing cybersecurity compliance requirements will turn to compliance-as-code to reduce manpower while maintaining acceptable risk.
- Infrastructure-as-code tools will mature, enabling compliance-as-code within the DevSecOps lifecycle (a minimal example follows this list).
- GRC tools will need to support compliance-as-code or risk losing market share.
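As referenced above, here is a minimal compliance-as-code example, assuming Terraform’s JSON plan output (produced by `terraform show -json plan.out`): a pipeline gate that fails when an S3 public-access block leaves any protection disabled. The single rule is illustrative; real programs encode whole control catalogs this way.

```python
# Minimal compliance-as-code gate over a Terraform plan, read from stdin.
# Rule: every aws_s3_bucket_public_access_block must enable all four
# protections. One illustrative control, not a full catalog.
import json
import sys

REQUIRED = ("block_public_acls", "block_public_policy",
            "ignore_public_acls", "restrict_public_buckets")

plan = json.load(sys.stdin)
violations = []
for rc in plan.get("resource_changes", []):
    if rc.get("type") != "aws_s3_bucket_public_access_block":
        continue
    after = (rc.get("change") or {}).get("after") or {}
    if not all(after.get(k) for k in REQUIRED):
        violations.append(rc["address"])

for v in violations:
    print(f"FAIL {v}: public access not fully blocked")
sys.exit(1 if violations else 0)  # non-zero fails the pipeline stage
```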
In-person communication will rise in popularity as cybersecurity’s last line of defense.
Grayson Milbourne, Security Intelligence Director, OpenText Cybersecurity
Deepfakes have reached a new level of sophistication. With synthetic voices and video now indistinguishable from the real thing, attackers can impersonate anyone in real time. In response, organizations will reintroduce traditional trust-building tactics. Executives will meet in person for high-stakes decisions, “safe words” will return as verification tools, and face-to-face will regain its value. In a world where we can no longer trust what we see and hear online, physical presence will become a new pillar of security strategy.
NIS2-covered industries will face a steep security learning curve.
Niels Fenger, Advisory Practice Director, Omada
“The NIS2 directive has ushered in stricter cybersecurity measures and reporting for a wider range of critical infrastructure and essential services across the European Union. For industries newly brought under this directive, including manufacturing, logistics, and certain digital services, 2026 will bring new growing pains. These sectors, many long accustomed to minimal compliance oversight, now face strict governance and reporting requirements. In contrast, mature sectors like finance and healthcare will adapt more smoothly. The disparity will expose structural weaknesses in organizations unfamiliar with continuous compliance, making them attractive targets for attackers exploiting regulatory confusion.”
MCP will become the backbone of a new digital trust fabric.
Benoit Grange, Chief Product and Technology Officer, Omada
“2025 showed us what happens when autonomy outpaces accountability. AI systems began acting across business processes with little visibility into who or what was making decisions. This exposed a critical gap: governance frameworks built for human users are insufficient for autonomous agents acting at runtime.
At the same time, the Model Context Protocol (MCP) emerged as a promising foundation for secure collaboration between AI systems, defining how agents exchange context, identity, and authorization in real time. This could be the backbone of a new digital trust fabric.”
Pen Testing Isn’t Dying, It’s Multiplying.
Vinnie Liu, CEO & Co-Founder, Bishop Fox
Every year, someone calls time of death on pen testing. They’ve been wrong for twenty years, and they’ll still be wrong in 2026. Hey, at least they’re consistent. As the attack surface grows and adversaries move faster, testing is needed more often, not less. Proactive testing will be delivered both continuously and at fixed points in time to help organizations understand their security posture. Just like your Apple Watch doesn’t replace your annual checkup, new tech won’t replace expert testing. It’ll just make it smarter and more available.
Evolving Threat Hunting.
Dave Spencer, Director of Technical Product Management, Immersive
As conversations about automating threat hunting intensify, it’s clear that technology alone won’t define resilience. Signature-based detection still has its place, but attack methodologies evolve too quickly for static indicators to keep up. The best teams hunt for behavior and intent, not alerts. While AI may excel at spotting patterns, human judgment will remain the deciding factor.
Recent attacks on zero trust architectures have underscored this tension. Even the most “secure” designs can be subverted when adversaries log in rather than break in. This shift will demand AI-driven pattern detection to spot subtle, credential-based threats that humans alone can’t process fast enough. But it also demands proof that automation will act safely and effectively when it matters most.
True resilience will come from neither technology nor people alone, but from proving that both can respond together under pressure, with confidence earned through evidence, not assumption.
Digital trust/Identity.
Tony Ball, President of Payments & Identity and Incoming CEO, Entrust
The concept of an identity “perimeter” is obsolete. It’s time to acknowledge that we are creating amazing customer experiences that invite in risk and fraud — and focus on building trust into those experiences.
In 2026, we will see the fight fully shift from keeping threat actors “out” to securing every identity, and in doing so, protecting the people, devices, and data that make trust possible. Forward-looking organizations will lean into securing the identity lifecycle, working in a continuous, adaptive identity-centric framework built on the reality that fraud is inevitable but not fatal. Interconnected identity ecosystems will replace isolated checkpoints, ensuring that when one node is compromised, the broader system endures.
The future of digital trust will be defined by resilience, not rigidity — by systems designed to evolve as fast as the threats that challenge them.
Secure by design.
Antoine Harden, Regional Vice President of Federal, Sonatype
“In 2026, ‘secure by design’ will become table stakes for federal teams managing the inherent risks of open source software. Facing shrinking timelines and expanding mandates, agencies increasingly rely on open source and AI tools to develop at scale. But as usage of these tools grows, so does their potential for exploitation if not deployed securely. The winners of this new era in software will be those who prioritize governance and visibility from the outset.
While federal policy has accelerated momentum toward safer development, federal agencies must move beyond awareness to embed security into every stage of design or risk opening the door to attackers. This is especially critical as AI model integrity now extends beyond code to encompass machine learning pipelines and training data. Just as teams govern software components, they must ensure AI systems are trustworthy by design, prioritize transparency, enable governance, verify data integrity, and eliminate unapproved code.”
Cyber resilience must be centralized — or it will fail.
Asdrúbal Pichardo, CEO, Squalify
Multinational corporations are recognizing the necessity of centralizing cyber resilience management across their subsidiaries as regulations and threats become increasingly difficult to contain within individual entities. Fragmented governance models will collapse under the weight of overlapping mandates and complex supply chains. Centralized visibility into risk posture, incident data, and financial exposure is becoming the new baseline for compliance and continuity. In 2026, resilience will be measured not by localized response plans but by an organization’s ability to unify, quantify, and govern risk across its global footprint.
Zero Trust and Passwordless Authentication Are Finally Here.
Sumesh Rahavendra, CBO, JumpCloud
Zero Trust is finally getting real. It’s no longer just a buzzword on a slide. It’s becoming a mandate, driven by data privacy laws that demand organizations know exactly who is accessing what data and why. We’re moving from “trust, but verify” to a healthier “never trust, always verify” for every access request. At the same time, passwords are finally on the way out. The market is sprinting toward “unphishable” authentication, with passkeys and biometric systems that use cryptography to prove your identity and make phishing attacks nearly impossible.
Identity as Currency: M&A Redefines the Security Landscape.
Chase Doelling, Principal Strategist & Director, JumpCloud
In 2026, mergers and acquisitions will increasingly carry premiums for identity and access capabilities. Companies with advanced identity architectures, privileged access controls, and AI-driven IAM platforms will become strategic acquisition targets. Investors recognize that secure, adaptive identity frameworks are central to enterprise resilience and retention. This shift will transform dealmaking, turning identity into another multiple accelerator and sending a clear signal to organizations: continuously innovate your security architecture to remain attractive in a market where trust is the ultimate asset.
Developers will become the first line of defense.
Vijay Pawar, SVP, Product, Quokka
The regulatory and reputational stakes for developers will rise sharply by 2026. App marketplaces and governments will start requiring Software Bills of Materials (SBOMs) and secure SDK disclosures as part of app submissions, making code transparency a baseline expectation. Meanwhile, AI-assisted development environments will automatically flag risky libraries, outdated encryption, or privacy violations before code ever ships. Secure-by-design will evolve from a principle to an enforceable standard, embedding cybersecurity into every stage of mobile app development.
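A sketch of what such a gate might look like in practice, assuming the submission includes a CycloneDX-format SBOM: the build fails if any component matches a deny list. The hardcoded deny list is a stand-in for a real vulnerability feed.

```python
# Sketch: fail an app submission whose CycloneDX SBOM contains a denied
# component. The deny list is illustrative; production pipelines would
# query a live vulnerability feed instead.
import json
import sys

DENY = {("lodash", "4.17.20"), ("log4j-core", "2.14.1")}  # example pins

with open(sys.argv[1]) as f:
    sbom = json.load(f)

blocked = [(c.get("name"), c.get("version"))
           for c in sbom.get("components", [])
           if (c.get("name"), c.get("version")) in DENY]

for name, version in blocked:
    print(f"blocked component: {name}@{version}")
sys.exit(1 if blocked else 0)
```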
Adtech lessons applied to human risk in 2026.
Nicole Jiang, CEO and co-founder, Fable Security
For years, the cybersecurity industry assumed human behavior could not be changed, but that assumption is being turned on its head.
In 2026, cybersecurity training will finally take its cues from an industry that has long mastered influencing behavior at scale: adtech. Adtech has spent decades learning how to read signals, predict intent, and deliver the right message at the exact right moment. Security teams will begin doing the same. Instead of relying on generic, one-size-fits-all content, organizations will begin using real-time behavioral signals to deliver immediate and personalized guidance at the exact moment risky behavior occurs. And just as marketers fine-tune timing, tone, and communication channel for maximum impact, security teams will use data to understand what actually drives people to take safer actions.
Security training will shift from a compliance activity to true behavior change, powered by timely nudges, ongoing reinforcement, and constant experimentation. The prediction is simple. In 2026, cybersecurity will stop treating people as the weakest link and start using proven behavioral science to turn them into one of the strongest defenses.
Third-Party and Supply-Chain Cyber Risk Will Become a Boardroom Priority.
Jeffrey Wheatman, SVP Cyber Risk Strategist, Black Kite
We predict that supply-chain and third-party cybersecurity risk will move firmly into the boardroom agenda in 2026. Recent high-profile incidents, including the F5, Salesloft/Drift, Jaguar Land Rover, and Asahi breaches, have demonstrated how a single vendor failure can trigger widespread operational disruption and reputational damage across the supply chain. As a result, boards are beginning to view supply-chain resilience not as a technical concern, but as a fundamental business risk. Expect to see boards ask CSOs and CISOs for visibility into cyber risk across their supply chains and for reporting on high-risk exposures, along with greater investment in continuous vendor monitoring and the ability to quickly mitigate potential exposures. The conversation will evolve from “Do we have controls in place?” to “How do we continuously validate the cyber health of our entire ecosystem?”
The industrial cybersecurity landscape.
Daniel Gaeta, Managing Security Engineer with the OT Security Practice, GuidePoint Security
In 2026, the industrial cybersecurity landscape will continue to mature from awareness toward measurable resilience. While many ICS environments still rely on legacy assets that cannot easily be replaced, organizations are making tangible progress by aligning modernization with risk-based investment. “Rip and replace” remains unrealistic for most operators, but practical strategies such as secure segmentation, architecture validation, and compensating controls are bridging the gap between outdated technology and modern threats. As geopolitical tensions and supply chain dependencies grow, adversaries are increasingly targeting the OT layer not only to disrupt operations but to achieve strategic influence.
AppSec’s evolution.
Jeff Williams, Co-Founder & CTO, Contrast Security
In 2026, organizations will realize that the only way forward is to ground AppSec in reality. You can’t defend what you can’t see. You can’t prioritize what you don’t understand. And you can’t keep throwing point solutions at a fundamentally systemic problem.
That’s why this is the year AppSec moves into production. The center of gravity will shift from hypothetical vulnerabilities to observed behavior. From scanning artifacts to continuously watching running systems. From static problem lists to dynamic understanding.
This shift clears the fog that’s clouded AppSec for decades. When you can finally see how your applications behave in the real world (what’s active, what’s vulnerable, and what’s actually under attack), you can focus your time and energy where it counts: the 5% of issues and incidents that truly matter.
The future role of IAM.
Matt Mullins, Head Hacker, Reveal Security
Inevitably, IAM and identity security as a whole will come to dominate organizational security priorities. That said, there will always be some local footprint for systems, and thus machine identities, objects, and the like will still show up on the map. I predict we will start seeing a shift amongst defenders toward using identity security beyond just the perimeter. Using identity security as a lens, identifying behavioral anomalies in the environment – after authentication – will be critical when thinking about these local/internal systems. We have seen a large uptick in malware-less breaches (signed tools, stolen creds from other breaches, etc.), and given that, anomaly detection around identities of any sort will give organizations a way to harden their security posture.
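As a toy illustration of that post-authentication lens, the sketch below baselines each identity’s historical (action, source) pairs and flags first-seen combinations. The event fields and the simple first-seen rule are stand-ins for the richer models a production system would use.

```python
# Toy post-authentication anomaly detector: flag the first time an
# identity performs an action from a given source. Fields are illustrative.
from collections import defaultdict

baseline = defaultdict(set)  # identity -> set of (action, source_asn)

def train(events):
    for e in events:
        baseline[e["identity"]].add((e["action"], e["source_asn"]))

def score(event):
    seen = baseline[event["identity"]]
    novel = (event["action"], event["source_asn"]) not in seen
    return "anomalous" if novel else "normal"

train([{"identity": "svc-backup", "action": "read:snapshots",
        "source_asn": 64500}])

# A service account minting credentials it never has before is exactly
# the kind of malware-less, post-auth behavior worth surfacing:
print(score({"identity": "svc-backup", "action": "create:iam-key",
             "source_asn": 64500}))  # -> "anomalous"
```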
Biosurveillance Becomes a Core Operational Capability.
Eric Adolphe, Founder & CEO, Forward Edge-AI
“Real-time molecular detection becomes essential across health, border security, and critical infrastructure.”
The pandemic taught the world that biology no longer functions as a passive threat but instead presents an active operational risk. Virus hunters I speak with predict that the next pandemic is less than 5 years away. And the next one will make COVID look tame.
The New Economics of Cyber Risk Require Continuous Validation.
Kara Sprague, CEO, HackerOne
As economic headwinds persist, security leaders are no longer asking what to cut—they’re asking what delivers measurable risk reduction. In this environment, security can’t afford to be static, theoretical, or siloed. It must be continuous, validated, and tied to business impact.
If your budget were halved, which controls would you keep? The answer increasingly points to what delivers real-time insight into what’s exploitable—not just what’s theoretically vulnerable.
In 2026, the shift toward operationalized exposure management will accelerate. Inspired by frameworks like Continuous Threat Exposure Management (CTEM), security leaders will prioritize ongoing visibility, adversarial validation, and faster remediation.
The Next Leap in Innovation for Offensive Security.
Nidhi Aggarwal, Chief Product Officer, HackerOne
In 2026, offensive security will be essential for enabling confident adoption of emerging technologies. Teams will move from reacting to alerts to transforming programs with continuous threat exposure management (CTEM). CTEM is a framework that shifts security from a point-in-time exercise to a dynamic process that adapts as new threats, technologies, and business priorities emerge. By integrating human oversight, it ensures that context, judgment, and accountability remain at the heart of every decision.
The next leap in innovation will come from deepening that collaboration between human skill and AI’s scale. Nearly 70% of security researchers now use AI in their workflows, and more than half are expanding their expertise in AI- and machine-learning-based security. That mix of automation and judgment is how trustworthy, self-testing systems will take shape. 2026 will be the year we stop chasing every vulnerability and start continuously reducing real risk.
JIT access becomes the mainstream standard.
Brian Read, CTO, KeyData Cyber
Standing privilege is finally on its way out. In 2026, Just-in-Time access will become the dominant model for privileged access. Granting rights only when needed and for the minimum duration dramatically reduces lateral movement risk and is becoming a priority at both the vendor and C-suite levels.
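The mechanics are easy to express. Below is a toy model of JIT access, assuming an in-memory grant store: a right exists only between an explicit grant and a hard expiry, so there is no standing privilege left to steal.

```python
# Toy Just-in-Time access model: grants carry a hard TTL and evaporate
# on their own. The in-memory store and durations are illustrative.
import time

GRANTS = {}  # (user, role) -> expiry, in epoch seconds

def grant(user: str, role: str, ttl_seconds: int = 900):
    """Grant a role for a bounded window (15 minutes by default)."""
    GRANTS[(user, role)] = time.time() + ttl_seconds

def has_access(user: str, role: str) -> bool:
    expiry = GRANTS.get((user, role))
    if expiry is None or time.time() >= expiry:
        GRANTS.pop((user, role), None)  # lazy revocation on expiry
        return False
    return True

grant("jdoe", "prod-db-admin", ttl_seconds=600)
print(has_access("jdoe", "prod-db-admin"))  # True, for the next 10 minutes only
```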
The End of UI / UX As We Know It.
Nancy Wang, SVP, Head of Engineering and AI, 1Password
We’re entering the post-UI era. As AI agents take on more operational work, software interaction shifts from clicks to intent. The agent becomes the interface. Instead of navigating a CRM or running a query, you tell the agent what outcome you want, and it acts across systems to deliver it. That changes the hierarchy of software design. The UX layer becomes less important than the policy layer. The question isn’t “What can the user click?”, it’s “What can the agent do on their behalf?” Security, access control, and auditability move from back-end concerns to first-class design elements. The companies that figure out how to safely collapse the UI into the agent will redefine enterprise software. The app that requires no interaction, yet acts perfectly within policy, will be the ultimate user experience.
In 2026, closed messaging platforms will need to improve their security policies and take on greater responsibility.
Andrew Northern, Senior Security Researcher, Censys
Platforms such as Discord, Slack, and WhatsApp have closed ecosystems – meaning security researchers have only limited visibility to analyze potential threats. These platforms are common attack vectors for exfiltrating stolen data, executing command-and-control (C2) attacks, and more. Organizations that own private messaging platforms need to either update their terms and conditions or evaluate their own security defense practices so they can play a more active role in addressing these types of attacks.
Multi-Cloud Will Become the Default Operating Model.
John Waller, Cloud & Security Practice Lead, UltraViolet Cyber
In large part due to recent disruptions, multi-cloud is going to become the default operating model in 2026. As multi-cloud strategies accelerate, we will move beyond vendor-specific Infrastructure-as-Code (IaC) and toward tools like Terraform to build and deploy across multiple clouds.
The Collapse of Point-Product Cybersecurity.
Anurag Gurtu, CEO, Airrived
Point products have officially reached their saturation point. Enterprises are running 70–120 tools across security and IT, yet still struggle with alert fatigue, analyst burnout, and fragmented data.
In 2026, the market finally acknowledges what insiders have known for years: no single-feature product can compete with an AI system that can read, reason, act, and improve over time.
AI agents are now capable of executing complex workflows end-to-end, from threat detection to investigation, response, and audit. Instead of stitching together dashboards, enterprises are shifting to horizontal, agentic platforms that break down silos. This is the first year CISOs begin sunsetting tools rather than adding more.
Expect the valuation gap between AI-first platforms and legacy cybersecurity vendors to widen sharply. A wave of consolidation will follow.
Real-Time, Self-Evolving App Defense Will Become the New Enterprise Standard.
Dan Shugrue, Product Director, Digital.ai
Enterprises will shift from static application security controls to continuous, agentic AI–driven defenses that evolve in real time. As attackers increasingly use LLMs to reverse-engineer apps within hours, organizations will adopt AI agents that autonomously apply code obfuscation, anti-tampering, and runtime protections in every build. This “moving target” security model will become expected—not experimental—because anything slower will leave applications exposed.
The convergence of Identity and Zero Trust.
Chirag Shah, Global Information Security Officer, Model N
Identity is becoming the control center for everything, making it a prime target. Attackers will go straight for identity infrastructure as regulations intensify. Reporting rules and board oversight will tighten faster than most companies expect. To meet these new demands, organizations are turning to Zero Trust. By 2026, companies will stop assuming vendors are safe and start verifying every connection, every time. Boards are asking tough questions, and regulators want proof, not promises. Expect short-lived credentials, strict identity checks, and real-time monitoring of system-to-system traffic. The pressure is shifting. Vendors won’t just say they’re secure; they will have to show it with integrity checks, detailed software inventories, and contractual obligations for incident reporting.
Quantum:
Quantum readiness will (finally) move to the forefront.
Rik Ferguson, Forescout’s VP of Security Intelligence, and Daniel dos Santos, Vice President of Research, Forescout Vedere Labs
Quantum risk is no longer theoretical, and anyone treating it as such is in for a big wake-up call. Next year is the year that forward-leaning organizations will finally realize that every unmanaged device they deploy today is a future emergency waiting to happen. Networks with at least five-year hardware lifespans must begin crypto migration planning, mapping which assets can’t support post-quantum algorithms, isolating crypto-fragile systems, and discussing PQC-ready roadmaps with vendors.
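That mapping can begin with something as small as the sketch below, which uses only Python’s standard library to record the TLS version and cipher each endpoint negotiates today. Hostnames are placeholders for your own asset inventory; anything still limited to classical key exchange belongs on the harvest-now, decrypt-later migration backlog.

```python
# Sketch: record what each endpoint actually negotiates today, as a first
# step toward mapping assets that cannot support post-quantum algorithms.
# Hostnames are placeholders for your own asset inventory.
import socket
import ssl

HOSTS = ["internal-app.example.com", "legacy-vpn.example.com"]

def probe(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, protocol, bits = tls.cipher()
            return {"host": host, "protocol": protocol,
                    "cipher": cipher_name, "bits": bits}

for host in HOSTS:
    try:
        print(probe(host))  # feed results into the crypto-migration backlog
    except OSError as err:  # ssl.SSLError subclasses OSError
        print({"host": host, "error": str(err)})
```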
Intensifying “Steal-Now, Decrypt-Later” Threats Spur Urgent Migration to Post-Quantum Encryption.
Karl Holmqvist, Founder and CEO, Lastwall
In 2026, the timeline for quantum-enabled attacks will shrink dramatically, pressuring organizations to expedite their adoption of post-quantum cryptography (PQC). Breakthroughs in quantum computing, such as recent leaps in quantum processor power and the corresponding multi-billion-dollar buildouts that are underway, underscore that a cryptography-breaking machine may arrive sooner than expected. We expect a sharp increase in quantum security spending in 2026 as deadlines for PQC migration become real and the understanding of intensifying “harvest-now, decrypt-later” espionage campaigns proliferates. Governments worldwide have launched quantum-safe initiatives and set clear timelines: for example, U.S. federal agencies face mandates to inventory and replace vulnerable encryption within the decade. With the 2024 standardization (FIPS 203) clearing the path for deployment, 2026 will see organizations scrambling to start the overhaul of their cryptographic infrastructure.
Quantum Becomes a Now Problem, Not a Future One.
Eric Adolphe, Founder & CEO, Forward Edge-AI
2026 is the first year organizations will treat quantum risk as a present-tense operational threat.
In Malcolm Gladwell’s world, there’s always a moment when the outlier becomes the norm. A tipping point. A shift so quiet you only recognize it once it becomes obvious. Quantum computing has been advancing precisely that way.
“Systems built on RSA and ECC will no longer be viewed as durable, and leadership will recognize that decades of encrypted archives may already be compromised. Quantum resilience becomes a prerequisite for national security and commercial continuity.”
Quantum computing will not revolutionize 2026.
Doron Davidson, Managing Director, Global Security Operations and Delivery, CyberProof
Don’t expect quantum to revolutionize 2026. I expect the real impact will emerge in the next few years, between 2027 and 2029. What will grow in 2026 is quantum-motivated, long-term data theft: adversaries stealing information now in anticipation of future quantum capabilities. They may exfiltrate data in 2026 and only decrypt it years later, once quantum tools become practical. This means stolen information might be exploited long after logs have been deleted, making it far more difficult to trace who stole the data or when it was accessed.
Quantum readiness is going to become a real planning problem.
Tim Chase, Field CISO and Principal Technical Evangelist, Orca Security
In 2026, CISOs are going to be asked to show what their organizations are doing to prepare for post-quantum cryptography. We are already seeing early moves from major cloud providers who are beginning to test quantum-resistant ciphers inside core services. With no clear agreement on which algorithms can endure true quantum computing power, organizations must prepare for change without full visibility. That means identifying assets at risk from outdated encryption and gauging the complexity of unwinding those dependencies. The companies that start this inventory and planning work early will avoid a far more expensive and rushed migration later.
Cyber Insurance:
Cyber insurance will enter the age of quantified risk.
Asdrúbal Pichardo, CEO, Squalify
The volume of cyber insurance policies will increase by a double-digit percentage, driven by growing exposure associated with AI and a risk landscape increasingly shaped by mega-losses. Insurers are recalibrating underwriting models as AI accelerates both the scale and speed of potential breaches. Enterprises that quantify cyber risk in financial terms will gain leverage in negotiations, while those relying on qualitative assessments will face higher premiums and coverage gaps. The focus is shifting from coverage to provable resilience.
Over the next five years, CISOs will take the driver’s seat when it comes to cyber insurance decisions.
J.J. Thompson, Founder and CEO, Spektrum Labs
Cyber insurance coverage and claims are now directly tied to technical safeguards, and only the CISO can prove that they actually work. If a gap can void an insurance policy or deny a claim, someone has to notice (and it won’t be finance or the insurance broker, as neither has access to the data). And buying “the best” endpoint protection or backup in isolation won’t cut it. The next wave of resilience favors ecosystems where safeguards and coverage align, and proof is built in and continuously updated. So, CISOs will gravitate toward insurer, broker, and technology vendor combinations that unify proof, protection, and insurance policy into one seamless and simple flow.
The downward trend in cyber insurance rates has reached the danger zone!
Max Perkins, Head of Insurance Solutions, Spektrum Labs
Competitive dynamics have pushed insurance rates downward, exposing insurers to losses as the severity and frequency of cyber incidents aren’t flattening. The floor is coming fast, and with that, a hard landing.
Brokers will pursue cyber risk services as new revenue streams.
Max Perkins, Head of Insurance Solutions, Spektrum Labs
To offset margin pressure and create stickier client relationships, brokerages will increasingly adopt cyber risk management tooling to become more relevant to CISOs and less reliant on transactional commissions. They won’t just be advising; they’ll be monetizing value-added services.
The most forward-looking brokers will deliver telemetry-backed renewal packages, benchmark client posture against peers, and use continuous evidence to build stronger narratives to underwriters. This shift will make brokers more relevant to CISOs, CFOs, and boards, and less dependent on transactional placement revenue.
Telemetry, parametrics and AI agents will become critical infrastructure in the cyber insurance space.
Max Perkins, Head of Insurance Solutions, Spektrum Labs
Without these, the market can’t scale. Telemetry is essential to improve pricing models, quantify risk, and attract capital. Carriers and reinsurers alike are demanding real-time, validated visibility into posture, patch cadence, and exposure clusters.
Parametric cyber coverage for Business Interruption (especially for small and medium-sized businesses) will move from pilot to early adoption. It simplifies claims, shortens tail risk, and offers transparency for both underwriters and insureds. AI agents will be needed to scale the labor-intensive workflows in brokering and underwriting – deployed to support, not replace, brokers and underwriters – which will enable growth. These agents will triage submissions, synthesize risk evidence, and surface key portfolio-level insights.
Cyber Leadership:
For CISOs, AI Is Now an Irreversible Data Risk Decision.
Christie Terrill, CISO, Bishop Fox
In 2026, CISOs will face a shift from experimenting with AI to operating in a world where adoption is no longer optional. Businesses will pursue AI for efficiency, capability, and competitive pressure, often faster than security teams can evaluate the risks. CISOs won’t be the arbiters of which AI initiatives move forward, but they will be accountable for how safely they do. The real challenge will be mitigating a growing layer of shadow AI, monitoring usage, evaluating AI adoption against unclear cost models, and analyzing increasingly complex data-sharing across vendors and models. The CISO’s role will center on enabling the business while mitigating risks at a pace that matches AI’s acceleration, guiding organizations through decisions that may prove irreversible.
AI Discovery & Auditability Will Be One of CISOs’ Top Challenges.
Jacob DePriest, CISO/CIO, 1Password
As AI applications and agents gain acceptance across enterprises, big and small, it will get harder for security teams to maintain a clear picture of the activity and actors inside their organization. CISOs will be responsible for new connections, data sharing, and actions that originate from non-human actors, forcing them to rethink what visibility and accountability mean. Was it a human or an agent that acted? Who is responsible for the actions an agent takes, and which sensitive data and systems are involved? Those who can attribute, audit, and govern AI-driven actions will maintain accountability and trust as autonomy expands.
The Technical CISO Will Come Roaring Back.
James Wickett, CEO, DryRun Security
We’ve spent the last few years pretending the CISO could be a business role. That era is over. In 2026, every company will be producing code, AI-assisted, automated, or otherwise. If the CISO doesn’t understand how that code works, what risks it introduces, and how AI systems make decisions, they’re flying blind.
Code volume has already doubled in the last couple of years, and it will probably multiply fivefold again in the next few years. The job of securing the enterprise now is deeply technical: understanding how tools, vendors, and in-house models interact. The board doesn’t just need a translator anymore; they need someone who can say, “Yes, we can ship this safely,” and mean it. The modern CISO has to know the tech, or they’ll be replaced by someone who does.
Unpopular Opinion: The CISO Role is Unsustainable Without Radical Change.
Jonathan Maresky, Head of Product Marketing, CyberProof
In cybersecurity, we love to talk about resilience and innovation. But here’s an unpopular truth: the modern CISO is being set up to fail. Today’s CISOs are navigating an impossibly complex threat landscape, pressured by boards to secure exponentially growing attack surfaces with shrinking budgets and overburdened teams. Every new technology adopted, from AI to cloud-native apps, introduces new risks. Developers are racing to meet release deadlines. AI tools are rolled out enterprise-wide with little consideration for security guardrails. Meanwhile, CISOs are held accountable not only for breaches, but for vulnerabilities they never even had the resources to address. It’s no wonder many security leaders can’t sleep at night.
This is more than stress: it’s systemic dysfunction. We’ve created a rat race where security teams are running faster just to stay in place. Too many tools, too few integrations, too little clarity on what matters most. We need to stop romanticizing burnout as dedication.
The industry must shift from reactive firefighting to risk-based prioritization. CISOs need visibility into what truly matters: not every alert, but the exposures most likely to be exploited. They need automation that works, not just more dashboards. They need to be allowed to say “no” when risk outweighs reward. Every company can be breached. Let’s stop pretending otherwise, and start building security models that assume compromise, focus on detection and response, and give CISOs a fighting chance.
This article was originally published on CyberWire.