This post is written by Lux Summer Associate Yadin Arnon, who is currently an MBA student at the University of Chicago Booth School of Business. He's reachable on LinkedIn.
2023 will be remembered for its artificial intelligence boom, with advances in both generative AI and sophisticated machine learning models. These advances have sent shockwaves through almost every industry, from bio laboratories to movie studios, and experts believe that almost every company will shortly adopt some form of AI.
Cyber is no exception, both offensive and defensive. The recent trends in AI add risks and further expose companies to attacks that are more sophisticated than ever before, but they also open up new opportunities for securing organizations. As companies embed AI in every aspect of their business, it’s important to look at how cybersecurity tools will mitigate the growing risks from this revolution.
Over this past summer, I conducted conversations with founders and security experts to understand where the cyber market is going as a result of these recent advances. I believe that both cybersecurity startups and incumbents still have a lot of value to reap despite the pullback in investment over the past two years, and we should expect to see AI-focused novelty throughout the cyber sector in almost every product or solution. I’ll highlight five sectors primed for investment, as well as describe what I see as a power shift toward incumbents, one that could push all but the most competitive startups out of the market.
Generative AI and generating returns
We’ve seen investments in cyber take a blow, as in every industry, both in 2022 (total capital raised was $18.5B, 39% less than in 2021) and so far in 2023 (total capital raised by Q2 was $4.3B, compared to $12.5B in Q2 2022). But I believe the market is recovering, and I still view cyber novelty and ingenuity as worthwhile risks. As more organizations enter the cloud and cyber literacy trickles down from enterprises to smaller businesses, I see many opportunities to invest in the next generation of security technologies.
The emergence of sophisticated AI models has profound effects on the cybersecurity space. Microsoft announced its Security Copilot, an AI-powered security analysis tool that enables lightning-fast response to security threats, and CrowdStrike added Charlotte AI, an AI-powered analyst, to its threat detection platform. Google and Palo Alto Networks have similarly followed suit.
But to understand how valuable the AI revolution is for cyber defenders, we need to first understand how valuable it is for attackers. Generative AI significantly drops the price of, and time invested in, any type of attack: a well-articulated phishing campaign that would otherwise have taken hours to create can now be composed with ChatGPT within seconds, convincing enough to evade a company’s email filtering system while casting a wide net to maximize infection potential. Models trained to recognize vulnerabilities will make zero-day attacks more common and less time-consuming to produce.
Five top areas for security innovation
For many adversaries, especially the financially-motivated ones, hacking is a numbers game. With tools that dramatically speed up malicious campaigns, the time to exploit a target is narrowing. So what can we do?
Fortunately, this revolution in AI leaves plenty of room for security innovation across the entire attack surface, with different types of assistants and platforms offering intelligent alerting and attack prevention. I believe that the following five areas in cyber have the largest opportunity to produce cutting-edge novelty in the age of growing threats leveraging machine learning and generative AI: 1) workforce security, 2) AI model security, 3) identity and access management, 4) software supply chain security and 5) security operations.
Let me touch on each briefly.
1) Workforce Security
Workforce security is a relatively old market, but it benefits tremendously from artificial intelligence.
The truth is that email and browser security are still pain points for most enterprises, and professionals highlight their inability to secure the end-users’ actions as the Achilles heel of their security suite. Even with every security playbook in place, a mistake made by an employee can still cause substantial damage. This has been further exacerbated by remote work and bring-your-own-device policies.
With generative AI, human risk security can move from reactive to proactive. Companies such as Savvy and others are now building risk co-pilots that analyze an employee’s behavior to predict whether they are about to do something unsafe for the organization. It’s widely known in the security world that human risk accounts for ~80% of all breaches, and with new technologies, this segment can deliver tremendous value to almost any company that relies on its workforce for external interactions.
More opportunities exist in email and the end-user attack surface. Jericho Security, a Lux portfolio company, is leveraging generative AI to create hyper-realistic attack simulations that easily evade security controls, training teams on the latest adversarial threats.
Many additional opportunities exist in this space, and I expect to see more emphasis on the human side of security. This can take the form of more competition in the risk co-pilot or attack simulation spaces, or something completely different. By analyzing telemetry from workers’ computers, for example, companies can learn a lot about their employees and perhaps even identify who is more likely to fall victim to an attack, helping security professionals optimize their training and resources. Additional proactive solutions, such as a smart background check of a candidate’s online presence, can also produce an associated cyber risk score, helping companies avoid risky hires.
2) AI Model Security (AISec)
Generative AI and LLMs have taken the world by storm, and it’s not surprising that many founders are now seeking to defend a technology that is rapidly being deployed in more and more organizations.
There are many potential dangers to AI systems, as can be seen in NIST’s Taxonomy and Terminology of Adversarial Machine Learning. Data Poisoning attacks target the model’s training dataset and alter the data to influence the model’s decisions; Evasion attacks trick models into misclassifying items and can result in malicious emails passing through a filtering system; finally, Oracle attacks help adversaries extract knowledge about the model and the information it contains. An interesting (and fun) way to understand these attacks is Gandalf, a model built by Lakera that shows how LLMs can be tricked into giving away information.
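To make the evasion idea concrete, here is a minimal sketch (my own illustration, not from NIST’s taxonomy) of an evasion attack against a toy linear “spam filter”: the attacker nudges the input’s features in the direction that lowers the spam score until the classifier’s decision flips. The feature names and weights are hypothetical; real attacks target far richer models, but the core mechanic is the same.

```python
# Illustrative sketch of an evasion attack on a toy linear spam classifier.
# The attacker perturbs the input to reduce the spam score until the label flips.

def score(weights, features):
    """Linear spam score: positive means 'spam', negative means 'clean'."""
    return sum(w * f for w, f in zip(weights, features))

def evade(weights, features, step=0.1, max_iters=100):
    """Greedily nudge each feature against its weight's sign until the label flips."""
    x = list(features)
    for _ in range(max_iters):
        if score(weights, x) < 0:  # classified as clean: evasion succeeded
            return x
        # Move each feature opposite to its weight's sign to lower the score.
        x = [xi - step * (1 if w > 0 else -1) for xi, w in zip(x, weights)]
    return x

# Hypothetical features: [urgency words, suspicious links, sender reputation]
weights = [2.0, 3.0, -1.0]
spam_email = [0.9, 0.8, 0.1]

print(score(weights, spam_email) > 0)  # the original email is flagged as spam
evaded = evade(weights, spam_email)
print(score(weights, evaded) < 0)      # the perturbed copy slips through
```

A generative model automates exactly this search at scale, which is why evasion becomes cheaper as attacker tooling improves.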
Other risks include intellectual property leakage, as in the famous Samsung breach, or malicious code finding its way into the company through prompted GPT responses. We expect to see attacks on models become more prominent as adoption continues and hackers become more sophisticated.
Companies competing in AI model security are already building answers to these tactics, and there are a few different areas of specialization including: ensuring secure API connections to third-party LLMs, protecting the models themselves from the attacks mentioned earlier, hardening the models against generation of malicious prompts, and ensuring regulatory compliance with sensitive information inserted into the models.
Some quick-to-react incumbents already offer services to their client base, for example API security or advanced web application firewall (WAF) for LLMs, while new startups are emerging that specialize in overall protection solely for AI models. HiddenLayer, for example, focuses on securing ML models and ensuring they are not tampered with. Companies like Troj.ai focus on data poisoning protection and risk management for models, making sure that when employees use generative AI tools, they won’t put their organization at risk by unintentionally including intellectual property in prompts or engaging with malicious code. CalypsoAI, on the other hand, developed a tool that ensures no sensitive data is transmitted to third-party models to validate company compliance, working as a co-pilot to whoever is writing LLM prompts in the company. Then there are companies like Cranium, which focus on the compliance risk of models themselves.
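The compliance-oriented approach can be pictured as a “prompt gate” that sits between employees and a third-party model. The sketch below is my own illustration, not how CalypsoAI or any vendor actually works; the detection patterns are hypothetical stand-ins for a real policy engine.

```python
import re

# Illustrative "prompt gate": redact likely-sensitive strings before a prompt
# is sent to a third-party LLM. Patterns here are simplified examples only.

PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str):
    """Return the redacted prompt and the list of rule names that fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name}]", prompt)
    return prompt, hits

clean, hits = redact("Summarize: contact jane@corp.com, key sk-abcdefgh12345678")
print(clean)  # email address and API key replaced with placeholders
print(hits)   # rules that fired: EMAIL, API_KEY
```

A production tool would add context-aware classification rather than regexes, but the control point, scanning prompts before they leave the organization, is the same.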
I believe there is more to come. Much as cloud security became a multi-billion dollar market, I see AI model security as the crucial defense layer for the soon-to-be trillion-dollar generative AI market. With the adoption of LLMs and the like, companies will require more observability, better risk remediation, and autonomous protection against suspicious actions. While the companies mentioned above are starting to build such defenses around the known attack surface of AI models, there are currently more unknown variables than known ones.
3) Identity and Access Management
Two major events in the past few years have completely changed the identity landscape. The first, of course, is Covid. Remote workers now make up almost 13% of the workforce, and hybrid workers 28%. The move away from the office drove organizations to implement stricter policies to keep up with the sheer number of endpoints that now access proprietary resources, and predictions say that remote work is here to stay. With hybrid work models and shorter employee tenures, organizations will need to continue hardening their resources against unauthorized access.
The second event is the drastic increase in cloud usage. Employees, especially engineers, use many different tools, most of them in the cloud, and most of them containing sensitive data that was long assumed to be safe. Organizations have long used multi-factor authentication (MFA) to prevent account takeovers and unauthorized access to company materials, but hackers have since learned how to bypass MFA. An MFA fatigue attack, in which adversaries send a constant stream of MFA requests in the hope that one will be authorized (either through an accidental push or by tiring the employee out), is only one of several ways to overcome this defense.
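The defensive counter to MFA fatigue is straightforward to sketch: flag a burst of push requests to the same user within a short window and stop prompting. The detector below is a minimal illustration of that heuristic; the threshold and window values are hypothetical.

```python
from collections import deque

# Illustrative MFA fatigue detector: if one user receives more than
# `threshold` push requests within `window` seconds, flag the account.
# Threshold and window values here are hypothetical.

class MfaFatigueDetector:
    def __init__(self, threshold=5, window=60):
        self.threshold = threshold
        self.window = window
        self.pushes = {}  # user -> deque of push timestamps

    def record_push(self, user: str, ts: float) -> bool:
        """Record an MFA push; return True if the burst looks like fatigue."""
        q = self.pushes.setdefault(user, deque())
        q.append(ts)
        # Drop timestamps that have aged out of the sliding window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

detector = MfaFatigueDetector()
# Six pushes in ten seconds: the sixth crosses the hypothetical threshold.
alerts = [detector.record_push("alice", t) for t in range(0, 12, 2)]
print(alerts)  # [False, False, False, False, False, True]
```

Real identity providers layer on richer signals (device, geography, request origin), but the sliding-window rate check is the core of the mitigation.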
With more remote work and more cloud resources, organizations struggle to balance granting employees the permissions they need to do their jobs against the risk of permission sprawl. An IDSA research paper reported that 84% of surveyed companies had suffered some sort of identity-related breach in the previous year.
As mentioned previously, both the number and the quality of attacks are expected to increase, and targeting an organization’s identity infrastructure is a cost-effective way for adversaries to get a foot in the door. As such, organizations will have to understand and manage their identity sprawl, finding sophisticated methods for allowing or blocking login and access requests to valuable company resources. This is true for human identities, but also for machine identities, since many access requests and data pulls occur between software programs without a human in the loop.
Spera Security is an example of a company working on this issue, consolidating and governing identities across the entire software stack and remediating issues such as offboarding old users, blocking misuse based on user characteristics, and managing sensitive access. When it comes to machine identities, particularly around SaaS products, companies will have to follow suit and develop remediation for risky access requests by non-human users; a unique approach to the issue comes from Corsha, which has developed a multi-factor authentication mechanism for API calls.
To solve the access provisioning issue, companies like Entitle.io are working on just-in-time provisioning algorithms that render manual access requests obsolete and are smart enough to understand when to toggle access to escalated privileges.
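The just-in-time idea reduces to a simple invariant: access is a time-boxed grant that expires on its own, never a standing permission. The sketch below is my own illustration of that pattern, not Entitle.io’s actual algorithm; names and TTLs are hypothetical.

```python
import time

# Illustrative just-in-time (JIT) provisioning store: each approved request
# issues a grant with an expiry timestamp instead of a permanent permission.

class JitAccessStore:
    def __init__(self):
        self.grants = {}  # (user, resource) -> expiry timestamp

    def grant(self, user: str, resource: str, ttl_seconds: float) -> None:
        """Approve a request by issuing a grant that lives for ttl_seconds."""
        self.grants[(user, resource)] = time.time() + ttl_seconds

    def is_allowed(self, user: str, resource: str) -> bool:
        """Check access; expired grants are cleaned up on the way out."""
        expiry = self.grants.get((user, resource))
        if expiry is None:
            return False
        if time.time() >= expiry:
            del self.grants[(user, resource)]
            return False
        return True

store = JitAccessStore()
store.grant("bob", "prod-db", ttl_seconds=0.05)  # short-lived grant for demo
print(store.is_allowed("bob", "prod-db"))        # True while the grant lives
time.sleep(0.1)
print(store.is_allowed("bob", "prod-db"))        # False after expiry
```

The intelligence in a commercial product lies in deciding when to issue the grant and how long it should live; the expiry mechanics stay this simple.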
Companies will need to minimize their identity attack surface, both externally and internally, to stay safe in the age of proliferating phishing attacks and credential misuse. Automating the provisioning workflow and governing identity usage are both expected to receive significant market traction and priority in the CISO budget.
4) Software Supply Chain Security
The risks associated with the software supply chain have risen dramatically with the entry of generative AI as more and more companies adopt third-party models into their software stack. Since teams rely on open-source software and code is rarely developed entirely in-house, security-aware organizations must protect themselves from these vectors for attack.
Gartner predicts that 70% of platform teams will integrate application security tools by 2026, up from 20% today. This is no surprise given that some of the most infamous attacks in recent years targeted supply chains, such as the Log4j exploit and the SolarWinds hack, and finding vulnerabilities in software and creating malware is becoming easier for attackers. A zero-day vulnerability in a commonly used software package implies a risk to thousands of companies, and organizations will need to gain visibility into their software stack and understand and block potential risks in real-time.
Ox Security, an Israeli supply chain security company, recently published an open framework for secure software release called OSC&R, which works similarly to MITRE ATT&CK and can be used by organizations to map and understand the risks associated with their supply chain. Though the space shows growing technological promise, driven by the sensitivity of such issues and demand from enterprises, supply chain security is a highly competitive market, and there is currently no obvious winner.
The expected widespread adoption of more open-source code and models throughout industries, coupled with the fear of an influx in precise, sophisticated attacks, brings an opportunity for innovative solutions that will enable the world to leverage the potential of the AI revolution while minimizing the risk of a breach. AI-based security assistants for real-time software development, as well as tools tailored specifically for protecting LLMs in production environments, are just two examples of highly useful tools for developers that I expect to see widely adopted.
5) Security Operations
Organizations have too many security tools, and security operations teams can’t keep up with their alerts and workloads. Organizations have set up many tools over the years to assist both with automating workflows and with remediating security-related events, but something is still missing. Many SOAR (Security Orchestration, Automation, and Response) solutions — one of the modern cornerstones of an efficient, forward-thinking security team — are still too complicated to use. Recent advances make room for solutions that will completely overhaul the security operations center (SOC), creating smaller, smarter security operations teams.
Generative AI can assist security teams in three main ways. First, LLMs can be trained for faster triaging and reduced overhead. Since a substantial part of an analyst’s day consists of filtering false positives, SOC-focused models can take on an increasingly significant share of the workload. Second, ChatGPT-based assistants can act as hyper-efficient team members, helping the SOC team examine incidents and resolve issues. Third, such assistants can provide continuous, high-quality threat intelligence, both by filtering noise and by prioritizing information.
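The triage step can be pictured as a scoring filter in front of the analyst queue. The sketch below uses hand-written heuristics as a stand-in for what an LLM-based assistant would learn; the weights, fields, and threshold are all hypothetical.

```python
# Illustrative SOC triage filter: score each alert and surface only those
# worth an analyst's attention. Weights and threshold are hypothetical.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def triage(alerts, threshold=5):
    """Score alerts and return the ids of those that warrant human review."""
    kept = []
    for alert in alerts:
        score = SEVERITY_WEIGHT[alert["severity"]]
        if alert["asset_critical"]:
            score += 3   # alerts touching crown-jewel assets rank up
        if alert["seen_before"]:
            score -= 2   # previously benign repeat patterns rank down
        if score >= threshold:
            kept.append(alert["id"])
    return kept

alerts = [
    {"id": "A1", "severity": "low",    "asset_critical": False, "seen_before": True},
    {"id": "A2", "severity": "high",   "asset_critical": True,  "seen_before": False},
    {"id": "A3", "severity": "medium", "asset_critical": True,  "seen_before": True},
]
print(triage(alerts))  # only "A2" survives triage; the noisier alerts are filtered
```

An LLM-based assistant replaces the fixed weights with learned judgment and can explain each decision, but the shape of the pipeline, score then filter then escalate, is the same.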
Overall, I expect automation in security operations to accelerate, and in the coming years I expect such solutions to drive the market toward smaller security operations teams supercharged with more efficient tools.
The past decade has been fruitful for the cybersecurity market, in both M&As and IPOs, which is why it is not surprising to see some of the most brilliant founders enter the space. With new technology at hand, however, we may be witnessing a new age in cyber, one with fewer high-returning exits. Newly funded startups will have to leverage highly advanced AI to keep up with incumbents, and access to resources and talent will serve as a barrier to many entrants, making it tougher for new companies to succeed.
Therefore, I expect to see more consolidation in the next few years. The strongest organizations in the market, such as Microsoft, Google, CrowdStrike, and Palo Alto Networks, will continue to leverage their unique access to talent and exponentially more powerful AI models to offer leading products and features. We’ve already seen this pattern with Charlotte AI, Security Copilot, and Security AI Workbench. They will be able to develop sophisticated products in-house and will favor acquiring startups at earlier stages to join their efforts.
The most competitive companies will still be able to redefine the market, but the rest may not survive in the climate. This warning is also true for bigger companies — growth-stage startups and incumbents competing against the largest players might find themselves with no route forward. If they choose not to focus on adopting AI and if they do not invest in access to talent, growth-stage companies may lose to faster, more agile entrants. Meanwhile, smaller incumbents who are late to the AI game may see their customers leave for the largest players mentioned above.
A consolidated market may ultimately be favored by security customers as well. CISOs may prefer buying a suite of products over allocating budget to tens or hundreds of niche solutions, especially as trust in robust, consolidated security suites grows. Furthermore, as more traditional companies move to the cloud as part of their digital transformation strategies, the ability of Microsoft, Google, or AWS to deploy a highly effective, out-of-the-box security suite can help them dominate the security market. This may imply a change in the opportunity landscape for investors.
Although security budgets are increasing and CISOs are commanding more attention within their companies, finding companies that yield very high returns may become harder. Many startups will be acquired at earlier stages, for less return on capital. Finding the next CrowdStrike was never an easy task, and it might just have become a little harder.