AI's impact on cybersecurity

Managing Partner, CISO


AI’s impact on cybersecurity is multifaceted, bolstering defenses while introducing novel challenges and risks. Carefully assessing the advantages and disadvantages of this impact is crucial. While AI and Machine Learning (ML) have been buzzwords in our industry for a while, the marketing and reality of their effect have become more significant every day.

Bridge Security Advisors’ vCISOs and Advisors spend a good deal of time helping our customers govern the benefits and risks of AI in achieving their business objectives. Simultaneously, we have been implementing AI and AI-enabled tools to support our advisors and services.

There are many types of AI. AI is commonly categorized by capability or techniques. The intricacies of these are beyond the scope of this paper. However, you may notice I use the terms AI, Generative AI, and Machine Learning throughout the paper. This use of different terms is purposeful based on the subject. I tried to be correct in our usage here, but there is certainly room for interpretation. When used herein, “AI” is meant to refer to the overall ecosystem of AI capabilities or as a simplification for one capability from the ecosystem such as Artificial Narrow Intelligence or Reactive Machine AI which are two common categories used in security tools.

To distill an analogy from Christian Macedo’s great article on AI; AI is a Chef preparing meals using different established techniques (algorithms), Machine Learning is a growing recipe book that the chef uses but which also records new techniques from the chef, and Generative AI is the creative chef who takes from the recipe book and creates brand new recipes.

Here are some of the areas we considered and lessons we learned. I will also provide a few things you should be doing now to be prepared for the general use of AI in your organization.

Challenges to Cybersecurity Introduced by AI

AI presents many challenges to security professionals. What follows are some of the discussions and experiences we have had amongst ourselves and our clients; it is by no means an exhaustive list.

  1. Adversarial AI: Every time you experience, read about, implement, or innovate a new AI-enabled capability, malicious actors can do the same. They have everything we have and likely more free time to use it against us. As adversaries weaponize AI, we will continue to see a higher volume of attacks, a higher velocity of attacks, new attacks, and much more rapidly reactive attacks.
  2. Smarter Bots and Malware: AI is helping adversaries produce more innovative and effective autonomous and semi-autonomous attacks. In 2018, we saw an AI-powered attack, TaskRabbit, which impacted over 3 million users. In 2023, we saw the introduction of WormGPT (not affiliated with ChatGPT or OpenAI), a multi-language unregulated Generative AI chatbot helping malicious actors craft and run better phishing scams and develop malicious code.
  3. Reliability of Data: Your enterprise is at risk if you rely solely on answers provided by an LLM. Reliability may more often be an enterprise risk than a cybersecurity risk. Errors can be introduced through intentional data poisoning by malicious actors or simply hallucinations in the model’s response to queries.  We have seen security and IT professionals using AI models as their primary advisors and taking on strategies that are not optimal or simply incorrect.
  4. Privacy and IP Concerns: OpenAI and similar organizations assert that they continue to improve the privacy and security of data provided to the models and do not use that data to train other users. Issues will, of course, vary from model to model. However, it isn’t only external models that may index and present private data. As companies look to leverage private models, security and privacy professionals will need to be increasingly aware of the risk of exposure. As an example, Ben Bowman caused a chatbot to reveal a credit card number by telling it his name was the credit card number on file, then asking it his name.

Ethical and Regulatory Challenges Introduced by AI

  1. Privacy Concerns: We have a legal and ethical responsibility to provide practical, reasonable, and customary protections for the data of our customers, employees, partners, and company. As noted above, AI introduces several new vectors to consider in maintaining this privacy.
  2. Plagiarism: It is expected that we will derive inspiration or insight from other works. However, plagiarizing from others is not standard or acceptable. Models continue to work hard to prevent this kind of issue, but they will often quote or derive and may not reference a source. Whether it is the tool providing unattributed quotes or your users getting the attributions but not crediting them, copyright and ethics may be violated here, often unintentionally, to cause patchwork plagiarism.
  3. Bias: Both data and people are flawed. Flaws introduce bias, and prejudice can spread. Biases introduced into AI responses can propagate and reflect intentional and unintentional prejudices. These can damage your reputation, impact your employees and customers, and lead to legal exposure in the worst cases.

AI in Cybersecurity Delivery

Speed and Accuracy are the overarching themes of AI’s benefit to cybersecurity; this includes the areas listed below. However, with speed and accuracy comes all the risks and considerations above. Ultimately, you can address most of these with process and care.

  1. Enhanced Analytics: This is an area we are excited about. Whether it be enhanced threat analytics or anomaly detection in your tool of choice or using AI to look for trends in the data and information you have aggregated from your client, better, faster, and deeper analytics will help you and your clients identify and respond to trends and threats more effectively.
  2. Automation: Code generation or AI-driven automation, such as those built into MS Sentinel Playbooks and automation rules, is becoming increasingly more ubiquitous. Automated processes can provide better standardization and more reliable results while freeing your human resources for more valuable tasks.
  3. AI in IAM Processes: Generative AI is transforming IAM processes. It is hard to find an area where AI isn’t improving processes. These are in no way limited to faster and more accurate user identification, better and more reliable authentication processes, AI-driven role and entitlement reviews, provisioning and de-provisioning, profile management, and intelligent auditing. AI is becoming increasingly critical in areas supporting zero-trust, such as policy generation, adaptive and continuous authentication, and access request automation.
  4. Threat Profile Service Customization: Advisors and enterprise security professionals must constantly evaluate our processes, actions, and lines of inquiry for specific threat profiles. Many vectors feed into these decisions, and if done incorrectly, we, at best, waste time and money with unnecessary work, and at worst, we miss critical inquiry areas and leave threats and needs unconsidered and unaddressed. AI tools with built-in mechanisms for customizing services based on threat profiles help optimize effort and reduce resource commitments.

What You Can Do Now to Address AI in The Enterprise

  1. Understand Your Organization’s AI Strategy: Most organizations do not have a fully evolved AI strategy; most still need someone tasked with leading AI in the organization. Leadership and stakeholders should spend some time developing a high-level strategy and guardrails around the use of AI in the enterprise; this will support your further activities. If you don’t have the time or inclination to look at all AI usage, we suggest you look into the use of Generative AI first. Your users have free access to it, and likely many are already using it.
  2. Develop a Generative AI Policy: There are many good examples available online; however, you may want to address how employees may use AI, the avoidance of legal and ethical issues, the need to adhere to other policies and data protection requirements, the organization’s right to monitor use, and any requirements for education. Of course, do all these at a policy level and save what and how-to for standards and guidelines.
  3. Review Other Governance and Guidance Documents. Review your existing policies, standards, and guidelines to evaluate any needed additions or changes based on your organization’s strategy for AI.
  4. Add Generative AI to Security Awareness: Add the safe use of Generative AI to your existing security awareness training. Minimally, all knowledge workers should be required to take this training, but you may want to consider it for all users.
  5. Skill Up:  AI is here and isn’t going away. The tools and mechanisms you use will increasingly leverage AI. Your team’s preparation and understanding will directly impact your success and security.
  6. Identify Use Cases: Identify one or two use cases for using AI in your security strategy. Start piloting processes to support your growing skill sets.
  7. Speak to Your Partners: Your product and services partners need to deal with the same issues you are, but likely at scale for their areas of expertise. Discuss their roadmaps, how they will leverage AI to improve the services they provide you, and how they are securing against the new threats AI has introduced.
  8. Don’t Abandon the Human Element: This one is the most critical of all the recommendations. Don’t rely solely on AI now or in the foreseeable future. AI is an excellent tool; it can accelerate your capabilities, reduce your workload, and provide you with previously unavailable capabilities. However, AI is just another tool in your arsenal; you need your professionals to contextualize, validate, control, and administer your AI tools and ensure that the risks of these tools’ output are minimized, and the value provided is maximized.

A Few Ways Bridge Security Advisors Leverages AI in Our Delivery

BSA’s use of AI continues to evolve. Our duty of care requires us to consider every step along our journey carefully. We have adopted AI in several ways and expect to broaden our use of AI continually. A few areas we are leveraging AI now include:

  1. Integration into our vCISO and Compliance Portal: Our vCISO and compliance portal leverages AI to select, refine, and order our inquiries into a customer’s security and privacy postures. We start our efforts with onboarding inquiries about an organization’s profile, services, architecture, compliance and security objectives, and operations. From this information, our portal refines examinations that apply directly to our specific customer and does not waste time with unnecessary or unproductive inquiries. The tool adds and removes searches based on ongoing answers throughout the discovery process. A human vCISO advisor conducts all of this to ensure proper context and value.
  2. Assisted Access Reviews and Access Risk Scoring: Our attestation services leverage AI to evaluate identity inputs such as anomalous access group memberships and toxic combinations to inform access reviews better.
  3.  Better Findings and Documentation: Our findings are tailored to each client and written by advisors for our clients. We use AI to conduct research, optimize our writing, and assist in reviews. All findings go through at least two human reviews, even when supplemented by AI.


In conclusion, the impact of AI on cybersecurity is significant and multifaceted, bringing both advantages and challenges. As we move forward into 2024 and beyond, it is essential to carefully assess the risks and benefits that AI can bring to your organization and implement strategies to mitigate those risks. At Bridge Security Advisors, we have learned valuable lessons from our experiences with AI in cybersecurity and are committed to helping our clients navigate this complex landscape. With careful consideration and attention to best practices, we can leverage the power of AI to enhance our security defenses while minimizing the risks.

Ready To Get Started?
Contact Us!

Get a free personalized consultation with one of our experienced partners