
Generative AI: Changing the Cybersecurity Game

October 05, 2024
5 min read

"I'm particularly worried that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, they could be used for offensive cyberattacks." -Sam Altman, CEO, OpenAI


October is Cybersecurity Awareness Month, and this year, there's a new player in the game - Generative AI (GenAI). While GenAI has been making waves in content creation, marketing, and customer service, what is its impact on cybersecurity? Well - it’s complicated… It’s not just a tool for innovation and growth - it’s also opening the door to some serious new risks.

So, how exactly is GenAI shaking up the world of cybersecurity?

Why GenAI Should Be a Cybersecurity Concern
GenAI’s power to create new data - whether text, images, voice, or code - is both exciting AND concerning. While it’s great for creative problem-solving, that same power creates some unique challenges for cybersecurity.

1. GenAI as a Tool for Cyber Attackers
I’m always talking about GenAI as a game-changing tool for business, but let’s not forget that the bad actors are getting in on the action, too. They’re using it to automate and scale their attacks in ways that are faster, smarter, and a lot harder to defend against.

Here are some key AI-associated cybersecurity threats:

  • AI-Generated Phishing Scams: Forget the poorly written phishing emails we’ve all learned to spot. GenAI enables attackers to craft highly personalized, context-aware emails that perfectly mimic your trusted contacts. These look legit enough to fool even the most cautious employee. So, those “spidey senses” we’ve come to rely on to catch when something feels off? They’ll need to level up too.

  • Deepfakes: We used to worry about "executive impersonation" through a form of business email compromise (BEC), where the attacker would impersonate a high-ranking executive, typically the CEO, to manipulate employees into taking certain actions. Although convincing, these email-based impersonations have become easier and easier for organizations to detect and train against.

    Fast forward - GenAI can now produce eerily convincing ‘deepfake’ video and audio, letting attackers impersonate CEOs or other high-ranking officials in a far more sophisticated way. Imagine getting a video from your CEO instructing you to transfer funds, only to find out later it was a deepfake. Traditional security measures? They’re not ready for this.

  • Automated Exploit Generation: GenAI can be used to automatically create new types of malware or find vulnerabilities faster than a human hacker could even dream of. This is bad news for defenders because it makes it nearly impossible to stay ahead of the game.

2. The Rise of AI-Driven Social Engineering
Social engineering attacks, like phishing, rely on manipulating people to give up sensitive info. GenAI takes this to the next level by creating ultra-convincing fake identities, messages, and even entire conversations.

GenAI can analyze data to create personalized communications that feel so real it’s scary. These aren’t the simple tricks we’ve seen before. They’re exploiting behavioral patterns, preferences, and relationships to craft the perfect trap. Whether you’re a business or a consumer, you must be much more skeptical of unfamiliar contacts or those that seem familiar but feel slightly “off.”

3. GenAI’s Unintended Consequences for Cybersecurity
GenAI doesn’t just pose direct risks from attackers - it can unintentionally create new vulnerabilities within your organization, too:

  • Data Leaks: GenAI models are trained on massive datasets, which can sometimes include sensitive or proprietary information. If not handled carefully, these models can leak confidential data. And let’s be honest - sometimes employees might accidentally store or share sensitive data using GenAI tools without the right governance in place (see the sketch after this list).

  • Model Poisoning: Attackers can feed bad data into GenAI models, causing the AI to make poor decisions or open up vulnerabilities that wouldn’t exist otherwise. This is a nightmare for anyone relying on AI without tight security protocols.
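
To make the data-leak point above concrete, here’s a minimal, hypothetical sketch of one governance guardrail: redacting obviously sensitive patterns before a prompt ever leaves your organization for an external GenAI tool. The patterns and function names are illustrative assumptions, not a full data-loss-prevention solution.

```python
import re

# Illustrative patterns only - a real governance/DLP control would cover far more
# (customer IDs, source code, contract text, named entities, etc.).
REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obviously sensitive substrings before the text leaves the org."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this: contact jane.doe@acme.com, card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # -> Summarize this: contact [EMAIL REDACTED], card [CARD REDACTED].
```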

Other Forms of AI in Cybersecurity
While GenAI is grabbing all the headlines, other forms of AI also play crucial roles in cybersecurity - both defense and attack.

1. Machine Learning (ML) for Defense
Machine learning is already a key player in cybersecurity. ML algorithms can analyze massive amounts of data - network traffic, user behavior, system logs, configuration changes - to detect anomalies and potential threats.

ML-driven systems can:

  • Spot Patterns: ML detects suspicious behavior like unusual login times or abnormal data transfers.

  • Predict Threats: By analyzing historical data, ML can predict future cyber threats and allow companies to patch vulnerabilities before they’re exploited.

  • Automate Responses: When a threat is detected, ML systems can take immediate action - like quarantining devices or blocking IP addresses - without needing human intervention (a simple sketch of this follows below).
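
To make “spot patterns” and “automate responses” tangible, here’s a minimal sketch using scikit-learn’s IsolationForest, a common unsupervised anomaly detector. The features, thresholds, and block_ip hook are illustrative assumptions - a real deployment would use far richer telemetry and keep an analyst in the loop.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, MB_transferred, failed_attempts].
# Real systems draw on network traffic, system logs, configuration changes, etc.
normal_logins = np.array([
    [9, 12, 0], [10, 8, 1], [11, 15, 0], [14, 20, 0], [16, 10, 0],
    [9, 14, 0], [13, 9, 1], [15, 18, 0], [10, 11, 0], [17, 16, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

def block_ip(ip: str) -> None:
    # Hypothetical hook into a firewall or SOAR playbook.
    print(f"Blocking {ip} pending analyst review")

new_events = {
    "10.0.0.12": [10, 13, 0],     # business as usual
    "198.51.100.7": [3, 900, 6],  # 3 a.m. login, huge transfer, repeated failures
}

for ip, features in new_events.items():
    verdict = model.predict([features])[0]  # 1 = normal, -1 = anomaly
    if verdict == -1:
        block_ip(ip)
```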

But ML systems aren’t invincible. Like GenAI, ML models can be targeted by attackers using adversarial data to mislead the system and misclassify threats. Bottom line: AI-driven defense is great, but it still needs human oversight.

2. AI-Enhanced Malware
On the flip side, attackers are using AI to supercharge malware creation:

  • Evading Detection: AI helps malware constantly evolve and change its signature, making it harder for traditional antivirus tools to detect.

  • Increasing Efficiency: AI can speed up the reconnaissance phase of an attack, scanning networks for vulnerabilities much faster than human hackers could.

AI Governance: The Key to Securing Your AI-Driven Future
As businesses continue adopting AI, including GenAI, the importance of AI governance can’t be overstated. AI governance ensures that AI is used responsibly, ethically, and securely - without it, your business is exposed to new vulnerabilities that AI-driven threats can exploit.

The NIST AI Risk Management Framework emphasizes that AI brings unique risks to data integrity, system vulnerabilities, and ethical decision-making. To safely implement AI - whether it’s GenAI or ML solutions - your business needs a solid governance structure that ensures proper oversight and accountability.
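
As one hedged illustration of what “oversight and accountability” can look like day to day, here’s a hypothetical sketch of an AI system inventory record loosely organized around the NIST AI RMF functions (Govern, Map, Measure, Manage). The field names and gating rules are assumptions for illustration, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # Govern: who is accountable for this AI use case?
    name: str
    business_owner: str
    # Map: what data and context does it touch?
    data_sensitivity: str            # e.g. "public", "internal", "confidential"
    uses_generative_ai: bool
    # Measure: has the risk been assessed?
    risk_review_completed: bool = False
    # Manage: is there an incident / rollback plan?
    incident_response_plan: bool = False
    open_issues: list = field(default_factory=list)

def approve_for_production(record: AISystemRecord) -> bool:
    """Toy gate: block deployment until basic governance boxes are ticked."""
    if record.data_sensitivity == "confidential" and not record.risk_review_completed:
        record.open_issues.append("Confidential data requires a completed risk review")
    if record.uses_generative_ai and not record.incident_response_plan:
        record.open_issues.append("GenAI use requires an incident response plan")
    return not record.open_issues

chatbot = AISystemRecord("Support chatbot", "CX team", "confidential", True)
print(approve_for_production(chatbot), chatbot.open_issues)
```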

Conclusion: GenAI Isn’t Just a Buzzword - It’s a Security Concern
Generative AI isn’t just changing marketing and content creation - it’s becoming a powerful tool for cyber attackers, too. With its ability to generate highly personalized attacks, deepfakes, and automated malware, GenAI is a major cybersecurity concern.

And it’s not just GenAI. Machine learning and other AI technologies are playing critical roles in both defending and attacking systems. The future of cybersecurity is deeply intertwined with AI, and businesses that want to stay ahead of the curve need to focus on AI governance and risk management - utilizing AI solutions to protect against bad actors who embrace AI to do harm.

So, is your business ready for AI-powered security threats? Visit us at AlterBridge Strategies to learn how we can help you integrate AI governance into your cybersecurity plan and future-proof your business.

Kristi Perdue, CEO, CAIO, AlterBridge Strategies
