UCLA’s Chris Mattmann shares how AI is transforming cybersecurity

By Rebecca Kendall, UCLA Newsroom
October is Cybersecurity Awareness Month, a nationwide effort to highlight digital security threats and equip individuals with the tools they need to protect themselves online. One of the biggest concerns is the growing use of artificial intelligence in cyberattacks, especially the rise of deepfakes, which are AI-generated videos, images or audio clips that convincingly mimic real people. Cybercriminals increasingly leverage deepfakes to deceive victims into sharing personal information, clicking on malicious links or sending money and sensitive data.
In this conversation with UCLA Newsroom, Chris Mattmann, UCLA’s chief data and artificial intelligence officer, discusses how AI is transforming cybersecurity and offers practical advice for UC community members interested in AI-driven defense strategies.
How are AI and deepfakes changing the cybersecurity threat landscape in higher ed?
AI and deepfakes add new complexities to cybersecurity threats, especially on campus. A major concern is reputational harm. AI-generated videos or emails can make it seem like someone said or did something they never did. These impersonations can be very convincing, often deceiving individuals into unwittingly providing personal information. This information is then used in identity theft, direct deposit fraud or other cyber extortion tactics.

Deepfakes can also infiltrate daily academic activities. For example, a student might use an AI-created avatar to appear in a Zoom class and earn participation credit without actually attending. This kind of impersonation undermines academic integrity and underscores the importance of digital literacy and detection skills.

As co-chair of UCLA’s Advisory Committee on AI in Teaching and Learning, I work to help faculty and students recognize these threats and raise awareness of how AI can be misused in academic settings.
What are the most common signs of a phishing attempt, and how is AI involved?
Phishing attempts often rely on creating a sense of urgency and pressuring recipients to click links or open attachments without thinking. Spear phishing takes this further by researching the target and crafting personalized messages that feel genuine. AI increases the danger by automating and scaling these attacks across platforms such as email, SMS, phone and social media, generating thousands of customized messages and increasing the chances of success. On the defensive side, AI is also used to detect phishing by analyzing patterns, language and sender behavior in real time.
How can AI-powered social engineering tactics manipulate us?
AI makes social engineering more realistic, scalable and multimodal. Deepfakes and AI-generated messages mimic our trusted contacts with alarming accuracy, exploiting visual and emotional cues. Attackers use AI to analyze our social media, then replicate identities and launch coordinated campaigns across email, SMS and social platforms. This realism leads targets to trust and engage with malicious content, especially when it appears to come from familiar sources such as university leaders, colleagues or friends, and it heightens the sense of urgency that attackers rely on. AI also allows attackers to target multiple vectors simultaneously, increasing the likelihood of success.
Are AI-driven attacks becoming more sophisticated? How can universities prepare their infrastructure to defend against them?
Yes. Distributed denial-of-service (DDoS) attacks, once simple floods of traffic, are now powered by AI, making them faster, stealthier and more difficult to trace. These attacks can overwhelm servers with thousands of requests per second, disrupting services and damaging reputations. Traditional defenses rely on spotting suspicious traffic patterns or filtering by geographic origin, but AI can simulate traffic from many locations worldwide, bypassing those filters and hiding the true source.

To protect against these threats, universities need layered, AI-enhanced infrastructure, including firewalls, traffic-scrubbing tools and real-time anomaly detection systems. Advanced AI-powered networks can proactively detect and block malicious traffic before it ever reaches members of the UC community; for example, as part of the Bruin Connect and Secure program, UCLA has begun a network transformation that will implement many of these advanced capabilities. AI can also help identify new attack methods by learning from past incidents and adjusting defenses accordingly.
How can students and staff learn about and use AI-focused defense strategies?
Cybersecurity education is foundational. AI literacy builds awareness of the risks that come with using these tools, teaches sound judgment, creates a shared vocabulary and empowers individuals to spot and respond to threats. Training helps people understand what to do, whether that’s reporting a phishing attempt or responding during a breach.
Visit UC’s Cybersecurity Awareness Month website for information and resources to promote online safety throughout the year.