Cyber Security Awareness Month - AI-Powered Phishing
16 October 2025 | Posted in Online Safety
This week, we're focusing on one of the fastest-growing threats: AI-powered social engineering and phishing.
Why AI-Powered Phishing is on the Rise
Traditional phishing attempts were often easy to spot due to poor grammar, awkward phrasing, or generic content. Generative AI tools have changed this dramatically:
- Scale and Realism: AI can generate thousands of perfectly worded emails, texts, and voice scripts in seconds, removing the linguistic red flags we used to rely on.
- Hyper-Personalisation: AI models can quickly scrape public data to craft highly personalised attacks (spear phishing) that reference specific colleagues, projects, or events, making the messages feel legitimate.
- Voice Cloning: AI can mimic a person’s voice from just a few seconds of audio, making "urgent" phone calls or voicemails from a manager or colleague extremely convincing.
Warning Signs of AI-Generated Social Engineering
Since the grammar is now flawless, we need to focus on context and behaviour instead. Watch out for these signs:
- Unusual Urgency: Any request demanding immediate action—especially involving money, sensitive data, or changing login credentials—that bypasses standard protocols.
- Hyper-Specific Detail, Low Context: The message mentions precise details about you or a project but feels slightly "off" in tone or asks you to do something outside the norm.
- Unusual Communication Channel: Receiving a sensitive request from a leader via an unexpected channel, like a personal text message or an unfamiliar third-party app.
- Suspicious Attachments/Links: Even if the email seems legitimate, treat every link and attachment with extreme caution. Hover over links to check the real destination before clicking, and never open attachments you weren't expecting.
How to Stay Secure While Using AI Programs
As we integrate AI tools into our workflows, protecting ourselves is a shared responsibility:
- Never Input Sensitive Data (Data Leakage): When using public AI tools (such as large language models), never enter sensitive, proprietary, or confidential information. Treat anything you input as if it could be made public: only enter data that, if leaked, would cause no harm to the privacy or reputation of any staff member or student.
- Verify AI Output (Accuracy & Bias): Do not blindly trust facts, code, or financial advice generated by AI. Always verify critical information from trusted sources before acting on it or sharing it.
- Use Approved Tools Only (Shadow IT): Only use AI programs and platforms that have been officially vetted and approved by TCE, such as <insert names here>. Avoid unauthorised tools that may expose our data.
Following these defensive steps significantly improves your security posture against these evolving threats.