Cybersecurity and AI: Securing Public Administration Infrastructure
Scheduled sessions
The New Threat Landscape: Artificial Intelligence is a double-edged sword. While it enables unprecedented automation for public services, it also equips malicious actors with tools to generate highly sophisticated, automated cyberattacks.
Defend Against AI: Learn how to detect and defend against AI-generated phishing, deepfakes designed for social engineering, and automated vulnerability scanning targeting government networks.
Defend With AI: Move beyond traditional rule-based SIEMs. Discover how to use Machine Learning models to analyze network traffic anomalies, automate threat hunting, and accelerate incident response within a Security Operations Center (SOC).
Secure Your AI: Understand the vulnerabilities inherent to ML models (Prompt Injection, Data Poisoning, Model Extraction) and how to secure the AI tools your institution is actively deploying.
Who it’s for: Cybersecurity Experts (CERT/CSIRT), Security Analysts, and Network Administrators defending public sector IT infrastructure.
Skills You Will Learn
Curriculum
Offensive AI: How Attackers Use Machine Learning
- AI-Generated Phishing: The end of grammatical errors and the rise of hyper-personalized spear-phishing
- Deepfakes in Social Engineering: Voice cloning and video manipulation targeting public officials
- Automated Reconnaissance: Using LLMs to map vulnerabilities and generate exploit payloads
- Lab: Analyzing AI-generated malicious payloads vs. traditional payloads
Defensive AI: Enhancing the SOC
- Beyond rules: Using Unsupervised Learning for anomaly detection in network traffic
- AI in SIEM/SOAR: Automating log analysis and alert triage to reduce analyst fatigue
- Behavioral Analytics: Detecting compromised user accounts through ML-based profiling
- Lab: Training a simple anomaly detection model on network traffic logs (PCAP data)
Securing AI Systems (Adversarial Machine Learning)
- Data Poisoning: How attackers compromise model training data
- Model Evasion and Inversion: Bypassing ML-based spam filters and extracting model logic
- LLM Vulnerabilities: Deep dive into Prompt Injection and Jailbreaking internal government chatbots
- Lab: Executing and defending against a Prompt Injection attack on a RAG application
Incident Response and Future Trends
- Using LLMs for Incident Response: Automated forensic analysis and report generation
- Regulatory considerations: Incident reporting in the context of the NIS2 Directive and EU AI Act
- Tabletop Exercise: Responding to a simulated, AI-coordinated ransomware attack on a public agency
Course Day Structure
- Part 1: 09:00–10:30
- Break: 10:30–10:45
- Part 2: 10:45–12:15
- Lunch break: 12:15–13:15
- Part 3: 13:15–15:15
- Break: 15:15–15:30
- Part 4: 15:30–17:30