Building Secure Code with LLMs: A Hands-On Prompt-to-Verification Workflow
Large Language Models (LLMs) are increasingly used by developers and students to generate code, offering significant gains in productivity and accessibility. However, LLM-generated code often introduces subtle yet critical security vulnerabilities, particularly in domains such as cryptography and secure software development. This webinar presents a practical and systematic approach to secure code generation using LLMs, focusing on transforming raw model outputs into verifiably safe implementations.
The session introduces a structured workflow, Prompt → Harden → Verify, that guides participants through crafting security-aware prompts with explicit constraints, enforcing correctness through unit testing, and applying lightweight automated security checks prior to deployment. Through hands-on demonstrations, attendees will learn how to integrate secure coding practices directly into the LLM-assisted development pipeline.
By the end of the webinar, participants will gain a repeatable methodology for reducing vulnerabilities in AI-generated code, along with ready-to-use templates and a security validation checklist. This work aims to bridge the gap between AI-assisted programming and secure software engineering, enabling practitioners to harness LLM capabilities while maintaining strong security guarantees.
Speaker(s): Dr. Mahmoud Abouyoussef
Virtual: https://events.vtools.ieee.org/m/553201