
LLM 101: Foundations and Practical Application
Audience – Basic understanding of Python and AI/ML concepts
The course is eligible for 0.4 CEUs, equivalent to 4 PDHs. Upon successful completion of the course and the evaluation form, participants will be eligible to receive a digital certificate at no additional cost.
Who Should Attend: Computer engineers, data scientists, data engineers, software developers, business and industry executives and leaders, technical managers, and similar professionals.
Attendees are welcome to bring their own laptops for the hands-on activities in Part 2 of the course.
There will be a short lunch break (pizza provided) for questions and networking.
On-campus parking on Saturdays is free of charge. You can park in the nearby Commons Garage.
Further information is at:
https://ewh.ieee.org/r2/baltimore/continuing_education/Web_Ad_LLM_101_2025.htm
Speaker(s): Swati Tyagi
Agenda:
Total Duration: 4.0 Hours
Part 1: Theoretical Foundations (1.5 Hours)
1. Introduction to Large Language Models (30 minutes)
– Topics Covered:
– Evolution from AI to Generative AI
– Understanding Neural Networks and Transformers
– Overview of Foundation Models (e.g., GPT, Claude, Gemini)
– Learning Outcome: Grasp the basic architecture and functioning of LLMs.
2. Prompt Engineering Techniques (30 minutes)
– Topics Covered:
– Crafting Effective Prompts
– Zero-shot and Few-shot Learning
– Chain-of-Thought and ReAct Techniques
– Learning Outcome: Develop skills to interact effectively with LLMs through prompt design.
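The prompting techniques listed above can be sketched in a few lines. The sentiment-classification task, the example pairs, and the prompt wording below are hypothetical placeholders chosen only to illustrate the pattern; the course may use different tasks and templates.

```python
# Few-shot prompting: prepend labeled examples so the model infers the task.
def few_shot_prompt(examples, query):
    """Build a few-shot prompt from (input, label) example pairs."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")  # the model completes this line
    return "\n\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]
prompt = few_shot_prompt(examples, "Shipping was fast and setup was easy.")
print(prompt)

# A zero-shot prompt omits the examples entirely, while a common
# chain-of-thought variant appends an instruction to reason stepwise:
cot_prompt = "Is this review positive or negative? Let's think step by step."
```

Few-shot and chain-of-thought prompts are plain strings; the engineering lies in choosing examples and instructions that steer the model's completion.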
3. Retrieval-Augmented Generation (RAG) Concepts (30 minutes)
– Topics Covered:
– Introduction to RAG and its Importance
– Components: Retrieval Mechanism and Generation Model
– Use Cases and Benefits
– Learning Outcome: Understand how RAG enhances LLM capabilities by integrating external knowledge.
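The "augmentation" step that connects the two RAG components can be sketched as a prompt-assembly function: retrieved passages are stitched into the prompt so the generation model answers from that context. The passage texts and the template below are hypothetical, not material from the course.

```python
# Minimal sketch of RAG prompt augmentation: retrieved passages become
# numbered context that the language model is instructed to rely on.
def build_rag_prompt(question, passages):
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

passages = [  # in a real system these come from the retrieval mechanism
    "CEUs are continuing education units awarded for professional courses.",
    "One CEU corresponds to ten contact hours of instruction.",
]
p = build_rag_prompt("How many contact hours is one CEU?", passages)
print(p)
```

The resulting string would be sent to any LLM; grounding the answer in retrieved text is what lets RAG use knowledge the model was never trained on.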
Part 2: Hands-On Application in Google Colab (2.5 Hours)
1. Setting Up the Environment (30 minutes)
– Activities:
– Accessing Google Colab
– Installing Necessary Libraries (e.g., LangChain, FAISS)
– Loading Sample Data (e.g., PDF documents)
– Learning Outcome: Prepare the workspace for building a RAG application.
2. Building a RAG Pipeline (60 minutes)
– Activities:
– Text Extraction and Chunking
– Creating Embeddings and Storing in Vector Database
– Implementing Retrieval Mechanism
– Integrating with a Language Model for Response Generation
– Learning Outcome: Construct a functional RAG system capable of answering queries based on provided documents.
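The pipeline steps above (chunk, embed, store, retrieve) can be previewed in pure Python. In the session a real embedding model and a vector store such as FAISS would be used; the bag-of-words "embedding" below is a stand-in so the end-to-end flow is runnable without any installs, and the sample document text is invented for illustration.

```python
import math
from collections import Counter

def chunk(text, size=40, overlap=10):
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy embedding: lowercase word counts (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = "RAG retrieves relevant chunks. The chunks ground the model's answer."
chunks = chunk(doc, size=30, overlap=5)
top = retrieve("which chunks are relevant", chunks)
print(top)
```

Swapping `embed` for a neural embedding model and the sorted list for a FAISS index gives the production-shaped version built in class.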
3. Enhancing the RAG Application (30 minutes)
– Activities:
– Implementing Reranking Techniques (e.g., BM25)
– Testing with Various Queries
– Discussing Potential Improvements and Extensions
– Learning Outcome: Refine the RAG system for better accuracy and explore avenues for further development.
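The BM25 scoring used for rescoring can also be written out directly. This is a standard Okapi BM25 formulation with the usual k1/b parameters; in practice a library such as rank_bm25 would be used, and the three example documents are made up for illustration.

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    q_terms = query.lower().split()
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}  # doc freq
    scores = []
    for d in tokenized:
        s = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            tf = d.count(t)
            # term-frequency saturation (k1) and length normalization (b)
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["retrieval augmented generation", "prompt engineering basics",
        "vector retrieval with embeddings"]
s = bm25_scores("retrieval embeddings", docs)
print(s)
```

Because BM25 rewards exact term matches while embeddings capture semantic similarity, combining the two scores is a common way to sharpen which chunks reach the language model.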
4. Q&A and Discussion (30 minutes)
– Activities:
– Addressing Participant Questions
– Sharing Best Practices
– Discussing Real-World Applications
– Learning Outcome: Consolidate learning and clarify any doubts regarding LLMs and RAG.
Room: 233, Bldg: ILSB, UMBC, 1000 Hilltop Circle, Baltimore, Maryland, United States, 21250