
Generative AI
This course offers a hands-on exploration of modern Generative AI, guiding participants through foundational and advanced concepts that power today’s intelligent systems. Beginning with neural network fundamentals and the semantic representation of words, trainees will progress to understanding transformer architectures, large language models (LLMs), and how these models are enhanced through Retrieval-Augmented Generation (RAG) and agentic AI systems. The course also introduces essential topics like evaluation methodologies (Evals), Model Context Protocol (MCP) for system integration, and the emerging practice of vibe coding—rapid, AI-assisted development of applications on top of LLMs.
Through a pick-and-choose, interest-driven approach, participants will gain both conceptual understanding and practical skills to build, evaluate, and deploy AI-powered solutions.
Curriculum
Generative AI
- Neural network basics
- Semantic representation of words
- Transformers and variations
- LLMs
- RAG (Retrieval-Augmented Generation)
- Agents and Agentic AI
- Evals
- MCP (Model Context Protocol)
- Vibe coding
- Build your own app(s) on top of LLMs
Neural network basics
- Structure of artificial neurons and how they mimic biological neurons.
- Feedforward networks: inputs, hidden layers, outputs.
- Activation functions (ReLU, sigmoid, tanh) and their role in nonlinearity.
- Loss functions and optimization (e.g., cross-entropy, mean squared error).
- Backpropagation and gradient descent explained.
- Overfitting, underfitting, and regularization techniques (dropout, L2).
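To make these ideas concrete, here is a minimal sketch of a single artificial neuron with a sigmoid activation, trained by gradient descent on a squared-error loss. The input, target, and learning rate are illustrative values chosen for the demo, not course material:

```python
import math

def sigmoid(x):
    """Sigmoid activation: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a nonlinear activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Gradient descent on one weight, minimizing (y - target)^2 for a
# toy one-input neuron (all numbers here are illustrative).
x, target = 1.0, 1.0
w, b, lr = 0.5, 0.0, 0.1
for _ in range(100):
    y = neuron([x], [w], b)
    # Chain rule: dLoss/dw = dLoss/dy * dy/dz * dz/dw
    grad = 2 * (y - target) * y * (1 - y) * x
    w -= lr * grad

print(round(neuron([x], [w], b), 3))  # output has moved toward the target
```

The gradient expression is backpropagation in miniature: the loss derivative is propagated through the activation back to the weight.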
Semantic representation of words
- One-hot encoding and its limitations.
- Word embeddings: Word2Vec.
- Semantic similarity, analogy tasks, and vector arithmetic.
- Vector stores.
- Subword tokenization (Byte Pair Encoding).
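The key limitation of one-hot encoding can be shown in a few lines: every pair of distinct words is equally dissimilar, whereas dense vectors can encode similarity. The three-word vocabulary and the hand-picked "embeddings" below are illustrative assumptions, not vectors learned by Word2Vec:

```python
import math

def one_hot(word, vocab):
    """One-hot encoding: a sparse vector with a single 1."""
    return [1.0 if w == word else 0.0 for w in vocab]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

vocab = ["king", "queen", "apple"]
# One-hot: "king" is no more similar to "queen" than to "apple".
print(cosine(one_hot("king", vocab), one_hot("queen", vocab)))  # 0.0

# Hand-picked toy dense vectors (illustrative, not learned):
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}
# Dense embeddings capture that "king" and "queen" are related.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))
```

Vector stores index exactly these kinds of dense vectors so that nearest neighbors can be retrieved efficiently at scale.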
Transformers and variations
- Self-attention mechanism and why it replaced RNNs/CNNs.
- Multi-head attention and positional encoding.
- Encoder-decoder vs. decoder-only architectures.
- Efficiency improvements: sparse attention, FlashAttention.
- Scaling laws and training challenges.
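The self-attention mechanism listed above can be sketched in plain Python: each output position is a weighted mix of all value vectors, with weights derived from query-key similarity. The two-token toy sequence is an illustrative assumption (a real transformer adds learned projections, multiple heads, and positional encoding):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: for each query, score all keys,
    softmax the scores, and mix the value vectors by those weights."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy 2-token sequence with 2-dimensional vectors (illustrative numbers).
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
print(self_attention(Q, K, V))  # each token attends mostly to itself
```

Because attention scores every pair of positions at once, the whole sequence is processed in parallel, which is the key advantage over the sequential processing of RNNs.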
Large Language Models
- How do LLMs work?
- Training pipelines: pretraining, fine-tuning, RLHF.
- Emergent abilities and scaling behavior.
- Instruction tuning and alignment with human intent.
- Safety issues: hallucinations, bias, prompt injection.
- Open-source vs. proprietary LLM ecosystems.
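At inference time, an LLM repeatedly answers one question: given the tokens so far, what comes next? A minimal sketch of one decoding step, including the temperature knob, looks like this. The three-token vocabulary and logit values are illustrative assumptions, not output from a real model:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """One autoregressive decoding step: scale logits by temperature,
    softmax into a probability distribution, and sample a token id.
    Lower temperature sharpens the distribution (more deterministic)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy logits over a 3-token vocabulary (illustrative numbers).
logits = [2.0, 1.0, 0.1]
random.seed(0)
print(sample_next_token(logits, temperature=0.5))
```

Generation is just this step in a loop: sample a token, append it to the context, and feed the longer context back into the model.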
Retrieval-Augmented Generation (RAG)
- Motivation: grounding LLMs in external knowledge.
- Embeddings and vector databases (e.g., FAISS, Pinecone).
- Document chunking, indexing, and retrieval strategies.
- Fusion techniques: retrieve-then-read, rerankers, hybrid search.
- Applications: search, customer support, research.
- Limitations: retrieval latency, outdated or irrelevant sources.
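The retrieval half of retrieve-then-read can be sketched with a toy bag-of-words "embedding"; this stands in for the learned embedding model and vector database (e.g., FAISS) that a real RAG pipeline would use, and the two sample chunks are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts. A stand-in assumption for a
    learned embedding model in a real RAG system."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Retrieve-then-read, step one: rank document chunks by their
    similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The course covers retrieval augmented generation.",
    "Sessions run on Saturday mornings.",
]
top = retrieve("what is retrieval augmented generation", chunks)
print(top[0])
```

Step two, not shown, is the "read": the retrieved chunks are placed into the LLM's prompt so its answer is grounded in them.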
Agents and Agentic AI
- Definition: systems that perceive, reason, and act autonomously.
- Planning and tool use (API calling, browsing, DB queries).
- Memory management (episodic vs. semantic memory).
- Multi-agent collaboration and swarm intelligence.
- Challenges: safety, control, and unpredictability.
- Real-world applications: coding assistants, research copilots.
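The perceive-reason-act loop can be sketched as follows. In a real agent an LLM chooses the action; the keyword-matching "policy", the two stub tools, and the task format below are all stand-in assumptions for illustration:

```python
def agent_loop(task, tools, max_steps=3):
    """Minimal agent loop sketch: pick a tool, call it, record the
    observation in episodic memory, and repeat until no tool applies.
    A real agent would use an LLM to reason about the next action."""
    memory = []  # episodic memory: (action, observation) pairs
    for _ in range(max_steps):
        used = {action for action, _ in memory}
        # Trivial stand-in policy: first unused tool named in the task.
        action = next((n for n in tools if n in task and n not in used), None)
        if action is None:
            break  # nothing left to do
        memory.append((action, tools[action](task)))
    return memory

tools = {
    # Stub tools for the demo; eval is sandboxed to bare arithmetic here,
    # but real tool use needs far more careful sandboxing.
    "calculator": lambda t: eval(t.split(":")[1], {"__builtins__": {}}),
    "search": lambda t: "stub search result",
}
print(agent_loop("calculator: 6 * 7", tools))
```

Even this toy shows the core agentic pattern: the system decides *which* capability to invoke rather than following a fixed script, which is also where the control and unpredictability challenges come from.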
Evals
- What are evals? Why should you care?
- Importance of systematic evaluation in LLM development.
- Benchmarks: MMLU, BIG-bench, HELM, HumanEval.
- Automatic metrics vs. human evaluation.
- Stress testing.
- Continuous evaluation pipelines.
- Making good choices with respect to evals.
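At its core, an eval is a labelled test set plus a scoring rule. A minimal harness might look like this; the stand-in "model" and the two test cases are illustrative assumptions (a real eval would call an LLM and use a benchmark suite):

```python
def run_eval(model, cases):
    """Minimal eval harness sketch: score a model function against
    labelled (input, expected) cases; report accuracy and failures."""
    failures = []
    for question, expected in cases:
        answer = model(question)
        if answer != expected:
            failures.append((question, answer, expected))
    accuracy = 1 - len(failures) / len(cases)
    return accuracy, failures

# Stand-in 'model' for the demo (illustrative, not a real LLM call).
model = lambda q: "4" if q == "2+2" else "unknown"
cases = [("2+2", "4"), ("3+3", "6")]
accuracy, failures = run_eval(model, cases)
print(accuracy, failures)
```

A continuous evaluation pipeline is essentially this harness run automatically on every model or prompt change, with the failure list driving the next iteration.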
MCP (Model Context Protocol)
- What is MCP? Why should you care?
- Protocol structure: requests, responses, metadata.
- Context window management.
- Integration with external systems (databases, APIs, memory stores).
- Security considerations.
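MCP frames its messages using JSON-RPC 2.0: each request carries a method name, parameters, and an id that the matching response echoes back. A sketch of building one such request (field values are illustrative; consult the MCP specification for the full message schema):

```python
import json

def mcp_request(method, params, req_id):
    """Build an MCP request envelope. MCP uses JSON-RPC 2.0 framing,
    so every request has a jsonrpc version, an id for matching the
    response, a method, and a params object."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }

# Example: ask an MCP server which tools it exposes.
req = mcp_request("tools/list", {}, req_id=1)
print(json.dumps(req))
```

The same envelope shape carries tool calls, resource reads, and their responses, which is what lets one protocol integrate an LLM with many external systems.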
Vibe coding
- What is vibe coding? Why should you care?
- Contrast with traditional software engineering practices.
- Rapid prototyping and iteration cycles.
- Leveraging pair programming with AI assistants.
- Where are we today? How is it different from last year?
- Common pitfalls: code maintainability, over-scaffolding.
- Finding the right balance.
Pick-and-choose
- Covering all of these topics in depth would require far more time than we have.
- We will pick and choose based on trainee interest.
Learning Outcomes
By the end of this course, participants will be able to:
- Explain the core principles of neural networks and how they enable generative AI models.
- Understand semantic representations and how text meaning is encoded using embeddings and vector databases.
- Describe and compare transformer architectures, including encoder-decoder and decoder-only models.
- Analyze the training and fine-tuning process of Large Language Models (LLMs), including RLHF and instruction tuning.
- Implement and experiment with Retrieval-Augmented Generation (RAG) pipelines for grounded AI responses.
- Design and deploy autonomous agents that can plan, use tools, and manage memory effectively.
- Evaluate model performance using standard benchmarks and evals frameworks.
- Integrate AI models with external tools and systems via the Model Context Protocol (MCP).
- Apply vibe coding techniques to rapidly prototype applications built on top of LLMs.
Who Should Enroll
This course is ideal for:
- Developers, data scientists, and researchers seeking to deepen their understanding of Generative AI systems.
- Educators and technical leads looking to integrate AI tools and methods into teaching or product development.
- AI enthusiasts and early adopters interested in building practical applications powered by LLMs.
- Industry professionals and entrepreneurs aiming to explore agentic AI, RAG systems, or AI-assisted development workflows.
Prerequisites
To get the most out of the course, participants should have:
- A basic understanding of Python programming and at least conceptual familiarity with libraries such as NumPy, PyTorch, or TensorFlow.
- Familiarity with machine learning fundamentals, such as supervised learning and model evaluation.
- An interest in exploring AI-driven applications and emerging tools in the Generative AI ecosystem.
Pricing & Schedule
Pricing: $2,000 for the full course ($250 per session)
| Schedule | Sessions | Register |
|---|---|---|
| Saturdays 10:30 AM - 12:00 PM | 8 | SOLD OUT |
About AIClub’s Professional Development Program (AIClubPro)
AIClub is an education technology company focused on Artificial Intelligence Literacy. We educate individuals from students to professionals, covering all aspects of Artificial Intelligence and related technologies at appropriate depth from introductory to advanced, depending on the individual's prior knowledge and future interests. Our AIClubPro programs have educated professionals from many industries, with roles ranging from Engineering and Operations to Product Management, Marketing and Executive Leadership.
The AIClubPro program is led by three founders with exceptional depth and expertise in both industry and academia. Together, our leadership team has over 40 years of professional experience in technology, has served in executive roles at public companies and startups, has founded four companies, and holds over 200 patents and more than 100 research publications.
