Course Syllabus
By the end of this course, you will:
- Be proficient in developing production-ready LLM-based applications from day zero.
- Have a clear understanding of LLM architecture and pipeline.
- Be able to perform prompt engineering to best use generative AI tools such as ChatGPT.
- Create an open-source project on a real-time stream of data or static data.
Module 1 – Basics of LLMs
Topics
- What generative AI is and how it is different
- Understanding LLMs
- Advantages and Common Industry Applications of LLMs
- Bonus section: Google Gemini and Multimodal LLMs
Module 2 – Word Vectors
Topics
- What are word vectors and word-vector relationships (see the sketch after this list)
- Role of context in LLMs
- Transforming vectors in LLM responses
- Bonus Resource: Overview of Transformers Architecture and Vision Transformers
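A minimal sketch of the word-vector relationships this module covers. The three-dimensional vectors and similarity values below are made-up toy numbers, not real embeddings; real models use vectors with hundreds or thousands of dimensions.

```python
import numpy as np

# Toy 3-dimensional "word vectors" (illustrative values, not real embeddings).
vectors = {
    "king":  np.array([0.9, 0.80, 0.10]),
    "queen": np.array([0.9, 0.75, 0.15]),
    "apple": np.array([0.1, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 means similar direction, near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with related meanings point in similar directions, so their similarity is high.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine_similarity(vectors["king"], vectors["apple"]))  # noticeably lower
```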
Module 3 – Prompt Engineering
Topics
- Introduction and in-context learning
- Best practices to follow: Few-Shot Prompting and more (see the sketch after this list)
- Token Limits
- Prompt Engineering Peer Reviewed Exercise
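A minimal sketch of few-shot prompting and token counting: the prompt embeds labelled examples so the model can infer the task from context (in-context learning), and the prompt's token count is checked against the model's context window. The sentiment-classification task and the `cl100k_base` encoding are illustrative assumptions, not course-mandated choices; token counting uses the tiktoken library (`pip install tiktoken`).

```python
import tiktoken

# Few-shot prompt: labelled examples precede the new input (illustrative task).
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "The screen cracked within a week." -> Negative
Review: "Setup was quick and painless." -> """

# Count tokens to stay within the model's context window (token limit).
encoding = tiktoken.get_encoding("cl100k_base")
token_count = len(encoding.encode(few_shot_prompt))
print(f"Prompt uses {token_count} tokens")

# The prompt would then be sent to an LLM of your choice; the model is
# expected to continue the pattern and answer "Positive".
```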
Module 4 – RAG and LLM Architecture
Topics
- Introduction to RAG
- LLM Architecture Used by Enterprises
- Architecture Diagram and LLM Pipeline
- RAG vs Fine-Tuning and Prompt Engineering
- Key Benefits of RAG for Real-time Applications
- Similarity Search for Efficient Information Retrieval (see the sketch after this list)
- Bonus Resource: Use of LSH + kNN and Incremental Indexing
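A minimal, self-contained sketch of the similarity-search step inside a RAG pipeline: documents are turned into vectors, the query is embedded the same way, and the top-k most similar documents are retrieved to ground the LLM's answer. The sample documents are illustrative, and simple bag-of-words counts stand in for the dense embeddings (and the LSH/kNN indexing) a real pipeline would use.

```python
import numpy as np

documents = [
    "Streaming data pipelines keep document indexes fresh in real time.",
    "Retrieval-Augmented Generation grounds LLM answers in your own documents.",
    "Few-shot prompting adds labelled examples to the prompt.",
]

# Shared vocabulary built from the document collection.
vocabulary = sorted({w.lower().strip(".,?!") for doc in documents for w in doc.split()})

def embed(text):
    """Bag-of-words vector over the shared vocabulary (stand-in for a real embedding model)."""
    words = [w.lower().strip(".,?!") for w in text.split()]
    return np.array([words.count(term) for term in vocabulary], dtype=float)

def top_k(query, k=2):
    """Return the k documents with the highest cosine similarity to the query."""
    q = embed(query)
    scored = []
    for doc in documents:
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0
        scored.append((float(np.dot(q, d) / denom), doc))
    return sorted(scored, reverse=True)[:k]

# The retrieved documents would be prepended to the prompt before calling the LLM.
for score, doc in top_k("How does RAG ground LLM answers?"):
    print(f"{score:.2f}  {doc}")
```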
Module 5 – Hands-on Project
Topics
- Installing Dependencies and Prerequisites
- Building a Dropbox RAG App using open-source tools
- Building a Real-time Discounted-Products Fetcher for Amazon Users
- Problem Statements for Projects
- Project Submission