AI Fundamentals
The concepts every AI developer needs
Build a solid foundation in AI/LLM concepts. Understand how language models work, what embeddings are, how tools extend capabilities, and why type safety matters for AI applications.
What You’ll Learn
- Understand what happens when you call an LLM API
- Explain tokens, attention, and context windows
- Implement semantic search using embeddings
- Build a tool that an LLM can call
- Design type-safe schemas for AI outputs
Modules
How LLMs Work
Demystify language models by understanding tokens, prediction, attention, and context windows. No PhD required.
30 min
Vectors & Embeddings
Learn how text becomes numbers that capture meaning. Understand embeddings, vector databases, and how they power semantic search and RAG.
35 min
Tools & MCP
Extend LLM capabilities beyond text generation. Learn how tool calling works, build your own tools, and understand the Model Context Protocol.
35 min
Type Safety for AI
Learn to constrain AI outputs with schemas. Master JSON Schema, Zod, and Pydantic to build reliable, type-safe AI applications.
40 min
Concepts Covered
Fundamentals
LLMs process text as tokens — chunks of characters that form the atomic units of input and output, directly affecting pricing and context limits.
10 min
The context window is the maximum amount of text (in tokens) an LLM can 'see' at once, including prompts, history, injected documents, and responses.
12 min
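To make the token and context-window ideas concrete, here is a rough TypeScript sketch of budget checking. The 4-characters-per-token figure is only a common rule of thumb for English text, and the 8,000-token window is an illustrative number; real counts come from the model's own tokenizer, and real limits from the model you use.

```ts
// Rough sketch: estimating whether a prompt fits a context window.
// APPROX_CHARS_PER_TOKEN is a rule of thumb for English text, and the
// 8,000-token window is an illustrative assumption, not any model's limit.
const APPROX_CHARS_PER_TOKEN = 4;
const CONTEXT_WINDOW_TOKENS = 8_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / APPROX_CHARS_PER_TOKEN);
}

function fitsContextWindow(parts: string[], reservedForResponse = 1_000): boolean {
  // Everything counts against the window: system prompt, chat history,
  // injected documents, and the tokens reserved for the model's reply.
  const used = parts.reduce((sum, p) => sum + estimateTokens(p), 0);
  return used + reservedForResponse <= CONTEXT_WINDOW_TOKENS;
}

const systemPrompt = "You are a helpful assistant.";
const retrievedDoc = "a long document injected into the prompt";
console.log(fitsContextWindow([systemPrompt, retrievedDoc]));
```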
Attention is the core mechanism that allows language models to understand how words relate to each other by dynamically focusing on relevant parts of the input.
15 min
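For a feel of that "focusing", here is a toy sketch of scaled dot-product attention weights for a single position; the two-dimensional vectors are hand-picked for illustration, while real models learn high-dimensional queries, keys, and values across many attention heads.

```ts
// Toy sketch of scaled dot-product attention for one query position.
// The vectors are tiny, made-up numbers purely for illustration.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function softmax(scores: number[]): number[] {
  const max = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - max));
  const total = exps.reduce((sum, e) => sum + e, 0);
  return exps.map((e) => e / total);
}

const query = [1, 0];                      // what the current token is "looking for"
const keys = [[1, 0], [0, 1], [0.9, 0.1]]; // what each earlier token "offers"
const scale = Math.sqrt(query.length);

// Higher weight means the model "attends" more to that position.
const weights = softmax(keys.map((k) => dot(query, k) / scale));
console.log(weights); // approximately [0.41, 0.20, 0.38]: most focus on positions 1 and 3
```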
Embeddings convert text into numerical vectors that capture semantic meaning, enabling similarity search, clustering, and the foundation for RAG systems.
15 min
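A minimal sketch of comparing embeddings with cosine similarity; the three-dimensional vectors below are made-up stand-ins, since real embeddings come from an embedding model and typically have hundreds or thousands of dimensions.

```ts
// Cosine similarity: the standard way to compare embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const catVec = [0.9, 0.1, 0.3];       // pretend embedding of "cat"
const kittenVec = [0.85, 0.15, 0.35]; // pretend embedding of "kitten"
const invoiceVec = [0.1, 0.9, 0.2];   // pretend embedding of "invoice"

console.log(cosineSimilarity(catVec, kittenVec));  // high: similar meaning
console.log(cosineSimilarity(catVec, invoiceVec)); // low: unrelated meaning
```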
RAG combines document retrieval with LLM generation, allowing AI to answer questions grounded in your specific data without fine-tuning.
20 min
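And a sketch of the retrieve-then-generate flow itself, assuming chunk embeddings were computed and normalized ahead of time; the final LLM call is left out, and all names here are illustrative.

```ts
// Minimal retrieve-then-generate sketch. Assumes chunk embeddings were
// precomputed and normalized; the call to the LLM itself is not shown.
interface Chunk { text: string; embedding: number[]; }

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// 1. Retrieve: rank stored chunks by similarity to the query embedding.
function topK(queryEmbedding: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((a, b) => dot(b.embedding, queryEmbedding) - dot(a.embedding, queryEmbedding))
    .slice(0, k);
}

// 2. Generate: ground the model's answer in the retrieved text.
function buildGroundedPrompt(question: string, retrieved: Chunk[]): string {
  const context = retrieved.map((c) => `- ${c.text}`).join("\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}

// The assembled prompt would then be sent to the LLM of your choice.
```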
Structured outputs constrain AI responses to follow a specific format using JSON Schema, enabling reliable data extraction and type-safe integrations.
15 min
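A minimal sketch of that idea using Zod (one of the libraries the Type Safety module covers); the invoice fields and the hard-coded model output are purely illustrative.

```ts
import { z } from "zod";

// A schema describing the shape we want the model's JSON output to follow.
// The fields here are illustrative; define whatever your application needs.
const InvoiceSchema = z.object({
  vendor: z.string(),
  total: z.number(),
  currency: z.enum(["USD", "EUR", "GBP"]),
});
type Invoice = z.infer<typeof InvoiceSchema>;

// Pretend this string came back from an LLM asked to extract invoice data.
// (In real code, JSON.parse can throw and should be guarded as well.)
const rawModelOutput = '{"vendor": "Acme Corp", "total": 1250.5, "currency": "USD"}';

const result = InvoiceSchema.safeParse(JSON.parse(rawModelOutput));
if (result.success) {
  const invoice: Invoice = result.data; // fully typed from here on
  console.log(invoice.total);
} else {
  // The model broke the contract: retry, repair, or surface an error.
  console.error(result.error.issues);
}
```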
Protocols
Tool use enables LLMs to interact with external systems by generating structured function calls that applications execute and return results for.
15 min
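Here is a sketch of that loop from the application's side; exact field names differ between providers, and the weather tool plus the hard-coded model response are hypothetical.

```ts
// Sketch of tool calling from the application's side. The shape below just
// mirrors the common pattern: a JSON Schema tool definition shown to the
// model, plus an application-side dispatcher for the calls it emits.
const tools = [
  {
    name: "get_weather", // hypothetical example tool
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
];

// What the application actually runs when the model asks for a tool.
const handlers: Record<string, (args: { city: string }) => string> = {
  get_weather: ({ city }) => `Sunny and 22°C in ${city}`, // stub, not a real lookup
};

// Pretend the model, shown the tool list above, replied with a structured
// call instead of plain text.
const modelToolCall = { name: "get_weather", arguments: { city: "Berlin" } };

// The application executes the call and sends the result back to the model,
// which then produces its final answer.
console.log(handlers[modelToolCall.name](modelToolCall.arguments));
```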
MCP is an open protocol by Anthropic that standardizes how AI applications connect to data sources and tools through a unified server architecture.
12 min
Decision Guides
When should I use a simple chatbot vs RAG vs an autonomous agent?
20 min
Should I fine-tune a model or use RAG for domain-specific knowledge?
20 min
Should I use LangChain (or similar frameworks) or build custom?
15 min