Learning path
LLM Learning Path
From one-shot chatbot queries to Cursor-style IDE collaboration and OpenClaw-style local agent governance
This path is not a pile of isolated prompt tricks. It is a staged upgrade: first write clear requests, then collaborate with tools and a codebase, then design MCP, Skills, and safety boundaries for local agents.
Foundation: how LLMs work
What this stage tackles first: Before chasing techniques, start by understanding why models drift, miss details, or answer unevenly.
What you'll learn: This stage introduces tokens, context windows, modalities, and tool boundaries.
What you can do afterwards: After this stage you can tell whether a problem comes from the prompt, the context, or the task actually needing tools.
Stage 0: clear requests
What this stage tackles first: Many people start with one-shot chatbot questions and expect the model to fill in the missing background.
What you'll learn: This stage turns casual asks into clear briefs with goals, context, constraints, outputs, and acceptance checks.
What you can do afterwards: After this stage you can rewrite vague asks into clear tasks and know when clarification should come first.
- Say what you need, clearly and precisely · Practice
- Hard constraints on long context · Practice
- Why one-shot queries get stuck · Reading
- The prompt brief kit: goal, context, constraints, output, acceptance · Reading
- When the model should ask questions first · Reading
- Rewrite a vague request into an executable brief · Practice
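The brief kit above can be sketched as a small completeness check. The five field names (goal, context, constraints, output, acceptance) come from this stage; the validator function and the example briefs are illustrative assumptions, not part of the course material.

```python
# A minimal sketch of the Stage 0 prompt brief: five fields, plus a check
# that flags which parts of a request are still missing. Field names follow
# the brief kit above; the validator itself is illustrative.

BRIEF_FIELDS = ["goal", "context", "constraints", "output", "acceptance"]

def missing_fields(brief: dict) -> list[str]:
    """Return the brief-kit fields that are absent or empty."""
    return [f for f in BRIEF_FIELDS if not brief.get(f)]

vague_ask = {"goal": "make the report better"}
full_brief = {
    "goal": "shorten the Q3 report to two pages",
    "context": "audience is the exec team; source doc is 9 pages",
    "constraints": "keep all revenue figures; no new claims",
    "output": "a two-page summary in the same heading structure",
    "acceptance": "every number traceable to the source doc",
}

print(missing_fields(vague_ask))   # -> ['context', 'constraints', 'output', 'acceptance']
print(missing_fields(full_brief))  # -> []
```

The point of the check is the rewrite loop: a vague ask fails on four fields, and each missing field tells you exactly what to add before sending the request.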
Stage 1: tools & function calling
What this stage tackles first: Once a task touches schedules, databases, APIs, or code, chat alone stops being enough.
What you'll learn: This stage teaches the split between continuing the conversation and using systems, APIs, or actions.
What you can do afterwards: After this stage you can scope work the way you would with Cursor: files, actions, parameters, and validation.
- Everyday requests that deserve tools · Practice
- Which tasks should not rely on chat alone · Reading
- Structured output is not the same as a real tool call · Reading
- Before Cursor, learn how to assign work to an IDE assistant · Reading
- Write an executable coding request for an IDE assistant · Practice
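The split between "continue the conversation" and "call a tool" can be sketched with an OpenAI-style tool schema and a local dispatcher. The JSON-schema `parameters` shape follows the function-calling convention; the tool name, its fake implementation, and the dispatcher are illustrative assumptions, and no real API is called here.

```python
import json

# A tool definition in the JSON-schema shape used by OpenAI-style function
# calling. The tool name and its stand-in implementation are illustrative.
TOOL_SPEC = {
    "name": "get_meeting_slots",
    "description": "Look up free meeting slots for a given day.",
    "parameters": {
        "type": "object",
        "properties": {"date": {"type": "string", "description": "ISO date"}},
        "required": ["date"],
    },
}

def get_meeting_slots(date: str) -> list[str]:
    # Stand-in for a real calendar API.
    return [f"{date}T10:00", f"{date}T15:30"]

TOOLS = {"get_meeting_slots": get_meeting_slots}

def dispatch(tool_call: dict) -> str:
    """Validate and run a model-issued tool call. Structured text alone
    never reaches this point; only a declared tool call does."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    for req in TOOL_SPEC["parameters"]["required"]:
        if req not in args:
            raise ValueError(f"missing required argument: {req}")
    return json.dumps(fn(**args))

# Simulate the model emitting a tool call instead of plain prose:
result = dispatch({"name": "get_meeting_slots",
                   "arguments": '{"date": "2025-03-01"}'})
print(result)  # -> ["2025-03-01T10:00", "2025-03-01T15:30"]
```

This is also why structured output is not the same as a real tool call: a JSON-shaped reply is still just text, while a tool call is routed through a dispatcher that validates arguments and actually executes something.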
Stage 2: MCP
What this stage tackles first: If you keep pasting documents and status updates into chat, it is usually a sign the workflow needs better wiring.
What you'll learn: This stage explains how files, knowledge bases, APIs, and permissions can be connected in a standard way.
What you can do afterwards: After this stage you can decide when MCP is worth it and define a small, safe integration shape.
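The "is MCP worth it" decision above can be sketched as a tiny checklist. The three signals paraphrase this stage's framing (repeated pasting, context spread across systems, permissions that matter); the exact thresholds are arbitrary illustrative choices, not a recommendation.

```python
# A sketch of the Stage 2 decision: score a workflow against three signs
# that chat-only wiring is breaking down. Thresholds are illustrative.

def mcp_worth_it(pastes_per_week: int, source_count: int,
                 needs_permissions: bool) -> bool:
    signals = [
        pastes_per_week >= 5,   # you keep re-pasting the same material
        source_count >= 2,      # context lives in more than one system
        needs_permissions,      # access control matters, not just content
    ]
    return sum(signals) >= 2    # two or more signs: wire it up properly

print(mcp_worth_it(pastes_per_week=10, source_count=3, needs_permissions=True))   # -> True
print(mcp_worth_it(pastes_per_week=1, source_count=1, needs_permissions=False))   # -> False
```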
Stage 3: Skills
What this stage tackles first: If you keep re-explaining the same task, it is time to capture that workflow instead of repeating it.
What you'll learn: This stage turns personal habits into reusable Skills / SOPs with inputs, steps, outputs, and failure handling.
What you can do afterwards: After this stage you can package your collaboration habits into instructions that teammates or agents can reuse.
- Skills — turn playbooks into executable specs · Reading
- Turn a repeating workflow into a Skill-style spec · Practice
- Prompt vs template vs Skill vs SOP · Reading
- Turn personal habits into agent-ready skills · Reading
- Write a skill spec for “read requirements -> edit code -> self-test -> report” · Practice
- Distinguish prompt, template, Skill, and playbook · Practice
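A Skill-style spec with inputs, steps, outputs, and failure handling, as this stage describes, can be sketched as a plain data structure. The workflow is the one named in the practice above ("read requirements -> edit code -> self-test -> report"); the field layout and names are illustrative, not a fixed Skill format.

```python
from dataclasses import dataclass

# A sketch of a Skill spec for the workflow named in this stage:
# read requirements -> edit code -> self-test -> report.
# The field layout is illustrative; real Skill formats vary by platform.

@dataclass
class SkillSpec:
    name: str
    inputs: list[str]
    steps: list[str]
    outputs: list[str]
    on_failure: str

code_change_skill = SkillSpec(
    name="implement-small-change",
    inputs=["requirements doc", "target repo path"],
    steps=[
        "read requirements and restate them as acceptance checks",
        "edit the code with the smallest diff that satisfies the checks",
        "run the test suite plus the new acceptance checks",
        "write a short report: what changed, what was verified",
    ],
    outputs=["diff", "test results", "summary report"],
    on_failure="stop, report the failing check, do not retry silently",
)

print(code_change_skill.name)  # -> implement-small-change
```

Writing the spec down is what separates a Skill from a prompt: the same four steps and the same failure rule apply no matter who, or what, runs the workflow.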
Stage 4: Harness
What this stage tackles first: A prompt and a workflow are still not enough once real usage starts exposing drift and edge cases.
What you'll learn: This stage adds checklists, eval cases, regression sets, and human fallback where needed.
What you can do afterwards: After this stage you can design lightweight harnesses for important tasks and reason about rollout safety.
- Harness engineering — tame nondeterminism · Reading
- A one-page acceptance checklist · Practice
- Why a Skill is still not enough · Reading
- From manual checks to a lightweight regression set · Reading
- Design a first regression set for a task · Practice
- Write a one-page “pre-launch checks + rollback on failure” rulebook · Practice
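The regression-set idea above can be sketched as a tiny harness: a saved list of (input, check) pairs run against whatever function wraps the model. The `summarize` stand-in and its cases are illustrative assumptions; a real harness would wrap the actual LLM call.

```python
# A sketch of a Stage 4 lightweight harness: a saved regression set of
# (input, check) pairs, run against whatever function wraps the model.
# The task and its cases are illustrative stand-ins.

def summarize(text: str) -> str:
    # Stand-in for a model-backed task; a real harness wraps the LLM call.
    return text.split(".")[0] + "."

REGRESSION_SET = [
    ("Ship it. Then iterate.", lambda out: out.endswith(".")),
    ("One sentence only.",     lambda out: out.count(".") == 1),
]

def run_harness(task, cases):
    """Run every saved case through the task; return (passed, total)."""
    results = [bool(check(task(inp))) for inp, check in cases]
    return sum(results), len(results)

passed, total = run_harness(summarize, REGRESSION_SET)
print(f"{passed}/{total} regression cases passed")  # -> 2/2 regression cases passed
```

The rollout rule follows directly: gate launches on the regression set passing, not on one good demo, and add a new case to the set every time a real failure slips through.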
Cross-cutting tips
What this stage tackles first: By the time you reach local agents, the question is no longer just "can I use this?" but "can I govern this safely?"
What you'll learn: This stage connects safety boundaries, approvals, human responsibility, and long-term usage habits.
What you can do afterwards: After this stage you can build a durable workflow from one-shot chat to Cursor-style collaboration to OpenClaw-style agents.
All challenges
- What is an LLM (intuition)
- Tokens and context windows
- Modalities and tool boundaries
- Say what you need, clearly and precisely
- Hard constraints on long context
- Why one-shot queries get stuck
- The prompt brief kit: goal, context, constraints, output, acceptance
- When the model should ask questions first
- Rewrite a vague request into an executable brief
- Everyday requests that deserve tools
- Which tasks should not rely on chat alone
- Structured output is not the same as a real tool call
- Before Cursor, learn how to assign work to an IDE assistant
- Write an executable coding request for an IDE assistant
- What MCP is for
- Do we need MCP? A tiny checklist
- Why IDE collaboration is not enough
- MCP is about the right context, not just more context
- Write a minimal MCP integration spec for a knowledge base or API
- Skills — turn playbooks into executable specs
- Turn a repeating workflow into a Skill-style spec
- Prompt vs template vs Skill vs SOP
- Turn personal habits into agent-ready skills
- Write a skill spec for “read requirements -> edit code -> self-test -> report”
- Distinguish prompt, template, Skill, and playbook
- Harness engineering — tame nondeterminism
- A one-page acceptance checklist
- Why a Skill is still not enough
- From manual checks to a lightweight regression set
- Design a first regression set for a task
- Write a one-page “pre-launch checks + rollback on failure” rulebook
- Habits you can adopt today
- Local agent safety boundaries: files, commands, and secrets
- AI and my job — a three-part lens
- Which decisions should stay with humans
- Turn one-off usage into a long-term learning loop
Learning resources
Resources are grouped by the learning sequence: structured prompting first, then tools and IDE collaboration, then MCP, local agents, Skills, harnesses, and safety. On-site lessons are original; external links are mostly official docs.
From one-shot queries to structured prompts
- OpenAI prompt engineering
Official prompt design guide. Best after Stage 0 when you need clearer structure and constraints.
- Prompting Guide
A broad map of prompting patterns. Best mid-Stage 0 as supporting reference.
- Kaggle prompt engineering whitepaper
A textbook-style overview. Best after Stage 0 to fill in gaps.
From chat to tools / function calling
- OpenAI function calling
Official guide to tool invocation. Best at the start of Stage 1.
- OpenAI structured outputs
Useful for separating structured replies from actual tool use. Best mid-Stage 1.
- OpenAI Tokenizer
See how context grows and why long chats become expensive. Best between Stage 0 and Stage 1.
From IDE collaboration to code agents
- Cursor docs
Official docs for codebase-aware collaboration. Best after Stage 1 basics.
- Cursor rules
Helpful for understanding scoped collaboration, rules, and guardrails in an IDE.
- Cursor guides
Workflow-oriented docs for moving from prompts to codebase tasks.
MCP and local agents
- Model Context Protocol
The official MCP hub. Best at the start of Stage 2.
- OpenClaw docs
Official local-agent docs covering gateway, tools, sessions, and security. Best after Stage 2 basics.
- OpenClaw website
A product-level overview to build intuition for local agents.
Skills / playbooks / SOPs
- Anthropic: Building effective agents
A strong workflow design reference. Best in Stage 3 when prompts start turning into reusable procedures.
- Cursor background agents
Useful for connecting rules, repeatable workflows, and delegated tasks.
- OpenClaw skills
Shows how local-agent capabilities get extended through reusable skills.
Harness / evals / reliability
- OpenAI: Harness engineering
Why wrappers, checks, and evals matter. Best at the start of Stage 4.
- OpenAI evals guide
Useful for designing regression sets and acceptance criteria in Stage 4.
Safety, permissions, and governance
- MCP security
A practical look at trust boundaries for connected context and tools.
- OpenAI safety best practices
Read before connecting real data or real actions.
- OpenClaw security
Best once you reach the cross-cutting safety stage and start reasoning about local-agent permissions and execution risk.