1. Copilot Architecture & Flow

What: How your prompt flows through Copilot: context is gathered, routed through a proxy to the AI model, filtered for safety, and returned as a response.
[Diagram] Prompt flow: User Prompt (chat, inline, CLI, completions) → Context Gathering (open tabs, #file, #codebase, instructions, skills) → Pre-Model Filters (responsible AI, content exclusion) → Proxy Service (routes to the model, applies policies) → AI Model (GPT, Sonnet, Gemini, Opus, Grok, etc.; user-selected or auto) → Post-Model Filters (duplicate code check, safety & quality) → response delivered to the user. Context sources: open tabs (active editor files), workspace (#codebase search), instructions (.md config files), skills (SKILL.md bundles), MCP servers (external tools/data), selection (#selection), web pages (#fetch). Output types: inline completions, chat responses, agent actions, code edits, terminal commands.
Key Stages
  • Context gathering: workspace files, open tabs, selections, instructions
  • Pre-model filters: responsible AI, content exclusion
  • Proxy service: routes to the chosen model, applies policies
  • AI model: user picks a model, or auto-select (10% multiplier discount)
  • Post-model filters: duplicate code detection, safety checks
Installation
ext install GitHub.copilot
ext install GitHub.copilot-chat
💡 Your code is never stored or used for training (all plans). Prompts are not retained after the response is delivered.
Use Cases
Debug unexpected output: understand why Copilot gave a wrong answer (check whether context was gathered correctly)
Explain to stakeholders: describe how code never leaves the proxy pipeline (security/compliance reviews)
2. Chat Experience

What: The conversational AI panel in VS Code. Use slash commands, context variables (#), and participants (@) to give Copilot precise context for better answers.
[Diagram] Anatomy of a chat prompt: slash command (what to do: /explain, /fix, /tests) + context variables (what to include: #file, #codebase, #changes) + participants (who to ask: @workspace, @github) + your natural-language question → assembled prompt → AI response (explanation, code edit, tests, scaffold, or fix). Example: /explain #file:auth.ts "How does the login flow work?". Chat modes: Ask, Edit, Agent, Plan; ⌃⌘I panel, ⌘I inline chat, voice.
Slash Commands
/explain: Explain selected code
/fix: Fix problems in code
/tests: Generate unit tests
/new: Scaffold a new project
/clear: New chat session
/help: Copilot quick reference
/init: Generate instructions
/search: Workspace search
/delegate: Send to the coding agent (CLI)
/compact: Compress context (CLI)
Context Variables (#)
#file: Include file content
#selection: Selected text
#codebase: Full workspace context
#problems: Error/warning diagnostics
#changes: Git changes (diff)
#fetch: Fetch a web page
#terminalLastCommand: Last terminal output
#block / #class / #function: Code scope
Chat Participants (@)
@workspace: Project structure & code
@vscode: VS Code commands & features
@terminal: Terminal shell context
@github: GitHub-specific skills
@azure: Azure services help
Voice Chat (VS Code)
  • Install the ms-vscode.vscode-speech extension
  • Dictate prompts hands-free
Use Cases
/explain + #file: "Explain the auth flow in #file:auth.ts" (onboard to unfamiliar code)
/fix + #problems: "Fix all errors in #problems" (resolve build failures in one prompt)
3. Code Completions

What: Real-time ghost text suggestions as you type. Copilot predicts your next lines of code based on context; accept with Tab, dismiss with Esc.
[Diagram] Completion flow: you type (a comment, function signature, variable name, or pattern) → ghost text appears (greyed-out inline suggestion, multiple alternatives available) → Tab accepts, Esc dismisses, Alt+] / Alt+[ cycles alternatives, Ctrl+→ accepts word by word. Example from the diagram (app.py):

```python
# fetch user by email from the database
def fetch_user_by_email(email: str):
    """Fetch a user record from the database by email."""
    query = "SELECT * FROM users WHERE email = %s"
    return db.execute(query, (email,)).fetchone()
```
Keyboard Shortcuts
Tab: Accept the full suggestion
Esc: Dismiss the suggestion
Alt+] / Alt+[: Cycle alternatives
Ctrl+→: Accept word by word
Enable / Disable
  • Toggle from Copilot icon in status bar
  • Disable for specific languages in settings
  • Snooze: temporarily pause suggestions
Completions Model
  • Change model in settings for inline suggestions
  • Premium models consume premium requests
  • Free plan: 2,000 completions/month
💡 Write a descriptive comment first; Copilot uses it as a prompt to generate the function body below.
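A minimal illustration of the comment-as-prompt pattern. The function name and body here are hypothetical examples of the kind of suggestion ghost text might offer, not guaranteed output:

```python
# Comment-as-prompt: state the intent first, then let ghost text fill in.
# The body below is illustrative of a typical suggestion, not a
# guaranteed completion.

# convert a post title to a URL slug
def slugify(title: str) -> str:
    """Lowercase the title and join its words with hyphens."""
    return "-".join(title.lower().split())

print(slugify("Hello World Again"))  # hello-world-again
```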
Use Cases
Write a function from a comment: type // fetch user by email, and Copilot generates the implementation
Auto-complete test cases: write one it("should...") block and Tab through the rest
4. Next Edit Suggestions (NES)

What: Predicts your next edit based on recent changes, not just cursor position. Jumps you to the right location and suggests the change, which is great for repetitive multi-location refactors.
[Diagram] NES flow: you make an edit (rename, refactor, or add code in one location) → NES detects the pattern and finds similar locations → an arrow indicator points to the next suggested edit location → Tab navigates there, Tab again accepts → repeat for each matching location. Example: renaming userId to accountId on line 5; NES then fixes the remaining userId references on lines 8 and 15.
How It Works
  • Monitors your editing patterns across files
  • Suggests edits at locations you haven't navigated to yet
  • Shows an arrow indicator → jump to the suggested edit
  • Tab to navigate to edit, Tab again to accept
Why Use It
  • Rename variables across multiple locations
  • Apply consistent pattern changes
  • Add missing imports after using a new API
Use Cases
Rename userId → accountId: change one occurrence and NES finds and fixes the rest across files
Add error handling: wrap one function in try/catch and NES suggests the same for similar functions
5. Inline Chat

What: Chat with Copilot directly inside your editor or terminal without switching to the sidebar. Get targeted code edits, command generation, and image-to-code conversion in context.
[Diagram] Inline chat workflows. In the editor (⌘I / Ctrl+I): select code (optional, or just place the cursor) → type a prompt such as "Extract into a custom hook" → preview the diff inline → accept or discard. In the terminal (⌘I): prompt "Find all .log files > 100MB and delete them" → Copilot generates find . -name "*.log" -size +100M -delete → run or copy. Vision input: attach a UI mockup image and prompt "Build this login form using React + Tailwind" → generated code.
In Editor
  • ⌘I (Mac) / Ctrl+I (Windows): open inline chat
  • Select code first for targeted edits
  • Supports context vars & model selection
  • Preview diff before accepting
In Terminal
  • ⌘I in the terminal: generate commands
  • Copilot understands shell context
  • Generates and explains terminal commands
Vision Input
  • Attach images to chat prompts
  • UI mockups → code generation
Use Cases
Refactor selected code: select a function, ⌘I → "Extract this into a custom hook" (targeted in-place edit)
Generate shell commands: ⌘I in the terminal → "Find all .log files larger than 100MB and delete them"
6. Model Selection & Premium Requests

What: Choose from 20+ AI models. Each model has a premium request multiplier: included models (GPT-4.1, GPT-4o, GPT-5 mini) cost 0×, while advanced models cost 1× to 30×. Manage your monthly budget accordingly.
[Diagram] Model cost tiers, from free to expensive. 0× included (no premium requests consumed): GPT-4.1, GPT-4o, GPT-5 mini, Raptor mini; best for boilerplate and simple edits. 0.25× to 0.33× (budget-friendly premium): Grok Code Fast 1, Claude Haiku 4.5, Gemini 3 Flash, GPT-5.1 Codex Mini; best for fast iteration and tests. 1× standard (full premium request per use): Sonnet 4/4.5/4.6, Gemini 2.5/3/3.1 Pro, GPT-5.1/5.2/5.4, GPT-5.1 Codex; best for complex refactors and agents. 3× to 30× premium (use sparingly): Opus 4.5 (3×), Opus 4.6 (3×), Opus 4.6 fast (30×); best for architecture and design.
Model Multipliers (Paid Plans)
GPT-4.1 / GPT-4o / GPT-5 mini: 0× (included)
Raptor mini: 0× (included)
Grok Code Fast 1: 0.25×
Claude Haiku 4.5 / Gemini 3 Flash: 0.33×
GPT-5.1-Codex-Mini / GPT-5.4 mini: 0.33×
Sonnet 4 / 4.5 / 4.6: 1×
Gemini 2.5 Pro / 3 Pro / 3.1 Pro: 1×
GPT-5.1 / 5.2 / 5.4 + Codex variants: 1×
Claude Opus 4.5 / 4.6: 3×
Opus 4.6 fast mode (preview): 30×
Plans & Allowances
Free: 2,000 completions + 50 premium requests/month
Pro / Student: unlimited completions + premium allowance
Pro+: unlimited completions + higher premium allowance
Business: unlimited completions + org premium pool
Enterprise: unlimited completions + enterprise premium pool
💡 Auto model selection gives a 10% multiplier discount and excludes models with multipliers greater than 1×.
โš ๏ธ Unused requests don't carry over. Counters reset on 1st of each month (UTC).
📌 Models are frequently added and deprecated, and multipliers may change. Check the official docs for the latest model list.
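The multiplier arithmetic can be sketched as follows. The multipliers come from the table above; the usage mix is a made-up example:

```python
# Sketch: how model multipliers consume a monthly premium-request allowance.
# Multipliers are from the table above; the usage numbers are illustrative.
MULTIPLIERS = {
    "gpt-4.1": 0.0,           # included
    "grok-code-fast-1": 0.25,
    "sonnet-4.6": 1.0,
    "opus-4.6": 3.0,
}

def premium_requests_used(usage: dict) -> float:
    """usage maps model name -> number of prompts sent that month."""
    return sum(MULTIPLIERS[model] * count for model, count in usage.items())

used = premium_requests_used({"gpt-4.1": 120, "sonnet-4.6": 40, "opus-4.6": 5})
print(used)  # 55.0  (120*0 + 40*1 + 5*3)
```

So 120 prompts on an included model cost nothing, while five Opus prompts alone consume 15 premium requests.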
Use Cases
Save budget on simple tasks: use GPT-4.1 (0×) for boilerplate, switch to Sonnet 4.6 (1×) for complex refactors
Architecture decisions: use Opus 4.6 (3×) for design reviews where quality matters more than cost
7. Custom Instructions

What: Markdown files that automatically inject your coding standards, conventions, and project context into every Copilot request, so you never repeat yourself in prompts.
[Diagram] Instruction priority, high to low: ① personal (~/.copilot/copilot-instructions.md) → ② path-specific, glob-matched rules (.github/instructions/*.instructions.md) → ③ repository-wide rules (.github/copilot-instructions.md). Always-on files: copilot-instructions.md, AGENTS.md, CLAUDE.md. Task-specific: commitMessageGeneration, reviewSelection, pullRequestDescription. Generate with /init.
Always-On Files
copilot-instructions.md: .github/ folder
AGENTS.md: repo root or subfolders
CLAUDE.md: root, .claude/, or ~/
For Specific Tasks
Code review: reviewSelection.instructions
Commits: commitMessageGeneration
PR descriptions: pullRequestDescription
Priority (High → Low)
  • 1. Personal (user-level)
  • 2. Path-specific (.github/instructions/*.instructions.md)
  • 3. Repository-wide (.github/copilot-instructions.md)
  • 4. Agent (AGENTS.md)
  • 5. Organization
Generate Instructions
  • /init: auto-generate instructions for the workspace
  • Chat debug view → verify which files were loaded
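A minimal .github/copilot-instructions.md might look like this. The rules themselves are placeholder examples, not recommendations:

```markdown
# Project conventions

- Use TypeScript strict mode for all new code.
- Prefer named exports over default exports.
- Use single quotes and 2-space indentation.
- All public functions need JSDoc comments.
```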
Use Cases
Enforce coding style: add "Always use single quotes and 2-space indentation" to copilot-instructions.md
Standardize commit messages: set commitMessageGeneration instructions to follow the Conventional Commits format
8. Instructions.md Files

What: Targeted instruction files (.instructions.md) that apply conditionally based on file glob patterns or semantic matching, e.g. Python rules only for *.py files.
[Diagram] Conditional instructions via applyTo glob matching. .github/instructions/ contains python.instructions.md (**/*.py), react.instructions.md (**/*.tsx), tests.instructions.md (**/*.test.*), and api.instructions.md (**/api/**). While editing src/utils/auth.py, only python.instructions.md matches via applyTo: '**/*.py' and is injected into the prompt; the others are skipped.
Format
```markdown
---
name: 'Python Standards'
description: 'Python conventions'
applyTo: '**/*.py'
---
# Python coding standards
- Follow the PEP 8 style guide
- Use type hints for all functions
```
Locations
Workspace: .github/instructions/
Claude format: .claude/rules/ (Claude-specific)
User: ~/.copilot/instructions/
💡 Type /instructions in chat to open the Configure menu.
Use Cases
Python-only rules: create python.instructions.md with applyTo: '**/*.py' for PEP 8, type hints, docstrings
React testing patterns: create react-tests.instructions.md with applyTo: '**/*.test.tsx' for RTL best practices
9. Reusable Prompt Files

What: Saved prompt templates (.prompt.md) you invoke as slash commands. Encode frequent tasks like scaffolding components or running reviews; type /prompt-name instead of rewriting the prompt.
[Diagram] Prompt file flow: create a .prompt.md file → type /name in chat to invoke it → the AI executes with the configured tools and model. Anatomy: frontmatter (description, agent, tools, model) + prompt body; supports ${input:name} variables and Markdown links to instruction files.
  • Encode common tasks as .prompt.md files
  • Invoke via /prompt-name in chat
  • Workspace: .github/prompts/
  • User: profile prompts/ folder
Format
```markdown
---
description: 'Create React form'
agent: agent
tools: ['editFiles', 'search']
model: GPT-5.2
---
Generate a React form component with validation for ${input:fields}
```
Quick Commands
/create-prompt: AI-generate a prompt file (VS Code)
/prompts: Configure prompt files
💡 Reference instruction files via Markdown links. Use ${input:name} for user-provided values.
Use Cases
One-command API scaffold: /create-api generates a REST endpoint with controller, service, tests, and OpenAPI spec
Standardized code review: /review-pr checks for security, performance, and accessibility issues every time
10. Chat Modes & Custom Agents

What: Agents are specialized AI personas with their own tools, instructions, and model. Create a security reviewer, planner, or any other role; each can have restricted tool access and hand off to other agents.
[Diagram] Built-in modes vs custom agents. Built-in: Ask (read-only Q&A), Edit (direct code edits), Agent (autonomous + tools), Plan (plan before code). A custom agent (e.g. reviewer.agent.md) adds its own instructions, tools[], model, agents[], and MCP servers. Handoff workflow: review → fix → validate.
Built-in Modes
Ask: Read-only Q&A, no code changes
Edit: Direct code edits
Agent: Autonomous tool use + edits
Plan: Create a plan before implementation
Custom Agent File (.agent.md)
```markdown
---
description: 'Security reviewer'
tools: ['search', 'readFile']
model: Claude Sonnet 4.6
agents: ['*']
handoffs:
  - label: Fix issues
    agent: implementation
    prompt: Fix the issues above.
    send: false
---
Review code for OWASP Top 10...
```
Agent Frontmatter Fields
description: Shown as placeholder text (required on github.com)
name: Display name (defaults to the filename)
tools: List of available tools; omit for all tools
agents: Allowed subagents (* = all, [] = none)
model: AI model (string or prioritized array)
handoffs: Sequential workflow transitions (label, agent, prompt, send, model)
target: vscode or github-copilot (omit for both)
mcp-servers: MCP server configs scoped to this agent
hooks: Agent-scoped hooks (preview)
user-invocable: Show/hide in the dropdown (default: true)
disable-model-invocation: Prevent subagent invocation (default: false)
argument-hint: Hint text in the chat input field
File Locations
Workspace: .github/agents/
Claude format: .claude/agents/
User: ~/.copilot/agents/
Org/Enterprise: .github-private repo → agents/
Where Custom Agents Work
VS Code · JetBrains · Eclipse · Xcode · GitHub.com · Copilot CLI · coding agent · background & cloud agents
💡 Type /agents to configure or /create-agent to generate one with AI. On github.com, use the Agents tab (github.com/copilot/agents) to create and manage agents.
Use Cases
Security review agent: read-only tools (search, readFile), scans for OWASP Top 10, hands off to a "fix" agent
Plan → implement workflow: a Plan agent creates a spec; a handoff button (send: true) auto-submits it to the implementation agent
Org-wide agent: create agents in the .github-private repo to share across all repos in your org/enterprise
11. Skills (Agent Superpowers)

What: Portable folders of instructions, scripts, and resources that Copilot auto-loads when relevant. Unlike instructions (guidelines), skills teach capabilities: testing workflows, deployment recipes, etc. An open standard across VS Code, the CLI, and the coding agent.
[Diagram] Progressive skill loading. Request 1: only the metadata (name + description) of every skill is sent; the AI matches your prompt against skill descriptions and selects the best match. Request 2: the matched skill's full SKILL.md is loaded into context, so the agent learns the workflow steps, patterns, and rules, but no helper files yet. Request 3+: resources (scripts, templates, fixtures) are loaded only when the agent needs them, saving tokens. All automatic: place skills in .github/skills/ and Copilot handles matching and loading.
SKILL.md Format
```markdown
---
name: webapp-testing
description: 'Run and debug web app integration tests'
---
# Testing Workflow
Use describe + it + AAA pattern
Use factory mocks for fixtures
```
Locations
Project: .github/skills/<name>/
Personal: ~/.copilot/skills/<name>/
Key Features
  • Invoke via /skill-name in chat
  • /create-skill: AI-generate a skill
  • Works across VS Code, the CLI, and the coding agent
  • Open standard: agentskills.io
Use Cases
Testing skill: SKILL.md + test templates + fixture scripts → Copilot auto-loads it when you ask "help me test"
Deployment skill: SKILL.md + deploy scripts + Dockerfile → /deploy runs the full pipeline
12. MCP (Model Context Protocol)

What: An open standard that connects Copilot to external tools & services (databases, APIs, browsers). MCP servers expose tools, data resources, and prompt templates that the AI can use during conversations.
[Diagram] How Copilot connects to external tools: the Copilot agent discovers and calls tools over the MCP protocol (requests and responses via stdio or HTTP transport); the MCP server exposes capabilities backed by external services: databases, APIs, browsers, file systems, GitHub, custom tools. MCP capabilities: tools, resources, prompts, MCP apps, sampling, elicitation, sandbox.
mcp.json (.vscode/mcp.json or .github/mcp.json)
```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@microsoft/mcp-server-playwright"]
    }
  }
}
```
MCP Capabilities
Tools: Execute operations (file, DB, API)
Resources: Read-only context (Add Context → MCP Resources)
Prompts: Templates via /server.prompt
MCP Apps: Interactive UI in chat
Sampling: Server-initiated model calls
Elicitation: Server asks the user for input
Key Servers
github-mcp-server: GitHub APIs
Playwright: Browser automation
Sandbox: macOS/Linux, via sandboxEnabled
โš ๏ธ Only use MCP servers from trusted sources. Review configs before starting.
Use Cases
Query a database: add a Postgres MCP server → ask Copilot "show me users who signed up this week"
Browser testing: add the Playwright MCP server → "go to our staging site and screenshot the login page"
13. Hooks (Lifecycle Automation)

What: Deterministic shell commands that run at specific agent lifecycle points. Unlike instructions (suggestions), hooks guarantee execution: block dangerous commands, auto-format after edits, log tool usage, enforce security policies.
[Diagram] Agent lifecycle hook points: SessionStart (session begins) → UserPromptSubmit (user sends a prompt) → PreToolUse (before a tool runs) → PostToolUse (after a tool completes) → PreCompact (before context compaction) → Stop (session ends). Exit codes: 0 = success (stdout JSON is parsed); non-zero = logged and skipped. Block a tool call via JSON output permissionDecision: deny.
Hook Events
SessionStart: First prompt of a new session
UserPromptSubmit: User submits a prompt
PreToolUse: Before tool invocation
PostToolUse: After a tool completes
PreCompact: Before context compaction
SubagentStart/Stop: Subagent lifecycle
Stop: Agent session ends
Hook Config (.github/hooks/*.json)
```json
{
  "hooks": {
    "PostToolUse": [{
      "type": "command",
      "command": "npx prettier --write \"$TOOL_INPUT_FILE_PATH\""
    }]
  }
}
```
Exit Codes
0: Success → stdout JSON output is parsed
Non-zero: Logged and skipped; does not block the agent
PreToolUse Output (JSON)
permissionDecision: "deny" blocks the tool call (only deny is currently processed)
โš ๏ธ Hook failures never block agent execution. Use permissionDecision: "deny" in JSON output to block specific tool calls.
Use Cases
Auto-format on save: a PostToolUse hook runs prettier --write on every file the agent edits
Block destructive commands: a PreToolUse hook outputs {"permissionDecision": "deny"} for rm -rf and DROP TABLE
14. Copilot on GitHub.com

What: Copilot features on github.com: the cloud coding agent creates PRs autonomously, Copilot reviews code, generates PR summaries, helps with issues, and runs Autofix for security vulnerabilities.
[Diagram] Coding agent workflow on github.com: delegate (an issue or /delegate) → agent codes (branch + implementation) → opens a PR automatically → you review and merge. GitHub.com features: PR summaries, code review, issue helper, Autofix.
Coding Agent
  • Delegate tasks from VS Code, CLI, or github.com
  • Use /delegate in chat or @cli in terminal
  • Creates PRs autonomously in cloud
  • Uses premium requests (1× per session × model rate)
  • Follow up with steering comments
PR & Issue Features
  • PR Summary: auto-generate descriptions
  • Code Review: assign Copilot as a reviewer
  • Issue helper: create/update issues with Copilot
  • Autofix: code scanning security fixes
Customize Agent Environment
  • Use copilot-setup-steps.yml (GitHub Actions workflow)
  • MCP servers for coding agent
  • Pre/post scripts, dependencies
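A minimal copilot-setup-steps.yml could look like the following sketch. The job must be named copilot-setup-steps; the specific steps (Node 20, npm ci) are assumptions about a typical project:

```yaml
# Sketch of copilot-setup-steps.yml: pre-installs dependencies before
# the coding agent starts. Step details are illustrative assumptions.
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
```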
Use Cases
Delegate a feature: /delegate + a link to a GitHub issue → the agent creates a branch, implements, and opens a PR
Auto-review PRs: assign Copilot as a reviewer on every PR → it catches bugs, style issues, and security flaws
15. Copilot CLI (Terminal AI Agent)

What: A full AI coding agent in your terminal: interactive conversations, autonomous task execution, plan mode, custom agents, MCP, hooks, skills, and memory. Default model: Claude Sonnet 4.5.
[Diagram] Terminal session: copilot "Add input validation to all API endpoints and write tests" → agent reads the codebase, finds 6 endpoints, adds Zod schemas; 12 files modified, 47 tests pass. Modes: Agent, Ask, Plan (Shift+Tab toggles). Ctrl+T shows reasoning; @file adds context; !cmd runs a shell command. Flags: -p "prompt", --continue, --resume, --agent=name, --allow-all-tools, --model name, --allow-tool='shell(git)'.
Slash Commands
/model: Switch AI model
/compact: Compress conversation context
/context: View token usage breakdown
/usage: Session stats (premium requests, duration, LOC)
/agent: Select a custom agent
/mcp: List MCP servers; /mcp add to add one
/resume: Resume a previous session
/add-dir: Trust an additional directory
/cwd or /cd: Change working directory
/login: Authenticate with GitHub
/feedback: Submit feedback, bugs, feature requests
/allow-all: Auto-approve all tools this session
Keyboard & Syntax
Shift+Tab: Toggle Ask / Plan mode
Ctrl+T: Show/hide model reasoning
Esc: Stop the current operation
@path/to/file: Include a file as context
!git status: Run a shell command directly
Command-Line Flags
-p "prompt": Programmatic single-prompt mode
--continue: Resume the most recent session
--resume: Pick a session to resume
--agent=name: Use a specific custom agent
--allow-all-tools: Auto-approve all tools
--allow-tool='shell(git)': Allow a specific tool
--deny-tool='shell(rm)': Block a specific tool
--model name: Set the model from the CLI
Built-in Subagents
Explore: Quick codebase analysis
Task: Run tests/builds, brief summaries
Code-review: Review changes, surface real issues
General-purpose: General-purpose subagent
Research: Deep research tasks
Use Cases
Headless automation: copilot -p "Run tests, fix failures, commit" --allow-all-tools in CI/CD
Resume a cloud agent: /resume pulls a coding agent session from github.com into your terminal
📚 Source: GitHub Docs, Copilot CLI
16. Spaces & Spark

What: Spaces are curated knowledge collections for project context. Spark turns natural language into micro web apps.
[Diagram] Copilot Spaces: curated knowledge collections (docs + code + files → shared project context) at github.com/copilot/spaces; 1× premium per request. Copilot Spark: natural language → a live micro web app in seconds ("Build a dashboard for...") at github.com/spark; 4× premium per prompt.
Copilot Spaces
  • Curate knowledge for project context
  • Attach docs, code, and files to a space
  • Access at github.com/copilot/spaces
  • Uses premium requests (1× model rate)
Copilot Spark
  • Natural language → micro web apps
  • 4 premium requests per prompt
  • Access at github.com/spark
Use Cases
CLI bulk file ops: copilot -p "Rename all .jpeg files to .jpg recursively" with --allow-tool='shell'
Spark quick prototype: "Build a dashboard showing my GitHub repo stats" → a live web app in seconds
17. Toolsets

What: A collection of tools you reference as a single entity in prompts, prompt files, and custom agents. Organize related tools and enable/disable them as a group.
[Diagram] Group tools and reference them as one: individual tools (search/codebase, search/changes, read/problems, search/usages) are bundled into a #reader toolset via a .jsonc config, then used in prompts as #reader or in agents as tools: ['reader'].
  • Define in .jsonc files via Chat: Configure Tool Sets
  • Reference in prompts: #toolset-name
  • Reference in agents: tools: ['my-toolset']
  • Built-in sets: #edit, #search
Tool Set File (.jsonc)
```jsonc
{
  "reader": {
    "tools": ["search/changes", "search/codebase", "read/problems", "search/usages"],
    "description": "Tools for reading context",
    "icon": "book"
  }
}
```
Properties
tools: Array of tool names (built-in, MCP, extension)
description: Shown in the tools picker
icon: Product icon (see the Icon Reference)
Use Cases
Read-only toolset: group search, readFile, and grep into a #reader set for safe auditing
Quick reference: type #reader in a prompt to enable all read tools at once
📚 Source: VS Code Docs, Tool Sets
18. Content Exclusion

What: Organization-level rules that prevent specific files, folders, or repositories from being sent to the AI model, enforcing compliance and protecting sensitive code.
[Diagram] Content exclusion: excluded patterns (.env / .env.*, **/secrets/**, **/keys/**, entire internal repos) are blocked from being sent to the AI model. Applies to completions and chat (including inline chat); does not apply to Agent/Edit modes or the CLI. Requires a Business or Enterprise plan.
  • Exclude files/repos from Copilot at org level
  • Prevents content from being sent to model
  • Configurable in GitHub org settings
  • Supports glob patterns for paths
  • Applies to completions and chat (Ask mode)
  • Requires Business or Enterprise plan
โš ๏ธ Content exclusion is not supported in Edit mode, Agent mode, Copilot CLI, or Copilot coding agent. Block mode for code referencing is also not enforced for CLI/coding agent.
Common Exclusions
.env / .env.*: Environment variables & secrets
**/secrets/**: Secret files & credentials
**/keys/**: API keys & certificates
internal-repo: Entire private repositories
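In the org settings, exclusion rules are written as YAML mapping repository references to path patterns. A sketch only; the repository names are hypothetical placeholders and the exact syntax should be checked against the official docs:

```yaml
# Sketch of org-level content exclusion rules (Settings > Copilot >
# Content exclusion). Repository names below are hypothetical.
"*":                       # apply to every repository in the org
  - "**/.env*"
  - "**/secrets/**"
  - "**/keys/**"
octo-org/internal-repo:    # exclude content from a specific repository
  - "/**"
```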
Use Cases
Exclude secrets: a content exclusion rule blocks .env and **/secrets/** from being sent to the AI
Compliance: exclude regulated codebases (HIPAA, PCI) from AI processing at the org level
19. Spec-Driven Development (Community)

โš ๏ธ Note: This section describes community frameworks, not official GitHub Copilot features. These methodologies work well with Copilot agents but are maintained by third parties.
What: Write a specification first, then let the AI agent implement it. Specs define requirements, acceptance criteria, and architecture โ€” giving agents a clear blueprint instead of ad-hoc prompts.
[Diagram] Spec-driven workflow: write the spec (requirements + acceptance criteria) → review (refine and validate the spec) → implement (the agent codes from the blueprint) → validate (tests pass, criteria met). Frameworks: spec-kit, OpenSpec, BMAD, GSD.
Popular Frameworks
spec-kit: GitHub's spec workflow for the coding agent
OpenSpec: Fission AI's spec framework
BMAD: BMAD methodology
GSD: Get-shit-done framework
Workflow
Write Spec → Review & Refine → Agent Implements → Validate Output
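A minimal spec skeleton, framework-agnostic and purely illustrative (the feature and criteria are made-up examples):

```markdown
# Spec: password-reset flow (illustrative example)

## Requirements
- Users can request a reset link by email.
- Links expire after 30 minutes.

## Acceptance criteria
- [ ] Expired links return a clear error, not a 500.
- [ ] A successful reset invalidates all active sessions.

## Out of scope
- SSO accounts (handled by the identity provider).
```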
Use Cases
Feature spec: write a spec with user stories + acceptance criteria → /delegate to the coding agent for implementation
Migration spec: define the target state, constraints, and rollback plan → the agent handles the migration systematically
๐Ÿ“š Sources: github/spec-kit · OpenSpec · BMAD Method · GSD
20. Customization File Structure

What: The recommended project layout for all Copilot customization files: instructions, prompts, agents, skills, hooks, toolsets, and MCP configs in one organized tree.
your-project/
├── .github/
│   ├── copilot-instructions.md
│   ├── instructions/
│   │   ├── python.instructions.md
│   │   └── react.instructions.md
│   ├── prompts/
│   │   └── create-api.prompt.md
│   ├── agents/
│   │   ├── planner.agent.md
│   │   └── reviewer.agent.md
│   ├── skills/
│   │   └── testing/
│   │       └── SKILL.md
│   ├── hooks/
│   │   └── format.json
│   ├── toolsets/
│   └── mcp.json
├── .vscode/
│   └── mcp.json
├── copilot-setup-steps.yml
├── AGENTS.md
└── CLAUDE.md
Use Cases
Monorepo setup: separate instructions/ files for the frontend (React) and backend (Python) with different applyTo globs
Team shared configs: commit the .github/ folder to Git so all team members get the same Copilot behavior
21. Third-Party Coding Agents

What: Use Anthropic's Claude and OpenAI's Codex as autonomous coding agents alongside Copilot's built-in agent. Assign GitHub issues or prompts; they create branches, implement changes, and open PRs for review.
[Diagram] Third-party coding agents: Claude (Anthropic, preview) and Codex (OpenAI, preview). Workflow: assign an issue → the agent codes → it opens a PR. Cost: 1 premium request per prompt (Actions minutes apply to the native agent only). Available from GitHub.com, VS Code, GitHub Issues, PR comments, GitHub Mobile, and the CLI.
Supported Third-Party Agents
Claude: Anthropic (preview)
Codex: OpenAI (preview)
📌 Copilot coding agent is the first-party (native) agent; see Section 14. Third-party agents work alongside it.
Where to Use
  • Agents tab: github.com/copilot/agents
  • Issues: assign the agent to an issue
  • PRs: mention @AGENT_NAME in a comment
  • VS Code: new session → select the agent type
  • GitHub Mobile: start an agent session from Home
How It Works
Assign Issue / Prompt → Agent Plans & Codes → Opens PR → You Review
Cost & Billing
  • Each prompt consumes 1 premium request
  • GitHub Actions minutes apply to the native Copilot coding agent, not third-party agents
  • Same repo access permissions as built-in agent
Enable for Your Org
  • Org Settings → Copilot → Coding agent → Partner agents
  • Available on Pro, Pro+, Business, and Enterprise plans
  • Org admins toggle each agent independently
โš ๏ธ Third-party agents are in public preview. Policies apply to cloud agents only โ€” local agents in VS Code cannot be disabled.
Use Cases
Compare agent output — assign the same issue to Claude and Codex → compare their PRs with Copilot coding agent's PR and merge the best one
Parallel workstreams — assign backend refactor to Codex while Claude handles frontend tests — both open PRs simultaneously
22

Browser Agent Tools VS Code Preview

What: Agents can open, interact with, and screenshot web pages in VS Code's integrated browser — enabling autonomous build → test → debug → fix loops for web apps without leaving the editor. This is a VS Code-specific experimental feature.
[Diagram: autonomous dev loop — build app → open in browser (openBrowserPage) → test & interact (click, type, screenshot) → fix issues → validate; repeat until all tests pass. Key tools: openBrowserPage, screenshotPage, clickElement, typeInPage, readPage, handleDialog, runPlaywrightCode.]
Available Tools
Tool | Action
openBrowserPage | Open a URL in integrated browser
navigatePage | Navigate to a different URL
readPage | Read page content & structure
screenshotPage | Take a screenshot for visual review
clickElement | Click a page element
hoverElement | Hover over an element
dragElement | Drag and drop elements
typeInPage | Type text into inputs
handleDialog | Accept/dismiss browser dialogs
runPlaywrightCode | Run custom Playwright automation
Enable Browser Tools
  • Set workbench.browser.enableChatTools to true
  • Open Chat → Agent mode → Tools picker
  • Enable all tools under Built-in > Browser
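As a sketch, the first step above can be done directly in settings.json. The setting key is the one named in this section; the boolean value is the assumed form:

```jsonc
{
  // Enable experimental browser tools for Copilot agents (VS Code)
  "workbench.browser.enableChatTools": true
}
```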
Autonomous Dev Loop
Build App → Open in Browser → Test & Interact → Fix Issues → Validate
Sharing Pages
  • Agent-opened pages use isolated ephemeral sessions (no shared cookies)
  • Share with Agent button — share your pages (uses your session/cookies)
  • Visual indicator shows when a page is shared
โš ๏ธ Browser agent tools are experimental and may change in future releases.
Use Cases
Form validation testing — "Build a contact form, open it in the browser, test all validation rules and fix any issues" (full loop)
Responsive layout check — "Screenshot this page at 320px, 768px, and 1440px widths and verify the layout is correct"
Accessibility audit — "Check this page for missing alt text, heading hierarchy, keyboard nav, and color contrast issues"
23

Checkpoints & Session Forking VS Code Preview

What: VS Code auto-snapshots your workspace before each agent action. Restore to any previous state, redo undone changes, edit & resend earlier prompts, or fork a conversation to explore an alternative approach — all without losing work.
[Diagram: checkpoint timeline — checkpoints C1–C3 auto-saved before each agent action; after a bad result at C3, restore to C2 and retry (C4 succeeds), or fork (F1) to explore an alternative approach. Actions: restore, fork, edit & resend.]
Checkpoints
  • Enable: chat.checkpoints.enabled
  • Auto-created before each chat request
  • Hover chat request → Restore Checkpoint
  • Reverts all file changes made after that point
  • Redo button appears after restore — undo mistakes
View File Changes
  • Enable: chat.checkpoints.showFileChanges
  • Shows modified files + lines added/removed per request
  • Helps decide which checkpoint to restore to
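A settings.json sketch combining the two checkpoint settings from this section (boolean values assumed):

```jsonc
{
  // Snapshot workspace state before each chat request
  "chat.checkpoints.enabled": true,
  // Show modified files and line counts per request
  "chat.checkpoints.showFileChanges": true
}
```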
💡 Checkpoints complement Git but don't replace it. They're for quick iteration within a session — use Git for permanent version control.
Edit Previous Requests
  • Click any previous chat request to edit it
  • Reverts changes from that request + all later ones
  • Resends as a new request with your edits
  • Configure: chat.editRequests
Fork from Checkpoint
  • Hover chat request → Fork Conversation
  • Creates new independent session from that point
  • Original conversation preserved
  • Explore alternative approaches side by side
Workflow
Agent Acts → Review Output → ✅ Keep / 🔄 Restore / 🍴 Fork
Use Cases
Safe experimentation — let the agent try a risky refactor, restore checkpoint if it breaks tests, fork to try an alternative
Prompt refinement — edit a vague prompt 3 requests back → agent re-runs with better instructions, previous bad output is undone
24

Agent Sessions, Handoffs & Orchestration

What: Run multiple agent sessions in parallel across local, background (CLI), and cloud environments. Hand off tasks between agent types with full context carry-over. Manage everything from a unified sessions view.
[Diagram: multi-agent orchestration — handoff flow Plan (local, VS Code interactive) → Build (CLI, background on machine) → PR (cloud, remote infra) → team reviews PR, with full context carried over on each handoff. Parallel agents can run the same task with different models (GPT-5.4, Sonnet 4.6, Gemini 3.1 Pro) and compare results, or research different topics simultaneously.]
Agent Types
Type | Runs | Best For
Local | VS Code (interactive) | Brainstorm, debug, browse
Copilot CLI | Background on machine | Well-defined tasks, POCs
Cloud | Remote infra | PRs, team collaboration
Third-party | Claude / Codex | Provider-specific models
Handoff Flow
Plan (Local) → Implement (CLI) → PR (Cloud) → Review
  • Select different agent type from session dropdown
  • Full conversation history carries over
  • Original session archived after handoff
  • /delegate in CLI → sends to cloud agent
Sessions View
  • Unified list of all sessions (local, CLI, cloud)
  • Compact or Side-by-side layout modes
  • Shows status, type, and file change stats
  • Archive / unarchive completed sessions
  • Grouped by time (Today, Last Week, etc.)
Agent Status Indicator
  • Badge in command center title bar Experimental
  • Shows unread messages + in-progress count
  • Enable: chat.agentsControl.enabled
Parallel Sessions
  • Create multiple sessions via + button
  • Each session: independent context window
  • Run different tasks simultaneously
  • Assign TODOs: right-click code → assign to agent
Use Cases
Plan → Build → Ship — Plan agent designs architecture, hand off to CLI for implementation, then cloud agent opens the PR for team review
Parallel features — run 3 local sessions: one for API endpoints, one for UI components, one for tests — all working simultaneously
TODO delegation — add // TODO: add input validation in code → right-click → assign to Copilot coding agent → agent creates a PR
25

Prompt & Context Engineering

What: Proven techniques for writing effective prompts and providing the right context. The quality of Copilot's output depends directly on how well you communicate — be specific, break tasks down, include expected output, and choose the right interaction mode.
[Diagram: good vs bad prompts — "Make this code better" (vague, no language, no criteria, no expected output) yields poor results; "Refactor to reduce O(n²) → O(n log n). Add tests." (specific, clear goal, measurable criteria, verification step) yields great results. Tips: be specific, break it down, show expected output, iterate, ask the AI to ask, add context, start new sessions. Techniques: few-shot prompting (2-3 input → output examples), chain-of-thought ("think step by step"), role prompting ("act as a senior security engineer"), negative prompting ("don't use regex").]
Writing Effective Prompts
  • Be specific — state language, framework, expected I/O
  • Break down complex tasks — smaller steps = better results
  • Include expected output — test cases, acceptance criteria
  • Avoid vague prompts — "make this better" → "reduce time complexity"
  • Iterate with follow-ups — refine, don't rewrite the whole prompt
  • Ask AI to ask questions — "ask me clarifying questions before starting"
  • Course-correct early — steer mid-request if heading wrong way
Prompt Example
Write a TypeScript function that validates email addresses.
Return true for valid, false otherwise. Don't use regex.
Example: validateEmail("user@example.com") → true
Example: validateEmail("invalid") → false
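To illustrate what a response to that prompt could look like, here is a minimal sketch. It is a hypothetical implementation, not Copilot's actual output, and the validation rules shown are deliberately simple:

```typescript
// Hypothetical regex-free email validator: exactly one "@" with a
// non-empty local part, and a domain of at least two non-empty labels.
function validateEmail(email: string): boolean {
  const atIndex = email.indexOf("@");
  // Require exactly one "@", not at the start of the string.
  if (atIndex < 1 || atIndex !== email.lastIndexOf("@")) return false;
  const domain = email.slice(atIndex + 1);
  const labels = domain.split(".");
  // Domain needs at least two non-empty labels, e.g. "example" + "com".
  return labels.length >= 2 && labels.every((label) => label.length > 0);
}

console.log(validateEmail("user@example.com")); // → true
console.log(validateEmail("invalid"));          // → false
```

Note how the prompt's example cases translate directly into checks you can run against the generated code.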
Providing Context
  • Agent auto-searches workspace — usually no need for #codebase
  • Reference specific files: #file, #folder, #symbol
  • #fetch — pull info from web pages / docs
  • Attach images / screenshots for visual context
  • Use integrated browser to select page elements
Pick the Right Mode
Mode | When to Use
Inline suggestions | In-flow coding, boilerplate
Ask | Questions, brainstorm, explore
Inline chat | Targeted in-place edits
Agent | Multi-file autonomous changes
Plan | Architecture, migration strategy
Smart actions | One-click commit msg, fix, rename
Session Hygiene
  • New session for unrelated tasks — avoid context pollution
  • Use subagents for isolated research
  • Run parallel sessions for independent work
Use Cases
High-quality output — "Implement a rate limiter using token bucket. Write tests that verify: 10 req/s allowed, 11th rejected, refills after 1s. Run the tests."
Plan first — use Plan agent to design architecture → review → hand off to Agent mode for implementation with tests as verification
26

Smart Actions

What: One-click AI-powered actions built into VS Code — no prompt needed. Generate commit messages, rename symbols intelligently, fix diagnostics, and search semantically. Available via right-click, lightbulb, or keyboard shortcuts.
[Diagram: smart actions, zero prompts needed — ✨ commit message, F2 smart rename, 💡 fix with Copilot, fix test failure, generate docs, semantic search, PR summary. Access via ⌘. or right-click context menus.]
Available Actions
Action | Where
Generate Commit Message | Source Control panel
Rename Symbol | F2 on any symbol
Fix with Copilot | Lightbulb on errors/warnings
Fix Test Failure | Test Explorer failed tests
Generate Docs | Right-click → Generate Code
Semantic Search | Search view (meaning, not keywords)
Generate PR Summary | PR description field
How to Access
  • Lightbulb (⌘.) — hover over error → AI fix suggestion
  • Right-click → Generate Code menu
  • Context menus in Source Control, Test Explorer
  • No prompt required — Copilot infers from context
Use Cases
Instant commit messages — click ✨ in Source Control → Copilot analyzes diff and generates a Conventional Commit message
Smart rename — F2 on processData → Copilot suggests transformUserPayload based on function body semantics
27

BYOK โ€” Bring Your Own Key

What: Use your own API keys to access hundreds of models beyond the built-in ones. Two modes: Individual BYOK (Pro/Pro+ users add keys in VS Code) and Enterprise BYOK (admins connect keys at org/enterprise level, available since Nov 2025). Both let you bypass rate limits, use custom models, and leverage existing provider contracts.
[Diagram: two BYOK modes — Individual (Pro/Pro+): user adds an API key in the VS Code model picker; suited to Ollama, local models, experimentation; not available for Business/Enterprise. Enterprise (Business/Enterprise, preview): admin connects a key in org/enterprise settings; centralized model management; billed by the provider, not GitHub quotas; streaming & Responses API supported. Supported providers: OpenAI, Anthropic, Google AI Studio, MS Foundry, AWS Bedrock, xAI, OpenAI-compatible, Ollama (local).]
Individual BYOK (Pro/Pro+)
  • Model picker → Manage Models → select provider → enter key
  • Or: Chat: Manage Language Models command
Built-in providers | OpenAI, Anthropic, Google, etc.
Extensions | AI Toolkit, Foundry Local
Local models | Ollama, custom OpenAI-compat
📌 Individual BYOK is for Pro/Pro+ only. Business/Enterprise users use Enterprise BYOK instead.
Enterprise BYOK (Business/Enterprise) Preview
  • Org/Enterprise settings → connect API key → manage model access
  • Enterprise admins control which orgs can use which models
Providers | OpenAI, Anthropic, Google AI Studio, MS Foundry, AWS Bedrock, xAI, OpenAI-compat
Billing | Billed by your provider, not GitHub quotas
Context window | Admins can set max context window
Streaming | Streaming responses supported
Responses API | Supported (since Jan 2026 update)
💡 BYOK models need tool calling support to work in Agent mode. Enterprise BYOK usage does not count against GitHub Copilot request quotas.
Use Cases
Run local Ollama — deploy Phi-4 locally, add via Manage Models → use in chat with zero API cost and full privacy (Individual)
Enterprise model standardization — connect org's Azure OpenAI key → all devs use approved models, billed via existing contract (Enterprise)
Cutting-edge models — add your Anthropic key to try Claude's latest release before it appears in the built-in list
28

Privacy, Security & Trust

What: How GitHub protects your data — code is not used for training (Business/Enterprise), prompts are not retained, responsible AI filters run pre- and post-model, and IP indemnity covers Copilot-generated code.
[Diagram: data protection — no training on your code, no prompt retention (prompts & suggestions discarded after the response is delivered), IP indemnity for Copilot output (Business/Enterprise). Responsible AI pipeline: pre-model filters → content exclusion → AI model → post-model filters → duplicate code check. Compliance: SOC 2 Type II, GDPR, encryption, Trust Center, public code filter, telemetry opt-out.]
Data Handling
What | Behavior
Prompts & suggestions | Not retained after response delivered
Training on your code | No (all plans — cannot be enabled)
Telemetry opt-out | Individual users can opt out
Code snippets | Encrypted in transit & at rest
Responsible AI Filters
  • Pre-model — content exclusion, harmful prompt detection
  • Post-model — duplicate/public code detection, safety checks
  • Configurable: enable/disable public code matching
IP & Compliance
  • IP indemnity — GitHub indemnifies Copilot output (Business/Enterprise)
  • SOC 2 Type II compliant
  • GDPR compliant — EU data processing
  • Subprocessor list published on Trust Center
Security Best Practices
  • Review AI output for vulnerabilities (OWASP Top 10)
  • Don't paste credentials into prompts
  • Use content exclusion for sensitive files
  • Only use MCP servers from trusted sources
  • BYOK: no responsible AI filtering on output
💡 Enable public code filter to block suggestions matching public repositories — reduces IP risk.
Use Cases
Compliance review — point stakeholders to the Trust Center FAQ for SOC 2, GDPR, and data handling evidence
Reduce IP risk — enable public code filter + content exclusion for proprietary algorithms → dual protection
29

Org & Enterprise Administration

What: Admin controls for managing Copilot across your organization — assign seats, set policies for features & models, review usage analytics, audit logs, and configure content exclusions at scale.
[Diagram: policy cascade — Enterprise (top-level policies, overrides all) → Organization (seats, models, features, exclusions) → User (personal prefs, telemetry opt-out). Admin tools: usage analytics, audit logs, premium request tracking. Configurable policies: features, models, coding agent, preview features, public code filter, content exclusion, seat management, 3rd-party agents.]
Policy Management
Policy | Controls
Features | Chat, completions, agents, MCP
Models | Enable/disable premium models
Coding agent | Enable cloud agent + 3rd-party
Editor preview | Opt in to preview features
Public code filter | Block public code matches
Content exclusion | Exclude files/repos from AI
Access Management
  • Assign Copilot seats per user or team
  • Enterprise owners → org-level access
  • Org owners → member-level access
  • Policies cascade: Enterprise → Org → User
Usage Analytics
  • Dashboard: acceptance rates, active users, languages
  • Premium request consumption by user/team
  • Breakdown by feature (chat, completions, agent)
  • Export data via API for custom reporting
Audit Logs
  • Track Copilot actions by user
  • Seat assignments, policy changes
  • Available for Business & Enterprise plans
  • 180-day retention limit
Settings Path
Org | Settings → Copilot → Policies / Models
Enterprise | Enterprise Settings → Copilot → Policies
💡 Enterprise policies override org settings. If an explicit setting is chosen at enterprise level, orgs cannot change it.
Use Cases
Rollout strategy — enable Copilot for a pilot team, review usage analytics for 30 days, then expand to the full org
Model governance — disable Opus 4.6 (30× cost) org-wide, allow only 0×–1× models to control premium request spend
30

Subagents

What: Isolated child agents that run research or execution tasks within a parent agent session. Context stays separate to avoid polluting the main conversation — results are summarized back to the parent.
[Diagram: subagents — a parent agent spawns isolated Explore, Task, and Code-review subagents; each runs in its own context and returns a single summary to the parent. Examples: 3× Explore in parallel (auth, caching, logging); Task runs the test suite and reports "2/47 failed"; Code-review checks changes quietly with no noisy output in chat.]
Built-in Subagents
Subagent | Purpose
Explore | Quick codebase analysis & Q&A
Task | Run tests/builds, return brief summaries
Code-review | Review changes, surface real issues
General-purpose | Catch-all for tasks that don't fit a specialized subagent
Research | Deep research tasks
How They Work
  • Parent agent spawns subagent for a focused task
  • Subagent runs in isolated context — no pollution
  • Returns a single summary message to parent
  • Can run in parallel for independent tasks
  • Available in VS Code agent mode & Copilot CLI
When to Use
  • Research a topic without cluttering main chat
  • Run tests and get a pass/fail summary
  • Explore multiple approaches in parallel
Use Cases
Parallel research — agent spawns 3 Explore subagents to investigate auth, caching, and logging patterns simultaneously
Test runner — Task subagent runs full test suite, returns "2 of 47 tests failed: test_auth, test_cache" — no noisy output in main chat
📚 Sources: VS Code Docs — Subagents
31

Copilot Metrics API

What: REST API endpoints that provide aggregated usage metrics for Copilot across your enterprise, organization, or team — acceptance rates, active users, language/editor breakdowns, chat usage, and PR summaries. Build custom dashboards and track ROI.
[Diagram: Metrics API data flow — REST endpoints at enterprise, organization, and team scope return one record per day (date, total_active_users, total_engaged_users) with sections for IDE code completions, IDE chat, GitHub.com chat, and PR summaries, broken down by language, editor, model, repository, and day. acceptance_rate = total_code_acceptances / total_code_suggestions × 100, e.g. 376 / 745 ≈ 50.5%.]
API Endpoints
Scope | Endpoint
Enterprise | GET /enterprises/{ent}/copilot/metrics
Organization | GET /orgs/{org}/copilot/metrics
Team | GET /orgs/{org}/team/{slug}/copilot/metrics
Query Parameters
since | ISO 8601 date (max 100 days ago)
until | ISO 8601 end date
page | Page number (default: 1)
per_page | Days per page (max 100)
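A small sketch of assembling a request URL from the endpoint and parameters above; the org name and date values are placeholders:

```typescript
// Build the organization-scope metrics URL from the endpoint table above.
function buildMetricsUrl(org: string, since?: string, until?: string): string {
  const base = `https://api.github.com/orgs/${org}/copilot/metrics`;
  const params = new URLSearchParams();
  if (since) params.set("since", since); // ISO 8601, max 100 days ago
  if (until) params.set("until", until); // ISO 8601 end date
  const query = params.toString();
  return query ? `${base}?${query}` : base;
}

console.log(buildMetricsUrl("my-org", "2026-02-01", "2026-03-01"));
// → "https://api.github.com/orgs/my-org/copilot/metrics?since=2026-02-01&until=2026-03-01"
```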
cURL Example
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-GitHub-Api-Version: 2026-03-10" \
  "https://api.github.com/orgs/my-org/copilot/metrics?since=2026-02-01&until=2026-03-01"
Metrics Returned
Category | Key Metrics
IDE Completions | Suggestions, acceptances, lines suggested/accepted — by language, editor, model
IDE Chat | Total chats, insertion events, copy events — by editor, model
GitHub.com Chat | Total chats, engaged users — by model
PR Summaries | Total summaries created — by repo, model
Authentication
PAT (classic) | manage_billing:copilot, read:org, or read:enterprise
Fine-grained | "GitHub Copilot Business" or "Administration" org permissions (read)
GitHub App | User or installation access tokens
Requirements
  • Min 5 members with active Copilot licenses for data to appear
  • Data processed once per day — yesterday is latest available
  • Up to 100 days of historical data
  • Users must have telemetry enabled in their IDE
  • Enable Copilot Metrics API access policy in org settings
โš ๏ธ The legacy /copilot/metrics endpoints are being deprecated. Migrate to the newer Copilot usage metrics endpoints for more depth and flexibility.
Response Schema (Simplified)
[{
  "date": "2026-03-24",
  "total_active_users": 24,   // licensed users
  "total_engaged_users": 20,  // actually used Copilot
  "copilot_ide_code_completions": {
    "total_engaged_users": 20,
    "languages": [{ "name": "python", "total_engaged_users": 10 }, ...],
    "editors": [{
      "name": "vscode",
      "total_engaged_users": 13,
      "models": [{
        "name": "default",
        "languages": [{
          "name": "python",
          "total_code_suggestions": 249,  // suggested to user
          "total_code_acceptances": 123,  // user accepted
          "total_code_lines_suggested": 225,
          "total_code_lines_accepted": 135
        }]
      }]
    }]
  },
  "copilot_ide_chat": { ... },             // chats, insertions, copies
  "copilot_dotcom_chat": { ... },          // GitHub.com chat usage
  "copilot_dotcom_pull_requests": { ... }  // PR summaries by repo
}]
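The acceptance-rate formula from the diagram can be computed from such records. This sketch uses a simplified, flattened record with invented values matching the example numbers (real responses nest these counts under editors/models/languages, as shown above):

```typescript
// Simplified daily record: counts flattened for the sketch.
interface DailyMetrics {
  date: string;
  total_code_suggestions: number;
  total_code_acceptances: number;
}

// acceptance_rate = total_code_acceptances / total_code_suggestions × 100
function acceptanceRate(day: DailyMetrics): number {
  if (day.total_code_suggestions === 0) return 0; // avoid division by zero
  return (day.total_code_acceptances / day.total_code_suggestions) * 100;
}

const day: DailyMetrics = {
  date: "2026-03-24",
  total_code_suggestions: 745,
  total_code_acceptances: 376,
};
console.log(acceptanceRate(day).toFixed(1)); // → "50.5"
```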
Community Tools
copilot-metrics-viewer | GitHub's official dashboard for visualizing Copilot metrics data
Custom dashboards | Pipe API data to Grafana, Power BI, Looker, or Datadog
/usage in CLI | Quick session stats in Copilot CLI
Use Cases
ROI dashboard — query org metrics daily → calculate acceptance rate per team → report to leadership on AI productivity gains
Identify low adoption — compare active vs engaged users per team → target training for teams with low engagement rates
Language coverage — break down completions by language → discover which stacks benefit most from Copilot
32

Code Referencing & Attribution

What: When Copilot suggests code that matches publicly available code on GitHub, it can either block the suggestion or display references — showing the source repository, license type, and a link to the original code. Helps manage intellectual property risk.
[Diagram: code referencing — Copilot compares each suggestion (~150 chars) against public GitHub repos via a post-model filter. Block mode hides the suggestion entirely (user never sees the match, IP risk eliminated); Allow mode shows it with attribution (license + repo URL) for the user to review. Inline completion matches are logged in Output → "GitHub Copilot Log (Code References)"; chat shows a "Similar code found — View matches" banner. Configure: Personal Settings → Suggestions matching public code → Allow or Block, or org Copilot Policies.]
Two Modes
Mode | Behavior
Block | Suggestions matching public code (~150 chars) are hidden — user never sees them. Eliminates IP risk.
Allow | Suggestions are shown with references — source repo URL, license type, and code snippet displayed.
Where References Appear
  • Inline completions — Output panel → "GitHub Copilot Log (Code References)"
  • Chat responses — banner at bottom: "Similar code found — View matches"
  • Click View matches → new tab with license, URL, and code snippet
  • Ctrl+click / Cmd+click URL → view full source file on GitHub
Reference Details
License | MIT, Apache-2.0, GPL, or "unknown"
URL | Direct link to matching file on GitHub
Code snippet | The matching public code excerpt
Location | Line & column where suggestion was inserted
How to Configure
  • Individual: Profile → Copilot Settings → "Suggestions matching public code" → Allow or Block
  • Organization: Org Settings → Copilot → Policies → "Suggestions matching public code"
  • Enterprise: Enterprise Settings → Copilot → Policies (overrides org setting)
💡 Verify code referencing works by typing function fizzBuzz() — a common implementation will match and show references in the log.
โš ๏ธ Org/enterprise members inherit the organization's policy and cannot override it in personal settings.
Use Cases
IP-safe development — enable Block mode enterprise-wide to prevent any public code matches from reaching developers
Attribution workflow — enable Allow mode + review references before committing → add license headers where required
Open-source compliance — check if matched code uses GPL → decide whether to accept or rewrite to avoid copyleft obligations
33

Copilot Autofix & Advanced Security

What: AI-powered security remediation built into GitHub's code scanning pipeline. When CodeQL detects a vulnerability in a PR, Copilot Autofix generates a suggested fix with explanation. Requires GitHub Code Security license (free for public repos). Integrates with Advanced Security, Code Scanning, and Dependabot.
[Diagram: PR security pipeline — push/PR → CodeQL scans (SARIF analysis finds, e.g., SQL injection) → Autofix generates an LLM fix plus natural-language explanation → dev reviews the diff, edits if needed, commits → alert resolved, CI passes, PR merges. Advanced Security pillars with Copilot: Code Scanning (CodeQL static analysis, SARIF alerts on PRs), Secret Scanning (leaked credentials, push protection), Dependabot (CVE-based dependency alerts, auto-PRs), and Copilot Autofix (AI fix generation across all pillars, enabled by default on CodeQL repos, requires GitHub Code Security). Supported languages: JavaScript, TypeScript, Python, Java, Kotlin, Go, C#, C/C++, Ruby, Swift, Rust. Works on public repos (free), org repos with GitHub Code Security, for PRs and existing alerts on the default branch.]
How It Works
  • CodeQL scans PR → detects security vulnerability (SARIF format)
  • Alert + code context sent to LLM via internal Copilot APIs
  • Fix generated — code patch + natural language explanation
  • Displayed on PR — click to view diff, edit, and commit
  • Also available for existing alerts on default branch
Where to Find Autofix
PR alerts | Code scanning alert → "Generate fix" button
Security tab | Repo → Security → Code scanning alerts
Security overview | Org dashboard → autofix suggestion counts
Availability
Repo Type | Autofix Access
Public repos | Free
Org repos (GitHub Code Security) | Included
Enterprise (GHES + Code Security) | Included with Code Security license
Advanced Security Integration
Feature | Autofix Role
Code Scanning | Generates code patches for CodeQL alerts (SQL injection, XSS, etc.)
Secret Scanning | AI-assisted secret rotation guidance
Dependabot | Dependency review integration — verify fixes don't add insecure deps
Security Overview | Org-level dashboard of autofix suggestion counts & resolution rates
What Autofix Sends to the LLM
  • CodeQL alert data (SARIF format)
  • Code snippets around source, sink, & flow path
  • First ~10 lines of each involved file
  • CodeQL query help text
Configuration
Enable/disable | Repo / Org / Enterprise settings
Default | Enabled on all CodeQL repos
โš ๏ธ Always verify fixes โ€” Autofix may suggest partial fixes, incorrect locations, or dependency changes. Run CI tests before merging.
💡 Data is not used for LLM training. Governed by Advanced Security terms, not Copilot terms.
Use Cases
Fix SQL injection on PR — CodeQL detects unsanitized input → Autofix generates parameterized query patch → dev reviews diff and commits
Remediate backlog — navigate Security tab → existing alerts on default branch → click "Generate fix" → create a PR with the suggested patch
Org-wide metrics — Security overview dashboard shows total autofix suggestions, acceptance rates, and time-to-fix by team
34

Chat Debug View & Agent Debug Logs VS Code

What: Two complementary debugging tools built into VS Code that let you inspect exactly what Copilot sees — the full system prompt, loaded instructions/skills, context files, tool call I/O, token usage, and raw LLM responses. Essential for debugging why Copilot ignores files, skips tools, or produces unexpected output.
[Diagram: two debugging tools — Chat Debug View (⋯ menu → Show Chat Debug View, or Developer: Show Chat Debug View) exposes raw LLM data per request in expandable sections: system prompt, user prompt, context, response, tool responses; best for inspecting exact prompts, verifying context, debugging MCP tool I/O. Agent Debug Log (⚙️ → Show Agent Debug Logs, enable agentDebugLog.enabled) is an event timeline with Logs, Summary, and Flow Chart views, filterable by event type (LLM requests, tool calls, discovery, errors), plus attach-to-chat and OTLP JSON export/import; best for timeline analysis, token budgets, and subagent debugging. Quick alternative: /troubleshoot + question, e.g. "/troubleshoot list loaded instructions".]
Chat Debug View (Raw Data)
Section | What It Shows
System Prompt | Full instructions, agent description, loaded skills — verify custom instructions appear
User Prompt | Exact text sent to model — confirm #-mentions resolved to real content
Context | Files, symbols, attachments — check if expected files are included or context window is full
Response | Raw model output including reasoning — understand how model interpreted your request
Tool Responses | Input/output for each tool call — debug MCP servers, verify correct payloads
How to Open
Chat Debug View | Chat ⋯ menu → Show Chat Debug View
Command Palette | Developer: Show Chat Debug View
Agent Debug Log Panel Preview
View | What It Shows
Logs | Chronological events — LLM requests, tool calls, discovery, errors. Filter by type.
Summary | Aggregate stats — total tool calls, token usage, error count, duration
Flow Chart | Visual agent ↔ subagent interaction graph — pan, zoom, click nodes
How to Open
Enable setting — github.copilot.chat.agentDebugLog.enabled
Gear icon — Chat ⚙️ → Show Agent Debug Logs
Command Palette — Developer: Open Agent Debug Logs
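The enable setting above can be flipped directly in your user or workspace settings. A minimal sketch — the key is taken from this sheet, and its availability may depend on your Copilot Chat version:

```jsonc
// settings.json — enable the Agent Debug Log panel (preview feature)
{
  "github.copilot.chat.agentDebugLog.enabled": true
}
```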
Extra Features
  • Attach to chat — sparkle icon → ask AI about its own session
  • Export/Import — OTLP JSON format, share sessions offline
  • /troubleshoot — ask AI directly: /troubleshoot list loaded instructions
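Because exports are plain OTLP JSON, sessions can be post-processed outside VS Code. A hedged sketch assuming the standard OTLP/JSON span layout (resourceSpans → scopeSpans → spans); the span names (llm.request, tool.call) and the tokens.total attribute are made-up illustrations, not the actual Copilot export schema:

```python
import json

# Hypothetical sample shaped like an OTLP/JSON trace export.
sample = json.loads("""
{
  "resourceSpans": [
    { "scopeSpans": [
        { "spans": [
            {"name": "llm.request", "attributes": [
              {"key": "tokens.total", "value": {"intValue": "1200"}}]},
            {"name": "tool.call", "attributes": [
              {"key": "tokens.total", "value": {"intValue": "300"}}]},
            {"name": "llm.request", "attributes": [
              {"key": "tokens.total", "value": {"intValue": "800"}}]}
        ] }
    ] }
  ]
}
""")

def token_totals_by_span_name(trace):
    """Sum a numeric span attribute per span name across the export."""
    totals = {}
    for rs in trace.get("resourceSpans", []):
        for ss in rs.get("scopeSpans", []):
            for span in ss.get("spans", []):
                attrs = {a["key"]: a["value"] for a in span.get("attributes", [])}
                tokens = int(attrs.get("tokens.total", {}).get("intValue", 0))
                totals[span["name"]] = totals.get(span["name"], 0) + tokens
    return totals

print(token_totals_by_span_name(sample))
# {'llm.request': 2000, 'tool.call': 300}
```

The same traversal works for any per-span attribute (durations, error flags) once you know the real keys from your own export.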
Common Troubleshooting Scenarios
Problem — Where to Look — What to Check
AI ignores workspace files — Agent Logs → Discovery events + Chat Debug → Context — Workspace indexed? Files in context window?
MCP tool not invoked — Agent Logs → Tool Calls filter + Chat Debug → System Prompt — Tool listed in available tools? Server running?
Response truncated — Agent Logs → LLM Requests → token usage — Context window full? Start a new session.
Prompt file not applied — Agent Logs → Discovery events — File loaded/skipped? applyTo glob match?
Custom instructions missing — Chat Debug → System Prompt section — Expand and search for your instruction text
💡 Set log level to Trace for maximum detail: Command Palette → Developer: Set Log Level → select Trace for the GitHub Copilot and GitHub Copilot Chat extensions.
Use Cases
Debug missing instructions — open Chat Debug View → expand System Prompt → search for your copilot-instructions.md text → confirm it was loaded
Optimize token usage — open Agent Debug Logs → Summary view → check token consumption → identify whether the context window is being wasted on irrelevant files
Debug multi-agent flow — open the Agent Flow Chart → visualize the parent → subagent → tool call sequence → identify where the chain breaks
★

Quick Reference & Resources

What: Essential keyboard shortcuts, AI-powered slash commands for creating customizations, and key links to documentation, changelogs, and community resources.
Chat Shortcuts
Shortcut — Action
⌃⌘I / Ctrl+Alt+I — Open Chat panel
⌘I / Ctrl+I — Inline chat
⌘⇧I — Toggle secondary sidebar
Tab — Accept suggestion
Esc — Dismiss suggestion
Alt+] / Alt+[ — Next / prev suggestion
Ctrl+→ — Accept word by word
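The defaults above can be rebound in keybindings.json. A minimal sketch — workbench.action.chat.open is the stock VS Code command ID for opening the Chat view, and the chosen key combination here is an arbitrary example:

```jsonc
// keybindings.json — rebind opening the Chat panel (IDs may vary by version)
[
  { "key": "ctrl+shift+space", "command": "workbench.action.chat.open" }
]
```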
Useful Extensions
Awesome Copilot — Curated tips & resources · ↗ Link
Prompt Boost — Enhance your prompts · ↗ Link
Token Tracker — Track Copilot usage · ↗ Link
SpecStory — Save & search AI chat history · ↗ Link
HuggingFace — HF models in VS Code · ↗ Link
Quick Commands
Command — Action
/init — Generate workspace instructions
/explain — Explain selected code
/fix — Fix problems in code
/tests — Generate unit tests
/new — Scaffold a new project
/clear — Start a new chat session
/help — Copilot quick reference
Key Resources
Feature Matrix — Copilot feature comparison · ↗ Link
MCP Marketplace — Discover MCP servers · ↗ Link
Skills Repository — Browse agent skills · ↗ Link
Changelog — Latest Copilot updates · ↗ Link
awesome-copilot — Community resources · ↗ Link
Use Cases
Fast navigation — memorize ⌃⌘I (open chat) + ⌘I (inline chat) + Tab (accept) for a fluid keyboard-only workflow
Stay current — check github.blog/changelog/copilot weekly for new models, features, and API changes
📚 Sources: GitHub Copilot Docs · Changelog