Developers are now orchestrating Claude, GPT, and Gemini models directly in GitHub workflows through custom agents. GitHub's AgentHQ platform extends Copilot into a multi-agent system where specialised agents can be created for specific tasks (code review, documentation, DevOps, security). I have been experimenting with these custom agents to see how they fit into my development workflow, and in this post I cover what custom agents are, how to set them up, and how to use them effectively.
Custom agents in GitHub Copilot are specialized AI assistants tailored to specific tasks, workflows, or team conventions. Think of them as personas you can create for different aspects of your development work: a code reviewer, a documentation writer, a DevOps helper, a security auditor.
Unlike the general-purpose Copilot, custom agents can be configured with specific instructions, tool permissions, and behavioral guidelines. They live as configuration files in the repository (typically in .github/agents/), making them version-controlled, auditable, and shareable across the entire team.
The key innovation here is that these agents aren’t just chatbots—they’re integrated directly into the development workflow. They can read files, make edits, search codebases, trigger builds, review pull requests, and interact with the entire GitHub ecosystem while respecting repository permissions and security boundaries.
Creating custom agents is straightforward. They're defined as Markdown files with YAML frontmatter, stored in your repository's .github/agents/ directory. Here's a practical example: a documentation agent saved as .github/agents/docs.agent.md.
````markdown
---
name: docs_specialist
description: Expert technical writer for API documentation and developer guides
target: github-copilot
tools:
  - read
  - edit
  - search
infer: true
metadata:
  area: documentation
  priority: high
---

# 📝 Documentation Specialist Agent

You are a technical writing expert specialized in this project's technology stack.

## Expertise

- TypeScript and React component documentation
- API reference generation and maintenance
- Developer guide creation and updates
- Markdown formatting and structure

## Standards

- Follow the Microsoft Writing Style Guide
- Use active voice and present tense
- Include code examples for all API methods
- Always update `/docs/api/` when code changes
- Never modify production configuration files

## Workflow

1. When documenting a component, always include:
   - Purpose and use cases
   - Props/parameters with types
   - Return values
   - Code examples
   - Common pitfalls or gotchas
2. Build and validate documentation:

   ```bash
   npm run docs:build
   npm run docs:lint
   ```

## Boundaries

- Never access secrets or credentials
- Do not modify files in `/vendor/` or `/node_modules/`
- Always preserve existing code examples unless explicitly outdated
````
The YAML frontmatter supports several properties. The `name` field sets the agent's display name in the UI. The `description` field (required) explains the agent's role and capabilities. The `target` field specifies where it runs (`vscode` or `github-copilot`). The `tools` field lists available tools (e.g. `read`, `edit`, `search`, `bash`).
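For comparison, here is a minimal frontmatter for a VS Code-scoped agent using the same properties (the agent name, description, and tool choices are illustrative, not from GitHub's docs):

```yaml
---
name: pr_reviewer
description: Reviews pull requests against the team's TypeScript style checklist
target: vscode
tools:
  - read
  - search
---
```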
When writing agent instructions, I have found that being specific makes a big difference. Include exact commands, file paths, and workflows. Set clear boundaries by explicitly stating what the agent should never do. Provide examples showing expected output formats and code samples. Include context about the tech stack, versions, and tools. Define what success looks like.
Agents can be scoped at multiple levels: repository agents, organization agents, and VS Code user agents. Repository agent files live in .github/agents/. The configuration file's name (minus .md or .agent.md) is used for deduplication between levels, so the lowest-level configuration takes precedence.
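As an illustration of the precedence rule, suppose an organization-level agent and a repository-level agent share a file name (the layout below is hypothetical, and I am reading "lowest level" as "closest to the repository"):

```text
organization level:           docs.agent.md  → agent "docs"
repository .github/agents/:   docs.agent.md  → agent "docs"  (takes precedence)
```

Both files deduplicate to the agent name `docs`, so only the repository-level configuration is used.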
Custom agents codify team knowledge, enforce standards, and automate repetitive work. Solo developers gain consistency across projects. Teams preserve conventions and accelerate onboarding. Enterprises embed compliance and maintain full audit trails.
The productivity claims circulating in the community (30-40% time savings, faster onboarding) are difficult to verify independently. My experience has been positive, but results will vary based on how well agents are configured and how repetitive the workflows are. I would recommend starting with realistic expectations and measuring the actual impact in your own environment.
The gotcha with custom agents is that vague instructions produce inconsistent results. “You are a helpful coding assistant” doesn’t work. “You are a test engineer who writes tests for React components using Jest, follows these examples, and never modifies source code” does. I learned this the hard way after some initial attempts produced unreliable results.
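To make that concrete, here is a sketch of such a test-engineer agent; the file name, standards, and boundaries are illustrative assumptions rather than anything from GitHub's documentation:

```markdown
---
name: test_engineer
description: Writes Jest tests for React components without touching source code
target: github-copilot
tools:
  - read
  - search
  - edit
---

# Test Engineer Agent

You are a test engineer who writes tests for React components using Jest.

## Standards

- Place tests next to the component as `ComponentName.test.tsx`
- Cover rendering, props, and user interactions
- Run `npm test` to validate before finishing

## Boundaries

- Only create or edit files matching `*.test.tsx`
- Never modify component source or configuration files
```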
Other common mistakes to avoid:

- Granting too many tools up front. Begin with `read` and `search`, start restrictive, and expand as needed (see the snippet below).
- Treating configuration as one-and-done. Agent configuration is iterative; expect to refine instructions over several weeks as you discover edge cases.
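For example, a read-only starting point grants just the inspection tools and leaves `edit` and `bash` out until the agent has proven reliable:

```yaml
tools:
  - read
  - search
```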
I would recommend starting with a single agent for one specific, repetitive task in the workflow. Documentation updates, code review checklists, or test generation are good candidates. Configure it, use it for a week, and refine based on what works and what doesn’t.
Once one agent is working well, consider expanding to other workflow areas. The patterns learned from the first agent will inform better configurations for subsequent ones.