In the previous post I covered what Agent Skills are, why they matter, and how they sit alongside custom instructions, MCP servers, and custom agents. This post is the practical follow-up: how to actually build a skill, three worked examples from my own setup, and the trade-offs I have run into.
Creating a skill is straightforward. A directory, a SKILL.md file, and optionally supporting resources are all that is needed.
Step 1: Choose a scope
Skills can live in one of two scopes:

- **Project skills:** `.github/skills/`, `.claude/skills/`, or `.agents/skills/` in the repository
- **Personal skills:** `~/.copilot/skills/`, `~/.claude/skills/`, or `~/.agents/skills/` on the local machine

Project skills are shared with everyone working on the repository. Personal skills are available across all projects but private to the individual.
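A quick way to see which skills are visible from a given repository is to walk the locations above. This helper is my own convenience sketch (the function name and the one-`SKILL.md`-per-directory layout are assumptions, not anything Copilot requires):

```shell
#!/usr/bin/env bash
# Sketch: list every skill visible from the current repository by checking
# the project-level and user-level locations named above.
list_skills() {
  local d
  for d in .github/skills .claude/skills .agents/skills \
           "$HOME/.copilot/skills" "$HOME/.claude/skills" "$HOME/.agents/skills"; do
    # Each skill is a subdirectory containing a SKILL.md at depth 2.
    [ -d "$d" ] && find "$d" -mindepth 2 -maxdepth 2 -name SKILL.md
  done
  return 0  # a missing directory is not an error
}
```

Running `list_skills` from a repository root prints the path of every discoverable `SKILL.md`.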
Step 2: Create the directory structure
# For a project skill
mkdir -p .github/skills/bicep-deployment
# For a personal skill
mkdir -p ~/.copilot/skills/code-review-checklist
Skill names should be lowercase, with hyphens instead of spaces.
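If you want to enforce that convention in a pre-commit hook or CI check, a small validator covers it. This is a hypothetical helper, not part of any official tooling:

```shell
# Hypothetical sanity check for a skill name: lowercase letters, digits,
# and hyphens only, with no leading or trailing hyphen and not empty.
valid_skill_name() {
  case "$1" in
    ''|-*|*-|*[!a-z0-9-]*) return 1 ;;  # empty, edge hyphens, or invalid chars
    *) return 0 ;;
  esac
}

# valid_skill_name bicep-deployment   -> exit 0
# valid_skill_name "Bicep Deploy"     -> exit 1
```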
Step 3: Create the SKILL.md file
Every skill needs a SKILL.md file with YAML frontmatter and markdown instructions.
A minimal example:
---
name: bicep-deployment
description: Guide for deploying Azure infrastructure using Bicep templates with proper validation and testing. Use when deploying infrastructure to Azure.
---
# Bicep Deployment Workflow
When deploying Bicep templates, follow this process:
1. Build and validate
2. Run what-if analysis
3. Deploy with confirmation
4. Verify deployment
The detail behind each step (commands, examples, boundaries) goes in the body of the file. Copilot reads the whole skill into context when it loads, so unnecessary prose costs tokens.
Step 4: Add supporting resources (optional)
Include scripts, examples, or reference files in the skill directory:
.github/skills/bicep-deployment/
├── SKILL.md
├── scripts/
│ └── validate-deployment.sh
└── examples/
└── storage-account.bicepparam
Reference these in the skill instructions:
See `examples/storage-account.bicepparam` for parameter file format.
Run `scripts/validate-deployment.sh` after deployment to verify resources.
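To make this concrete, here is a sketch of what `scripts/validate-deployment.sh` might contain. It assumes the Azure CLI (`az`) is installed and logged in, and the `resource-type:name` argument convention is my own illustration, not something the skill format prescribes:

```shell
#!/usr/bin/env bash
# Sketch of a post-deployment check: verify each expected resource exists.
set -u

# Pure helper: build the az command so it can be reviewed without running it.
show_command() {
  printf 'az resource show --resource-group %s --resource-type %s --name %s' \
    "$1" "$2" "$3"
}

# Takes a resource group, then "resource-type:name" pairs.
verify_resources() {
  local group="$1" entry status=0
  shift
  for entry in "$@"; do
    if $(show_command "$group" "${entry%%:*}" "${entry##*:}") --output none 2>/dev/null; then
      echo "OK      ${entry##*:}"
    else
      echo "MISSING ${entry##*:}" >&2
      status=1
    fi
  done
  return $status
}

# Example (requires az):
# verify_resources my-rg "Microsoft.Storage/storageAccounts:mystorageacct"
```

Keeping the command-building logic in a separate function makes the script easy to review before the agent runs it, which matters given that skills can execute scripts.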
YAML frontmatter requirements:
- `name` (required): unique identifier, lowercase with hyphens
- `description` (required): what the skill does and when Copilot should use it
- `user-invocable` (optional): controls whether the skill appears as a slash command in the chat menu within VS Code

Effective descriptions:
The description determines when Copilot loads a skill. Being specific about triggers is important:
# Good — clear trigger conditions
description: Guide for debugging failing GitHub Actions workflows. Use when asked to debug failing workflows, CI/CD failures, or pipeline errors.
# Less effective — too generic
description: Help with GitHub Actions
Three skills from my own setup. Each one started as something I was repeatedly explaining to Copilot, and ended up as a SKILL.md after the third or fourth round of repetition.
1. Kubernetes troubleshooting skill
---
name: kubernetes-troubleshooting
description: Guide for debugging Kubernetes pods and deployments. Use when asked about pod failures, CrashLoopBackOff, ImagePullBackOff, or container issues.
---
# Kubernetes Pod Troubleshooting
## Initial investigation
1. Check pod status
kubectl get pods --all-namespaces
kubectl describe pod <pod-name> -n <namespace>
2. Common issues and solutions
**CrashLoopBackOff:**
- Check application logs: `kubectl logs <pod-name> -n <namespace> --previous`
- Verify startup command and arguments
- Check resource limits (CPU/memory)
- Review liveness/readiness probes
**ImagePullBackOff:**
- Verify image name and tag are correct
- Check image registry credentials: `kubectl get secrets -n <namespace>`
- Ensure service account has pull permissions
- Test image pull manually: `docker pull <image>`
**Pending state:**
- Check node resources: `kubectl top nodes`
- Review pod resource requests
- Check for node selectors or affinity rules
- Verify persistent volume claims
3. Get detailed information
kubectl logs <pod-name> -n <namespace> --tail=100
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
## Resolution
1. Fix root cause (code, configuration, or resource constraints)
2. Apply changes: `kubectl apply -f <manifest>`
3. Monitor rollout: `kubectl rollout status deployment/<name> -n <namespace>`
4. Verify health: `kubectl get pods -n <namespace> -w`
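The decision table in this skill can be sketched as a small helper that maps a pod's status to the first diagnostic command suggested above. This is a hypothetical convenience for illustration, not part of the skill itself:

```shell
# Map a pod's status to the first diagnostic command the skill suggests.
next_step() {
  local status="$1" pod="$2" ns="$3"
  case "$status" in
    CrashLoopBackOff) echo "kubectl logs $pod -n $ns --previous" ;;
    ImagePullBackOff) echo "kubectl get secrets -n $ns" ;;
    Pending)          echo "kubectl top nodes" ;;
    *)                echo "kubectl describe pod $pod -n $ns" ;;
  esac
}

# next_step CrashLoopBackOff api-7f9 default
# -> kubectl logs api-7f9 -n default --previous
```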
2. Release notes generation skill
---
name: release-notes-generator
description: Generate release notes from git commits and pull requests. Use when preparing releases or asked to create a changelog.
---
# Release Notes Generation
## Process
1. Collect changes since the last release
git log $(git describe --tags --abbrev=0)..HEAD --oneline
gh pr list --state merged --limit 50
2. Categorise changes into Features, Bug Fixes, Performance, Documentation, Dependencies, Breaking Changes.
3. Format using the team's release-notes template (one bullet per change, link to PR, breaking changes get a migration line).
4. Validate before publishing — every change has a PR, breaking changes include migration guidance, contributors are acknowledged.
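Step 2 can be sketched as a categorisation function. This assumes conventional-commit prefixes (`feat:`, `fix:`, and so on), which the skill itself does not mandate; adjust the patterns to your team's convention:

```shell
#!/usr/bin/env bash
# Sketch: bucket one-line commit subjects into release-note sections.
section_for() {
  case "$1" in
    *'!:'*)                        echo "Breaking Changes" ;;
    feat*)                         echo "Features" ;;
    fix*)                          echo "Bug Fixes" ;;
    perf*)                         echo "Performance" ;;
    docs*)                         echo "Documentation" ;;
    chore\(deps\)*|build\(deps\)*) echo "Dependencies" ;;
    *)                             echo "Other" ;;
  esac
}

# Feed it the subjects since the last tag:
# git log "$(git describe --tags --abbrev=0)..HEAD" --format='%s' |
#   while IFS= read -r subject; do
#     printf '%s\t%s\n' "$(section_for "$subject")" "$subject"
#   done
```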
Sharing within a team:
As mentioned above, committing skills to a folder like `.github/skills/` means they can be pushed to the main repository and used by everyone who pulls it.
Community skills:
A few places to look for existing skills:
Always read the SKILL.md before installing. A skill can run scripts, so treat it like any other dependency; I would not blindly install a skill from a source I do not recognise.
Skills can include executable scripts and reference files. A Terraform deployment skill might look like:
.github/skills/terraform-deployment/
├── SKILL.md
├── scripts/
│ ├── validate.sh
│ ├── plan.sh
│ └── apply.sh
└── examples/
└── backend-config.tfvars
The SKILL.md references the scripts:
---
name: terraform-deployment
description: Deploy infrastructure using Terraform with validation and approval workflow.
---
# Terraform Deployment Process
## Steps
1. Initialise and validate
./scripts/validate.sh <environment>
2. Generate plan
./scripts/plan.sh <environment>
3. Apply changes after approval
./scripts/apply.sh <environment>
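As an example of what one of those scripts might look like, here is a sketch of `scripts/plan.sh`. The per-environment layout (`environments/<env>/*.tfvars`) is an assumption for illustration; the tree above keeps a single `examples/backend-config.tfvars`:

```shell
#!/usr/bin/env bash
# Sketch of scripts/plan.sh: init against the environment's backend,
# then write a plan file for the approval step.
set -u

backend_config_for() { echo "environments/$1/backend-config.tfvars"; }
plan_file_for()      { echo "tfplan-$1"; }

run_plan() {
  local env="$1"
  terraform init -backend-config="$(backend_config_for "$env")" -input=false &&
  terraform plan -var-file="environments/$env/terraform.tfvars" \
                 -out="$(plan_file_for "$env")"
}

# Invoke only when run with an environment name (requires terraform):
if [ "$#" -ge 1 ]; then
  run_plan "$1"
fi
```

Writing the plan to a file (`-out`) is what makes the separate approval-then-apply step in the skill possible, since `apply.sh` can consume exactly the plan that was reviewed.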
A few guidelines for writing skill instructions:

1. Keep skills focused. Each skill should handle one workflow or concern. Better to have multiple specific skills than one generic skill.
2. Use clear, descriptive names. The name and description help Copilot decide when to load a skill — make them unambiguous.
3. Include concrete examples. A skill with one worked example beats a skill with three pages of abstract guidance.
4. Define boundaries clearly. State what should NOT be done. The negative cases are often where Copilot goes wrong without explicit guidance.
5. Maintain and update skills. As processes evolve, keep the skills current. A skill that references a deprecated command is worse than no skill at all.
6. Test skills regularly. Ask Copilot to perform tasks covered by the skill and check the output. If the skill is not being picked up when it should be, refine the description.
Skill not loading:
- Check the file is named `SKILL.md` (case-sensitive)
- Check it lives in `.github/skills/`, `.claude/skills/`, `.agents/skills/`, or the equivalent supported user-level location

Skill loaded but not following instructions:
Skill conflicts:
Agent Skills have transformed how I use GitHub Copilot. By encoding my workflows and conventions in portable, reusable skills, I get consistent behaviour without having to re-explain context every session, and the cognitive load of remembering complex procedures drops significantly.
I would recommend starting with one or two skills for the most common or error-prone workflows — the ones where you keep typing the same paragraph of context. As the benefits become clear, expand to cover more of the development process.