What is a performance review template? (Clear definition + why it matters)
A performance review template is a structured form with questions, rating scales, and guidance. It standardizes how managers and employees evaluate performance.
Strong templates reduce bias, save time, and make expectations clear by anchoring ratings to observable behaviors. With a common format, reviews become comparable across teams and easier to calibrate.
How to choose the right template (by scenario, role, and cadence)
Use this decision guide to pick the performance review template that fits your goal, cadence, and audience. The right fit improves signal quality and makes calibration easier.
Start with the scenario, then tailor by level and industry.
Scenario fit: annual vs quarterly vs probation vs compensation check-ins
Choose the scenario first—your template follows.
- Annual or semiannual: Use a comprehensive employee performance review template with goals, competencies, and calibrated ratings. Add a self-evaluation and peer/360 input where appropriate.
- Quarterly or mid-year: Use a lightweight check-in template focused on OKR progress, blockers, and 1–2 development priorities; avoid rating bloat.
- Probation/90-day: Use a tight, compliance-aware format with job expectations, training provided, objective evidence, and clear pass/extend/terminate criteria.
- Compensation check-ins: Separate from performance reviews when possible; use a structured template with documented criteria, pay equity checks, and no off-script promises.
Role/level fit: IC vs manager vs executive; new hire vs tenured
Match competencies to the role and seniority.
- Individual contributors (IC): Emphasize execution, quality, collaboration, and learning agility; use 1–5 behaviorally anchored rating scales (BARS).
- Managers: Add people leadership, coaching, hiring, and team health; include team outcomes and engagement indicators.
- Executives: Focus on strategy, cross-functional impact, organizational health, and business results; include stakeholder feedback.
- New hires vs tenured: For new hires, weight onboarding milestones and role proficiency; for tenured, weight strategic projects and sustained impact.
Industry fit: sales, engineering, customer success (what to customize)
Customize metrics and anchors so ratings map to real work.
- Sales: Add quota attainment, pipeline hygiene, win rate, and forecast accuracy; include deal review evidence.
- Engineering: Add code quality, delivery predictability, technical scope, and reliability ownership; include PRs, incidents, and design docs.
- Customer success: Add NPS/CSAT, retention/renewals, expansion, and adoption; include account plans and risk mitigation steps.
Free performance review templates (Google Docs/Sheets + PDF)
Use the copy-ready templates below: paste them into a Google Doc or Sheet, duplicate the file for each review (File > Make a copy), and download as a PDF if needed (File > Download).
Each form includes required fields, rating guidance, and prompts to minimize bias. You can standardize quickly without starting from scratch.
Annual performance review template (editable + sample fields)
Use this employee performance review template to run a comprehensive annual or semiannual appraisal.
- Employee info: Name, role, level, department, review period, manager.
- Goals/OKRs: List objectives, key results, outcomes vs targets, comments.
- Competencies (BARS 1–5): Communication, Ownership, Collaboration, Impact, Role-specific.
- Achievements: Top 3–5 outcomes with evidence (links, metrics).
- Growth areas: 1–3 development opportunities with action plans.
- Overall rating: 1–5 with behavior summary; calibration notes.
- Next-period goals: 3–5 draft OKRs with measures.
- Sign-offs: Employee, manager, date; acknowledgement of discussion.
To use: Copy these fields into your doc and keep your scale consistent. Include a self-evaluation section for two-way feedback. Align examples to your competency language to keep ratings comparable.
Self-evaluation template (questions grouped by competency)
Use this self-evaluation template to encourage reflection and reduce disagreement in the manager meeting.
- Results and impact
- Which outcomes are you most proud of this period? What evidence supports them?
- Where did results fall short? What did you learn?
- Collaboration and communication
- Describe a time you improved team alignment or unblocked others.
- What feedback have you sought and how did you use it?
- Ownership and execution
- How did you prioritize and manage trade-offs?
- Which commitments slipped, and what will you change next time?
- Growth and craft
- What skills improved most? Where do you want training or mentorship?
- Which projects will stretch you next period?
- Goals
- Propose 2–3 OKRs for the next cycle with measurable KRs.
Tip: Ask for specific examples and links to artifacts (docs, dashboards, tickets) to ground claims. This keeps the conversation fact-based and speeds calibration.
Peer and 360 feedback templates (bias-reduced prompts)
Use this 360 feedback template to collect specific, actionable input.
- Prompts
- What is one behavior this person should keep doing? Include an example and impact.
- What is one behavior this person should start or stop? Include an example and a suggested next step.
- When collaborating with this person, what helps or hinders team outcomes?
- Optional: Rate collaboration effectiveness on a 1–5 anchored scale.
- Rater guidance
- Focus on observed behaviors, not traits.
- Avoid loaded words; use concrete situations and impacts (SBI: Situation–Behavior–Impact).
- Skip areas you have not observed.
Structured prompts increase signal quality and reduce common rater biases. They make peer input easier to synthesize.
Probation/90-day review template (compliance-safe fields)
Use this probation review template to document job fit and training with legal clarity.
- Role scope: Job description essentials and performance standards.
- Training/support provided: Onboarding plan, mentors, resources.
- Evidence of performance: Specific examples, dates, artifacts.
- Ratings against expectations: Meets / Partially meets / Does not meet; behavior anchors.
- Decision and next steps: Confirm pass, extend with plan, or terminate per policy.
- Employee acknowledgement: Summary of discussion and signatures.
Note: Keep facts objective, tie feedback to job duties, and follow local laws and company policy. Clear documentation protects both the employee and the company.
Corrective action review template (PIP-aligned)
Use this PIP-aligned template to address persistent gaps fairly and transparently.
- Performance concerns: Specific behaviors, dates, and business impact.
- Expectations and standards: Observable criteria to meet.
- Support and resources: Training, coaching, tools, frequency of check-ins.
- Milestones and dates: Weekly targets and evidence required.
- Consequences: Outcomes if expectations are not met.
- Signatures: Employee, manager, HR; review dates.
Keep weekly check-ins short, documented, and focused on evidence, not intent. This builds clarity and ensures due process.
Compensation check-in template (pay equity guardrails)
Use this compensation check-in template to structure sensitive pay conversations and protect equity.
- Reason for discussion: Cycle timing, market adjustment, promotion review.
- Evaluation inputs: Performance rating, scope change, benchmarks, comp bands.
- Decision framework: Internal equity, market data, budget, location factors.
- Proposed action: New base/bonus/equity or no change with rationale.
- Communication plan: What to share now vs after approvals; effective date.
- Documentation: Notes, approvals, and audit trail.
Guardrails
- Separate performance and pay discussions when possible to reduce noise.
- Do not promise changes before approvals; use documented criteria and ranges.
- Run equity checks by cohort (role/level/location) before finalizing.
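To make that last guardrail concrete, here is a minimal sketch of a cohort equity check. It assumes a hypothetical CSV export named comp_proposals.csv with columns role, level, location, gender, and proposed_base; the column names and the 5% threshold are illustrative, not a prescribed standard.

```python
# Minimal pay-equity check by cohort: a sketch, not a vetted methodology.
# Assumed (hypothetical) CSV columns: role, level, location, gender, proposed_base.
import pandas as pd

def flag_equity_gaps(path: str, max_gap_pct: float = 5.0) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Median proposed base pay per cohort (role/level/location), split by gender.
    medians = (
        df.groupby(["role", "level", "location", "gender"])["proposed_base"]
          .median()
          .unstack("gender")
    )
    # Percent gap between the highest- and lowest-paid group within each cohort.
    gap_pct = (medians.max(axis=1) - medians.min(axis=1)) / medians.min(axis=1) * 100
    # Flag cohorts above the threshold; small cohorts still need human judgment.
    flagged = medians.assign(gap_pct=gap_pct)
    return flagged[flagged["gap_pct"] > max_gap_pct].sort_values("gap_pct", ascending=False)

print(flag_equity_gaps("comp_proposals.csv"))
```

Treat the output as a prompt for review with HR and legal, not as an automated decision.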
Behaviorally anchored rating scales (BARS) with examples
Use behaviorally anchored rating scales to make 1–5 ratings consistent and easier to calibrate. The anchors below are examples. Adapt language to your competency model and company values for clarity.
1–5 scale anchors for core competencies (communication, ownership, collaboration, impact)
Use these sample anchors to describe what 1–5 looks like across core competencies.
Communication
- 1: Frequently unclear or late; others must rework due to miscommunication.
- 2: Basic updates but missing key details; inconsistent channels.
- 3: Clear and timely with appropriate detail; adapts message to audience.
- 4: Anticipates information needs; simplifies complexity for cross-functional partners.
- 5: Sets communication standards for the team; drives alignment across org levels.
Ownership
- 1: Avoids accountability; needs frequent prompting.
- 2: Completes assigned tasks but escalates late; limited follow-through.
- 3: Delivers commitments reliably; flags risks early and proposes fixes.
- 4: Proactively improves systems; owns outcomes beyond immediate tasks.
- 5: Shapes strategy and mobilizes others; creates durable improvements.
Collaboration
- 1: Works in silos; resists feedback.
- 2: Participates but misses handoffs; narrow perspective.
- 3: Seeks input, supports teammates, and meets shared goals.
- 4: Unblocks others and resolves conflicts constructively.
- 5: Builds high-trust networks; elevates team performance and culture.
Impact
- 1: Limited measurable results; misaligned work.
- 2: Inconsistent results; impact not durable.
- 3: Meets goals with clear, measurable outcomes.
- 4: Exceeds goals; improves key metrics with scalable solutions.
- 5: Transforms outcomes across teams; outsized business impact.
Role-specific anchors: sales (quota/pipe), engineering (quality/velocity), CS (NPS/renewals)
Tailor anchors to the work signals that matter by function.
Sales (Quota Execution)
- 1: <50% attainment; pipeline hygiene poor; forecasts unreliable.
- 2: 50–79% attainment; inconsistent discovery; deals stall.
- 3: 80–100% attainment; clean CRM; reliable forecasting.
- 4: 101–120% attainment; high win rate; proactive multi-threading.
- 5: Above 120% attainment; mentors others; builds repeatable playbooks.
Engineering (Quality and Velocity)
- 1: Frequent defects; misses estimates; low code review quality.
- 2: Occasional regressions; limited test coverage; ad-hoc delivery.
- 3: Predictable delivery; solid tests; code meets standards.
- 4: Improves reliability; reduces cycle time; raises bar in reviews.
- 5: Drives architecture upgrades; measurably boosts team throughput and quality.
Customer Success (Retention/Advocacy)
- 1: Accounts churn without mitigation; poor risk visibility.
- 2: Late escalations; limited adoption plans.
- 3: On-time renewals; clear health scores; documented success plans.
- 4: Expansions through value realization; proactive exec alignment.
- 5: Portfolio-level retention gains; creates reference customers and playbooks.
Examples and scripts: how to give fair, actionable feedback
Use these manager scripts to open positively, deliver clear feedback, and align on next steps. Keep examples specific and link to artifacts to anchor the conversation in evidence.
Positive start + constructive phrasing (before/after scripts)
- Before: “You need to communicate better.”
- After: “In the Q3 launch (Situation), the status email omitted the vendor-delay risk (Behavior), which surprised stakeholders on rollout day (Impact). Going forward, include risks and their owners in weekly updates (Next step).”
- Before: “You’re not strategic enough.”
- After: “Your roadmap focuses on near-term fixes; to hit our adoption goal, add one-quarter-ahead bets with success metrics and dependencies.”
- Opening script
- “Let’s start with highlights I’ve seen and those you’re proud of. Then we’ll cover two growth areas with examples and agree on support and next steps.”
Self-evaluation exemplars: strengths, growth areas, goals
- Strengths example
- “I led the billing refactor that cut payment failures by 38% (Stripe dashboard). I coordinated with Support to validate edge cases and wrote a runbook for incident response.”
- Growth area example
- “Two project estimates slipped due to integration risks I flagged late. I’ve added risk reviews to kickoff docs and will pilot story-pointing with QA to improve predictability.”
- Goal example (OKR-aligned)
- Objective: Improve onboarding activation. KRs: Increase D7 activation from 42% to 60%; reduce setup time from 20 to 10 minutes; ship in-app checklist v2 by May 15.
Implementation guide: prep, rollout, and calibration
Follow this rollout plan to ship fast without sacrificing fairness or data quality. A light pilot and strong calibration habits will save time later.
Pre-work: align competencies and OKRs; manager training checklist
Do the groundwork once so each cycle runs smoothly and produces reliable data.
- Align on 4–6 core competencies and 1–3 role-specific ones; attach BARS to each.
- Map OKRs to the review: auto-pull status where possible; avoid duplicate data entry.
- Manager training checklist
- Practice SBI feedback and bias mitigation (halo/horns, leniency, recency).
- Calibrate with sample reviews before go-live.
- Use AI tools to summarize peer input, tag themes, and draft evidence-based summaries; always have a human review drafts before sharing.
Calibration meeting agenda + variance analysis worksheet
Run calibration to normalize ratings and reduce bias.
- Agenda
- Confirm rating scale and anchors.
- Review distribution targets and outliers by team/level.
- Discuss each outlier for 10–15 minutes, using evidence only.
- Align on final ratings and promotion nominations.
- Document rationale and follow-ups.
- Variance metrics
- Mean and standard deviation by team/level.
- Calibration variance: percent of ratings changed in calibration (target 10–25%).
- Gap checks: compare average ratings across gender, location, and underrepresented groups; investigate unjustified gaps (see the sketch after this list).
- Worksheet inputs
- Pre- vs post-calibration ratings, comments, evidence links, and decision notes.
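The sketch below computes the worksheet's variance metrics from pre- and post-calibration ratings. It is a minimal example, assuming a hypothetical CSV named calibration_worksheet.csv with columns team, level, gender, pre_rating, and post_rating; adapt the names and targets to your own data.

```python
# Minimal variance-analysis sketch for the calibration worksheet.
# Assumed (hypothetical) columns: team, level, gender, pre_rating, post_rating.
import pandas as pd

df = pd.read_csv("calibration_worksheet.csv")

# Mean and standard deviation of final ratings by team and level.
spread = df.groupby(["team", "level"])["post_rating"].agg(["mean", "std", "count"])

# Calibration variance: percent of ratings changed in calibration (target 10-25%).
changed_pct = (df["pre_rating"] != df["post_rating"]).mean() * 100

# Gap check: average final rating by gender; investigate unjustified gaps.
gender_gap = df.groupby("gender")["post_rating"].mean()

print(spread.round(2))
print(f"Ratings changed in calibration: {changed_pct:.1f}%")
print(gender_gap.round(2))
```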
Timeline: pilot, feedback, full rollout, and governance
- Week 0–2: Draft templates, anchors, and guidance; legal review for probation/PIP sections.
- Week 3–4: Pilot with 1–2 teams; run a mock calibration; collect feedback.
- Week 5: Update templates; publish how-to guides; train managers.
- Week 6–8: Full rollout; offer support channels and office hours; monitor completion and variance.
- Post-cycle: Debrief; log learnings; version templates and store in a central repository with access controls.
Privacy, DEI, and compliance essentials
Design your process to protect people and the business while improving fairness and accessibility. Build these practices into templates and training so they’re followed by default.
Probation and corrective action: documentation and legal considerations
- Tie feedback to job duties and objective evidence; avoid personal traits.
- Record training and support provided; include dates and artifacts.
- Use clear timelines and measurable expectations; confirm understanding.
- Follow jurisdictional rules on notice, representation, and termination.
- Have HR review all PIPs and probation outcomes; obtain employee acknowledgement of receipt.
Data retention, localization, and accessibility best practices
- Set retention periods by region (for example, EU often requires data minimization; keep only what you need for defined purposes).
- Limit access to HR, the manager chain, and authorized reviewers; audit access logs.
- Use inclusive, plain language; offer accessible formats and screen-reader-friendly docs.
- Localize competencies and examples where needed; avoid idioms and culture-specific references.
Measure impact: KPIs to track review quality and fairness
Track a small set of KPIs to improve cycle over cycle and demonstrate ROI. Use the same dashboard each cycle to spot trends and direct training where it matters.
Completion rate, cycle time, calibration variance, sentiment, promotion parity
Use these KPIs as your operating dashboard.
- Completion rate: Target ≥95% submissions on time.
- Cycle time: Days from kick-off to final sign-offs; reduce admin drag each cycle.
- Calibration variance: Percent of ratings changed; monitor outliers by team/level.
- Sentiment: Post-review survey on clarity and fairness; target ≥4.2/5.
- Promotion parity: Compare promotion rates by demographic and location; investigate gaps beyond role/tenure mix.
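If your review data lives in a spreadsheet or HRIS export, a small script can roll these KPIs up each cycle. The sketch below is illustrative only, assuming a hypothetical CSV named review_cycle.csv with columns submitted_on_time, kickoff_date, signoff_date, sentiment_score, promoted, and gender.

```python
# Minimal KPI rollup sketch. Assumed (hypothetical) columns: submitted_on_time,
# kickoff_date, signoff_date, sentiment_score, promoted, gender.
import pandas as pd

df = pd.read_csv("review_cycle.csv", parse_dates=["kickoff_date", "signoff_date"])

kpis = {
    # Completion rate: share of reviews submitted on time (target >= 95%).
    "completion_rate_pct": df["submitted_on_time"].mean() * 100,
    # Cycle time: median days from kickoff to final sign-off.
    "median_cycle_days": (df["signoff_date"] - df["kickoff_date"]).dt.days.median(),
    # Sentiment: average post-review survey score on clarity and fairness (target >= 4.2).
    "avg_sentiment": df["sentiment_score"].mean(),
}

# Promotion parity: promotion rate by group; investigate gaps beyond role/tenure mix.
promotion_parity = df.groupby("gender")["promoted"].mean() * 100

print(pd.Series(kpis).round(2))
print(promotion_parity.round(1))
```

Running the same rollup each cycle makes trends and training priorities easy to spot.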
Continuous improvement loop: iterate templates with data
- After each cycle, review metric trends and anonymized feedback.
- Update anchors where confusion was highest; add examples to tricky competencies.
- Trim questions no one uses; auto-fill data from OKRs/HRIS to cut repetition.
- Re-train managers on two weak areas (for example, evidence quality, hard conversations).
- Version templates and keep a changelog for transparency.
FAQs about performance review templates
What should be included in a performance review template?
A strong performance review template covers the essentials needed for fair, comparable evaluations.
Include:
- Role and review period details.
- Goals/OKRs with outcomes and evidence.
- 4–6 core competencies with 1–5 behaviorally anchored rating scales.
- Achievements and growth areas with specific examples.
- Overall rating and calibration notes.
- Next-period goals and development plan.
- Sign-offs and date stamps.
Which rating scale is best (and when)?
Choose the simplest scale that still supports calibration and decisions.
- 1–5 with BARS: Best for medium/large teams that need differentiation and calibration; anchors reduce bias.
- 3-point (Below / Meets / Exceeds): Good for fast cycles or small teams; simpler but less granular.
- Narrative-only with anchors: Use for senior roles when qualitative impact matters most; still include behavioral examples to guide decisions.
How do I adapt templates for remote teams?
Make your templates evidence-first and async-friendly to reflect how remote work gets done.
- Make evidence link-first: docs, dashboards, tickets, and recordings.
- Add prompts about async communication and time-zone handoffs.
- Encourage more peer input to replace lost hallway context.
- Use written summaries before the live conversation so people can process and respond thoughtfully.
Author’s note: This guide reflects hands-on People Ops experience implementing review cycles across distributed teams, with an emphasis on behaviorally anchored scales, calibration, and practical governance.

