The Annual Review Nightmare That Led to Vereda AI
September 18, 2025
13 min read


A personal story about inheriting a team during review season

A few years ago, I joined a new company and stepped into an engineering manager role that the previous manager had just vacated. I was excited — new team, new challenges, fresh start. On my very first day, before I had even met the entire team, my manager told me: "Annual reviews are due in 30 days."

Most of us learn how to lead from a combination of the good managers and the bad managers who led us. We also learn that both tend to be horrible at annual reviews. Very few managers enjoy this process. It's better when the manager at least has a history with the people they are leading — but even that depends on how much the manager actually observes. (One thing that helps: running a free standup bot throughout the year so you actually have data when review season hits.)

The 30-Day Scramble

In the end, I resorted to doing three things:

  • Asking each team member to rate themselves: As part of this, I asked them to share the goals they had set for the year (and, as it turned out, the previous manager had never given them any).
  • Trying to learn everything each team member had done throughout the year: I ran reports on how many tickets they had owned, total story points, lines of code, and number of reviews completed. Horrible metrics in isolation — but they were all I had available.
  • Vowing to each team member that we would never go through this again while I was leading them.

The Blank Slate Problem

Inheriting a team with no context is more common than people realize. It happens when:

  • A manager leaves suddenly and there's no transition period
  • A reorg shuffles teams and reporting structures
  • A company grows fast and new managers are assigned to existing teams
  • A manager goes on extended leave and a temporary replacement steps in

In all of these cases, the new manager faces the same challenge: how do you evaluate people you don't know, for work you didn't observe?

The previous manager's notes — if they exist at all — are usually a scattered mix of Slack bookmarks, half-filled spreadsheets, and memories that walked out the door with them. There's no institutional memory of who did what, how they grew, or what they struggled with.

This is the blank slate problem. And it's not just uncomfortable for the manager — it's deeply unfair to the engineers being reviewed.

What I Actually Learned From the Experience

Those 30 days taught me more about engineering management than the next six months combined:

Lesson 1: Metrics Without Context Are Dangerous

I pulled Jira reports showing ticket counts, story points, and completion rates. On paper, one engineer looked like a low performer — fewer tickets, lower velocity. In reality, she had spent months on a complex infrastructure migration that prevented three other teams from being blocked. The metrics told the wrong story.
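
If you want to pull the same raw numbers yourself, a minimal sketch against the Jira Cloud search API looks something like the snippet below. The base URL, credentials, and story-points field ID are placeholders (story points live in an instance-specific custom field), and the counts it produces are exactly the kind of context-free metrics this lesson warns about.

```python
# Minimal sketch: per-engineer ticket count and story points from Jira Cloud.
# JIRA_URL, AUTH, and STORY_POINTS_FIELD are placeholders for your instance.
import requests

JIRA_URL = "https://your-company.atlassian.net"
AUTH = ("you@example.com", "your-api-token")      # Jira Cloud basic auth
STORY_POINTS_FIELD = "customfield_10016"          # varies per Jira instance

def year_summary(assignee: str) -> dict:
    """Count issues resolved this year by one engineer and sum their story points."""
    jql = f'assignee = "{assignee}" AND resolved >= startOfYear()'
    issues, start = [], 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": jql, "startAt": start, "maxResults": 100,
                    "fields": STORY_POINTS_FIELD},
            auth=AUTH,
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start += len(page["issues"])
        if not page["issues"] or start >= page["total"]:
            break
    points = sum(issue["fields"].get(STORY_POINTS_FIELD) or 0 for issue in issues)
    return {"tickets": len(issues), "story_points": points}

print(year_summary("jane.doe"))   # e.g. {'tickets': 42, 'story_points': 87.0}
```

Even a script this small makes the problem obvious: it can tell you how many tickets someone closed, but not that one of them was a months-long migration that kept three other teams unblocked.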

Lesson 2: Self-Ratings Reveal Culture, Not Just Performance

When I asked engineers to rate themselves, the patterns were revealing. The strongest performers consistently underrated themselves. The weakest performer gave themselves the highest rating. This isn't unusual — it's the Dunning-Kruger effect in action. Self-ratings are a useful data point, but they're not a reliable performance measure.

Lesson 3: Engineers Remember What Managers Forget

Every engineer on the team could tell me exactly what they'd accomplished that year, what they were proud of, and what had frustrated them. The information existed — it just hadn't been captured systematically. If someone had been collecting structured updates throughout the year, the reviews would have written themselves.

Lesson 4: Trust Is Built in Hard Moments

I was honest with the team: "I don't have enough context to write thorough reviews. Here's how I'm going to handle it, and here's my commitment going forward." That transparency built more trust than a polished review ever could have.

Lesson 5: The System Failed, Not the People

The previous manager wasn't negligent — they were overwhelmed. Without tools to capture and organize team data, performance tracking requires enormous manual effort that most managers can't sustain alongside their other responsibilities.

Why Self-Ratings Aren't Enough

Many managers facing the blank slate problem default to self-ratings as their primary data source. It feels fair — who knows their work better than the person who did it? But self-ratings have well-documented limitations:

The Confidence Gap

Research consistently shows that underrepresented groups in engineering tend to rate themselves lower than their peers rate them. Women, in particular, often understate their contributions. Relying primarily on self-ratings can systematically disadvantage the people who need the most accurate evaluations.

The Recency Problem

Engineers, like managers, suffer from recency bias. They remember recent projects vividly but forget the significant work from earlier in the year. Critical contributions from Q1 might not even appear in a Q4 self-assessment.

The Visibility Bias

Engineers tend to highlight work they think managers value — shipping features, hitting deadlines, visible wins. They often downplay the equally valuable but less visible work: mentoring, code review quality, documentation, debugging complex production issues, and building team culture.

The Goal Alignment Problem

If goals were unclear or non-existent (as they were for my inherited team), self-ratings become arbitrary. Without shared expectations, an engineer's self-assessment is based on their own definition of success, which may not align with what the organization values.

A Better Approach

Self-ratings should be one input among many, not the primary source:

  • Peer feedback from code reviews and collaboration
  • Objective data from tools and systems
  • Stakeholder input from product, design, and cross-functional partners
  • Standup data that captures daily work patterns and engagement
  • Manager observation from 1:1 conversations and team interactions

Building Review-Readiness From Day One

After that experience, I redesigned my management approach to ensure I'd never face a blank slate again — and that no future manager inheriting my team would either:

Week 1 with any new team:

  • Set up async standups immediately — this is non-negotiable
  • Have individual introductory 1:1s focused on understanding each person's goals, strengths, and concerns
  • Document initial observations about team dynamics, technical practices, and communication patterns

Month 1:

  • Establish quarterly goal-setting with each team member
  • Set up a structured 1:1 cadence with consistent talking points
  • Start capturing performance notes — even brief ones — after each meaningful interaction

Ongoing:

  • Review and update performance notes monthly
  • Conduct quarterly career development conversations
  • Collect peer feedback after major projects or collaborations
  • Let AI track what you can't: standup patterns, code contribution trends, engagement signals

The Test:

At any point during the year, could you write a fair, detailed performance review for each team member? If the answer is "not really," your system needs work.

The goal isn't to create more paperwork — it's to build a continuous, low-effort system that captures the information reviews need. When that system is in place, review prep drops from weeks to hours.

What Review Scrambles Cost You in Team Trust

That 30-day scramble didn't just affect my stress levels — it affected the team's trust in the review process:

What engineers think when reviews are clearly rushed:

  • "My manager doesn't actually know what I do"
  • "This review isn't based on real understanding of my work"
  • "Promotion decisions are being made on incomplete information"
  • "Why should I invest in doing great work if nobody's tracking it?"
  • "The best way to get recognized is to be loud, not excellent"

The downstream effects:

  • Engineers stop trusting that performance evaluations are fair
  • Top performers start looking for managers who actually pay attention
  • The compensation and promotion process loses credibility
  • Team motivation declines because recognition feels arbitrary
  • Knowledge workers disengage from a process they see as performative

The ripple effect extends beyond your team:

When engineers talk to each other — and they do — stories about unfair or uninformed reviews spread fast. One poorly handled review cycle can damage your reputation as a manager and your organization's ability to retain talent.

The fix is prevention, not recovery:

You can apologize for a bad review. You can promise to do better next time. But the damage to trust takes months to repair. It's far easier to build the right systems upfront than to recover from a review process that felt uninformed or unfair.

How Continuous Data Collection Prevents This

The nightmare scenario I experienced — inheriting a team 30 days before reviews with zero context — is entirely preventable with the right tools:

What Vereda AI Would Have Changed:

If the previous manager had been using Vereda AI, I would have inherited:

  • 12 months of standup data with sentiment analysis and engagement trends for each engineer
  • Automated performance summaries showing contribution patterns, collaboration behaviors, and growth trajectories
  • Goal tracking with evidence-based progress indicators
  • Team health signals showing who was thriving and who was struggling
  • AI-generated review drafts based on actual behavioral data, not memory reconstruction

Instead of a 30-day scramble, I would have had:

  • A 2-hour orientation reading AI-generated team summaries
  • Individual profiles showing each engineer's strengths, growth areas, and recent contributions
  • Data-backed talking points for my first round of 1:1s
  • Review drafts that needed refinement, not creation from scratch

The Broader Vision:

Performance data should be institutional, not personal. When a manager leaves, their knowledge of the team shouldn't leave with them. When a new manager arrives, they shouldn't have to start from zero.

Vereda AI creates a continuous performance record that survives manager transitions, org changes, and team restructuring. The data belongs to the team and the organization — not to any individual manager's memory.

Learn more about how Vereda AI's performance review automation ensures no team ever faces a blank slate review again.

Closing Thought

If you've ever inherited a team right before review season, you know how stressful it can be. The good news is that it doesn't have to be that way. With the right data already in place, you can turn a challenging moment into an opportunity to start strong and build credibility with your new team.

Instead of spending those 30 days piecing together a story, managers can focus on coaching and building trust — even in the middle of review season.

The lessons are clear:

  • Metrics alone tell incomplete stories — context and continuous observation matter more than Jira reports
  • Self-ratings are a supplement, not a solution — they carry biases that systematic data collection corrects
  • Review-readiness is a daily practice, not a seasonal scramble — small, consistent efforts beat heroic last-minute pushes
  • Trust is built through transparency — admitting limitations honestly builds more credibility than pretending to know everything
  • The system needs to outlast the manager — performance data should be institutional, not personal

That 30-day nightmare led directly to the creation of Vereda AI. We built the tool we wished we'd had — one that ensures no engineering manager ever has to write reviews from a blank slate again.

For related reading, check out our guide on reducing annual review stress and learn about running effective 1:1 meetings that build the context reviews need.

Ready to never face a review scramble again?

Discover how Vereda AI can give you instant context and confidence for any review cycle.