
Sunday, February 8, 2015

Evaluate me: Software Developer! 2nd edition

Four years ago I posted an article addressing the evaluation process for software developers, Evaluate me: Software Developer!. Now I'll elaborate more on this topic and add a sample sheet to show how the evaluation can be formulated in its final state.
Before I start, I have to mention that this is accumulated work, and many others contributed to this effort.

Key Performance Indicators (KPIs)

In order to evaluate accomplished work, you need to define KPIs to measure the work done by your team. These KPIs should be measurable (tangible) and convertible to numbers, so that the results can be compared against other numbers.
We defined three KPIs that we can measure and that, we believe, give a good evaluation of a developer's output:
  • Delivery on time: the delivery indicator defines the minimum accepted delivery time for a task.
  • Quality: the quality indicator defines the minimum accepted quality of the resulting work.
  • Code review: the code review indicator defines the minimum accepted quality of coding activities.

Evaluation Baseline

The baseline is the minimum accepted score for a given KPI; for example, for the delivery-on-time KPI, the minimum accepted scenario is delivering the task on time.
In our case, we've defined a baseline for each KPI:
  • Delivery on time: if the developer delivers on time, they get 3 points out of 5.
  • Code review & quality: the quality baseline is similar to the code review baseline; both depend on the number of reported bugs/defects and their severity. The table below defines the quality baseline.
A          B        C               D                   E                   F
Severity   Weight   Reported bugs   Weight x reported   Max accepted bugs   Weight x max accepted
Blocker    5        0               0                   0                   0
Critical   4        0               0                   0                   0
Major      3        0               0                   5                   15
Minor      2        0               0                   10                  20
Trivial    1        0               0                   10                  10
Total               0               0                   25                  45
Severity: the classification of the reported bug.
Weight: the weight of a bug according to its severity; Blocker bugs weigh 5 and Trivial bugs weigh 1.
Reported bugs (C): the count of reported bugs at each severity.
Weight x reported (D): the total weight of the reported bugs at each severity.
Max accepted bugs (E): the baseline, where you define the maximum accepted number of bugs for each severity.
Weight x max accepted (F): the total weight of the baseline at each severity.
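To make the bookkeeping concrete, here is a minimal Python sketch of the table above. The dictionaries mirror columns B and E, and weighted_sum produces the totals of columns D and F; the reported-bug counts at the end are made-up example data, not part of any actual tooling.

    # Severity weights (column B) and max accepted bugs (column E) from the table above.
    SEVERITY_WEIGHTS = {"Blocker": 5, "Critical": 4, "Major": 3, "Minor": 2, "Trivial": 1}
    MAX_ACCEPTED = {"Blocker": 0, "Critical": 0, "Major": 5, "Minor": 10, "Trivial": 10}

    def weighted_sum(bug_counts):
        # Sum of weight * count over all severities (columns D and F above).
        return sum(SEVERITY_WEIGHTS[sev] * count for sev, count in bug_counts.items())

    sum_f = weighted_sum(MAX_ACCEPTED)  # baseline total: 45

    # Hypothetical reported bugs for one task (column C).
    reported = {"Blocker": 0, "Critical": 0, "Major": 2, "Minor": 4, "Trivial": 3}
    sum_d = weighted_sum(reported)      # 3*2 + 2*4 + 1*3 = 17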
   

Quality indicator calculation:

To calculate the final quality indicator, we use the following formula:
Quality indicator = 4 + (Sum F - Sum D) / Sum D
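The formula translates directly to code. Two assumptions on my part in this sketch: when Sum D is zero (no bugs reported) the division is undefined, so the maximum score of 5 is returned, and the raw result is clamped to the 1-5 scale used by the other indicators.

    def quality_indicator(sum_d, sum_f):
        # Quality indicator = 4 + (Sum F - Sum D) / Sum D.
        # Assumption: no reported bugs means a perfect score.
        if sum_d == 0:
            return 5.0
        raw = 4 + (sum_f - sum_d) / sum_d
        # Assumption: clamp to the 1-5 scale used elsewhere.
        return max(1.0, min(5.0, raw))

    print(quality_indicator(90, 45))  # 4 + (45 - 90) / 90 = 3.5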


Generating the final score

Now that we've defined our KPIs, it's time to put everything together:
Task      Complexity   Code review   Delivery   Quality   Rate
Task1     3            3.50          3.00       3.49      3.25
Task2     2            4.00          3.00       3.63      3.09
Task3     4            3.00          3.00       3.00      3.30
Overall                                                   3.21
Task: the task name.
Complexity: the task's complexity level.
Code review: the result of the code review analysis.
Delivery: the result of the delivery analysis.
Quality: the result of the testing analysis.

Evaluation rate calculation:

Rate = Avg(Complexity, Code review, Delivery, Quality)
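A minimal sketch of the final roll-up, applying the plain average above to the sample rows (the figures come from the table; the published sheet may round or weight them slightly differently):

    from statistics import mean

    # (complexity, code review, delivery, quality) per task, from the sample above.
    tasks = {
        "Task1": (3, 3.50, 3.00, 3.49),
        "Task2": (2, 4.00, 3.00, 3.63),
        "Task3": (4, 3.00, 3.00, 3.00),
    }

    # Rate = Avg(Complexity, Code review, Delivery, Quality)
    rates = {name: mean(scores) for name, scores in tasks.items()}
    overall = mean(rates.values())  # the developer's final score

    for name, rate in rates.items():
        print(f"{name}: {rate:.2f}")
    print(f"Overall: {overall:.2f}")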

Automation

Using this method while collecting the data manually would be a real headache and would overload the manager; however, there are many tools that can be used to accomplish this objective.
In our case, we used a combination of tools, integrated with our source control repository, to generate this data. Most of these tools are developed by Atlassian, a leading company that builds software development tools.


Sample sheet with calculation

Find a sample evaluation sheet here.

