Canada Revenue Agency Mismanagement - Competency Based Human Resources - Number 5

Is Profile Homogeneity Appropriate?

As pointed out above, the base competency profiles for SP-05 to AU-02 level auditors are identical. Even if we accept the notion that there was no deliberate attempt to water down professional staff competencies, equating professional and non-professional staff competencies is nonsense.

By creating homogenous profiles, managers would not be able to discern whom to hire. The competency principle is arbitrarily managed by creating a common set point for all staff: junior and senior staff are treated the same. A common comment made these days is this: if an SP-05 and an AU-02 have the same competency levels, then why aren't they paid the same? Based on how these profiles have been built, it's a valid point.

The reality is that competencies and their levels differ between job functions, and even between holders of the same job title. Creating a common competency level in the manner the CRA has done provides the opportunity to install staffing practices that are arbitrary in nature.

Placement criteria used by management to select from pools are often unrelated to how well or how "competent" an employee is. For instance, the use of geographic location as a selection criterion has nothing to do with competencies. In addition, the use of additional testing or other methods devised by local managers creates arbitrary staffing measures that ultimately abandon competencies altogether! One can argue that the whole CBHRM idea is a massive bureaucratic influence exercise (non-partisanship principle) that uses KSA assessments while stripping away union members' ability to appeal staffing decisions that lack merit. This would have the effect of reducing employee buy-in and making unions irrelevant.

Homogenous competency models are also inefficient (efficiency principle). Since the placement manager can arbitrarily decide the basis for ultimate selection from a pool, an employee may undergo several selection actions just to be denied a job. This process takes more time, trouble and effort because of the recourse mechanisms in place. The Auditor General's report indicated that PQP actually resulted in longer time frames to place people than the old system. This, it would appear, is a major contributing reason for "end-state" promotion.

Dr. John Raven suggests (Hay Group CEO Lyle Spencer acknowledges Raven) that human behaviour is too complex to be reduced to a regression-style analysis. Behaviour must be analyzed in the context of the other behaviours exhibited and the environment the person is working in, in order to come to a meaningful assessment of "competency".

For example, suppose two identical audits are assigned to two different auditors and, some time later, they both hand in similar outputs. While the outputs would be similar, the "competencies" used to create them are not.

In the first case, the auditor might have simply sat at their desk and performed all of the tasks necessary to complete the audit. The other auditor might have solicited assistance from those on the team, the team leader, technical advisors and so on. In the end, the first person completed the audit based on analytical thinking, audit and legislation skills, while the second person used effective interactive communication, negotiations, impact and influence, and so on, to complete the task.

Can we say that either auditor is competent or incompetent? Not likely. Raven suggests that a model similar to the “Atomic Theory” be used to describe a person’s competency atom. It might look like this for the first auditor:

[ACH4DEC2LPP6AUD8 = Auditor 1]

While the second auditor’s competency profile might look like this:

[EIC7NEG4IMP4AUD1LPP2 = Auditor 2]

The first auditor would have 4 parts Achievement Orientation (which the CRA does not recognize) mixed with 2 parts Decisiveness, 6 parts Legislation and 8 parts Auditing. The second auditor is "built" differently, as the formula suggests, but both have "performed" well.
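To make the idea concrete, here is a minimal sketch (in Python) of how such "competency atoms" might be represented and compared. The codes and levels are taken from the two illustrative formulas above; nothing here reflects an actual CRA or Hay Group data model.

```python
# Hypothetical "competency atoms" for the two auditors described above.
auditor_1 = {"ACH": 4, "DEC": 2, "LPP": 6, "AUD": 8}              # works the file alone
auditor_2 = {"EIC": 7, "NEG": 4, "IMP": 4, "AUD": 1, "LPP": 2}    # relies on interaction

def describe(profile):
    """Render a profile in the [CODE-level ...] notation used above."""
    return "[" + "".join(f"{code}{level}" for code, level in profile.items()) + "]"

print(describe(auditor_1))   # [ACH4DEC2LPP6AUD8]
print(describe(auditor_2))   # [EIC7NEG4IMP4AUD1LPP2]

# Both auditors delivered a similar audit, yet their "atoms" share only two
# components, so a single homogeneous profile cannot describe both of them.
print(sorted(set(auditor_1) & set(auditor_2)))   # ['AUD', 'LPP']
```

The point of the sketch is simply that two very different mixtures of competencies can produce the same result, which a single homogeneous profile cannot capture.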

The CRA did not disclose why the Hay Group model was thought to be superior to other potential models in existence (transparency, fairness, non-partisanship).

Are Competencies Assessed Appropriately?

Currently the CRA assesses competencies in a number of ways. They include Targeted Behavioural Interviews (“TBI”), Portfolios of Competencies (“POC”), “reactant” testing for lower levels of competencies and a hybrid called “Observe and Attest” (“O&A”).

As discussed above, the Hay Group devised an assessment tool that can be quickly learned and applied in the workplace. In essence, the assessor listens to a story given to them by a candidate and then “codes” the content based on the scales listed in the generic competency dictionary. Each instance of behaviour that is consistent with the definition of the competency receives a score and the final tally suggests the level to be assigned.
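As a rough illustration only, the tallying logic might be sketched along the following lines. The evidence items, levels and the two-instance threshold are assumptions made for the example; they are not the Hay Group's actual scales or scoring rules.

```python
from collections import Counter

# Hypothetical coded evidence: (competency, level demonstrated) for each
# behavioural statement the assessor marked while listening to the story.
coded_evidence = [
    ("Analytical Thinking", 2),   # broke the file into smaller pieces
    ("Analytical Thinking", 3),   # traced a multi-step causal chain
    ("Analytical Thinking", 2),
    ("Effective Interactive Communication", 4),
]

def assessed_level(evidence, competency, min_instances=2):
    """Return the highest level demonstrated at least `min_instances` times,
    counting higher-level instances toward lower levels (an assumed rule)."""
    counts = Counter(lvl for comp, lvl in evidence if comp == competency)
    running = 0
    for lvl in sorted(counts, reverse=True):
        running += counts[lvl]
        if running >= min_instances:
            return lvl
    return 0   # no level consistently demonstrated

print(assessed_level(coded_evidence, "Analytical Thinking"))   # prints 2
```

Under rules like these, a single higher-level example that the assessor does not fully follow can leave the candidate at the lower level, which relates to the Level 2 default discussed below.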

However, should an organization stray sufficiently from the purchased material, it could get unwanted or unexpected results. I believe this is what happened with "Analytical Thinking" ("AT").

Let’s look at how AT is defined by the Hay Group and then the CRA.

“Analytical Thinking” is defined by the Hay Group as follows:

Analytical thinking is understanding a situation by breaking it apart into smaller pieces, or tracing the implications of a situation in a step-by-step way. Analytical Thinking includes organizing the parts of a problem or situation in a systematic way; making systematic comparisons of different features or aspects; setting priorities on a rational basis; identifying time sequences, causal relationships or if-then relationships.

  • Competence at Work, page 68

The definition for AT in the CRA Competency Catalogue reads as follows:

Analytical Thinking is using a logical reasoning process to break down and work through a situation or problem to arrive at an outcome.

  • CRA Competency Catalogue, September 2008

The difference in definitions is substantial, and it begs the question: how did the differences occur? Who made the changes, and based on what (transparency, non-partisanship issues)? How is it that the CRA definition uses "logical reasoning" as a descriptor? Where did this come from, and what does it mean? More importantly, why isn't this discrepancy shared during "Just In Time" ("JIT") information sessions? Where is the linkage between cause and effect and the CRA definition?

The CRA version of the competency seems "watered down" so that it might appeal to all levels of staff. A good concept, but it may be that the definition is so broadly worded that it is meaningless. When you combine the scoring for AT with a watered-down definition, your assessment results can be skewed.

Below is a reprint of the Hay Group's scoring summary (score card) for Analytical Thinking:

Note that at level 3, the instructions given to the assessor stipulate that Level 2 is the default code if you are unsure about the complexity of the problem or situation broken down by the interviewee. The anecdotal evidence regarding assessment outcomes from Audit suggests that most auditors are getting level 2 or lower. The staffing principle that seems to be violated is transparency, in that feedback from assessors never seems to point to their own confusion, or to the fact that level 2 is the default code when confusion or uncertainty exists. Not only are employees kept in the dark regarding assessment procedures, but given that many assessors do not possess enough technical knowledge or skill associated with an audit environment, it is fair to assume that they would not understand the situation articulated to them by an auditor. By inference, then, it would appear many auditors are destined for level 2, when in fact level 3 is needed for higher-paying professional jobs.

In order to test the clarity issue, two individuals were recently coached in AT. In both cases the Hay Group definition was used and the candidates were instructed in the assessment method to be used. One candidate received level 4 while the other candidate received level 3. The second candidate spent a total of 90 minutes preparing for the interview (30 with the coach and 60 fleshing out their example). Is this a case where the Hay Group can argue, "We sold a valid concept with research; we are not responsible for negative outcomes when our services and products are not used correctly!"?

Assessor bias can influence outcomes. There are about 10 different types of coding biases that can interfere with proper assessments. They include interesting titles such as "The Halo Effect" or "The Horn Effect". The main point here is that staffing actions are to be free from arbitrary treatment. The biases that can be present are never articulated to candidates, nor are candidates even made aware they exist (transparency and non-partisanship).

Prior to the CRA's 2008 reallocation exercise, a staffing shortage was being experienced in just about every TSO. Local management had to rely on assessors to get through staffing actions as quickly as possible. Sometimes, managers would bend procedures in an effort to "smooth" out the process. For example, a technical competency assessment process took place where local managers redacted POCs prior to distributing them to assessors (non-partisanship), apparently to minimize assessor bias. The issue was this: a POC, once completed, becomes a confidential piece of information (Protected B). Redacting these POCs violated privacy laws and caused conflicts of interest between assessors and participants. These types of interference do not enhance the quality of assessments of technical competencies, and they serve as an example of how non-partisanship principles are frequently violated.

Many CRA employees' first language is neither of the official languages, and this presents a problem for their competency assessments. Where the assessing method is a Portfolio of Competencies and the individual's written skills are poor, a decided disadvantage accrues to the employee. The same is true for Targeted Behavioural Interviews. The point is that competency assessing methods could inadvertently discriminate against minorities who have trouble communicating. Because of the need to isolate competencies and measure each behaviour alone, improper assessments may take place because of a failure to understand the candidate (fairness). The same may hold true for persons with disabilities. Our next post will discuss Observe and Attest.



Publish Date: 21-JUL-2009 10:12 AM

Copyright © The Professional Institute of the Public Service of Canada