Effectively leading an organisation means taking the most relevant actions at the right time. To do this, the manager has methods for analysing causes and two types of indicators: "leading" indicators and result indicators (known as "lagging" indicators). The former make it possible to monitor the actions that are supposed to have an impact on the result; the latter show the actual impact on the result.
However, in most cases, the actions that are carried out, or rather those that are monitored through leading indicators, do not address the deep levers of performance.
Indeed, the deepest actionable levers are ultimately either the individual skills of the organisation's employees or the implementation of certain organisational 'practices' (processes, governance, tools, etc.). Because these are difficult to define, and above all to measure, it is quite understandable that the actions measured, or even carried out, differ from those that activate these deep levers.
But things have changed: it is now much easier to measure the performance of organisational practices. This is what Wevalgo's tools allow: tools specially designed to measure companies' performance in terms of organisational practices.
Thanks to these new tools, managers finally have an even earlier leading indicator, the "Best Practice Indicator" (BPI), with which to steer the most effective actions, because it acts on the root levers of performance.
Let us explain.
Two types of indicators are classically distinguished: "leading" and "lagging" indicators.
It is clear that a leading indicator identifies the actions to be taken more precisely than a lagging indicator does. This is why it is an essential management tool, used in many companies and for most functions.
But we also see the limits of traditional leading indicators. They are often partial and do not address root causes, and they often require extensive complementary analyses to identify those causes and the most relevant actions.
Take the example of a sales representative whose rate of successful commercial proposals, a typical leading indicator, is low. What could explain it? There can be many different reasons: poor customer targeting, poor understanding of needs, poorly adapted proposals...
The sales manager will need many other elements to determine the actions to be taken. He or she can use other leading indicators (an average proposal amount that is too high could explain it), carry out more in-depth analyses, or attend customer meetings with the sales representative... All this is possible, but sometimes difficult or time-consuming; so will it be enough, and will it come at the right time?
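To make the distinction concrete, here is a minimal sketch of the two kinds of indicators for this sales example. All data and names are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    amount: float   # proposed deal value
    won: bool       # whether the customer accepted

# Hypothetical quarter for one sales representative
proposals = [
    Proposal(50_000, True),
    Proposal(120_000, False),
    Proposal(80_000, False),
    Proposal(30_000, True),
    Proposal(200_000, False),
]

# Lagging indicator: revenue actually booked, visible only after the fact
revenue = sum(p.amount for p in proposals if p.won)

# Leading indicators: observable earlier, hinting at what to act on
success_rate = sum(p.won for p in proposals) / len(proposals)
avg_proposal_amount = sum(p.amount for p in proposals) / len(proposals)

print(f"revenue (lagging):      {revenue:.0f}")
print(f"success rate (leading): {success_rate:.0%}")
print(f"avg proposal amount:    {avg_proposal_amount:.0f}")
```

Here a high average proposal amount alongside a low success rate would already suggest where to look, which is exactly the kind of hint a lagging revenue figure cannot give on its own.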
Many organisations define the practices to be used by employees, at least for the main processes. To implement these "good practices", they train employees in their use; some also carry out transformation projects, "Lean" projects, or continuous improvement actions.
For our sales representative, examples of good practices include defining the objective and agenda of each customer meeting, systematically rereading the conclusions of the previous meeting, listing the products/services that are a priori most relevant, and rereading their sales arguments...
However, an evaluation system to assess the extent and quality of the actual implementation of these good practices is defined much more rarely. And it is very rarely implemented in a sustainable way.
As a result, the "Best Practice Indicator" remains the exception. And that is a huge loss of value.
More precisely, if the use of good practices is not measured, and measured regularly, the loss is fourfold.
There are two main difficulties in setting up a BPI: formalising the good practices into an evaluable reference framework, and carrying out the evaluation itself.
Even if formalisation is a real difficulty, the second one is much more important; it largely explains why the BPI is not used, and therefore why organisations do not even bother to go through the formalisation stage.
The application of good practices can only be evaluated against a formalised reference framework. This implies listing all the good practices to be evaluated, asking the right questions for each, and structuring the practices into coherent sets.
Let us take an example. Suppose that a good practice for management is to use SMART indicators (Specific, Measurable, Attainable, Relevant, Time-bound). You can ask endless variations of questions, such as: "Do you use SMART indicators?", "Is each objective linked to a measurable indicator?", or "Does each indicator have a defined target date?".
It is all a matter of the level of detail: the more detailed the questions, the more precisely the actions to be taken will be identified, and the more the subjectivity of responses will be limited, because the practice is better 'characterised'. But the questionnaire will also be longer and more difficult to answer. A reference framework may contain from twenty questions to... more than 200.
Even if this is a considerable effort and requires method, it only needs to be done once, at the start of the process. The reference framework will of course be updated regularly, but the additional effort will be marginal.
A good evaluation of practices must be carried out by several individuals. This is not a matter of running a survey, but of having the evaluation carried out by the main people involved in the practices: manager(s), expert(s), and key contributors. That quickly reaches a minimum of 4-5 people. In large organisations, or organisations with several geographical sites, it is easy to exceed a few dozen, or even a hundred, people.
Unless you want to invest in developing specific software, the usual tool for collecting evaluations is the good old spreadsheet. It is immediately obvious that collecting through a succession of e-mails, then consolidating dozens of files and generating result reports, is a very cumbersome task. Add the manual handling and poor ergonomics of spreadsheet questionnaires, which can degrade the quality of the evaluations, plus the need to repeat the evaluations regularly and compare them with each other to measure progress, and you arrive at a heavy and not necessarily reliable process.
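The consolidation step itself is conceptually simple, which is what makes the spreadsheet overhead so frustrating. Here is a minimal sketch, with hypothetical evaluators and scores on an assumed 0-4 scale, of turning several people's answers into a single practice-level score:

```python
from statistics import mean

# Hypothetical answers: evaluator -> {question: score on a 0-4 scale}
answers = {
    "manager":     {"Q1": 3, "Q2": 1, "Q3": 2},
    "expert":      {"Q1": 4, "Q2": 2, "Q3": 2},
    "contributor": {"Q1": 3, "Q2": 0, "Q3": 1},
}

questions = ["Q1", "Q2", "Q3"]

# Consolidate: average each question across all evaluators...
per_question = {q: mean(a[q] for a in answers.values()) for q in questions}

# ...then average the questions and normalise to a percentage,
# giving one Best Practice Indicator value for this practice
bpi = mean(per_question.values()) / 4 * 100
```

Done once, in code, this is trivial; done by hand across dozens of files, evaluators, and evaluation rounds, it becomes the heavy and error-prone process described above.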
An alternative is to have audits carried out by a team of experts who have a good grasp of the evaluation and complete one evaluation per team or site. This remains a cumbersome process. Moreover, if these evaluations are also meant to ensure that the participants themselves understand the actions to be taken, it is much less effective: each individual is less involved in, and feels less responsible for, an evaluation that belongs to a "central" unit...
Finally, it is mainly the lack of a suitable tool that causes the under-use of BPIs.
Wevalgo offers a web-based solution designed to measure the performance of organisational practices and build Best Practice Indicators.
This solution provides a very intuitive user interface to ensure that the questions are well understood by the evaluators. It reduces the effort of collecting evaluations, consolidating results, and generating analyses and reports by 90%. It allows periodic measurements to be made, progress to be tracked, and results to be compared across different parts of the organisation, or even between different companies.
If you have a best practice reference framework, simply enter it into the Wevalgo tool, securely and confidentially. If you do not, Wevalgo offers standard frameworks for many functional areas and industries. Below are two examples of performance management system evaluation (objectives, KPIs...)
Join the many companies that already use the Best Practice Indicator to improve their performance: use Wevalgo.
To get started, you can view our ready-to-use best practice standards.