Effectively leading an organisation means taking the most relevant actions at the right time. To do this, managers have methods for analysing causes and two types of indicators: "leading" indicators and result indicators (known as "lagging" indicators). The former track the actions that are supposed to have an impact on the result; the latter show the actual impact on the result.
However, in most cases the actions that are carried out, or rather those that are tracked by leading indicators, do not address the deep levers of performance.
Indeed, the deepest actionable levers are ultimately either the individual skills of the organisation's employees or the implementation of certain organisational "practices" (processes, governance, tools, etc.). Because these levers are difficult to define, and above all to measure, it is understandable that the actions measured, or even carried out, differ from those that would actually activate them.
But things have changed: it is now much easier to measure the performance of organisational practices. This is what Wevalgo's tools make possible: tools specially designed to measure companies' performance in terms of organisational practices.
Thanks to these new tools, managers finally have an even earlier leading indicator, the "Best Practice Indicator" (BPI), with which to steer the most effective actions, because it acts at the level of the root levers of performance.
Let us explain.
The leading indicator as a management tool
Two types of indicators are classically distinguished: "leading" and "lagging" indicators.
- A lagging indicator measures a result, as a consequence of actions that have already been carried out. It does not measure the achievement or performance of the actions themselves. An example is the sales figure per salesperson: it shows only the outcome and does not measure the specific actions that explain that sales representative's result.
- A leading indicator measures the achievement of the actions that are supposed to produce the result. For a sales representative, classic leading indicators are the number of customer appointments, the number of sales proposals, the average amount of these proposals and the success rate. These indicators measure levers that are closer to actions such as making more appointments, better detecting customer needs, etc.
It is clear that the leading indicator makes it possible to identify more precisely the actions to be carried out than the lagging indicator. This is why it is an essential management tool used in many companies and for most functions.
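To make the distinction concrete, here is a minimal sketch in Python, using invented figures and field names, of the two kinds of indicators for the salesperson example above.

```python
# Minimal sketch (hypothetical data and field names) contrasting the
# lagging and leading indicators described above for one salesperson.
from dataclasses import dataclass

@dataclass
class SalesActivity:
    appointments: int        # customer meetings held
    proposals: int           # sales proposals issued
    proposals_value: float   # total value of the proposals issued
    proposals_won: int       # proposals that turned into orders
    revenue: float           # resulting sales figure

activity = SalesActivity(
    appointments=40, proposals=25, proposals_value=500_000,
    proposals_won=5, revenue=120_000,
)

# Lagging indicator: the result of actions already carried out.
lagging = {"revenue_per_salesperson": activity.revenue}

# Leading indicators: the actions supposed to produce the result.
leading = {
    "appointments": activity.appointments,
    "proposals": activity.proposals,
    "average_proposal_value": activity.proposals_value / activity.proposals,
    "success_rate": activity.proposals_won / activity.proposals,
}

print(lagging)
print(leading)
```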
The limits of the leading indicator: it only very partially addresses the root causes
But we also see the limits of traditional leading indicators. They are often partial and do not address the root causes. They often require extensive complementary analyses in order to identify these causes and the most relevant actions.
In the case of our sales representative, what could explain the low success rate of his or her sales proposals? There can be many different reasons: poor customer targeting, poor understanding of needs, poorly adapted proposals...
The sales manager will need many other elements to determine the actions to be taken. He or she can use other leading indicators (an average proposal amount that is too high could be an explanation), carry out more in-depth analyses, or attend customer meetings with the sales representative... All this is possible, but sometimes difficult or time-consuming; so is it done thoroughly enough, and at the right time?
Organisations deploy "Best Practices" inefficiently and without indicators
Many organisations define the practices to be used by employees, at least for the main processes. To implement these "good practices", they train employees in their use; some also carry out transformation projects, "Lean" projects, or continuous improvement actions.
For our sales representative, examples of good practices could be: defining the objective and agenda of each customer meeting, systematically re-reading the conclusions of the previous meeting, listing the products and services that are a priori most relevant, and reviewing their sales arguments...
However, an evaluation system to assess the extent and quality of the actual implementation of these good practices is defined far more rarely. And it is very rarely operated in a sustainable way.
As a result, the "Best Practice Indicator" remains the exception. And that is a huge loss of value.
More precisely, if the use of good practices is not measured (and measured regularly), the loss is fourfold:
- Since good practices are obviously not applied everywhere they should be, nor with the necessary quality, we do not know where they are being applied properly or what actions to take to ensure that they are.
- We do not know which practices are the most useful and which have the greatest influence on the results (lagging but also leading indicators); we therefore define practices that are not very good and miss the most useful ones.
- A lot of energy is spent on defining practices and training employees without being able to demonstrate their value to those employees, which can lead to demotivation (in general, and towards applying these practices) and even a loss of credibility for management.
- The definition and formalisation of "good practices" remains very limited because, as they are not linked to a measurable impact, the need to formalise them is less felt or identified.
Why the Best Practice Indicator (BPI) is still not widely used
There are two main difficulties in setting up a BPI:
- Formalising a measurable reference framework of practices
- Collecting and consolidating the evaluations
Even if formalisation is a real difficulty, the second one is much greater, and largely explains why the BPI is not used and therefore why organisations do not even bother to go through the formalisation stage.
Formalising a measurable reference framework
The evaluation of the application of good practice can only be done:
- By having one or more people evaluate each practice
- Based on a reference framework that stipulates how each practice should be evaluated
This means listing all the good practices to be evaluated, asking the right questions for each one, and structuring the practices into coherent sets.
Let us take an example. Suppose that a good management practice is to use SMART indicators (Specific, Measurable, Attainable, Relevant, Time-bound). You can ask countless variations of questions, such as:
- "Do we always use SMART indicators?", with a rating scale ranging from "Never" to "Always".
- "Are all performance objectives SMART? Please rate each of the criteria from 0 to 10", with a scoring field from 0 to 10 next to each criterion.
- A first question, "Have we defined SMART indicators?", followed by a second question, "Do we use our indicators to guide our actions?"
It is all a matter of the level of detail: the more detailed the framework, the more precisely the actions to be taken can be identified, and the more the subjectivity of the answers is limited because each practice is better "characterised". But the longer and more difficult it also becomes to respond. A reference framework may include anywhere from twenty questions to... more than 200.
Even if this is a considerable effort and requires method, it only has to be done once, at the beginning. The reference framework will of course be updated regularly, but the additional effort will be marginal.
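As an illustration only (the practice names, questions and scales below are invented, and this is not Wevalgo's actual format), such a reference framework could be represented as follows, with a normalisation step so that answers given on different scales can be aggregated:

```python
# Minimal sketch of a measurable reference framework. All practice
# names, questions and scales are hypothetical examples.
framework = {
    "Performance management": [
        {
            "practice": "SMART indicators",
            "question": "Are all performance objectives SMART?",
            "scale_max": 10,   # numeric scale from 0 to 10
        },
        {
            "practice": "Indicator usage",
            "question": "Do we use our indicators to guide our actions?",
            "scale_max": 4,    # "Never" .. "Always" mapped to 0..4
        },
    ],
}

def normalise(raw_score: float, scale_max: int) -> float:
    """Bring every answer to a 0-100 value so that questions asked on
    different scales can be aggregated into a single indicator."""
    return 100 * raw_score / scale_max

# Example: a "Mostly" answer (3 on the 0..4 ordinal scale) becomes 75.0.
print(normalise(3, 4))
```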
Collecting and consolidating the evaluations
A good evaluation of practices must be carried out by several individuals. This is not an opinion survey: the evaluation should be performed by the main people involved in the practices: manager(s), expert(s) and key contributors. That quickly reaches a minimum of 4-5 people. In large organisations, or organisations with several geographical sites, it is easy to exceed a few dozen or even a hundred people.
Unless you want to invest in developing specific software, the default tool for collecting evaluations is the good old spreadsheet. It is immediately obvious that collecting responses through a succession of e-mails, then consolidating dozens of files and generating result reports, is a very cumbersome task. Add the manual handling and poor ergonomics of spreadsheet questionnaires, which can degrade the quality of the evaluations, plus the need to repeat the evaluations regularly and compare them with each other to measure progress, and you end up with a heavy and not necessarily reliable process.
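For illustration, here is a minimal sketch, with invented scores, of the consolidation step itself: averaging several evaluators' normalised answers into one score per practice. Trivial in code, this is exactly the step that becomes laborious when scattered across dozens of spreadsheet files.

```python
# Minimal sketch (hypothetical scores) of consolidating individual
# evaluations into one Best Practice Indicator value per practice.
from statistics import mean

# One dict per evaluator: practice -> normalised score on a 0-100 scale.
evaluations = [
    {"SMART indicators": 70, "Indicator usage": 40},  # manager
    {"SMART indicators": 60, "Indicator usage": 55},  # expert
    {"SMART indicators": 65, "Indicator usage": 35},  # key contributor
]

# Average each practice's scores across all evaluators.
bpi = {
    practice: round(mean(ev[practice] for ev in evaluations), 1)
    for practice in evaluations[0]
}
print(bpi)  # {'SMART indicators': 65.0, 'Indicator usage': 43.3}
```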
An alternative is to have audits carried out by a team of experts who have a good grasp of the evaluation and complete one evaluation per team or site. This remains a cumbersome process. Moreover, if these evaluations are also meant to make the participants themselves understand the actions to be taken, it is much less effective, because each individual is less involved in and less responsible for an evaluation that belongs to a "central" unit...
In the end, it is mainly the lack of a suitable tool that causes the under-use of BPIs.
You can now become more efficient by using the Best Practice Indicator with Wevalgo's tools
Wevalgo offers a web-based solution designed to measure the performance of organisational practices and build Best Practice Indicators.
This solution provides a very intuitive user interface to ensure that the questions are well understood by the evaluators. It cuts the effort of collecting evaluations, consolidating results, and generating analyses and reports by 90%. It makes it possible to take periodic measurements and track progress, and to compare results from different parts of the organisation, or even between different companies.
If you already have a best-practice reference framework, simply enter it into the Wevalgo tool, in a secure and confidential manner. If you do not, Wevalgo offers standard reference frameworks for many functional areas and industries. Below are two examples of performance management system evaluation (objectives, KPIs...).
Join the many companies that already use the Best Practice Indicator to improve their performance: use Wevalgo.
To get started, you can view our ready-to-use best practice standards.