R&D performance improvement guide

If there is one function whose performance is difficult to measure, it is Research and Development. The same is true of setting up processes, standards and rules to improve that performance.

And the further upstream we move towards the research stage, the more complicated, or even counterproductive, it seems. Above all, we do not want to restrict creativity and prevent ourselves from inventing products with breakthrough innovations.

However, our experience shows that a high level of Research and Development (R&D) performance is achievable: more innovation, better quality, faster development and lower costs. This holds from applied research in the pharmaceutical industry to train engineering and the development of electronic chips.

The nature of R&D activities means that these rules, standards or measures cannot be applied as much as in production. Nor is there a detailed universal method applicable everywhere. Nevertheless, a number of practices are applicable almost everywhere to improve R&D performance.

This guide to improving R&D performance describes the good practices we have observed among our customers, classified by theme.

We will cover several types of R&D, specifying the major differences between them:

  • R&D of medium and large series manufactured products (consumer goods, machine tools...)
  • R&D of products or "projects" in small series or even individually for a specific customer (train, plane, infrastructure, power plant...)
  • Process R&D (chemistry, pharmacy, material transformation...)

For simplicity, we include the Research, Development and Engineering functions (product or project) in the acronym R&D. In the term "Research", we include applied research, but not fundamental research.

Depending on the industries, and in particular those of "projects", R&D can be all or part outsourced. If so, some of the practices described are not applicable. Nevertheless, they can be used as a means of controlling subcontractors or suppliers.


R&D strategy and roadmaps

The R&D strategy is specific to each company and there are no rules regarding its content. However, successful R&D organisations define it at a relatively detailed level. In particular, it must settle several choices: the budget, the target level of product quality, the risk profile sought (incremental or breakthrough innovation), development lead times, and the balance between internal and external skills.

This strategy must be clearly communicated to staff, taking into account the risks of disclosure to competitors. It must be implemented operationally, with short and long-term objectives and action plans.

The roadmaps, technology and customer plans must be aligned:

  • The customer roadmap sets out, in a long-term plan, the future products (or technical platforms for engineering) and the customer needs to be satisfied
  • The "technological" roadmap defines the future technologies, components or scientific advances (especially for process industries), as well as the performance or functions they will make achievable

To be aligned, they must first be defined! It is then necessary for R&D, marketing or sales teams to exchange regularly to bring them together.

Their convergence brings two main benefits:

  • the best allocation of resources in the long term
  • limiting the risk of integrating technologies that are not yet mature into new products, or conversely of not having these technologies ready at the right time

 

 

Standardisation

The notion of standardisation is very different between process industries and product or project industries.

In the product or project industries, high-performance R&D formally defines a policy for standardising parts, components or sub-assemblies, or even a technical platform (particularly for project industries). The most difficult, but necessary, step is then to manage the level of standardisation with appropriate indicators. Similarly, the modularity strategy must also be defined and monitored, by product lines or technical platforms.

For process industries, there are few or no parts or sub-assemblies; standardisation applies more at the level of methods: solution preparation, characterisation, handling, calibration of measuring instruments...

 

 

Idea generation

We call idea generation the set of mechanisms that bring new product or technology ideas into the R&D process. These mechanisms are very varied:

  • customer or market analyses
  • monitoring of innovative external developments (universities, laboratories, publications, patents, etc.)
  • ideas from suppliers or subcontractors
  • analysis of competitors and their products
  • internal ideas (idea boxes, Internet forums or blogs, structured brainstorming, etc.)
  • brainstorming or group work with key customers
  • internal sharing of knowledge and R&D projects

It is better to use several of these mechanisms to increase the number and quality of ideas. However, as resources are limited, it is necessary to define prioritisation rules, then the idea selection process, and finally to manage their effectiveness, in line with the R&D strategy.

For example, the Open Innovation process has been very successful and can generate many ideas. However, it can also cause a large dispersion of energy if it is not clearly focused and controlled. 

 

Innovation portfolio

The innovation portfolio is the set of product development ideas or projects, from the first stage (typically the feasibility stage) to production or market launch.

R&D performance is linked to the constant control of this portfolio according to two major objectives: to have or keep the most promising projects and to allocate resources efficiently to these projects.

As soon as some projects drift, their risks increase, or their income potential decreases, it is necessary to be able to make difficult decisions. Should these projects be slowed down or even stopped because others are becoming more promising? What decision should be taken when these projects are already well advanced and the most promising ones are at an early stage?

To be effective, the management of this portfolio must be based on a collegial, regular and as factual as possible decision-making process, with precise criteria. Otherwise, we quickly fall into the irrational: the "leader's pet project" or the "decision of the loudest shouter".

This implies that each project must be well managed, with good planning, sound risk estimates and reliable forecasts of financial potential. Otherwise, portfolio management is built on sand.

This portfolio management is simpler for project industries that operate mainly on a sales order basis. The project portfolio is more constrained once orders are taken. Uncertainties (technological risks, financial potential...) are normally lower (if the technological and customer roadmaps are well aligned!).
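The collegial, criteria-based review described above can be made more factual with a simple weighted scoring model. The sketch below is only an illustration: the criteria, weights and project ratings are hypothetical assumptions, not a prescribed method.

```python
# Hypothetical weighted-scoring sketch for an innovation portfolio review.
# Criteria, weights and project ratings are illustrative assumptions.

CRITERIA = {                 # weight of each criterion (weights sum to 1.0)
    "income_potential": 0.4,
    "strategic_fit": 0.3,
    "technical_risk": 0.3,   # scored so that HIGHER means LOWER risk
}

def project_score(ratings: dict) -> float:
    """Weighted average of the 0-10 ratings agreed by the review committee."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

projects = {
    "Project A": {"income_potential": 8, "strategic_fit": 6, "technical_risk": 4},
    "Project B": {"income_potential": 5, "strategic_fit": 9, "technical_risk": 7},
}

# Rank the portfolio: the lowest-scoring projects are the candidates
# for slowing down or stopping when resources are constrained.
ranking = sorted(projects, key=lambda p: project_score(projects[p]), reverse=True)
for name in ranking:
    print(f"{name}: {project_score(projects[name]):.1f}")
```

The value of such a model lies less in the numbers than in forcing the committee to rate every project against the same explicit criteria.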

 

 

Organisation of R&D

The organisation of R&D, in the "organisational chart" sense, is at two levels:

  • The geographical organisation with several sites, national or even international
  • The local organisation, within each R&D site

 

High-performance R&D applies several good practices to the local organisation.

Roles and responsibilities between departments/services and between individuals must be clearly defined and formalised. Tasks that disrupt development activities must be kept under control: they must be limited or assigned to specific roles (support to production or to marketing/sales teams, problem solving...).

There must be a good balance (of competence, recognition and decision-making power) between project managers, technical (or scientific) competence centres and quality or testing teams.

For each product, a single responsibility must cover the entire development cycle, in terms of deadlines, costs and quality. This responsibility may be assigned to a role or an organisation; it is not necessarily tied to a particular person, who may change, especially on long developments.

 

Concerning the geographical organisation, two major and interdependent questions arise: which locations, and what levels of specialisation and coordination mechanisms between the centres?

There are no simple answers. A key determinant of R&D performance is the ability to make decisions based on rigorous analysis of the following key parameters:

  • The definition of locations typically depends on the need to be close to customer markets or to skill pools, which can be in very different places. Depending on these locations, criteria of language, political and economic risk, or culture must be taken into account.
  • These sites can be completely autonomous, or totally or partially specialised (each site, or some sites, providing "services" to the others). The right balance must be found. Specialisation limits the duplication of work and makes it possible to build qualified competence centres, but it leads to more "transactional" relationships between teams and less agility when workloads change. Total autonomy (no specialisation or coordination) increases duplication of activity, reduces the creativity that comes from exchanges between different people, and limits the mobility of resources. But specialisation and coordination also carry significant costs: travel, coordination meetings, management of different staff and cultures, definition of common processes and rules...

The geographical organisation frequently has to be reviewed following a merger or acquisition. The same rules apply, while taking into account the potential difficulty of managing large-scale change.

A very large R&D centre is a special case. Although on the same site, teams are often spread over several buildings and problems of specialisation, autonomy and coordination arise.

 

 

Management of R&D skills

R&D is one of the functions that depends most on the skills of its individuals, relative to other performance levers (processes, organisation, etc.). It is therefore essential to manage these skills effectively.

High-performance R&D regularly and formally evaluates and maps skills with a nomenclature "at the right level of detail". The right level is a compromise between the interest in accurately describing competencies and a system that is easy to use. This mapping should be used to define training and recruitment plans.

Critical skills needs must be particularly well identified and monitored.

Given the importance of skills, clear and measurable objectives must be defined and monitored concerning their management (level of competence by profile, effectiveness of training, recruitment, etc.).

The development and training programme must be highly structured and include multiple elements: induction, general and specific technical e-learning, internal or external classroom technical training, customer or production training, tutoring, coaching, participation in conferences and knowledge-sharing sessions. Each individual must have personal training objectives that count in his or her performance evaluation.

Learning and knowledge management must be part of the culture, processes and evaluation systems. Feedback is systematically organised and documented at the end of each project (or by phase for long projects). Processes, methods and tools must be subject to continuous improvement actions.

Lastly, strategies for outsourcing skills or forming partnerships must be clearly defined. They must be coordinated with internal skills management, and they must have measurable performance objectives that are monitored operationally.

 

 

Management of financial, strategic and operational performance

R&D is probably the function for which performance is the most difficult to measure and manage, unless it limits itself to purely budgetary cost management. We have found that fewer than 20% of R&D organisations use performance indicators other than cost indicators.

There is no shortage of performance indicators. It is the appropriate choice of indicators and their associated objectives that is difficult. As in many functions, this choice depends on the company's strategy (and its application in the R&D strategy). However, two particular difficulties are inherent in the nature of the function:

  • The main performance of R&D is ultimately the success of the product with customers. It only becomes visible long after the action, and it also depends on factors from other functions (marketing, sales, production, etc.). At best, therefore, R&D performance can only be improved for future developments.
  • It is very difficult to define performance indicators for intellectual activities, and a fortiori for invention. This is especially true for leading indicators (those more or less predictive of the result). How do you measure the level of inventiveness? How long should an invention take, and is it the right invention at the right time?

However, to effectively manage R&D performance, it is necessary to define leading indicators and objectives. Five types of performance indicators are to be implemented:

  • Those of progress, in time or cost. For example, the duration or cost of the various research and development steps, or technical progress (expressed as a % of schedule progress). What these indicators must really measure is the deviation from plan.
  • The effectiveness of some key processes, such as:
    • management of the portfolio of ideas and projects: number of ideas/projects per stage, idea selectivity ratio, rate of passage through project committee reviews without corrections, risk profile (number of high-risk/low-risk projects)...
    • development quality: number of technical or peer reviews (and number of compliant points), number of tests or experiments to be performed, failure rates for certain tests, non-compliance rates...
    • standardisation or modularity: rate of reuse of standard components or sub-assemblies...
    • resource management: subcontracting rate, utilisation rate on certain types of activities or project phases, coverage rate of needs...
  • Costs, by department or type of activity (development, technical support, training)
  • Those related to knowledge and intellectual property: level of competence by profile/type of competence, number of training courses, conferences, partnerships, patents, publications...
  • Financial: business case, expected ROI, net present value of the project... These are only partially leading indicators: the cost part is, while the income part is generally a lagging indicator, except for engineering projects with identified customers and payments planned in advance
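Two of the indicator types above can be computed very simply once the underlying data is collected. A minimal sketch, with hypothetical figures:

```python
# Illustrative computation of two of the indicator types listed above.
# All figures are hypothetical assumptions.

def selectivity_ratio(ideas_submitted: int, ideas_selected: int) -> float:
    """Idea selectivity: share of submitted ideas that become projects."""
    return ideas_selected / ideas_submitted

def schedule_deviation(planned_days: float, actual_days: float) -> float:
    """Progress indicator expressed as deviation from plan (positive = late)."""
    return (actual_days - planned_days) / planned_days

print(f"selectivity: {selectivity_ratio(120, 9):.1%}")
print(f"deviation:   {schedule_deviation(90, 108):+.0%}")
```

Note that both are ratios rather than raw counts, which is what makes them comparable across projects of different sizes and usable as objectives.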

 

Customer requirement management

The phases of identifying needs or managing calls for tenders are outside the scope of this guide. We start at the point where needs have been translated into specifications by the sales or marketing teams. This includes developments in response to requests for proposals (e.g. project engineering), where the requirements are most often specifications transmitted directly by the customer.

Depending on the organisation and industry, the role of R&D in drafting these specifications varies greatly. It can be almost nil (e.g. project engineering tenders, where the client usually writes them), or more or less important, in collaboration with marketing/sales departments.

Nevertheless, in all cases, R&D must pursue two objectives: that the specifications are detailed and expressed as functionally as possible, and that the expected performance is measurable.

Depending on the case, R&D may act in different ways: by influence, intervening as early as possible with major engineering customers (or even "educating" them over the long term), or by more direct involvement in the drafting itself.

Fuzzy, missing or "technical" (as opposed to functional) specifications may appear to give R&D extra freedom or flexibility. More often than not, they are time bombs that cause delays, extra work, customer dissatisfaction or a "failed" product. These two objectives must not stop at the initial phase; they must be pursued in the early stages of development (including the general "technical" design of products), where gaps or misunderstandings in the specifications can be identified.

Another good practice is to ensure that all "non-functional" specifications are written or integrated from the beginning: maintainability in the field or after-sales service (they are sometimes part of the functional specifications), testability in production, environmental, safety or health standards...

All these specifications must be clearly listed, referenced and tracked (for changes, tests and acceptance...) throughout the development cycle. This is preferably done in requirements tracking software, whether or not integrated with PLM (Product Lifecycle Management) software.

 

 

Management of change requests

There are two main types of change requests: those from the customer relating to a specification, and those from within R&D (or from subcontractors) concerning a technical aspect of the product or project.

The first good practice is to differentiate the two types and to have defined management processes for each of them.

Any change request must be formalised and recorded, and its processing status monitored and managed. Impacts on costs, deadlines and compliance with specifications must be quantified and validated before implementation. This may seem trivial, but we have seen many cases (including in well-known companies) where validation is, at best, done after implementation.

In the case of an internal change request, it is very important to analyse and communicate its impact on other parts, components or sub-assemblies, especially when they are developed by other R&D teams. Otherwise, incompatibilities will only be noticed during the integration phase, or even during testing or customer acceptance.

Change requests (and the solutions implemented) must be accepted by a multidisciplinary team. This facilitates the communication, and even the detection, of impacts between teams.

 

Technical R&D methods or processes

We call "technical method or process", the set of activities, on or off project, that each member of R&D must perform in his own field of competence (for example: writing technical specifications, designing plans, calculations, performing tests...). This includes defining the input and output data for each activity and the tools or equipment to be used.

It is clear that each of these methods has specific good practices that cannot all be described here. Nevertheless, the common trait of high-performance R&D is the ability to formalise, document, update and make these methods accessible to all (preferably in a shared computer space). Each method has a formally appointed owner responsible for choosing the right method and for its correct use and continuous improvement.


Development method or process

We call "development process", a sequence of key steps in the development of a product or technical platform. The purpose of this guide is not to prescribe one method over another. There are many of them and without listing them all we can distinguish two families:

  • Cascading" processes: simple cascade, cascade with overlapping (Sashimi), cascade with sub-projects, V-cycle. These processes are applicable to all product or project industries
  • Methods for process or life science industries. These steps can be very variable. Examples :
    • Discovery, pre-clinical, clinical (1,2,3), approval
    • Selection of candidates, characterization, formulation, process stabilization, industrialization
  • Agile" methods: Scrum, Extreme Programming, SAFe... Although initially designed and mainly used for software development, they are sometimes applied elsewhere.

Depending on the type of industry or project, each method has its advantages and disadvantages. The main thing is to ensure that the method used is adapted to the project, and that it is mastered by the teams.

Whatever the method, and whatever the industry, development must at a minimum be organised in successive phases with formal validation points (some phases may run in parallel, but successive ones remain). Validation must be carried out by a multidisciplinary team of managers. It must include the technical functions involved in R&D, the R&D manager, a marketing or sales representative, a finance representative and a production representative (or purchasing, in the case of total subcontracting). For significant projects, these functions should be represented by their directors.

The validation of these gates must be based on precise criteria, accompanied as far as possible by performance indicators and measurable objectives. Examples: business case, level of risk, number of corrections to be made, internal audit report...

In what follows, we develop best practices that apply mainly to product and project industries using "cascade"-family processes.

At a minimum, the first versions of the test and validation plans must be drafted during the corresponding upstream phases (specification or general design validation). This makes it possible to highlight missing or fuzzy points in the specifications or design, and possibly to anticipate the creation of test facilities, which is sometimes long and costly.

Even if we are not in a purely "Agile" process, we must seek feedback from the customer or the market as soon as possible using the best possible means: drawings, models, fast prototypes, digital models, 3D printing...

A rather strange good practice now: that the development process is actually applied! What is especially strange is that this process often exists, is formalised, and is even posted on the walls of the R&D departments we have advised, but is not practised, or only at such a generic level that form wins over substance. As a result, the process is often perceived as an administrative burden of very relative utility.

This process should not be too strictly sequential, in the pure "cascade" model. Some phases may be anticipated for certain elements or sub-assemblies, but only in a controlled way, having carefully checked their independence from elements of the previous phases that are not yet completed. This makes it possible to go faster and, in particular, to detect certain defects or gaps in the previous phases earlier, enabling a quick and beneficial iteration between two successive phases. But it requires extreme rigour; otherwise we quickly run into serious trouble.

 

When developing by sub-assemblies, especially in modular development, it is necessary to analyse whether development cycles can be separated and activities parallelised. Once again, this must be done with great rigour at the interfaces between these sub-assemblies, and only after analysis.

If we follow the analogy of the V-cycle, we can say that to be effective we must "go down slowly and come up fast". This means taking time in the specification and design phases (without excluding prototyping and customer tests upstream when possible), i.e. the descending left-hand side of the V. This allows the realisation and test phases (the ascending right-hand side of the V) to run without delays. In other words, if you go down quickly, you are likely to end up with a "square-root" cycle, the right-hand side of the V flattening and lengthening... Unfortunately, everything pushes in the opposite direction: the more concrete and interesting nature of realisation, the impression of saving time for later, management pressure...

 

Software and tools

Nowadays, software and tools to support R&D are legion: specific tools (instrumentation, CAD, simulation...), collaborative tools (project management, shared documents...), resource planning and management, life cycle management...

The temptation is to want all these tools. But budgets are not unlimited. It is therefore necessary to make the right choice to have the right tools where performance is most important. We will give some simple principles, bearing in mind that the detailed process and the final choice are complicated:

  • Adopt a twofold approach: the first starting from R&D problems or difficulties (known potential benefits), the second from a watch on available tools (potential benefits unknown until the tools' possibilities are explored)
  • In both cases, qualify and if possible quantify the practical benefits for your R&D relative to the current situation: what problems are solved, how many and what types of people are affected, how much time is saved...
  • Prioritize needs according to these benefits, strategic priorities and by involving researchers, engineers or technicians. Not involving them is almost a guarantee of having underused or even bad tools.
  • Do not think only of tools. Identify the part of the problem that relates to methods or skills rather than to the tool. Similarly, identify the changes in methods and the training required to implement the tool. This is a major point and is often underestimated. Example: one R&D department had invested in a superb resource planning and forecasting tool that could do everything. But everything it produced was wrong, because the skills mapping was inadequate and the individual project schedules were unreliable. The situation was even worse than before, because management trusted the software and no longer applied its own thinking and decision-making skills.

 

 

Efficient project management

This subject alone is very broad and we will limit ourselves to a few key practices for project performance.

The first point is project planning. There is almost always a plan, often well designed and detailed at the beginning of the project. Dates are usually kept up to date, while workload plans are only sometimes updated. However, the plan is rarely used as an operational working tool or reviewed with the team; too often it is an administrative document updated for communication purposes or just before project reviews.

Effective project managers use detailed and up-to-date schedules. They even use them as a visual management support, in an enlarged version, in the team's meeting room.

It is good practice to keep the original version of the planned dates and workloads of all activities, as well as the final version. This makes it possible to measure actual performance, identify the causes of drift, and implement improvement actions for subsequent phases or future projects.

Technical progress, or rather its reliability (a low actual-versus-forecast deviation), is in itself the indicator of project management performance, and even of the performance of R&D as a whole. To be reliable, it requires many conditions:

  • the duration of each task must be well estimated, sometimes well before the project (e.g. from the call for tenders for engineering projects)
  • all aspects of the project must be well managed (resources, schedules, risks)
  • risks related to new technologies must be controlled (alignment of roadmaps as described above)
  • and many others (subcontracting, skill levels...).

If there were only one leading indicator of R&D performance, it would be this one, even if it tends to drift towards the end of projects, when all the invisible problems surface (and it then loses a little of its "leading" character).
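The practice of keeping both the baseline and the final schedule makes drift directly measurable. A sketch under hypothetical assumptions (task names and durations are invented for illustration):

```python
# Sketch of measuring schedule drift against the kept baseline, assuming the
# original planned durations (in days) and the actual ones were both recorded.
# Task names and figures are hypothetical.

baseline = {"specification": 20, "design": 30, "realisation": 45, "tests": 25}
actual   = {"specification": 22, "design": 38, "realisation": 60, "tests": 40}

# Relative drift per task: positive means the task ran late.
drift = {task: (actual[task] - baseline[task]) / baseline[task] for task in baseline}

# Tasks with the largest drift are where feedback and improvement
# actions for future projects should focus first.
for task, d in sorted(drift.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task:<14}{d:+.0%}")

total_drift = (sum(actual.values()) - sum(baseline.values())) / sum(baseline.values())
print(f"overall: {total_drift:+.0%}")
```

Note how, in this invented example, drift grows towards the downstream phases, which is exactly the late-drifting behaviour of the indicator described above.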

 

Each project must have at least three types of regular team meetings, not to mention technical meetings, ad-hoc meetings, individual meetings and milestone reviews:

  • Project management meetings (progress review, coordination, decisions and actions...). Depending on the type of project, their frequency may vary between weekly and monthly.
  • Those of quick exchange, to be done in "visual management" mode (standing around the notice board of current schedules and actions) to manage short-term activities. Their frequency can vary between daily and weekly.
  • Risk review and treatment

All project management tools must be defined and standardised. This saves project teams time and limits their learning time. This concerns planning models, risk analysis, financial reports, project review support, meeting minutes, etc.

 

Project-based working is widespread in high-performance R&D, although dedicated project departments or services are not always necessary. The project team must be formally composed of representatives of the various functions involved, with roles and responsibilities clearly defined for everyone. The members of the "core" project team must report to the project manager and be evaluated by him or her. This evaluation may be partial (alongside an evaluation by their line manager) but it must remain significant; otherwise the project manager lacks the necessary authority.

In the case of a complex product, particularly for systems, a technical architect role must ensure the technical cohesion of the whole. It should preferably not be held by the project manager.

Finally, subcontractors or suppliers must be managed with appropriate formal processes and defined objectives. These depend on the type of subcontracting or complexity of the supplies. This management must be done in collaboration or even in partnership with the purchasing department.

 

 

Conclusion

Most practices that enable R&D to achieve a high level of performance are common sense. Taken individually, they are often simple. But to have a significant impact on performance, a coherent and substantial set of these practices must be implemented, and they must be concrete and effectively used by R&D personnel.

The success of this implementation depends on a change management approach that is very specific to the R&D environment. It is necessary to overcome a culture that is often very "technical" and not very oriented towards economic performance or shorter lead times; to introduce hard, rigid elements (processes, rules...) into an intangible and uncertain environment; and to measure activities that are barely measurable, or only over very long time frames.

A whole science. And also a passion.