Using Data to Drive Continuous Improvement in Manufacturing

Posted on Nov 28, 2022, by Charles Sanderson, Director of Process Optimisation

What Does Continuous Improvement (CI) Mean?

Continuous Improvement is a term that’s widely used and can be applied to many different aspects of a manufacturing or business process. For example, it can cover efforts that:

  • Reduce the energy being used, in the form of power or heat
  • Make more effective use of water, which may include reduced incoming water, changes in treatment systems, recycle and reuse efforts and changes in discharge and waste treatment
  • Improve process yield, through using raw materials more efficiently, through reuse and repurposing of “waste”, or through reducing off-spec production
  • Improve product quality and consistency, by monitoring performance more actively, improving process control, and predicting and avoiding potential issues
  • Reduce the carbon footprint of a process, which can be related to energy use, but can also encompass the use and sourcing of raw materials, the mix of products being made, and the markets that are served
  • Improve capacity, so that you are able to produce more material with the same equipment, either through debottlenecking the process or by increasing the uptime and asset utilization. This can also be influenced by approaches to maintenance.
  • Make capital deployment activities more efficient, by improving the sizing and procurement of equipment and by making project execution more efficient
  • Reduce cost to manufacture through selection of different raw materials, changes in batch sequencing, linking manufacturing performance to cost accounting to elucidate costing, or through more effective inventory management
  • Improve process safety and environmental impact
  • Improve productivity, for example by making manual tasks more efficient or automating them entirely

How to Approach Continuous Improvement Opportunities

As if that weren’t a daunting enough list, there are various aspects to the way in which these different areas of opportunity can be tackled.

Ask a Lot of Questions

  • How do we troubleshoot this specific issue? What are the underlying causes that I can address?
  • What could we do better by changing the process or design?
  • Are we “close to the edge” - to an unsafe operating condition, an out-of-spec product, or an equipment failure?
  • Given the current system performance, what will happen next?
  • What data or instrumentation are we missing? And which of our current readings may have drifted?
  • How do we identify the best operating point, and how do we maintain that performance over time?
  • Where are our bottlenecks? What can we do to address them through both better operation and equipment improvements?
  • What is the actual cost to manufacture a particular SKU or coproduct? How do we allocate carbon and economic costs appropriately across multiple products in a single facility?
  • How can I keep track of all this stuff - my KPIs, my project opportunities, the performance of my past projects?

There Are Many Ways to Do It

A quick perusal of the web will reveal dozens of different tools, services, and companies in the CI space. Many will claim to be the best, the easiest, or the quickest to implement. The reality is that to do CI well, you will probably have to engage in an ongoing change management process.

It therefore makes sense to look for approaches and tools that are likely to be familiar to your team and that mesh quickly with your existing systems. While this may slow down the initial phase of the effort, it is likely to pay back manyfold in terms of adoption and rollout.

Target Quick Wins

While it’s tempting to set up a large corporate initiative to chase after the latest buzzword (“Industry 4.0”, “Machine learning”, “Digital Twin”, …), this can tie up a lot of resources and be quite expensive.

It can often be more effective to do a few small-scale trials at individual sites with the goal of getting some early paybacks. This can be a great way to test out different techniques and tools, and to find approaches that mesh well with your company culture.

Pick an “on-ramp”

While CI is a very broad field, the mindset, tools and underlying data to support the different aspects overlap greatly - the same process information that you need to monitor an energy savings project in a dryer can be used to troubleshoot a quality issue in its product.

As well as looking for early wins, it is often most effective to focus on one aspect of CI - energy minimization, say - to build your skills, culture and toolset. As your CI culture grows, it helps to maintain a “single version of the truth” - a centralized repository of the data, assumptions, calculations and key performance indicators (KPIs) that will be used across your organization.

Involve the Team

It is a truism that many engineers have limited data science skills and that many data scientists have limited experience in applied engineering.

There is often a similar lack of cross-fertilization between operations and accounting, quality control and maintenance, or several other interested parties.

CI works best when it draws experience from across the organization, so it is often valuable to involve process veterans and SMEs to foster collaboration and draw value from all quarters.

Measure and Validate

Another sad reality is that low-hanging fruit tends to grow back - it can be easy to make a change and demonstrate the win when there’s a large team focusing on an issue, but six months later, when the focus has shifted to something new, work practices may revert to “normal”.

Setting up tools to monitor the impact and ongoing performance of deliberate changes is critical. Even more important is tying those monitors to the underlying documentation and calculations that justified the change - as staff move, things will be forgotten; and as process conditions or equipment changes, old assumptions may be invalidated.

Plan for a Long Journey

As with any business strategy, the CI journey is unlikely to be completed in a few months - it is something that is likely to develop over several years (albeit generating value along the way).

Starting with a focus on one or two sites and one aspect, say energy minimization, can yield big wins, but ideally, the process would then roll out over more sites and reach into other aspects, perhaps water use or yield, at the early adopter sites.

It’s also important to recognize that Continuous Improvement is exactly that - an ongoing process rather than a one-stop fix. With a good CI process in place, the early adopter sites will continue to find new opportunities and free up the resources to tackle previously identified projects that didn’t make the initial cut.

Be Willing to Partner

While it’s tempting to try to do all the work internally, perhaps relying on IT and Procurement to select a software platform, it may be more effective to review what services and software are available in the market. An ideal partner may well offer a combination of the following:

  • Engineering services - to help you progress projects quickly and offer subject matter expertise in areas that are not your traditional focus.
  • Software tools - these will help you collate, cleanse, visualize and analyze your process data; track and quantify the impact of projects and change initiatives; develop performance reports and rapid, targeted alerts if performance slips; and bundle together data, reports and insights around specific opportunities so that you can track the progress of different initiatives.
  • Business support - Services that will help with the change management of integrating new work practices and expectations into the existing workflow; and help integrate the engineering-heavy process data with the accounting-heavy scheduling, hedging and cost management data.

Measuring Continuous Improvement

Most operating sites will have a number of process measurements and instruments that report the current flow, temperature and pressure of the process stream; the controller setpoint and output; the power draw of large motors; and a range of other data.

This data is often reported to operators either through a centralized Supervisory Control And Data Acquisition (SCADA) system or through local Programmable Logic Controller (PLC) panels. Many facilities will pull the SCADA information together into a control room, where the operators can see in real time how the plant is performing.

Use Historians

The next step from having the instantaneous data available is to store the data in a historian. At their most basic level, data historians allow one to review the past performance of the plant - to plot the data as a time series so that you can see trends and correlations.

More advanced historians will allow you to plot several different trends together to look for correlations, as well as add your own local calculations (for example, Key Performance Indicators (KPIs)) and create data overviews, like dashboards and reports.

It’s surprising how impactful even basic trending of process data can be. It’s difficult to see a temperature that’s changing gradually over the course of a few hours if you’re only looking at the instantaneous data, for example; or to see which of two parallel compressors is more efficient if you cycle between them.
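
To make that concrete, here is a minimal sketch of how a gradual drift might be surfaced from exported historian data using Python and pandas; the file name, tag name and thresholds are hypothetical, not any particular historian’s API.

```python
# A minimal drift check on exported historian data (hypothetical names).
import pandas as pd

df = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])
temp = df.set_index("timestamp").sort_index()["reactor_temp_C"]

# Regularise to a 5-minute grid, then smooth with a 4-hour rolling mean
# so a slow drift stands out from the instantaneous noise.
temp = temp.resample("5min").mean().interpolate()
smooth = temp.rolling("4h").mean()

# Change over the past 4 hours (48 samples of 5 minutes each); flag
# moves of more than 2 °C that an operator watching live values would miss.
drift = smooth.diff(periods=48)
print(smooth[drift.abs() > 2.0].head())
```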

It is also frustrating how long it can take to pull data out of a SCADA system to turn it into KPIs and standardized reports. In plants without a data historian (and in several with one), this can mean that KPI and overall process review happens only monthly, and usually on data that’s compiled a week or two after the period being reviewed.

Other Systems

In addition to the process data, it can be extremely helpful to tie in data from other systems. For example, many plants take periodic samples that are run through a quality control (QC) laboratory to give insights into process yield, product quality and equipment performance.

This data is often stored in a Lab Information Management System (LIMS), but a good process historian should be able to pull it into the same system as the SCADA history. Economic and production targets may be stored in an Enterprise Resource Planning (ERP) system, but these too can be extremely valuable to collate with process data.

What if Something Goes Wrong?

Process deviations can be minor - a product’s moisture drifting too close to the specification - or more severe - the failure of a fermentation batch. When things deviate from the normal operation in an unexpected way, though, there is often great value in both identifying the deviation quickly and in accurately identifying the underlying causes of that deviation.

Data historians can be great for assembling the data, and the more advanced ones include “watchers” or “alarms” so that if the process is deviating from its expected performance, it will alert key individuals. In some systems, these alerts can be “smart” - as well as comparing process variables to fixed values (eg temperature > 95C), they can be set to be responsive (eg if operating in mode A and cooling tower 1 is shut down, then alarm if the outlet is within 10C of the current wet bulb temperature).
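
As an illustration, here is a minimal sketch of that conditional cooling tower alert in Python; the snapshot structure and tag names are hypothetical, and a real implementation would read these values from the historian or SCADA system.

```python
# A "smart" alert that only fires in the context where it is meaningful
# (all tag names hypothetical).
def cooling_tower_alert(snapshot: dict) -> bool:
    if snapshot["operating_mode"] != "A":
        return False
    if snapshot["cooling_tower_1_running"]:
        return False
    # With tower 1 down, an outlet within 10 °C of the wet bulb means the
    # remaining cooling capacity is nearly exhausted.
    return (snapshot["outlet_temp_C"] - snapshot["wet_bulb_C"]) < 10.0

reading = {"operating_mode": "A", "cooling_tower_1_running": False,
           "outlet_temp_C": 31.0, "wet_bulb_C": 24.0}
if cooling_tower_alert(reading):
    print("ALERT: cooling headroom below 10 °C with tower 1 offline")
```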

Building these more complex alerts may involve the input of subject matter experts, either from within the organisation or brought in as consultants. Having a tool that can capture the fruit of that experience in the form of calculations can be very powerful. Capturing the reports that are generated and the underlying assumptions for those calculations as meta-data in the data historian can be extremely useful six months after the fact when the experts have moved on and the sponsoring engineer has received a promotion.

There are times, though, when the nature of the failure is not clear. This is when it can be very useful to export the data from the historian and use statistical analysis tools. There is a range of methods that can be applied, from ad hoc analysis with linked graphs to Statistical Process Control (SPC), Multivariate Analysis (MVA), Principal Component Analysis (PCA), and Machine Learning / Artificial Intelligence (ML/AI) techniques.

In the opinion of the author, the key to success in any of these approaches is to make sure that all the relevant data is collected, and that once consolidated, that data is thoroughly cleansed before analysis starts. The actual analysis of the data is usually a relatively minor part of the overall exercise - to borrow a saying often attributed to Abraham Lincoln, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” While there’s a lot that can be done to smooth the analysis workflow, beware of vendors that offer you a “fully automated” or “black box” data fitting and pattern analysis tool. Such tools are easily misled by erroneous or irrelevant data, find meaningless correlations, or fail when critical input data is not considered.
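
To show what that cleansing can look like in practice, here is a minimal sketch of a pre-analysis pass over consolidated data in a pandas DataFrame; the tag names and thresholds are hypothetical and would need to be set per plant.

```python
# Prune obviously unusable periods before any statistical analysis
# (hypothetical tag names and thresholds).
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Drop downtime: near-zero feed flow means the plant was not running.
    out = out[out["feed_flow_kg_h"] > 100.0]
    # Drop frozen sensors: a reading that is perfectly flat for 12
    # consecutive samples is more likely a dead instrument than a
    # perfectly steady process.
    out = out[~(out["reactor_temp_C"].rolling(12).std() == 0.0)]
    # Drop physically impossible values rather than letting them skew fits.
    return out[out["reactor_temp_C"].between(-20.0, 250.0)]
```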

Boost Continuous Improvement Performance

Could we do better? That’s a very broad question, so to put some bounds on it, let’s focus on improving performance with the equipment already installed. There are a number of different approaches to consider here:

Develop KPIs

Develop Key Performance Indicators (KPIs) to track how the plant is performing under varying conditions - for example, the energy used per mass of product made or the water loss per amount of energy produced.

A good set of KPIs can allow one to compare the performance of a facility as a function of time or product mix or to compare the performance of similar facilities.

They can also allow for a degree of nesting, so that the overall facility KPIs are built up from those of individual departments, which in turn are built from those of individual pieces of equipment. These nested KPIs allow one to scan for problems at a high level and then drill down into the details if a discrepancy occurs.
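
As a minimal sketch of that nesting, with purely hypothetical machines and figures: the key design choice is to roll a ratio KPI up by summing the numerators and denominators separately, rather than averaging the ratios.

```python
# Nested specific-energy KPIs: machine -> department -> site
# (all names and figures hypothetical).
machines = {
    "dryer_1": {"dept": "drying",  "energy_kwh": 1200.0, "product_kg": 5000.0},
    "dryer_2": {"dept": "drying",  "energy_kwh": 1500.0, "product_kg": 5200.0},
    "mill_1":  {"dept": "milling", "energy_kwh": 800.0,  "product_kg": 9800.0},
}

def specific_energy(records) -> float:
    """kWh per kg: sum energy and production first, then divide."""
    return (sum(r["energy_kwh"] for r in records)
            / sum(r["product_kg"] for r in records))

# Department KPIs are built from the machines beneath them...
for dept in sorted({m["dept"] for m in machines.values()}):
    recs = [m for m in machines.values() if m["dept"] == dept]
    print(f"{dept}: {specific_energy(recs):.3f} kWh/kg")

# ...and the site KPI from all machines, so a site-level discrepancy
# can be traced down to the department and machine that caused it.
print(f"site: {specific_energy(machines.values()):.3f} kWh/kg")
```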

Find Islands of Excellence

Use statistical tools to map historical data to find “islands of excellence” - times when a continuous process was operating better than normal - or “golden batches” that led to faster turnaround, higher yield, or better quality products.

That history of best operation can then be mined to understand the underlying conditions that led to the desired outcomes.

Use Mechanistic Models

Mechanistic models are often used to design manufacturing facilities, and these can provide detailed mass and energy balances of the desired performance of a plant. Sadly, these models are often abandoned once construction starts, but they can provide powerful tools to identify where an operating facility is deviating from the intended performance and to predict the impact of changes in operating conditions (eg adjusting a setpoint).

Leverage Experts

Experts, particularly those with skills in “non-core” areas of the plant, can be extremely useful. For example, many operations engineers are so busy focusing on the core process that they may not have time to develop a deep understanding of the compressed air or steam supply system, so the performance of these ancillary units may drift.

Bringing in deep subject matter experts - from a corporate pool, an equipment manufacturer, or a specialist energy consultant - can reveal all sorts of opportunities.

Examples include adjustments to standard operating procedures and maintenance schedules; updating of old / worn equipment; adjustments to setpoints and load schedules; and recommendations for tweaks to the core process to relieve challenges for the utilities.

Central to success in many of these opportunities is a solid tool for the collation and review of process and quality data. Ideally, that tool should be able to incorporate the calculation of the KPIs so that they are presented in real time to the operations team at a granular level (eg past hour, by machine) and also rolled up into automated reports for the broader team (eg by department, by month for the site manager, or by facility, by quarter for the Corporate Sustainability Officer).
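
As a minimal sketch of those rollups, assuming the KPI inputs have been collated into a flat table with hypothetical column names, pandas’ grouping makes both views cheap to produce:

```python
# Roll granular KPI inputs up for two audiences (hypothetical columns).
import pandas as pd

df = pd.read_csv("kpi_records.csv", parse_dates=["timestamp"])

# Hourly, by machine - for the operations team.
hourly = (df.groupby(["machine", pd.Grouper(key="timestamp", freq="h")])
            [["energy_kwh", "product_kg"]].sum())
hourly["kwh_per_kg"] = hourly["energy_kwh"] / hourly["product_kg"]

# Monthly, by department - for the site manager's report.
monthly = (df.groupby(["department", pd.Grouper(key="timestamp", freq="MS")])
             [["energy_kwh", "product_kg"]].sum())
monthly["kwh_per_kg"] = monthly["energy_kwh"] / monthly["product_kg"]
```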

Once the data is collated, there are various graphical and analytical tools that can be employed to look for islands of excellence. Some of the key features to look for are:

  • Easy integration of the data collation with the data visualisation and analysis tools, for example through graphing of the data within the platform or easy data export.
  • Data cleansing and pruning. Historical data will often be a hodge-podge of conditions - there will be times when the plant was down for maintenance, when meters were offline or misreporting, and when non-standard operation (eg turndown, special product mix, …) was encountered. It is very helpful to be able to quickly prune out these time periods. Similarly, for a multi-SKU plant or one that processes different lots of raw material, it can be helpful to parse out the data to match those different conditions.
  • Data linking. A powerful tool that is supported by some statistical platforms is the ability to highlight data in one visualization and see where it shows up in another graph. For example, you may highlight periods with high yield and see what the corresponding temperatures and pressures were in other parts of the facility, and thus discern useful correlations and patterns.
  • Statistical Analysis. Tools like Multivariate Analysis (MVA), Principal Component Analysis (PCA), Partial Least Squares (PLS), and Statistical Process Control (SPC) can be very powerful for taking larger sets of data and teasing out the relationships and interactions within the process. This in turn can yield insights into the underlying reasons for better-than-normal operation; a minimal sketch follows this list.
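
As promised, here is a minimal sketch of that kind of analysis: projecting cleansed history onto two principal components and asking where the best-yield periods sit. The column names are hypothetical, and scikit-learn is one of several libraries that could do this.

```python
# Look for an "island of excellence" in PCA space (hypothetical columns).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("cleansed_history.csv")
features = df.drop(columns=["yield_pct"])

# Standardise first, otherwise large-valued tags dominate the components.
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(features))
df["pc1"], df["pc2"] = scores[:, 0], scores[:, 1]

# If the best 10% of yield forms a tight cluster in the reduced space,
# that suggests a repeatable operating regime worth characterising.
best = df[df["yield_pct"] >= df["yield_pct"].quantile(0.9)]
print(best[["pc1", "pc2"]].describe())
```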

Process data, especially once cleaned up through good statistical analysis, can also be very powerful in validating and improving mechanistic models, linking known operational constraints (like mass and energy balances) back to the actual process data.

As well as spotting possible process tweaks to move into previously unexplored but potentially beneficial new operating regimes, the mechanistic models can also be used to generate more complex KPIs that can be fed back into the process data historian.

For example, one could use a mechanistic model to estimate the steam flow based on ambient conditions and the flow and composition of the raw materials and then embed that correlation into the process to give live updates - “soft sensors” if the steam flow is not measured, or a cross-check on the actual meter if one exists.
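
As a minimal sketch of such a soft sensor, here using a simple regression fitted on historical data as a stand-in for the mechanistic correlation described above (column names are hypothetical):

```python
# Train a steam-flow soft sensor from periods when a meter was available
# (hypothetical column names; a mechanistic model could supply the
# training targets instead).
import pandas as pd
from sklearn.linear_model import LinearRegression

hist = pd.read_csv("steam_history.csv")
inputs = ["ambient_temp_C", "raw_feed_kg_h", "feed_solids_pct"]
model = LinearRegression().fit(hist[inputs], hist["steam_flow_kg_h"])

# Live use: estimate steam flow from conditions that *are* measured.
live = pd.DataFrame([{"ambient_temp_C": 12.0,
                      "raw_feed_kg_h": 4200.0,
                      "feed_solids_pct": 31.5}])
print(f"estimated steam flow: {model.predict(live)[0]:.0f} kg/h")
# Where a real meter exists, a persistent gap between meter and estimate
# is itself a useful alert: either the meter or the process has drifted.
```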

How to Maintain Improved Performance?

Automated Process Control

Most plants use process measurements and feedback control loops to maintain equipment at target setpoints, but process control is often something of a dark art. It is surprising how many control loops are poorly tuned or switched into manual operation by exasperated operators.

In part, this is due to a lack of control expertise - in theory a control loop, once set up, should not need to be adjusted - and in part, it is because there’s often a complex interaction between the physical world and the control loop that is not always considered.

The wrong instrument may be chosen for a given measurement, for example, or it may be located inappropriately. There is often a surprising amount of opportunity to be garnered from fixing the basics of process control in a facility.

Advanced Process Control (APC)

Once the basic process control is in place, then there may be opportunities for advanced control techniques - things like nonlinear process control, which attempts to use more complicated models to account for process complexities; multivariate control which attempts to adjust several setpoints in conjunction with each other; or optimizing process control, which attempts to incorporate one or more KPIs into the control logic, and adjust the setpoints to improve that KPI.

There’s a wealth of approaches to APC, from statistical models based on process “step tests” to machine learning based on process history, and these can be extremely powerful for improving a well-controlled process. In general, though, they are only really applicable once the hard work of establishing that base level of control has been done.
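
To illustrate just the “optimizing” flavour, here is a toy sketch in which a bounded search adjusts two setpoints to minimise a KPI. The quadratic kpi() is a stand-in for a model fitted from process history, not a real process model, and the bounds stand in for safe operating limits.

```python
# Toy optimizing control: search for setpoints that minimise a KPI model.
from scipy.optimize import minimize

def kpi(x):
    temp, pressure = x
    # Hypothetical specific-energy surface with a minimum near (82, 3.5).
    return 0.02 * (temp - 82.0) ** 2 + 0.5 * (pressure - 3.5) ** 2 + 1.1

result = minimize(kpi, x0=[78.0, 3.0],
                  bounds=[(70.0, 90.0), (2.0, 5.0)])  # safe operating limits
print(f"suggested setpoints: temp={result.x[0]:.1f} °C, "
      f"pressure={result.x[1]:.2f} bar, predicted KPI={result.fun:.2f}")
```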

Predictive Maintenance

While process control can keep the plant running at its target conditions, it will typically assume that the underlying process equipment is performing at its normal conditions.

If the equipment fails, or even if it becomes impaired, then the controller will not function as expected. While some failures happen unexpectedly, there are often measurable conditions that foreshadow failures - the current drawn by a motor may increase, for example; a valve may open further than normal to maintain setpoint; or the temperature leaving a given unit may drift up or down.

Predictive maintenance tools attempt to spot these patterns in historical data and raise alerts with sufficient lead time for the operations and maintenance teams to address the issues before they impair productivity.
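
As a minimal sketch of that idea, the check below compares a motor’s recent current draw to its own long-run baseline; the tag name, window lengths and 10% threshold are all hypothetical and would be tuned per asset.

```python
# Warn when motor current runs persistently above its own baseline
# (hypothetical tag name and thresholds).
import pandas as pd

current = pd.read_csv("motor_current.csv", parse_dates=["timestamp"],
                      index_col="timestamp")["amps"]

baseline = current.rolling("30d").median()  # slow-moving "normal"
recent = current.rolling("6h").median()     # what it is doing now

# A sustained 10% rise over baseline merits a work order long before
# it merits a trip alarm.
if (recent > 1.10 * baseline).iloc[-1]:
    print("WARNING: motor current >10% above its 30-day baseline")
```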

Live KPIs and Alerts

As with maintenance, there are a range of other process indicators that can be tracked to understand how well the process is performing.

As discussed in the section above, detailed correlations can be built from historical data or mechanistic models to compare current performance to ideal conditions, and these trends can then be used to understand where and how a process may be deviating.

A good data historian should also be able to track these comparisons automatically, generating targeted warnings to alert the relevant team members to the deviations and packaging together the relevant data and background information to allow those people to swiftly address the issue.

What Equipment is Available?

One of the key challenges that a smaller operating company will face is keeping up to date with developments in all the different technologies available to achieve a particular operating goal, and with how the cost of that equipment is changing.

For example, some of the areas that have been changing rapidly in recent years include:

  • Carbon restrictions and carbon pricing. In Europe in particular, carbon certificates have been increasing greatly in cost - they are around $50/te at the time of writing, having more than doubled in the past six months. This increased cost has made a number of previously marginal process changes and investments far more attractive.
  • Solar and battery pricing has been roughly halving every 18 months for the past several years, which has changed the types of opportunities that operating facilities can exploit. Some go so far as to establish solar farms.
  • Demand Side Units (DSU) / demand shaving of electrical power consumption is a rapidly developing area of interest in several markets, whereby large consumers may be offered lucrative incentives to reduce their local power consumption in response to the wider supply/demand balance on the grid.
  • Electrification of Heat through the use of Combined Heat and Power (CHP) or industrial heat pump technology is seeing several new developments in terms of cost and availability of equipment.
  • Lighting and the switch from incandescent / fluorescent bulbs to Light Emitting Diode (LED) is a more mature technology, but still one where there are real opportunities to be garnered. Read the considerations involved in switching to commercial LED lighting.
  • Utilities systems (boilers, refrigeration, compressed air, water and wastewater) are all areas that continue to develop and evolve in terms of what is possible. Areas like steam traps, condensing economizers and biofuels are developing for boilers; absorptive refrigeration and smart sequencing of compressors for cooling; and anaerobic digestion for waste treatment and energy generation. The range of improvements and developments for the “non-core” parts of the facility is significant.

Conclusion

If your organization isn’t large enough to keep a staff of specialists who are up to speed with the various shifts in market and technology, you may at best be missing out on opportunities or at worst risk getting hit with unexpected costs or operating constraints. Finding a partner who has the breadth of focus to help you stay aware of such developments can be a very valuable insurance policy.

This partner will ideally have not only subject matter experts in the various technologies, but also a database of recently installed systems, so that they understand both the true cost of ownership and the longer-term performance of innovative systems.

How CoolPlanet Helps

At CoolPlanet, we aim to offer the attributes of a good partner. We are an engineering services-led company supported by a world-class software platform, CoolPlanetOS. We have a strong focus on energy efficiency and process improvement, with a stable of SMEs in areas like boilers and refrigeration; CHP and solar; mechanistic and statistical modeling; and project management and execution. With active projects all over the world, we would be very keen to discuss your needs.