

The Role of Dynamic Simulation® in Process Improvement Initiatives

Introduction:

In every industry, from manufacturing to services, logistics and healthcare, there is an increasing need to do more with less, increase efficiency while reducing cost, and improve utilization while maximizing throughput. Manufacturing and logistics are at the forefront of this transformation, and changes implemented within their realm are finding their way into other industries.

Process improvement is a constant, never-ending effort that strives to achieve better performance with current or reduced resources. As improvement in one area is achieved, other opportunities arise and new ideas are tested and implemented. Improvement ideas do not always produce the desired results; some improve the performance of a sub-component while introducing negative consequences to the overall system.

To achieve the required enhancements, process improvement professionals and system designers have introduced automation and smart systems into their environments. Automated systems, including AS/RS (automated storage and retrieval systems), guided vehicles, smart machinery, and intelligent conveyors, have been implemented with great promises of exceptionally high OEE (Overall Equipment Effectiveness) and throughput numbers that seemingly make them a one-stop solution to all issues.

Although automated systems have such great potential, they also need to perform within two key boundaries. First, they have to interact with human-run systems; humans do not have the robotic behavior of machinery, yet they are indispensable in any environment. Second, automated systems must be able to adapt to fluctuations in the operation and to interaction with other automation systems. Moreover, malfunctions, breakdowns, and changeovers of such systems can have a cascading negative effect on the operation.

This white paper addresses the role of Dynamic Simulation® in enabling designers and process improvement professionals to maximize their design investment, analyze the impact of new equipment, and be better prepared for future changes to their operation.

 

Challenges
Why do current designs fail to achieve their promised performance?

Recurring questions often come up in meetings, in offline discussions, and in the minds of all designers: How many guided vehicles do we need? Is our slotting optimized? How much buffering do we need to support a manual operation? How can we better schedule our tanks? What is the impact of the cleaning cycle? The list goes on. The fact is that there are many questions that need to be answered. Some are more pressing than others, but most can have a negative impact on the performance of the system if not handled properly.

In the case of a simple system, where a single input, a single output, and a constant feed rate produce a single product, computing the system throughput and OEE is simple. Unfortunately, a closed-loop system with a single product type is seldom the case. Automated systems are built to handle multiple product types and variations in order to meet their ROI. To complicate things further, different products require different run rates, changeovers, and sometimes different labor requirements. All of these constraints contribute to the overall system throughput and OEE, making a simple formula computation an invalid and inaccurate representation.
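For the simple single-input, single-product case described above, the calculation really is a one-line formula. The sketch below is a minimal illustration of the standard OEE computation (availability x performance x quality) using made-up shift figures; it is not tied to any particular tool or vendor formula.

    # Textbook OEE for a single machine running a single product.
    # All figures are illustrative only.

    planned_time_min = 480           # one 8-hour shift
    downtime_min = 45                # breakdowns and changeovers
    ideal_cycle_time_min = 0.5       # minutes per unit at rated speed
    total_units = 800
    good_units = 780

    availability = (planned_time_min - downtime_min) / planned_time_min
    performance = (ideal_cycle_time_min * total_units) / (planned_time_min - downtime_min)
    quality = good_units / total_units

    oee = availability * performance * quality
    print(f"OEE = {oee:.1%}")        # about 90.6% x 92.0% x 97.5% = 81.2%

The formula holds only for an isolated machine; once multiple product types, changeovers, and interacting sub-systems enter the picture, availability and performance become time-dependent and a single static calculation no longer represents the overall system.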

A common negative effect of improper system throughput calculation is that operations promise customer deliveries based on the high OEE of individual components. Although each independent sub-component performs at its designed specifications, the overall system OEE is lower due to component interactions, product variations, and other unexpected delays. Equipment ROI is not achieved, order delays become a common occurrence, and frustration sets in.

One example of improperly identified system inefficiencies appears when determining the optimum number of guided vehicles needed for an operation: a Sorting Transfer Vehicle (STV) loop that interacts with three systems, one unloading station driven by a human operator and two automated loading operations.

Historically, a quick math formula that takes into consideration the inbound rate, the loading/unloading speed, and the STV speed is used to identify the number of vehicles required. Designers then add an additional STV as an "extra buffer", or cushion, for the system. When the system goes online and external factors start to impact the STV loop, its efficiency drops and all connected systems follow suit. In certain situations the STV track interacts with processes that have a manual component. With a limited number of available drop locations or inadequate buffer capacity at the manual stations, any inefficiency in the manual operation has a cascading effect on the STV loop itself, as loaded STVs must complete additional loops on the track while waiting for a drop location to become available.

When the majority of the STVs are loaded with products, it is assumed that the loop has reached its maximum throughput and that additional STVs are needed. In fact, even though products are loaded on the STVs, loop efficiency is low, since multiple passes are needed before a drop can be made. As a result, the STV loop is viewed as the choke point of the system while, in reality, its efficiency is low due to inefficiencies in the interfacing systems. The solution is not achieved by adding STVs to the loop, but by finding the external conditions that are causing the reduced efficiency. The true system efficiency in this case is measured not by how many guided vehicles carry products, but by how many successful single-pass moves are achieved.
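As a rough illustration of the contrast described above, the sketch below places the typical static sizing calculation next to a single-pass efficiency metric computed from a hypothetical move log. All numbers and the log format are assumptions for illustration only; they do not represent any specific installation or vendor formula.

    import math

    # --- Static sizing, as commonly done on paper (illustrative numbers) ---
    inbound_rate_per_hr = 240            # totes arriving per hour
    load_time_s, unload_time_s = 12, 12  # handling time per move
    loop_travel_time_s = 90              # average travel time for one full loop

    cycle_time_s = load_time_s + loop_travel_time_s + unload_time_s
    moves_per_stv_per_hr = 3600 / cycle_time_s
    stv_needed = math.ceil(inbound_rate_per_hr / moves_per_stv_per_hr) + 1  # +1 "cushion"
    print(f"Static estimate: {stv_needed} STVs")

    # --- Single-pass efficiency from an (assumed) move log -----------------
    # Each record: number of passes a loaded STV made before it could drop.
    passes_per_delivery = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4]   # sample data

    single_pass_rate = sum(1 for p in passes_per_delivery if p == 1) / len(passes_per_delivery)
    avg_passes = sum(passes_per_delivery) / len(passes_per_delivery)
    print(f"Single-pass deliveries: {single_pass_rate:.0%}, average passes: {avg_passes:.1f}")
    # A low single-pass rate with fully loaded vehicles points to blocked drop
    # stations, not to an undersized STV fleet.

The static estimate answers how many vehicles the loop could need in isolation; the single-pass metric is the kind of measure that reveals whether the loop itself, or the systems feeding and draining it, is the real constraint.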

The list of examples is endless, and systems, although designed with the best intentions in mind, fail to meet expectations when implemented as part of a larger system. Two key problems are apparent. The first is related to system interaction, where the initial system design does not take into consideration the impact of other systems (both human and automated). The second is driven by the fact that all systems evolve, and the initial logic and interaction criteria will constantly change.

 

Common Methodology:
To analyze such systems, companies have opted for a multitude of tools that have fallen short of delivering the analysis required to study the interaction among all connected systems. In this section, three common options are explored and their drawbacks are identified.

 

  1. Vendor Specifications and Good Visuals
    Vendors and equipment manufacturers build their systems based on a set of provided and defined metrics. During sub-component presentation and delivery, vendors provide stunning 3D graphics, performance metrics, and validation that meet, or sometimes exceed, the initial design requirements. Since system acceptance criteria are based on individual sub-component performance, the process proceeds as planned. Unfortunately, when the newly designed sub-component is integrated into the overall system, performance suffers and OEE drops.

    3D graphics are a great environment in which to showcase a product, but during the vendor presentation they are used to present the equipment under optimal working conditions. Although this is a correct representation of the equipment, it depicts a scenario that seldom occurs once the sub-component is integrated into the larger system. Due to their life-like resemblance, 3D graphics tend to hide potential problems and emphasize the internal details of the sub-component.
     
  2. Spreadsheet-Based Analysis
    Spreadsheets are great tools for quick number crunching and static analytics. However, they do not have the capability to analyze behavior over time or to analyze transition behavior. Although they can project the future of the operation through mathematical formulas, they do not accurately provide a view of component interaction with the detailed time-based analytics required (a brief sketch contrasting a static calculation with a time-based replay of the same operation appears at the end of this section).

 

  3. Traditional Simulation Tools
Traditional simulation tools have been used to analyze systems in warehousing, logistics, and manufacturing. Most of these tools are based on proprietary coding languages and visualization. Although they do provide some of the required analytics, traditional simulation tools suffer from a number of key limitations:
a) Speed of model building
Traditional simulation tools are normally code-heavy, meaning a computer program needs to be written in C++, C#, VB, or a proprietary language and then compiled in order to build the model. Some tools have developed a visual code-generation environment to get the model started, yet expanding the model into anything useful requires going into the code.

The drawback of an extended model development timeframe is that, by the time the model is complete, it represents in most cases an outdated scenario that has already been modified. Therefore, any analysis performed on the model is no longer valid.

b) Simulation interaction
Traditional tools provide no interaction with the simulator. In other words, the simulation runs on the engine and the results are provided after the run is completed, along with the animation. Some tools provide animation and limited results during the model run, yet offer no ability to interact with the model while it runs. This is an inherent design limitation of static tools due to their reliance on code compiled before the model is run.

c) Analysis
Static simulation tools provide analysis and reporting after the simulation run has completed, with limited data feedback during the run. In addition, special care must be taken to identify the metrics required from the simulation run; if a metric was not collected and coded in, the user must rerun the simulation.

d) Connectivity
Static simulation tools, in general, preload the system with data before the simulation starts. This behavior limits the amount of data that can be imported and the ability of the model to have multiple data segments. In addition, pulling in model state information or running actual scenarios based on system data is not intuitive and requires extensive model changes.
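To make the time-dependence argument from item 2 concrete, the following sketch compares a static, spreadsheet-style utilization figure with a simple time-based replay of the same workload. The single-station setup, arrival pattern, and service time are assumptions chosen for illustration, not a model of any particular tool.

    import random

    random.seed(1)

    SERVICE_MIN = 4.0                       # minutes per job, fixed
    arrivals = sorted(random.uniform(0, 480) for _ in range(100))  # 100 jobs in one shift

    # Static (spreadsheet-style) view: average load over the shift.
    utilization = (len(arrivals) * SERVICE_MIN) / 480
    print(f"Static utilization: {utilization:.0%}")   # about 83%, looks comfortable

    # Time-based view: replay arrivals in order and track waiting.
    server_free_at = 0.0
    waits = []
    for t in arrivals:
        start = max(t, server_free_at)       # wait if the station is still busy
        waits.append(start - t)
        server_free_at = start + SERVICE_MIN

    print(f"Average wait: {sum(waits)/len(waits):.1f} min, worst wait: {max(waits):.1f} min")
    # The same average load produces long transient queues whenever arrivals bunch up,
    # which a static average cannot reveal.

Dynamic simulation extends this same time-based view to many interacting stations, vehicles, and people, which is where spreadsheets and static formulas stop scaling.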

Dynamic Simulation® and Process Improvement
As a key component of any process improvement initiative, proposed changes, modifications, and design ideas must be validated within the context of the operation. Adding a new piece of equipment or modifying a process to increase throughput without analyzing the impact on upstream and downstream operations is a clear path to problems down the road.

From a lean perspective, Kaizen events are prioritized based on their impact on the overall flow. Making a cell faster will not yield any measurable benefits if its interconnected cells are not optimized to benefit from the added production. To properly identify and quantify the effect of change, a study must be made to show its impact on the overall system.

Dynamic Simulation® has been proven to provide the proper environment to effectively analyze and prioritize design changes within the time constraints of the operation. Valid dynamic models are built in record time, allowing unprecedented interaction with the virtual operation. Moreover, once the model is built, making adjustments and modifying constraints is an intuitive task that can be performed by all members of the team. Analysis of the change, lean metric computation, and even model visualization are automatically computed and created by the dynamic simulator.

Furthermore, the dynamic behavior is not limited to the system constraints. Dynamic models can automatically adapt to changes, add or remove equipment, increase the number of lanes, and modify rack locations and capacity at any point during the simulation run. Analyzing the roll-out effect of new equipment is now possible through a simple time-schedule definition of when each piece of equipment will go online.

From a scheduling perspective, dynamic models can provide forward schedules and potential delivery delays. Since the model is connected to actual data sets (ERP, MRP, WMS, etc.), it is capable of loading the current state of the operation, future orders or sequences, maintenance requirements, and changeover information to predict and create an accurate representation of the future of all systems modeled.
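The sketch below illustrates the forward-scheduling idea in its simplest form. The order list, field names, run rates, and changeover time are all hypothetical; in practice the equivalent data would be pulled from the connected ERP/MRP/WMS systems rather than hard-coded.

    from dataclasses import dataclass

    @dataclass
    class Order:
        order_id: str
        product: str
        qty: int
        due_hr: float          # due time, hours from "now"

    RUN_RATE = {"A": 120, "B": 90}      # units per hour, assumed per-product rates
    CHANGEOVER_HR = 0.5                 # assumed changeover when the product switches

    orders = [                          # would normally come from ERP/MRP/WMS
        Order("SO-101", "A", 600, 6.0),
        Order("SO-102", "B", 300, 8.0),
        Order("SO-103", "A", 480, 10.0),
    ]

    clock, last_product = 0.0, None
    for o in orders:
        if last_product is not None and o.product != last_product:
            clock += CHANGEOVER_HR                  # sequence-dependent changeover
        clock += o.qty / RUN_RATE[o.product]        # processing time
        status = "on time" if clock <= o.due_hr else f"late by {clock - o.due_hr:.1f} h"
        print(f"{o.order_id}: finishes at {clock:.1f} h ({status})")
        last_product = o.product

A dynamic model generalizes this single-line projection across all modeled resources and constraints, and because it loads the current state at run time, the same model can be rerun each morning against live data.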

Dynamic Simulation® Benefits
Dynamic Simulation® tools provide a number of key benefits that help achieve meaningful analysis in record time.

  1. Model Building
    A dynamic simulator's model-building environment provides constructs that allow users to define the model without relying on code and without generating code in the background. The resulting environment connects the user interface directly to the simulation engine, allowing users to quickly modify scenarios during the simulation run. Moreover, with these integrated tools, dynamic simulators do not require programmers to build models; instead, they enable process improvement teams, designers, and operators, who have more knowledge of the system's limitations and behaviors, to build models and perform analysis. Model building is now done in days and weeks instead of months, and simulation benefits can be achieved at a much faster pace.
  2. Interaction
    Dynamic simulators allow model constraints to change during the run through either user interaction or external data systems. As an example, users can run the model, visualize the system, and play the what-if game live while the rest of the system reacts. Using the STV example, users can dynamically create the scenarios needed, vary the number of STVs available, and modify the constraints on inbound and outbound spurs while visualizing the effect in a game-like interactive environment.
  3. Connectivity
    As with the interactive environment, dynamic simulators have an integrated ability to connect to data systems at run time and do not preload the data into their environment. This behavior achieves two key benefits:
    • There is no limit to the amount of data or the number of data sources available to the simulator. In other words, the system can connect to existing ERP, MRP, WCS, and WMS systems and pull in the current state, previous behavior, and future orders in order to perform a simulation analysis without requiring any model changes. In addition, since the model is dynamic, any model changes can be pulled in dynamically from the external data set.
    • The developed model can be used for daily scheduling and analysis activities. Since the model is already developed and connected to actual systems, a daily run can provide insight into the day's outlook and allow managers to be better prepared to handle any potential problems that may arise.
  4. Analysis and Visualization
    Dynamic simulators provide most of the required analysis values for the model. Lead times, utilization, efficiency, throughput rates, and cycle times are all integral parts of the model and can be displayed or analyzed as part of the model view. Users have the ability to add analysis parameters on the fly during the simulation. Moreover, scenario comparisons and detailed graphs of system fluctuations are available at any time during or after the simulation run.

    From a visualization perspective, the animated environment of a dynamic simulator is more effective because it represents the current state of the simulation engine. What is animated on the screen is what the simulation engine is doing, including how current constraints are impacting the system behavior and analytics.
  5. Lean Metrics and VSM
    Dynamic simulators inherently have most of the lean metrics embedded in their models. Analysis options such as a spaghetti diagram of the operation, a heat map of the warehouse, or hands-on time and average resource touches are readily available. Value stream maps are automatically computed by the simulator and updated on the visual display. Since the computed value stream map changes with the model constraints, each model state has an automatically generated value stream representation based on the active constraints. A minimal sketch of deriving metrics like those in items 4 and 5 from a simulation event log follows this list.
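As referenced in item 5, the sketch below shows how metrics of the kind listed in items 4 and 5 can be derived from a simulation event log. The log format and figures are invented for illustration; a dynamic simulator computes equivalents of these continuously during the run.

    from collections import Counter

    # Assumed event log: (item_id, location, enter_time_min, exit_time_min)
    events = [
        ("T1", "Receiving", 0, 4), ("T1", "STV-Loop", 4, 9),  ("T1", "Pack", 9, 15),
        ("T2", "Receiving", 2, 7), ("T2", "STV-Loop", 7, 16), ("T2", "Pack", 16, 21),
        ("T3", "Receiving", 5, 9), ("T3", "STV-Loop", 9, 20), ("T3", "Pack", 20, 27),
    ]
    horizon_min = 30.0

    # Lead time per item: first entry to last exit.
    lead = {}
    for item, _, t_in, t_out in events:
        first, last = lead.get(item, (t_in, t_out))
        lead[item] = (min(first, t_in), max(last, t_out))
    lead_times = {k: b - a for k, (a, b) in lead.items()}

    # Location busy time (utilization) and touch counts (heat map / spaghetti input).
    busy = Counter()
    touches = Counter()
    for _, loc, t_in, t_out in events:
        busy[loc] += t_out - t_in
        touches[loc] += 1

    throughput_per_hr = len(lead_times) / horizon_min * 60
    print("Lead times (min):", lead_times)
    print("Utilization:", {loc: f"{busy[loc] / horizon_min:.0%}" for loc in busy})
    print("Heat map (touches per location):", dict(touches))
    print(f"Throughput: {throughput_per_hr:.0f} items/hr")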

 

Summary

Dynamic simulators provide the proper tools and analytics required to analyze and troubleshoot today's complex systems. As new constraints are identified, designers update their dynamic models and use the interactive reporting and visualization to define and validate new solutions. Dynamic Simulation® has also been used to design new systems and to play the what-if game in order to identify the proper design limits for each sub-component.

With data connectivity and tracking systems becoming the norm, dynamic simulators are finding their way into the daily routine of designers, managers, operators, and process improvement specialists. Due to their open design, dynamic simulators can interact with tracking systems (RFID, barcode, GPS, etc.) and machine PLCs to dynamically create a visual picture of the present, accurately replay past events for analysis, and forecast the future of the operation.

Dynamic simulators, coupled with integrated dashboards and alerts, provide a richer and more complete environment that rivals the analysis and forecasting of current MES systems on the market today. Moreover, the ROI of dynamic simulators far exceeds their implementation cost, as they optimize the current state while providing an efficient analysis path for all future changes to the operation. CreateASoft Inc. is the developer of the Simcad Pro® Dynamic Simulator and the SimTrack® Dynamic Visibility tool. More information can be found on our website, www.createasoft.com.

About the Author
As the co-founder of CreateASoft, Inc., Hosni has been involved in process improvement and simulation for the past 20 years. He has applied his process improvement expertise to multiple industries, including healthcare, to increase efficiency and reduce operating risk. As the holder of several patents in the fields of dynamic simulation and tracking, Hosni is a sought-after expert in these fields and has presented multiple papers on process improvement using simulation and implementing lean concepts. With his dedication to the use of technology to improve efficiency and output, he has positioned CreateASoft as a leader in the process improvement industry.
