About Shaun Snapp

Shaun Snapp is a long-time supply chain planning software consultant and the Managing Editor at SCM Focus. He focuses on both SAP APO and best-of-breed applications.

How MCA Solutions Should be Remembered

What This Article Covers

  • What are some of the important MCA Solutions contributions to supply chain planning software?
  • What will happen to the MCA product?
  • What can other companies learn from MCA?


Servigistics recently acquired MCA Solutions. This is an important development, as the two companies were the top two software vendors in the service parts planning space. A number of articles will certainly cover the strategic angle of what this merger means for the service parts planning software market. In this article, however, I want to focus on some of the important contributions for which MCA Solutions should be remembered.

My Exposure to MCA Solutions

I first attended MCA training in 2007, just a month or so after my first introduction to the company. After attending training at their headquarters in Philadelphia, I worked on an MCA implementation for a year. During that year I learned quite a bit about their application: I used their software, read through their documentation, and interacted with MCA consultants. My interaction with MCA’s people and product was how I first became educated in multi-echelon inventory optimization (MEIO), a topic on which there is also an SCM Focus blog. I also have a book coming out that highlights several important features in MCA’s product that help demonstrate MEIO concepts (MCA screenshots are included in the book, but they will now be described as Servigistics screenshots).

What Will Happen to MCA’s Application? 

The MCA Solutions product will eventually be discontinued, and some of its functionality will be ported to Servigistics’ service parts planning product. Because the MCA application will not exist as a product far into the future, I wanted people who have not worked with it to know some of the important contributions of MCA Solutions.

A Sampling of Their Ideas and Contributions

MEIO Innovation

MCA’s product was one of the first MEIO applications. The company was founded by Morris Cohen, a highly regarded academic and sometime consultant, who, along with the people he brought in, was able to implement in a commercial product something that had previously been primarily of academic interest.

A High Degree of Control Over the Supply Plan

MCA developed one of the most powerful supply planning applications, in either service parts planning or finished goods planning, that I have used (MCA’s solution also performed forecasting in a way specifically customized for service parts). A few of the reasons that MCA’s application was so powerful are listed below:

  1. By leveraging MEIO, which is more powerful and controllable than other supply planning methods (MRP/DRP, heuristics, allocation, and cost optimization), the application was able to control the supply plan very precisely.
  2. The application interface was compact, with easy access to different screens.
  3. The application’s parameter management was one of the easiest to review and change of any application I have worked with. Parameter maintenance is one of the most underrated areas of supply chain application usability, and a major maintenance headache with many applications; MCA, however, made it look easy to develop a straightforward way to adjust configuration data. It was actually very simple, and I have wondered several times why more companies don’t copy it.

MCA’s solution had an excellent combination of a mathematically sophisticated backend with an easy-to-use frontend. This is one of the main goals of advanced supply chain planning software generally, and it is infrequently accomplished.

Alerts and Recommendations in One View

MCA developed a capability I had never seen before: the Network Proposed View. This view, which is shown in the upcoming book, sorted the recommendations by their contribution to service level. It combined a straight analytical view of the application’s recommendations (Procurement Orders, so-called “New Buys”; Repair Orders; and Stock Transfers, so-called Transshipments and Allocations) with an alert system, in that it told planners where to focus. It also required no configuration; it was literally an out-of-the-box capability.
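The essence of that view can be sketched in a few lines: given a set of recommendation records (the field names below are hypothetical illustrations, not MCA’s actual schema), sort them by their contribution to service level so planners see the biggest wins first.

```python
# Hypothetical recommendation records; the real view's fields differ
recs = [
    {"type": "New Buy",        "part": "P-100", "service_gain": 0.012},
    {"type": "Stock Transfer", "part": "P-205", "service_gain": 0.034},
    {"type": "Repair Order",   "part": "P-317", "service_gain": 0.021},
]

# The essence of the view: surface the biggest service-level wins first
recs.sort(key=lambda r: r["service_gain"], reverse=True)

assert [r["part"] for r in recs] == ["P-205", "P-317", "P-100"]
```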


Mastery of Redeployment

MCA had mastered redeployment, something which all service parts planning clients need, and many finished goods companies also need (but often refuse to admit; the comment on this topic is often “we just need to improve our forecast, and then we won’t need to redeploy”). MCA’s redeployment was also highly customizable, and could be very specifically tuned.

Simplified Simulation

MCA’s application was an excellent simulation environment. It displayed the results of two planning runs right next to each other in the user interface. This allowed a planner to keep one result, make adjustments, and rerun the optimizer with new service level or inventory parameters. The planner could then perform a direct comparison between the old and new runs. If the new run was not an improvement, a few changes could be made, the optimizer rerun, and the simulation overwritten. This provided simulation capability in the same screen as the active version, and made it very easy to use. This is another area which many vendors have a hard time making user-friendly, and which MCA had mastered.

Optimizing Service Level or Inventory Investment

The MCA MEIO optimizer could be run bi-directionally. That is, it could maximize service level while capping inventory investment, or minimize inventory investment while meeting a service level target. While inventory optimization is generally thought of as controlling service levels, by capping inventory investment MCA allowed companies to stock their network based upon their budget.
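The bi-directional idea can be sketched with a simple greedy marginal analysis. This is a simplification of what a commercial MEIO optimizer actually does, and the parts and figures are hypothetical: buy, one unit at a time, whichever part yields the most service per dollar, and stop at either a budget cap or an aggregate service target.

```python
import math

def fill_rate(stock, lam):
    """P(lead-time demand < stock): fraction of demands met off the shelf,
    assuming Poisson lead-time demand with mean lam."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(stock))

def optimize(parts, budget=None, target=None):
    """Greedy marginal analysis, run in either direction: cap the inventory
    investment (budget) or stop at an aggregate service target."""
    assert budget is not None or target is not None
    stock = {p: 0 for p in parts}
    spent = 0.0
    total_demand = sum(d["demand"] for d in parts.values())
    while True:
        # demand-weighted aggregate fill rate across all parts
        agg = sum(d["demand"] * fill_rate(stock[p], d["demand"])
                  for p, d in parts.items()) / total_demand
        if target is not None and agg >= target:
            break  # service target met: stop spending
        best, best_gain = None, 0.0
        for p, d in parts.items():
            if budget is not None and spent + d["cost"] > budget:
                continue  # cannot afford another unit of this part
            # fill-rate gain per dollar from one more unit of this part
            gain = (fill_rate(stock[p] + 1, d["demand"]) -
                    fill_rate(stock[p], d["demand"])) * d["demand"] / d["cost"]
            if gain > best_gain:
                best, best_gain = p, gain
        if best is None:
            break  # budget exhausted
        stock[best] += 1
        spent += parts[best]["cost"]
    return stock, spent

# Hypothetical two-part network: stock to a 500-dollar budget cap
parts = {"seal": {"demand": 2.0, "cost": 50.0},
         "pump": {"demand": 0.5, "cost": 400.0}}
stock, spent = optimize(parts, budget=500.0)
```

Calling `optimize(parts, target=0.9)` instead runs the same greedy loop in the other direction, spending until the demand-weighted fill rate reaches 90%.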

Clear and Highly Educational Documentation

MCA’s documentation on its solution was top-notch. Through accumulating research papers, books, and other sources, I have a large library of MEIO documentation, and MCA’s Principles of Operation in particular may be my favorite MEIO document. In fact, I still frequently refer to MCA documentation when I have a question about how an MEIO or service parts concept can be implemented in software. MCA had both functional and technical documentation, and all of it was extremely helpful and written with a high attention to detail. Many vendors could learn from how MCA documented their product.


Viewed from any angle, these are important contributions, and this is not even the full list.
Things change. However, I will miss MCA Solutions. They were a true innovator, with a great vision, and they executed on this vision extremely well. MCA showed the benefits of focusing on one area. Many of their consultants were not only experts in MCA software; they also knew service parts planning inside and out. Their software and their people got me to think differently about a variety of topics. While MCA did not exist as an independent entity for very long (although software companies tend to have shorter lives than most other companies), their innovation should be remembered.

Fill Rate Versus Backorder as a Service Measurement


The vast majority of articles on this website that discuss service level tend to focus on fill rate, as this is the most popular service level measurement method in business. However, the majority of early work on inventory optimization and multi-echelon planning, which began in the late 1950s and now drives the best-of-breed service parts planning software applications, was in fact designed around backorders. This is because the research was primarily paid for by the Air Force and carried out by the RAND Corporation, and the focus was squarely on solving the problem of managing military service parts networks. Therefore, it is interesting to compare and contrast two quotations from research papers that focused on minimizing backorders. The first is from Craig Sherbrooke and his METRIC (an acronym for Multi-Echelon Technique for Recoverable Item Control) paper, written in 1966. This is how Sherbrooke explains his use of backorders over fill rates in his paper.

Fill rate — defined as the fraction of demands that are immediately fulfilled by supply when the requisitions are received — concentrates nearly all stock at the bases. The result is that when a non fill occurs, the backorder lasts a very long time. Similarly fill rates behaves improperly in allocating investment at a base when the item repair times are substantially different. Consider two items with identical characteristics except that one is base-reparable in a short time, and the other is depot reparable with a much longer repair time. Assume that our investment constraint allows us to purchase only one unit of stock. In that case, the fill rate criterion will select the first item, and the backorder criterion the second.

The fill rate possesses an additional defect. A fill is normally defined as the satisfaction of a demand when placed. But if we allow a time interval T to elapse, such as a couple of days, on the grounds that some delay is acceptable, the policy begins to look substantially different. As longer delays are explored, the policy begins to resemble the minimization of expected backorders.

In summary, the backorder criterion seems to be the most reasonable. The penalty should depend on the length of the backorder and the number of backorders; linearity is the simplest assumption. This is the criterion function most often employed in inventory models. – Craig Sherbrooke

Sherbrooke explains that he considers backorders superior for his purposes for the following reasons:

  • Fill rates tend to concentrate stock at the bases (bases in Sherbrooke’s papers would correlate to DCs in industry-speak, with the depot being the regional DC, or RDC).
  • Fill rates measure satisfaction only at the point of initial delay, and do not measure how late a fulfillment actually occurs.

Therefore, as part of METRIC, Sherbrooke designed a penalty which multiplies the length of the backorder by the number of backorders.
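Sherbrooke’s two-item argument can be verified with a small calculation. The sketch below assumes Poisson lead-time demand and hypothetical pipeline quantities; it shows the fill rate criterion preferring the fast-repair item while the backorder criterion prefers the slow one.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def fill_rate(s, lam):
    """Fraction of demands filled immediately at base-stock level s."""
    return sum(poisson_pmf(k, lam) for k in range(s))

def expected_backorders(s, lam, kmax=60):
    """E[(X - s)+]: expected units on backorder, X ~ Poisson(lam)."""
    return sum((k - s) * poisson_pmf(k, lam) for k in range(s + 1, kmax))

# Two hypothetical items, identical except for their repair pipelines:
fast = 0.2  # mean units in a short base-repair pipeline
slow = 2.0  # mean units in a long depot-repair pipeline

# With one unit to allocate, the fill rate criterion picks the fast item...
assert fill_rate(1, fast) > fill_rate(1, slow)

# ...but the backorder criterion picks the slow item, whose single unit
# removes far more expected backorders (1 - e^-lam of them).
gain_fast = expected_backorders(0, fast) - expected_backorders(1, fast)
gain_slow = expected_backorders(0, slow) - expected_backorders(1, slow)
assert gain_slow > gain_fast
```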

Leanard Laforteza gives similar reasoning for selecting backorders as a measurement in his paper designing a multi-echelon system for supplying Marine military deployments.

Fill rate is the percentage of demands that can be met at the time they are placed, while backorders are the number of unfilled demands that exist at a point in time. In commercial retail, if customer demand cannot be satisfied, a customer either goes away or returns at a later time when the item is restocked. The first case can be classified as lost sales, while the second case creates a backorder on the supplier or manufacturer. In military applications, especially in most critical equipment, any demand that is not met is backordered. The backorder is outstanding until a resupply for the item is received, or a failed item is fixed and made available for issue. These two principle measures of item performance – fill rate and backorders – are related, but very different. Commercial retailers are more interested in fill rate than in backorders because fill rate measures customer satisfaction at the time each demand is placed. Not only is fill rate easy to calculate, but it also helps retailers form a picture of how well they are meeting customer demand. Experience may tell them that a 90% fill rate on an item is not acceptable and will create customer complaints. On the other hand, backorders are not as easy to compute as fill rate. Unlike commercial retail business, the military is not concerned with lost sales. The military measures performance not in terms of sales, but in terms of equipment availability.

In terms of supply support measurement, we recommend tracking backorders. Although fill rate tends to have clearer meaning to commercial suppliers, the rate does not have the same meaning in military applications. Using the concept of backorders, a unit can determine the status of its supply support not just when the order is placed, but up to the time the item was received. – Leanard D. Laforteza

Here Laforteza does a good job explaining why backorders are more relevant for military applications than fill rates. However, as the greater market for MEIO applications is civilian, vendors added fill rates, and fill rates are now the dominant method in MEIO implementations. MCA Solutions, a service parts planning vendor with a substantial military client base, can measure service level by fill rate or by availability (i.e., the uptime of equipment). However, while it does not measure service by backorders as do Sherbrooke’s METRIC or Laforteza’s approach, MCA allows for the flexible setting of backordering for different locations. MCA allows for the following settings:

  1. All locations to be backorderable
  2. Only the root locations to be backorderable
  3. No locations to be backorderable (which is the default).

MCA describes its management of backorders in the following way:

A Location is called backorderable if the unmet demand at that Location gets backordered at that Location and waits until the inventory is available at that Location. A Location is not backorderable (also referred to as lost-sales) if the unmet demand is passed to another Location or outside the supply chain. In backorderable models, preference is given to destinations that do not have enough inventory position to meet their child Location needs. – MCA Solutions
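These three settings can be modeled directly. The sketch below is an illustrative data model, not MCA’s actual configuration API; the mode names are assumptions.

```python
from enum import Enum

class BackorderMode(Enum):
    """The three backorderability settings described above (names are
    illustrative; MCA's actual configuration labels may differ)."""
    ALL_LOCATIONS = "all"
    ROOT_ONLY = "root_only"
    NONE = "none"          # lost sales everywhere -- the default

def is_backorderable(location_is_root: bool, mode: BackorderMode) -> bool:
    """Does unmet demand wait at this location, or pass on (lost sales)?"""
    if mode is BackorderMode.ALL_LOCATIONS:
        return True
    if mode is BackorderMode.ROOT_ONLY:
        return location_is_root
    return False

# The default: no location backorders, so unmet demand is lost sales
assert is_backorderable(True, BackorderMode.NONE) is False
# Root-only: the root location holds backorders, child locations do not
assert is_backorderable(True, BackorderMode.ROOT_ONLY) is True
assert is_backorderable(False, BackorderMode.ROOT_ONLY) is False
```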


The service level measurement must fit the application. The early MEIO research papers were centered around military applications, and thus used backorders; the backorder measure, which is often the number of backorders times the average backorder duration, served as a common service level measure. Civilian applications, however, generally require fill rate as the service level measure.


“METRIC: A Multi-echelon Technique for Recoverable Item Control,” C.C. Sherbrooke, RAND Corporation, 1966

“Inventory Optimization of Class IX Supply Blocks for Deploying in U.S. Marine Corps Combat Service Support Elements,” Leanard D. Laforteza, Naval Postgraduate School Monterey, California, June 1997

Principles of Operation, MCA Solutions, 2007

How SAP’s TCO Compares for SAP APO Service Parts Planning

Background and Motivation for the Research

I am often told the reasons for decisions to go with software that I am familiar with, and the logic often does not seem to make sense. In fact, the entire process of repeatedly selecting expensive, lower-functionality software from the major monopoly vendors turns out not to be based on the main comparison points of software. The main comparison points of enterprise software are TCO and the application’s functionality. However, companies primarily look for solutions from vendors they are already working with, and then allow the issue of integration to play a primary decision-making role. They therefore largely ignore TCO (most tend to make decisions without knowing the estimated TCO), focusing instead on the initial software acquisition cost, and de-emphasize the functionality comparison between applications. A primary way this is done is by having executives, who don’t work with enterprise applications, perform the functionality evaluation through a controlled demo environment, while marginalizing the users and removing them from the decision-making process. This allows applications with weak or unreliable functionality to compete with vendors that have excellent and reliable functionality.

The Basis for Estimation

I often visit clients post go-live on SAP APO and have developed a good sample of companies. I know the typical length of an APO implementation, as well as the costs of maintaining APO. I also work with a number of best-of-breed vendors. Because I had access to information from several necessary sources, and was able to make time estimates based upon personal experience, I decided to perform a total cost analysis between SPP and a best-of-breed service parts planning vendor. This is just the service parts planning analysis. Here are links to the others.


The Scope of the Analysis

This analysis is limited to the major planning applications. I have developed cost estimates for APO modules versus best-of-breed applications in the areas of which I have first-hand knowledge: demand planning, supply planning, service parts planning, and production planning and scheduling. I do not perform any similar analysis for other popular enterprise areas such as ERP or analytics.

Why the Best-of-Breed Vendor is Not Named

I am not trying to recommend any one vendor in this analysis, so naming the vendor I used would be a distraction. The main point is that SAP’s TCO is in an entirely different cost category. Essentially any best-of-breed vendor I selected would compare similarly; some will be a bit more expensive and some a bit less so, but no best-of-breed vendor will come anywhere close to SAP’s TCO.

Why SAP License Costs are Set to Zero

SAP license costs are difficult to determine. There is little doubt they have some of the highest average license costs in the enterprise market, but their price fluctuates greatly. In addition, the costs may be bundled with other software. In terms of publicly available rates, SAP has a government price sheet. However, the price sheet is based on an arcane point system that is clearly designed to prevent anyone from independently calculating a price, while meeting the US government requirement that a price sheet exist. I worked with this sheet for around an hour and a half and then realized it was not meant to be deciphered. SAP license costs are shrouded in mystery.

However, when I performed the analysis, I found SAP’s TCO to be so high that even without any license costs or SAP support costs (which are based upon the license costs), the best-of-breed vendors were still easily beating SAP in TCO in all the application areas. Secondly, any article which does not rank SAP #1 in whatever is being compared is open to immediate criticism. (In fact, the easiest way to have a soft life in IT is to skip any analysis and declare SAP the victor. In doing this, you are generally not required to provide any evidence, but simply say something like “SAP supports best practices.”) So anything that shows SAP’s TCO being higher than everything else will be considered biased. Therefore, to counteract this concern, I decided to tilt the playing field in SAP’s direction by making all of its license costs free. This analysis thus assumes you never had to pay anything for SAP’s software or support.

Doing this does one other thing: it emphasizes the point that the license cost should not be the main focus of the comparison, and that other costs predominate in the TCO. Free software, therefore, can end up not being the best decision.

Analysis Assumptions

There are a number of assumptions in this analysis. One of the most important is the duration of the implementation, which is one of the trickier things to set. Software companies tend to de-emphasize this number, which is why I had to use my experience to adjust the results to what I have seen. SAP implementations take the longest of any enterprise vendor’s, and there are very good reasons for this, which I get into later in this post. However, for both SAP and the best-of-breed vendor, I have included a range, and the estimated implementation TCO for each is based upon the average of that range. No perfect analysis of this type can be created, because of all the different variables. However, not being able to attain perfection should not get in the way of attempting estimation. One way or another, these types of analyses must be performed, and I always think it’s better to take a shot at estimation than to throw one’s hands up and say it’s unknowable.
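To make the assumption structure concrete, here is a minimal sketch of how such a midpoint-of-range estimate can be computed. Every figure below is a hypothetical placeholder, not a number from the actual analysis.

```python
def tco_estimate(impl_months, blended_rate_month, support_fte, fte_cost_year,
                 years=5, license=0.0):
    """Midpoint-of-range TCO sketch: implementation consulting plus ongoing
    support headcount over a fixed horizon. All inputs are hypothetical."""
    impl_mid = sum(impl_months) / 2            # average of the (low, high) range
    implementation = impl_mid * blended_rate_month
    support = support_fte * fte_cost_year * years
    return license + implementation + support

# Illustrative placeholder figures, not the actual figures from the analysis:
sap = tco_estimate(impl_months=(12, 24), blended_rate_month=200_000,
                   support_fte=4, fte_cost_year=150_000)   # license set to 0
bob = tco_estimate(impl_months=(4, 8), blended_rate_month=150_000,
                   support_fte=1.5, fte_cost_year=150_000,
                   license=500_000)

# Even with the SAP license zeroed out, duration and support dominate
assert sap > bob
```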

Total Cost of Ownership

According to this estimate, SAP has a higher total cost of ownership than the best-of-breed application I compared it against. Having worked in SAP as long as I have, I intuitively knew it would be higher, but even I was surprised by how much higher it was. Here are some of the reasons.

SAP’s Implementations take Significantly Longer than Best of Breed Implementations

  1. SAP’s software is very difficult to understand and is highly encapsulated. SAP has so many settings that allow the system to behave in different ways that extensive time must be spent both understanding the settings and understanding the interactions between them. The statement that SAP is filled with “best practices” is actually incorrect, because a best practice approach prescribes that the system define specific ways of doing things, when in fact SAP follows a “comprehensive approach,” which includes a seemingly unlimited number of ways of configuring the system.
  2. Of all the applications I work with, none approach SAP in the number of areas of the application that don’t work. This includes functionality that never worked, beta functionality that is still listed in the release notes as functional, and functionality that did work at one time but was broken by an upgrade or other cross-application factor. In fact, no one even comes close. SAP’s marketing strategy is to cover functionality as broadly as possible so they can always say “we have it.” This same development approach spans applications; I observe the same thing in different product lines such as SAP BW, and it is one reason SAP’s TCO is probably headed further up in the future. The result is product management writing checks that development cannot cash. Testing each area of functionality to ensure it works (part of what I do, by the way) imposes more work and more time on the implementation.
  3. The large consulting companies have built their business model around SAP and extend the time of SAP implementations to maximize their billing hours. SAP made a strategic decision quite some time ago to let the consulting companies control the speed of implementation in order to be recommended by the major consulting firms, regardless of the fit between the application and the client’s need.

SAP Resources Are Some of the Most Expensive in IT

  1. There is nothing controversial about this statement; it is well known in IT circles.

SAP Has the Highest Manpower Support Requirement

  1. Getting back to the topic of application complexity and fragility, SAP simply takes more resources to maintain. Something I recently had to work with was a method that was part of functionality that did work, but stopped working as of the SCM 7.0 release. First, the resulting problem needed to be diagnosed and explained (we did not learn of the broken functionality directly, but perceived it through system problems). Once discovered, this functionality had to be changed to a method that did work, and the business had to invest time creating a new policy to work with the changed functionality. This was, of course, expensive and time-consuming.

SPP in Particular

Of the four planning areas for which I have created TCO comparisons, SPP is a special case, as it is an immature product with significant needs for on-site development. For this reason I have given it the longest implementation timeline of any SAP planning product. I also estimate its support load to be higher than the other SAP planning products’, because of maturity issues combined with the extra effort to support the custom development that must accompany any SPP project.

Secondly, SPP projects are very risky due to SPP’s lack of maturity. The long timeline included here therefore does not even account for the possibility of walking away from the implementation at some point during the project.

Integration is Overrated as a Cost

The cost differences between SAP and a best-of-breed application are enormous, and the frequently used argument that the company wants an integrated solution cannot reasonably be used to justify a decision to select SAP. I have not broken out integration separately, as it is built into the consulting costs, but even an adapter costing a few hundred thousand dollars would not tip the TCO in SAP’s favor. Also, the maintenance burden of the SAP CIF (the middleware that connects R/3 to APO) is vastly underrated. From my experience developing custom adapters for connecting best-of-breed planning applications to SAP, I have become firmly convinced that the cost of maintaining the CIF is more than the cost of developing and maintaining a custom adapter. The CIF, which connects APO to SAP ERP, is unacceptably problematic. For more on the CIF, see this post.


Implication for ROI

According to most publicly available studies, around half of projects have a positive return on investment. However, this greatly depends upon the TCO of the solution and the functionality within the application that can be leveraged. SAP planning modules are so expensive compared to alternative solutions, and deliver a lower functionality level than best-of-breed solutions, that as a natural consequence they have a lower ROI and a lower percentage of positive-ROI projects. The perception in industry, however, is just the opposite: that SAP is the safe vendor to choose.

Outsourced Support to Reduce Costs?

Companies now often outsource a portion of their support to India, so one might imagine that the support costs listed here could be reduced. This is another frequently held assumption, but it does not prove out in reality. A good rule of thumb is that while India-based resources are about one-fourth as expensive, it takes more than twice as many individuals to get close to the same amount of support work done. Secondly, there must always be at least one in-country resource. Thirdly, this is a mess to manage. There are not only language and time-zone barriers, but it appears some of the companies providing these resources actually double-book the same resource across multiple clients. I have been dealing with this issue for several years now, and I end up having to read notes from the support team that are not spelled properly because of language barriers. Outsourced operations lack good professional management, and the client’s resources end up having to take over support organization tasks.
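The rule of thumb above can be expressed as simple arithmetic: a quarter of the rate times twice the headcount already yields half the onshore cost before any management overhead is added. The overhead figure below is an illustrative assumption.

```python
def effective_cost_ratio(rate_ratio=0.25, headcount_multiplier=2.0,
                         overhead=0.15):
    """Rule-of-thumb effective cost of outsourced support relative to
    onshore: cheaper rates, but more people, plus management overhead.
    The overhead figure is an illustrative assumption."""
    return rate_ratio * headcount_multiplier + overhead

# 1/4 the rate at 2x the headcount is already half the onshore cost;
# coordination overhead erodes the savings further.
ratio = effective_cost_ratio()
assert abs(ratio - 0.65) < 1e-9
```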

Generally, I am not sure outsourced support works well for any area, but it is particularly unsuited to complex systems such as planning applications. When support is outsourced, the quality of support generally drops precipitously, and anyone in IT knows this.


If you confront SAP and the large consulting firms to require a good TCO analysis, be prepared for a dispute about the true cost of their software and the time required to go live. However, it’s critical to make your decisions based on actual observations at multiple accounts, as I have in this article, and not based on hypothetical estimates from a sales team on how fast a solution can be brought live. I have done the best job possible here to bring real-world data to my estimates, and I even stacked the deck for SAP by removing all license costs, but SAP still came up with a much higher TCO.

By the way, this was also true in the other application areas I analyzed. The real-world data shows across the board that SAP is significantly more expensive in total cost of ownership than best-of-breed solutions.




Understanding the (S, s-1) Inventory Policy


In several articles distributed across different SCM Focus sites, the (S, s-1) inventory policy is discussed. For this reason, it made sense to create an article explaining what it does. The following was written by Wayne Fu.


Muckstadt’s book, in section 1.1.2, briefly explains the (s, S) policy:

  1. s is normally the reorder point
  2. S is the order-up-to level

When the inventory position (on hand plus on order minus backorders) falls to or below s, an order is triggered to raise the inventory position to S.

And (S,s-1) is just a specialized form of (s,S), with the reorder point set to S-1. In section 1.2, Muckstadt states the fundamental assumption of his model: he assumes the costs of parts are high enough for them to be managed by the (S,s-1) policy. The (S,s-1) ordering policy basically says that whenever the inventory position falls below S (that is, to S-1 or lower), an order is placed to bring the inventory position back up to S.

It is very commonly used in long lead-time environments such as aerospace.
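The trigger logic can be sketched in a few lines. The review function below is a minimal illustration of the policy, not code from Muckstadt’s book.

```python
def review(inventory_position, s, S):
    """(s, S) review: if the inventory position (on hand + on order -
    backorders) is at or below the reorder point s, order up to S;
    otherwise place no order. Returns the order quantity."""
    if inventory_position <= s:
        return S - inventory_position
    return 0

# General (s, S): reorder point 2, order-up-to level 5
assert review(4, s=2, S=5) == 0   # above s: no order placed
assert review(2, s=2, S=5) == 3   # at s: order back up to S

# The one-for-one special case with s = S - 1: every demand that drops
# the position below S triggers an order for exactly one unit.
S = 5
assert review(S - 1, s=S - 1, S=S) == 1
```

Setting the reorder point to S-1 is what makes the policy one-for-one, which is why it suits the expensive, slow-moving parts in long lead-time environments such as aerospace.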

Author Thanks
I wanted to thank Wayne Fu for his contribution. I was not aware of many of the details which are described above, and I think this should be of interest to anyone who practices in this field.
Author Profile
Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecasting techniques for last-time buys.

Important Service Part Multi Echelon Inventory Optimization Books


It is very interesting to find out about the origins of service parts inventory optimization. The topic is rarely discussed, so I thought I would write an article on it.

Guest Co-Author

For this post we have a guest co-author. Wayne Fu, a Senior Product Manager at Servigistics, provides a perspective on what he considers some of the pioneering works in multi-echelon inventory optimization.


Multi-echelon inventory optimization is an interesting but relatively more challenging arena than other popular optimization practices, such as production scheduling. This is due to the fact that the performance measurements in inventory management (fill rate, backorders, availability, etc.) are largely non-linear, so common methods like linear programming are not easily applied to the problem. And because it is a solution that requires a holistic “hovering view,” the problem cannot easily be segregated into smaller scopes. This places challenges on performance and scalability. These issues are even more severe in the service parts environment due to the larger part volumes.

One of the major publications in service parts inventory optimization is Optimal Inventory Modeling of Systems by Craig C. Sherbrooke. Sherbrooke laid out both the METRIC and Vari-METRIC algorithms in this book. They are generally recognized as the foundation of many of the heuristic-based optimization algorithms in use today. (For more on heuristic-based algorithms, see the post below.)


METRIC is specifically designed to address the multi-echelon issue, while Vari-METRIC is an enhanced version of METRIC designed to resolve the multi-indenture problem.

METRIC and Marginal Analysis

In simplified terms, METRIC is an algorithm based on marginal analysis. This approach is still recognized as the most accurate and effective, but it is very computationally intensive. Several issues also emerged in the practice of METRIC. Sherbrooke generally assumed the inventory policy to be (S,s-1), operating in a relatively low-demand environment, and the model is dominated by the fill rate (no stock-out) measurement. Thus, the application of METRIC is limited to specific industries that fit this model, most notably aerospace and defense. Vari-METRIC, on the other hand, placed more emphasis on availability. Fill rate applies to any supply planning environment, whereas availability is primarily used in service operations. A service level agreement (SLA) will guarantee the availability of a unit such as a plane or a piece of industrial equipment. Availability is then used to model the uptime of this equipment, which is dependent upon the fill rates of the variety of service parts that support that unit, all of which have different failure rates, part costs, etc. (To read about SLAs, see this post.)
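The availability side of this can be illustrated with Sherbrooke’s standard approximation, which treats a system as up only if none of its installed part positions are waiting on a backorder. The fleet size and EBO figures below are hypothetical.

```python
def availability(ebos, qty_per_system, num_systems):
    """Sherbrooke-style availability approximation: the expected fraction
    of systems that are up, given expected backorders (EBO) per part.
    Assumes backorders are spread randomly across the installed fleet."""
    a = 1.0
    for part, ebo in ebos.items():
        z = qty_per_system[part]
        # probability one installed position of this part is not backordered,
        # raised to the number of positions per system
        a *= (1.0 - ebo / (num_systems * z)) ** z
    return a

# Hypothetical fleet of 20 systems needing two parts
ebos = {"actuator": 0.8, "valve": 0.3}   # expected backorders per part
qty = {"actuator": 2, "valve": 1}        # installed quantity per system
a = availability(ebos, qty, num_systems=20)
assert 0.94 < a < 0.95   # roughly 94.6% of the fleet up, in this sketch
```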


METRIC measures the fill rate and other measures at the intermediate locations, which ended up being a highly debated aspect of METRIC.











At What Locations to Measure Service Level?

In fact, where to measure service level is an extremely important topic, and a point of differentiation between inventory optimization products. Currently, the vast majority of supply planning organizations measure service level at their internal locations in addition to measuring it at the customer location.











As we will see in the following section, another researcher who followed Sherbrooke's work changed this assumption about where service level is measured.

Moving Away from Measuring at the Intermediate Location

The second most influential publication in service parts inventory optimization is Analysis and Algorithms for Service Parts Supply Chains by John A. Muckstadt. Dr. Muckstadt's model could be described as an updated version of Sherbrooke's. It is an availability-based algorithm that avoids the approximations required in METRIC due to the convexity problem. Muckstadt also proposed some novel approaches to reduce the performance and scalability issues during implementation. Perhaps most importantly, Muckstadt moved away from measuring satisfaction at the intermediate locations, measuring only the customer-facing demand. However, Muckstadt's model is still based on the (S-1, S) order policy and assumes a low-demand environment. This means it faces some of the same challenges as Sherbrooke's in broader application. (For a description of the (S-1, S) order policy, see the link below.)



The previous paragraphs were an overview of two of the most important publications in service parts inventory optimization, which is distinct from finished goods inventory optimization. Sherbrooke's and Muckstadt's algorithms are used in service parts planning products to this day, with alterations here and there.

Author Thanks

I wanted to thank Wayne Fu for his contribution. I was not aware of many of the details which are described above, and I think this should be of interest to anyone who practices in this field.

Author Profile




Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecasting techniques for the Last Time Buy.







Heuristic Based Algorithms Explained


This post documents an email discussion between myself and Wayne Fu regarding heuristic based algorithms.

Question for Wayne Fu

What is a heuristic-based optimization algorithm? I thought that heuristics were one form of problem solving, and optimization was another. How is a heuristic-based algorithm different from a non-heuristic-based algorithm? That would help me and readers out a lot. – Shaun Snapp



Optimization can be classified as deterministic or stochastic; in deterministic optimization, all inputs are constants. Inventory-related optimization is definitely stochastic, since demand is never a constant but rather a given distribution. The most classic deterministic optimization method is linear programming.

Another name for stochastic optimization is meta-heuristic. Meta-heuristics are a vast topic and are used very broadly, because they are much more flexible and contingent, and can even yield better results than deterministic methods when the inputs are deterministic.

Heuristics in Major Solvers

Take ILOG's CPLEX: it is a very powerful linear programming solver, but when it eventually tries to determine a solution, it uses heuristics. i2 Technologies used to use CPLEX in master planning to provide draft outcomes, and then MAP as the heuristic solver to fine-tune the solution.

A Metaphor for Comparing Heuristic Versus Optimization

One extremely simplified way to see the difference between deterministic methods and heuristics is to liken it to searching for a house. Using a deterministic approach would be like zooming out a couple thousand miles away from the earth and then picking the location you think is best, given all the criteria you can check at that distance. A heuristic would be like standing in front of a train station, asking the people around you or checking the local newspaper to figure out where the better places to live are. Then you move over there, check around again, and narrow the scope further down, or even jump to the next place.


So inventory optimization is meta-heuristic. METRIC basically uses marginal analysis as the criterion of its heuristic. (For more on METRIC, see this post.)


It starts by searching for the part that provides the best value for increasing its inventory, then the next one, and the next, in the belief that the search will stop at some point, and that stopping point will be the overall optimal inventory position.
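That greedy loop can be sketched in a few lines (my own toy version with hypothetical demand rates and costs, not MCA's actual implementation): at each step it buys one unit of whichever part cuts total expected backorders the most per dollar, stopping when nothing affordable remains.

```python
import math

def expected_backorders(s, lam, kmax=200):
    """E[(D - s)+] for Poisson lead-time demand with mean lam."""
    p = math.exp(-lam)  # P(D = 0)
    ebo = 0.0
    for k in range(1, kmax):
        p *= lam / k  # P(D = k)
        if k > s:
            ebo += (k - s) * p
    return ebo

def marginal_analysis(parts, budget):
    """parts: list of (demand rate, unit cost). Greedy marginal analysis:
    repeatedly stock one more unit of the part with the best backorder
    reduction per dollar until the budget cannot buy another unit."""
    stock = [0] * len(parts)
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, (lam, cost) in enumerate(parts):
            if spent + cost > budget:
                continue  # cannot afford another unit of this part
            gain = (expected_backorders(stock[i], lam)
                    - expected_backorders(stock[i] + 1, lam))
            if gain / cost > best_ratio:
                best, best_ratio = i, gain / cost
        if best is None:
            return stock, spent
        stock[best] += 1
        spent += parts[best][1]

print(marginal_analysis([(2.0, 100.0), (0.2, 400.0)], budget=500.0))
```

Because each increment is evaluated against the convex expected-backorder curve, the sequence of purchases traces out an efficient cost-versus-backorder frontier, which is the sense in which the search "stops at some point" at a defensible inventory position.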


Follow-Up Comment from Shaun: I think one of the complicating factors in understanding the difference between heuristics and optimization is that they are often taught as separate methods. A generalization is that an optimizer has an objective function, while a heuristic does not. However, in practice, and in many important foundational research papers, heuristics are in fact combined with optimization. I think you provided a very good explanation of meta-heuristics. It enables a person who reads METRIC (an acronym for Sherbrooke's foundational Multi-Echelon Technique for Recoverable Item Control) to understand it much better.


Author Thanks:

I wanted to thank Wayne Fu for his contribution.

Interviewee Profile

Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecasting techniques for the Last Time Buy.


Deloitte Writes “Ok” Paper on Service Parts But Would You Want to Hire Them?


The paper The Service Revolution by Deloitte "research" is an average white paper that has some interesting numbers about service parts, along with quite a lot of fluff to reach its 13 pages. I would recommend skimming it rather than reading it. It reminded me of an article I recently wrote on the low-quality white papers that seem to fill the internet, which are primarily focused on gaining business rather than imparting any knowledge.


However, while Deloitte was able to put together a middling white paper, is it the right consulting company to use for your service parts solution needs? This is doubtful. Aside from some strategy consultants who "dabble" in service parts (and can give a good presentation), and a number of consultants who have worked with Cat and Ford on SPP, it is very difficult to see how Deloitte, a company that fakes an aftermarket presence with a few white papers every few years, should be selected over consulting firms that really focus on the aftermarket. There are quite a few reasons why I came to this conclusion.

How Many Times Can a Company Bomb?

The answer for large monopoly consulting companies like Deloitte is unlimited. Deloitte has bombed on 100% of the projects on which I have followed them, which is now around five, including one where I worked with them while they were still on the project and getting close to rolling off. Not only do they bomb, but after they leave, the workers at the client tend to have developed a number of terms for them that combine one swear word or another with the name "Deloitte." Another client had essentially banned the use of the word Deloitte and would insert some other word to describe them.

I used to have to deal with this animosity when I worked for them, and it is nice to no longer have to deal with it now that I am independent. Often I was put in the position of having to compensate for the fact that Deloitte had been failing to meet expectations for quite some time before I showed up on an account. I noticed the higher-ups at Deloitte never thought much about this, but would usually tell me that the client "was their own worst enemy."

But at Deloitte, there is a simple rule: everything rolls downhill. The partner is never to blame; the partners blame the Sr. Managers, the Sr. Managers blame the Managers, and so on. The vast majority of the Sr. Managers and Partners at Deloitte have tremendous egos and extreme type A personalities. However, what they cannot explain is this: if they are so talented and know how to staff and manage projects well, why is the majority of their projects in the ditch? The Sr. Managers and Partners are also deeply in denial about the corruption that exists in the major accounting/consulting firms. I was once told by two Sr. Managers out of the Cleveland office that Arthur Andersen's involvement with Enron was "built up" by the media, and that what Andersen did there was "not a big deal."

How to Misconfigure the Wrong Solution

At the client where I worked with them, Deloitte had chosen the wrong solution and had not taken down the requirements properly, and right before go-live the Deloitte consultant's answer to the problem was to leave the project and then to leave Deloitte. Interestingly, the software selected never had a chance of supporting the business requirement, but Deloitte recommended it anyway.

Obvious Failures with SPP

Deloitte is associated not only with failing on projects generally, but with failing specifically on at least three SPP implementations that Deloitte, SAP, and the clients are all hiding from the public: Cat Logistics, Ford, and the US Navy. Part of the reason is that Deloitte is the implementation partner; another reason is that the SPP solution is not yet ready to implement. There are many obvious things that Deloitte could have done to bring those projects live. One would have been to understand the weaknesses of SPP, namely that it was a beta product with functionality that did not work, and to blend it with a best of breed solution. However, they did not do this because they have no independence from SAP. I discuss SPP's implementation mistakes in this post.


It makes little sense to hire a consulting company that is simply controlled by a major vendor. The entire concept behind hiring a consulting firm is that you are buying independent advice in addition to the bodies. The fact that SPP, a beta product, has been recommended by the large consulting firms without describing its limitations to their "clients" is a clear demonstration that Deloitte puts itself before its clients. To see how SAP remotely controls the advice given by the major consulting companies, see this post.


Inability to Partner with Best of Breed Vendors

The problem for clients with bringing in Deloitte to implement even a best of breed solution is that they will end up paying for Deloitte consultants whom the vendor is then required to train and keep on the project. Secondly, no best of breed service parts planning vendor requires, or wants, Deloitte or any other consulting firm to implement its solution. These vendors all maintain consulting practices and can implement far better independently. The main things a consultant can do are perform a software selection and carry out other activities during the project, such as business process work, training, and integration to ERP applications. However, neither Deloitte nor the other major consulting firms would be satisfied with this role. Vendors would always prefer a direct relationship with clients rather than being controlled by a corrupt major consulting firm. Unless the vendor is a major one like SAP or Oracle, the major consulting companies will strongly tend to abuse the relationship with any best of breed vendor to benefit the consulting company over the client or the vendor, and this control will take place behind the scenes, invisible to the client. As all of the real service parts planning solutions are best of breed, this is a serious problem for selecting any major consulting firm, including Deloitte, to manage your service parts project.


So do yourself a favor with the Deloitte white paper on the service business: skim it for the data that is presented. The rest of the paper is mostly filler, designed to win business. It has some useful statistics, although Deloitte is not above faking statistics to make a case, so I am not sure how reliable they are. However, skip contacting them, because they are not suitable to help you select or implement service parts solutions. There are plenty of good boutique firms that are better choices.

Eric Larkin from Arena Solutions on BOM Management for the Service Market


Eric Larkin, Chief Technology Officer and co-founder of Arena Solutions, maker of a very innovative and powerful BOM (bill of material) management application, describes how Arena Solutions is rolled out on service parts accounts. He describes some of the differences between finished goods implementations of Arena Solutions and service parts implementations.


Title: Eric Larkin from Arena Solutions on BOM Management for the Service Market

Here we learned that, with regard to service parts, Arena Solutions helps companies know what they built and when they built it; that is, it provides a comprehensive revision history. This allows companies that use Arena to ship customers exactly the service part product version that matches their equipment. A service parts BOM (bill of material) management implementation will tend to have more materials but a less complex BOM structure, so the challenges on the master data side are a bit different, but the essential implementation process remains similar. Companies that use Arena often model all products, service parts and finished goods alike, in the same implementation. I could also see a pure service parts implementation of Arena; one reason could be that the service organization decides to go with a different BOM solution than the finished goods organization.


This interview is filled with information that should be valuable to anyone interested in BOM management for service parts. I wanted to thank Arena Solutions for letting me record and post the videos here.

Microsoft Dynamics AX and Service Management


Much of the Dynamics AX functionality for supply chain is rather basic. However, there is one supply chain area that stands out: the service management functionality. As Dynamics AX could be connected to a service parts planning engine, I thought it would be interesting to go through some of the screens in this application.

We start off in the introduction screen. This one screen controls all of the functions in Service in Dynamics AX (including configuration).

This is the Service Agreement screen. This is where the service agreements are stored.

This is probably the most important screen for those that work in service parts planning because these are the different service agreements that are then connected (in terms of a service level) to the service parts planning engine.

One thing I do like is the ability to use a drop-down off of the breadcrumbs and go to other related areas. You could not think of doing something like this in the SAP GUI. In this case, we can take a look at the service orders.

Here they are.

Right from here we can go into repair operations, which allows us to fill out a form.

The Service Parameters screen is essentially the configuration. This looks extremely easy to configure compared to what I am used to in SAP. See all the options below.

Selecting any of these brings up the option below.

Here is where the service agreement standards are set (under Setup –> Service Agreements –> Service Level Agreements.)

Here we can set up the diagnostic codes.

This certainly gives the impression that a Microsoft Dynamics implementation would be far more cost-efficient than an Oracle or SAP implementation. Configuration efficiency is an almost undiscussed topic, but it is one of the most important factors in implementation success. See this post on the topic:



Dynamics AX integrates reporting into each screen, which I like.

This is particularly true coming from the SAP world where reporting is not inherent in the user interface but instead is a separate system called the BW. I prefer at least a basic reporting system within the application itself.

The reports open as listed above, but you can drill down into a specific item for more detail. 
Interface Speed

The entire Dynamics trial was slow and buggy. The interface looks very HTML-like, but the performance is quite bad. You are required to use Remote Desktop to get into the Windows Server 2008 box that hosts the Dynamics trial. Why is the trial not simply available to be logged into directly through a browser? This is how Arena Solutions has set up its demo, and that demo system is extremely good and very responsive. Overall, it took me quite some time to get these screenshots and navigate through the system because of its performance problems.


Dynamics AX's service management functionality is not bad. Secondly, Dynamics does a very good job of combining everything from the user screens to configuration to reporting in a small space. Its "breadcrumb" menu system is also very good and provides a powerful navigation element throughout the system. You never feel as if you are lost, as you do with the bigger ERP vendors.

Dynamics AX is rare among ERP software in that it has quite a lot of focus on service management. I assume this is because Microsoft purchased a vendor and integrated it into the suite. Explicit service management modules like the one in Dynamics AX are important for managing the supply chain by service levels. Service parts planning software can plan the supply chain by service level, but it needs to know what the different service levels are, and it must pull them from someplace where they are stored and managed. This is where the Service Management area of Dynamics AX would come in.

Interviews with Tim Andreae SVP of MCA Solutions


These interviews are with Tim Andreae, SVP of Marketing for MCA Solutions. Tim has extensive experience in strategic work and has been with MCA for over eight years. His longevity in the planning and service parts space makes these interviews particularly relevant. In these interviews I asked Tim about both the history of MCA and exciting things happening at MCA currently.

Title: Tim Andreae Introducing MCA Solutions

Here Tim explained that MCA Solutions came out of academia after Dr. Morris Cohen had spent considerable time consulting in the service parts industry. He also explained many of the important challenges in service parts planning that MCA has been designed to handle. Furthermore, because service parts are counter-cyclical, MCA has continued to see demand for its software even in a difficult economy. Costs are one part of the equation, but excellent service organizations must be in stock in order to meet service agreements, and this is where MCA can really improve service organizations.

Title: Tim Andreae on How to Plan Service Parts

Here the multi-use aspects of MCA Desktop were described. This is a low-cost way to get access to MCA technology for simulation, and to test the software for a possible future full implementation. It is a low-risk way to test the fit of the software, allowing companies to become comfortable with it and then prepare a plan for how the software should be rolled out. Alternatively, a company can simply continue to use MCA Desktop in a more limited way and get all the benefits of performing simulation in a top offering.

Title: Tim Andreae on How to Plan Service Parts

Tim discusses what I believe is a very important topic for service parts improvement: the fact that the majority of service companies are still using systems that were never designed to manage all the complexities of service parts. This causes all types of problems, as the assumptions of finished goods planning systems are simply different from those of service systems. As Tim points out, companies doing this are probably struggling, and they are also carrying far too much inventory. Inventory savings of 10 to 50% can be expected, depending upon the level of sophistication prior to the MCA implementation, and service levels correspondingly increase at the same time.

Title: Tim Andreae on Performance Management

In this video we learned that MCA has a new product called Performance Management, which creates a real-time dashboard with service management best practices built right into it. Every company I have consulted with has been extremely interested in knowing its metrics in real time. There is also a strong connection to the MCA SPO planning tool, which allows those working in Performance Management to drill down into the planning tool to get to extra levels of detail.


In these video interviews I learned quite a lot, both about what has made MCA Solutions different from other vendors since its founding, and about interesting things that MCA has recently introduced, including MCA Desktop and Performance Management. I would like to thank MCA for allowing me to record the interviews and post them on this site.