The “Eye Test” Versus Analytics
Let’s say you are out for a walk with a farmer, and you stand with her at the edge of a field, looking out at a herd of cows. You see a brown cow with white spots. She sees that the cow is pregnant and that it is regaining strength after a bout of illness. You both see the cow, but it’s clear that the “seeing” is different. She is “seeing” with other eyes, the eyes of deep experience, which turn details that mean nothing to you into clues full of meaning.
In the medical domain, a physical exam includes a conversation with a doctor who, you hope, looks you over thoroughly. But if you aren’t feeling well, the doctor is not likely to stop at “Well, I don’t see any viruses or bacteria.” Far more likely, the doctor will use some serious and complicated analytics before declaring you sound and healthy: a swab or a blood sample lets them “see” in a different way, and a statistical protocol (a reference range) provides context for the findings.
Football coaches (and an array of pundits and sports analysts) rely on this difference in “seeing” when they say that they use “the eye test.” This is especially true when they contrast it with the growing popularity of “analytics,” the practice of analyzing players and teams through an array of statistics. The conversation has been hardening into camps, or schools of thought, with some friction between them.
Experienced coaches made arguments that seemed obvious on the basis of “the eye test”: if you see more plays being run on the field, you should also see more injuries, right? Analytics showed that this was a false assumption. In reality, the faster pace of the game simply allowed offenses to outplay the larger, slower players; it was the winning formula those same coaches favored (bigger players and a rushing game that is slower in its execution) that correlated with more, rather than fewer, injuries. In a sense, the conflict in sports between the “eye test” and analytics brings to mind a quote often attributed to Mark Twain: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
It is the same with quality assurance in the supply chain. One school of thought is characterized by the attitude “we see no problem, so leave well enough alone”; the other requires that suppliers, co-manufacturers, and even internal plants have their material performance monitored for acceptability (conformance to specification), both to lower risk and to drive continuous improvement.
The Supply Chain Eye Test
In supply chain quality management, the equivalent of the belief in the “eye test” is the QA manager’s personal and cordial long-time relationship with First Tier suppliers.
The closer and more experience-based that relationship is, the bigger the buffer of trust built around it. And the more trust there is, the less inclination there is to look further down the supply chain.
Experience-based familiarity takes the place of supply chain QA visibility, the kind supported by a robust array of analytics and by Statistical Process Control (SPC).
Supply chain QA visibility and SPC, seen the wrong way, can make the QA team uncomfortable. Why? Because SPC uses statistical methods that can expose fiddling with the numbers.
In other words, you then have tools that go beyond the supply chain equivalent of the “eye test”: your personal, trusting relationship with a long-time First Tier supplier. Seen that way, it might look like a lack of trust, and shouldn’t friends trust each other?
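To make that concrete, here is a minimal sketch of the kind of check SPC relies on, assuming nothing more than a chronologically ordered list of test results for a single characteristic taken from supplier COAs. It flags results beyond the 3-sigma control limits and long runs on one side of the center line; the rules shown and the data are illustrative assumptions, not a description of any particular system.

```python
"""Minimal SPC sketch: individuals (X) chart checks on supplier test data.

Assumes `results` is a chronologically ordered list of numeric test results
for one characteristic (e.g., viscosity) reported on supplier COAs.
"""

def moving_ranges(values):
    """Absolute differences between consecutive results."""
    return [abs(b - a) for a, b in zip(values, values[1:])]

def spc_flags(results, run_length=8):
    """Flag points beyond 3-sigma limits and long runs on one side of the center."""
    center = sum(results) / len(results)
    # Estimate sigma from the average moving range (d2 = 1.128 for n = 2).
    mr_bar = sum(moving_ranges(results)) / (len(results) - 1)
    sigma = mr_bar / 1.128
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    flags = []
    side_run, last_side = 0, None
    for i, x in enumerate(results):
        # Rule 1: a point beyond a 3-sigma control limit.
        if x > ucl or x < lcl:
            flags.append((i, x, "beyond 3-sigma limit"))
        # Rule 2: a long run on one side of the center line
        # (a possible process shift, or suspiciously 'managed' data).
        side = "above" if x > center else "below"
        side_run = side_run + 1 if side == last_side else 1
        last_side = side
        if side_run >= run_length:
            flags.append((i, x, f"{side_run} consecutive points {side} the center"))
    return ucl, lcl, flags

if __name__ == "__main__":
    results = [10.2, 10.1, 10.3, 10.2, 10.4, 10.3, 10.5, 10.4,
               10.6, 10.5, 10.7, 10.6, 11.9]  # illustrative data
    ucl, lcl, flags = spc_flags(results)
    print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    for i, x, reason in flags:
        print(f"batch {i}: {x} -> {reason}")
```

A trusting relationship alone will not surface the drift at the end of this series; a simple control chart does.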
Of course, another way of looking at the trust question is that providing supply chain QA visibility to your OEM (and the OEM requesting it) builds even more trust, rather than reducing it. Relationships with First Tier suppliers are crucial to your success, especially if you have a complex product and compete on quality and reliability. The relationship is nurtured over time, by individual people on each side, and a key building block of that relationship is trust.
However, a long-term organizational (rather than personal) relationship should be built on the principle of “trust, but verify.” This arrangement benefits all sides, especially the customers who rely on your products.
Is What You See What You Get?
Using analytics comes with its own challenges. The most obvious one is captured by the acronym GIGO (garbage in, garbage out): the results are only as good as the data supplied. That’s why system rigor and workflow discipline are essential to building a knowledge base with real value.
If you are collecting data from various sources, and it is submitted in different formats, with fields that mean different things to different groups of people, what you see may not be what you think you see. Effective analytics depend on comparing “like” with “like,” and that kind of variability in the inputs makes the comparison very difficult. For accurate comparison and dependable results, it is hard to dispute that all raw data should be entered in the same system, using the same terms, and based on a common understanding of what those terms mean.
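As an illustration, the sketch below maps records from two hypothetical suppliers onto one common schema before any comparison is made. The supplier layouts, field names, and unit conversions are assumptions invented for the example; the point is simply that like-with-like comparison requires every submission to land in the same fields, in the same units, with the same meaning.

```python
"""Illustrative sketch: normalizing supplier COA records into one schema.

The raw layouts and field names below are made up for the example; real
submissions vary far more, which is exactly why a common schema matters.
"""

# Example raw records as they might arrive from two suppliers.
RAW_RECORDS = [
    {"source": "supplier_a",
     "Lot No.": "A-1041", "Moisture (%)": "4.2", "Visc cP": "1250"},
    {"source": "supplier_b",
     "batch_id": "7789", "moisture_pct": 4.15, "viscosity_mPa_s": 1238.0},
]

# Per-source mapping: common field -> (raw field name, conversion to common type/unit).
FIELD_MAPS = {
    "supplier_a": {
        "lot_id":       ("Lot No.",      str),
        "moisture_pct": ("Moisture (%)", float),
        "viscosity_cp": ("Visc cP",      float),
    },
    "supplier_b": {
        "lot_id":       ("batch_id",     str),
        "moisture_pct": ("moisture_pct", float),
        # 1 mPa*s equals 1 cP, so only the field name changes here.
        "viscosity_cp": ("viscosity_mPa_s", float),
    },
}

def normalize(record):
    """Map one raw record onto the common schema: lot_id, moisture_pct, viscosity_cp."""
    mapping = FIELD_MAPS[record["source"]]
    return {common: convert(record[raw_name])
            for common, (raw_name, convert) in mapping.items()}

if __name__ == "__main__":
    for rec in RAW_RECORDS:
        print(normalize(rec))
```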
Incomplete Data and Lack of Categories of Data
You may simply not have enough information, or you may be missing an entire category of data. The result can skew your view of reality.
Too Much Data, or Data that Isn’t Useful
Business processes have a tendency to become increasingly detailed and complex over time. At some point, an executive or a client may have asked for data about some aspect of your quality management system, or a government agency may have required certain paperwork. Circumstances change, and that information may no longer add value or even be required. No one really needs it, but there it is, still consuming valuable resources to collect and process.
Each manufacturing process may have its own versions of these low-value requirements, as well as additional ones, all of which make Quality Assurance less reliable or more expensive than it needs to be.
A Robust Approach
One of the most cost-effective and convenient ways your company can approach its supply chain quality management is by partnering with professionals.
EMNS can provide you with a system that helps you avoid a number of pitfalls in managing your supply chain QA. Backed by deep knowledge and experience working with companies in various sectors, EMNS helps ensure that you are tracking what needs to be tracked, and doing it efficiently, at the level of quality necessary to protect your brand and your finances. The biggest threat to that level of quality is variability in your materials, the kind that goes beyond the norms established in your specifications. This threat can arise at any link in your supply chain.
Material Variability Management (MVM)
Quality management in manufacturing revolves around the quality of the materials that flow into manufacturing processes. This includes both raw materials and more complex sub-assemblies. An MVM system includes the following components:
- Specification Collaboration/Distribution/Sign-off
- Supplier Certificate of Analysis (COA) test data-capture (manual and computer-to-computer)
- Production batch test data-capture
- Outbound COA generation from batch test data-capture
- SPC Analyses, including trending of individual tests toward out-of-specification conditions and into the problem zone (beyond 3 sigma and approaching specification limits); see the sketch after this list
- Ship-to-Control visualizations and range setting
- Alerts for material problem performance
- ANSI Z1.4 Sampling data capture and analysis
- Lab test analysis data-capture and comparisons
- Advanced-BI for user-definable reports and dashboards
- Material and Location Audits
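As a rough illustration of the “problem zone” idea in the SPC bullet above, the sketch below classifies a new test result against both the specification limits and 3-sigma process limits estimated from recent history: out of spec, in the problem zone (still in spec but beyond 3 sigma, drifting toward a limit), or in control. The names, limits, and data are assumptions for the example, not a description of the EMNS system.

```python
"""Illustrative 'problem zone' check for a single test characteristic."""

from statistics import mean, stdev

def classify(history, new_result, lsl, usl):
    """Return 'out of spec', 'problem zone', or 'in control' for new_result.

    history: recent in-control results used to estimate the process center
             and sigma; lsl/usl are the lower and upper specification limits.
    """
    center = mean(history)
    sigma = stdev(history)
    if new_result < lsl or new_result > usl:
        return "out of spec"
    if abs(new_result - center) > 3 * sigma:
        # Still within specification, but beyond the 3-sigma process limits:
        # the result is drifting toward a spec limit and deserves an alert.
        return "problem zone"
    return "in control"

if __name__ == "__main__":
    # Illustrative history of batch results for one test, spec limits 9.0 - 11.0.
    history = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0]
    for result in (10.1, 10.9, 11.2):
        print(result, "->", classify(history, result, lsl=9.0, usl=11.0))
```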
Use of this approach with some or all of the above components can ensure that you are “seeing” what needs to be seen.