We’ve spent a fair amount of time learning the ins and outs of MSAs, so this week I want to focus on process capability and how to understand the information you receive.
What is Process Capability?
In a nutshell, Process Capability is:
• The ability of your process to meet your customers’ needs right out of the gate, with no modifications. This means, for lack of a better term, inherent perfection.
• The information your process provides on centering, variation and inappropriate specification limits.
• The baseline metric for improvement.
When determining your process capability there are three types of capabilities that we analyze:
• Continuous Capability - If your process is capable and in control, you should ideally get your desired outcome. This analysis tracks the life cycle of your process, telling you whether it has remained capable and in control.
• Concept of Stability - The idea of stability is the ability to answer the question ‘will my process produce the same result at this step every time it is used?’ To be technical, stability measures the ability of your process to meet its requirements at regular, specified intervals.
• Attribute Capability - This analysis makes assumptions about your data and always uses long-term data.
This week we’ve just scratched the surface on Process Capability. Next week, we’ll start digging a little deeper and show some illustrations of what it looks like.
As we go over Six Sigma statistics, we have to talk about normal distribution. Before we get to that, though, we have to talk about why distribution matters to the way you interpret your data. There is something you should know before you tackle how to read the information you observe: confidence intervals. Confidence intervals are more complicated than this blog can cover, but the key point is this: the higher the confidence level, the less likely your interval is to miss the true value, and the more you can rely on the accuracy of your data analysis. Three confidence levels are common in data analysis: 99%, 95% and 90%. The standard of measurement is 95%; higher is better, but as a baseline 95% is a solid analytic benchmark.
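To make those confidence levels concrete, here is a minimal sketch in Python (standard library only; the cycle-time data and the helper name `confidence_interval` are hypothetical, for illustration). It computes an approximate interval for a sample mean using the normal multipliers for the three common levels mentioned above:

```python
import math
import statistics

def confidence_interval(data, z=1.96):
    """Approximate confidence interval for the mean of `data`.

    z = 1.96 corresponds to the common 95% confidence level;
    use 1.645 for 90% or 2.576 for 99%. Assumes a reasonably
    sized sample so the normal approximation is sensible.
    """
    mean = statistics.mean(data)
    # Standard error of the mean: sample std dev / sqrt(n)
    se = statistics.stdev(data) / math.sqrt(len(data))
    return (mean - z * se, mean + z * se)

# Hypothetical process cycle times (minutes)
cycle_times = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.0, 12.1]
low, high = confidence_interval(cycle_times)
print(f"95% CI for mean cycle time: ({low:.2f}, {high:.2f})")
```

Notice that asking for more confidence (99% instead of 95%) widens the interval: you trade precision for certainty.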
Okay so back to normal distribution. Here’s what you need to know.
What is it?
You find normal distribution when you take all of your data and create a visual representation of it. The plot illustrates where recurring variation shows up in your process. A distribution that isn’t normal is actually often more informative, because then you can say ‘Aha, it was the 3 hour traffic jam that affected the process.’ When you hear people talk about the curve, this is what they are referring to.
When do you use it?
This is a tool that is best used as a continuous probability model, with measurements that occur naturally rather than ones you have to create. Think about the weight of a cargo shipment; counts, such as the number of a specific product you receive, are technically discrete, but the normal curve can still approximate them when the counts are large.
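Here is a small sketch of using the normal model that way (Python standard library only; the shipment weights are simulated, hypothetical data, and the 1050 kg limit is an assumed spec). It fits a mean and standard deviation to the measurements, then uses the normal curve to estimate how often a shipment would exceed the limit:

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

random.seed(42)
# Hypothetical measurements: weights (kg) of 500 cargo shipments.
weights = [random.gauss(1000, 25) for _ in range(500)]

mu = sum(weights) / len(weights)
sigma = (sum((w - mu) ** 2 for w in weights) / (len(weights) - 1)) ** 0.5

# Estimated probability a shipment exceeds a 1050 kg limit.
p_over = 1 - normal_cdf(1050, mu, sigma)
print(f"mean={mu:.1f} kg, sd={sigma:.1f} kg, P(weight > 1050) ~ {p_over:.3f}")
```

The point is that once you have the two parameters, the curve itself answers practical questions about your process.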
Raw scores and Z scores
Each normal distribution is defined by two parameters: the mean and the standard deviation. A raw score is an individual measurement from your data. The Z score measures how far a raw score lies from the mean, expressed in standard deviations. In real terms: if you want to see whether the number of errors that occurred on the 5th was unusual, the Z score shows you that.
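The Z score calculation is a one-liner; this sketch (Python standard library, with hypothetical daily error counts) shows how a single unusual day stands out:

```python
import statistics

def z_score(x, mu, sigma):
    """How many standard deviations the raw score x lies from the mean."""
    return (x - mu) / sigma

# Hypothetical daily error counts; the last day is unusually bad.
daily_errors = [4, 6, 5, 7, 5, 6, 4, 5, 6, 12]
mu = statistics.mean(daily_errors)
sigma = statistics.stdev(daily_errors)

# A raw score of 12 errors is roughly 2.6 standard deviations above the mean.
print(z_score(12, mu, sigma))
```

A Z score near zero means a routine day; a score above 2 or 3 flags a raw score worth investigating.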
Why is it important?
The area under the curve shows what proportion of your data falls within a given range, which tells you how important that data is to your business. If the curve is narrow, you know that the distribution occurs within a relatively small set of circumstances, which is easier to control within the process. A wider distribution shows you that your process can be interrupted by a variety of factors and may need you to keep a close eye on it.