One of the key skills Six Sigma teaches is the ability to accurately measure and analyze the information your organization collects. The analysis can be as technical or as general as your organization needs; the key is to understand the level of specificity you need and work from there. A Black Belt will be able to give you in-depth analysis, but a good one will give you exactly what your organization needs. We'll start the discussion with Multi-Vari Analysis.
What is Multi-Vari Analysis?
Simply put, a multi-vari study puts a face to the data. Once you have collected your information, it takes the data and illustrates the patterns of variation within it, helping you identify groupings and correlations between subgroups and over time. When you can identify the groups, you can draw conclusions from the data. For example, if your data shows that your staff made more errors on product X, you can conclude that your improvement efforts need to focus on that particular product.
What is it used to assess?
Multi-vari studies are useful in many ways, but the most common uses are:
- to illustrate data graphically.
- to show how work is influenced by defined variables.
- to show the impact of specific materials, departments or methods.
- to show the effects of external factors such as noise, delivery delays, etc.
When you need to show stakeholders, influencers or project staff what you have found, multi-vari studies are a great way to produce a visual. Since many people absorb information best visually, a graphic representation allows them to see what they have done and shows leadership the gains or losses accordingly.
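As a minimal sketch of the idea, the snippet below groups hypothetical error counts by product and compares the group averages; a multi-vari study contrasts this between-group variation with the variation inside each group to show where to focus. The product names and numbers are invented for illustration.

```python
from statistics import mean

# Hypothetical error counts per production cycle, grouped by product
# (names and values are made up for illustration).
errors_by_product = {
    "Product X": [5, 7, 6, 8, 7],
    "Product Y": [2, 1, 2, 3, 2],
    "Product Z": [3, 2, 3, 2, 3],
}

# Compare variation *between* groups: a group whose average sits far
# above the others is where improvement effort should focus.
group_means = {name: mean(vals) for name, vals in errors_by_product.items()}
worst = max(group_means, key=group_means.get)

print(group_means)
print(worst)   # the product with the highest average error count
```

In a full study you would also chart the within-group spread over time, but even this simple grouping makes the "focus on product X" conclusion visible.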
We opened last week with Process Capability and before we go full-fledged into that area, I want to pause and put some focus on capability studies.
What is a Capability Study?
To review from last week, a capability study is a way to ensure that your process is consistent over an extended period of time. For example, if step 3 in your process produces 3 errors per cycle for 3 years, your process is consistent.
How Do You Find Stability?
There are a ton of tools you can use to test the stability of your process, but some of the most common are Time Series Plots and Control Charts. In addition to these tools, there is (of course!) a step-by-step process to test the capability of your process.
What should you know about capability studies?
As with all Six Sigma tools, the effectiveness of this tool lies in how well you understand it and how you apply it. The most important things to remember are:
- Capability studies are used to measure the same parts of the process, at the same stage in the process at exactly the same time every time it is measured.
- You can use the capability study on discrete and continuous data.
- You get the best (ie most meaningful) information when you run the study on already stable and predictable data. New processes are not the best place for this tool.
- When you hear Sigma Level, they are talking about capability.
- Capability studies require you to understand:
  - The limits of your customer or organization.
  - The difference between short-term and long-term data and what those differences mean to your organization or customer.
  - Mean and standard deviation.
  - How to assess the normality of your data.
  - How your organization or customer determines Sigma level.
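Those ingredients (customer limits, mean, standard deviation) combine into the standard capability indices. As a sketch, assuming hypothetical fill weights with customer specification limits of 95–105 grams, and using the common shorthand that short-term sigma level is roughly three times Cpk:

```python
from statistics import mean, stdev

# Hypothetical fill weights (grams); spec limits come from the customer.
data = [99.0, 100.5, 101.0, 99.5, 100.0, 100.5, 99.0, 100.5]
lsl, usl = 95.0, 105.0   # lower/upper specification limits

mu = mean(data)
sigma = stdev(data)

# Cp compares the spec width with the process spread;
# Cpk also penalizes a process that is off-centre.
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)

# Common shorthand: short-term sigma level ~ 3 * Cpk.
sigma_level = 3 * cpk
print(round(cp, 2), round(cpk, 2), round(sigma_level, 2))
```

How your organization maps Cpk to a reported Sigma level (and whether it applies a long-term shift) varies, which is why the last bullet above matters.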
Capability studies can give you a great deal of insight into how your organization is running and where it struggles. They are also one way to get a sense of the information flow and the quality of the information you can get your hands on. So let's start off the new year with a look at what your data is telling you. Happy Hunting!
As we go over Six Sigma statistics, we have to talk about normal distribution. Before we get to that, though, we have to talk about why distribution is important to the way you interpret your data. And before you tackle how to interpret the information you observe, there is something you should know about: confidence intervals. Confidence intervals are more complicated than one blog post can cover, but the basics are this: the higher the confidence level, the more certain you can be that the true value falls within the interval your analysis produces. There are three common confidence levels used in data analysis: 99%, 95% and 90%. The standard of measurement is 95%; higher is better, but as a baseline 95% is a solid analytic benchmark.
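To make the idea concrete, here is a minimal sketch of a 95% confidence interval for a mean, using hypothetical cycle times. It uses the z critical value 1.96 to stay dependency-free; strictly, a sample of 12 calls for a t value, which widens the interval slightly.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical cycle times (minutes) from 12 observed process runs.
times = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4, 9.7, 10.2, 10.1, 10.0]

n = len(times)
m = mean(times)
se = stdev(times) / sqrt(n)   # standard error of the mean

# 1.96 is the z critical value for 95% confidence.
lower, upper = m - 1.96 * se, m + 1.96 * se
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```

Raising the confidence level to 99% would swap 1.96 for a larger critical value and widen the interval: more certainty, less precision.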
Okay so back to normal distribution. Here’s what you need to know.
What is it?
You find normal distribution when you take all of your data and create a visual representation of it: the values cluster around the mean and tail off symmetrically on either side. A distribution that isn't normal can actually be more helpful, because then you can say, 'Aha, it was the 3-hour traffic jam that affected the process.' When you hear people talk about 'the curve,' this is what they are referring to.
When do you use it?
This tool works best as a continuous probability model for measurements you collect rather than create. Think about the weight of a cargo shipment or the count of a specific product you receive.
Raw scores and Z scores
Each normal distribution is defined by two parameters: the mean and the standard deviation. A raw score is a single observed value, and its Z score measures how many standard deviations it sits from the mean. In real terms: if you want to see how unusual the number of errors on the 5th was, the Z score shows you that.
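The conversion from raw score to Z score is one line of arithmetic. As a sketch with made-up daily error counts, where one day is deliberately unusual:

```python
from statistics import mean, stdev

# Hypothetical daily error counts; each count is a "raw score".
daily_errors = [4, 6, 5, 7, 5, 6, 4, 5, 6, 12]   # the last day looks unusual

mu = mean(daily_errors)
sigma = stdev(daily_errors)

# Z score: how many standard deviations a raw score sits from the mean,
# putting every day on a common scale so unusual days stand out.
z_scores = [(x - mu) / sigma for x in daily_errors]
print([round(z, 2) for z in z_scores])
```

The outlier day produces the largest Z score by far, which is exactly the "how far did the 5th vary?" question the parameters answer.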
Why is it important?
The area under the curve shows the proportion of outcomes that fall within a given range, which tells you how important that data is to your business. If the curve is narrow, you know that the distribution occurs within a relatively small set of circumstances, which is easier to control within a process. A wider distribution shows that your process can be interrupted by a variety of factors and may need a closer eye kept on it.
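The "area under the curve" is computable directly. A small sketch, building the standard normal CDF from `math.erf` to avoid any dependencies, shows the familiar share of outcomes within one standard deviation of the mean:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Area under the normal curve to the left of x."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Share of outcomes within one standard deviation of the mean:
within_1_sigma = normal_cdf(1) - normal_cdf(-1)
print(round(within_1_sigma, 4))   # about 0.68, the familiar 68% rule
```

A wider (larger-sigma) distribution spreads that same 68% over a bigger range of outcomes, which is the "harder to control" situation described above.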
Continuing on my mission to make Six Sigma something that anyone can understand, today I want to keep the statistics conversation going with the scaled data, scales of measurement and what they mean to your company. There are four scales of measurement in Six Sigma to consider: Nominal, Ordinal, Interval and Ratio.
Nominally Scaled Data
This is the most basic scale; it tells you only whether pieces of information are different or not. It applies to your business in the sense that it gives you a baseline in a yes-or-no format. Think along the lines of 'Does your customer buy product X?' The answer can only be yes or no.
Ordinal Scaled Data
Ordinal data can be arranged in a specific order, but the scale cannot tell you how far apart the values are or what makes them different. If you are looking for an answer to why a defect is happening, ordinal data is not going to answer that question.
Interval Scaled Data
This is the sweet spot in terms of data analysis: interval-scaled data can be arranged in a way that tells you why a defect is happening in specific scenarios. Think along the lines of needing to know why you make more sales on Saturdays. You can measure the sales on Saturdays, the specials you offered on Saturdays and how many sales corresponded to those specials.
Ratio Scale Data
This scale is the most advanced analytic method. Ratio data has an absolute zero, so a value of 0 shows a complete absence of the thing being measured. For example, you have 10 programmers: programmer A completes 20 lines of code and programmer B completes 15 lines of code. If programmer C completes 0 lines of code, then you can say that no code was completed by that programmer on that specific day.
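The four scales can be summarized by which operations are meaningful at each level. This sketch uses invented examples; the point is the operations, not the data:

```python
# Nominal: categories -- only equality checks make sense.
bought_product_x = {"Alice": True, "Bob": False}
assert bought_product_x["Alice"] != bought_product_x["Bob"]

# Ordinal: ranked defect severities -- order matters, gaps don't.
severity = {"minor": 1, "major": 2, "critical": 3}
assert severity["critical"] > severity["minor"]

# Interval: temperatures -- differences are meaningful, zero is arbitrary.
temp_gap = 30 - 20   # a 10-degree gap means the same anywhere on the scale

# Ratio: lines of code -- zero means "none at all", so ratios work too.
lines_a, lines_b = 20, 15
ratio = lines_a / lines_b   # "A produced a third more than B" is meaningful
zero_output = 0             # a true zero: no code completed that day

print(temp_gap, round(ratio, 2), zero_output)
```

Each step up the ladder keeps the operations below it and adds new ones, which is why ratio data supports the richest analysis.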
Knowing how to analyze data is a big tool in your Six Sigma tool bag. This is not an exhaustive list, but when you sit down to meet with your belt, you now know what to ask and what the belt's analysis should be telling you. When you are ready to get started, let us know and we can help you.
This blog is about Six Sigma data analysis. Because statistics are such a big part of the Six Sigma world, it makes sense that we talk about the data that is gathered and what it means. So here we go….
There are different types of data, and anytime you measure something you're going to need to know how to interpret it. There are two main types of data: attributive and variable.
Some people call this the most basic form of data, but for business purposes I don't accept that. Attributive (also called qualitative) data is simple in the sense that it can generally be gathered by asking a yes-or-no question. For example, 'Did they buy the new product?' What is limiting about attributive data is that you can't analyze the results in much depth, but it can give you a pretty good place to set your focus.
Variable data is also called quantitative data, and this is the data that you can measure and analyze. To decide whether the data you have is variable, ask yourself these questions:
- Can you classify the data and count the results? (Think number of defects for a particular product line.) If so, this is called discrete data. The limitation of discrete data is that it cannot be broken down into smaller measurements to create additional meaning; it's a one-hit wonder.
- Can the data be measured on a scale with meaningful divisions? (Think time, production speed, delivery dates, etc.) If so, this is called continuous data, and it can be divided further to create additional data.
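The two questions above amount to a simple classification rule. As a rough sketch (the whole-number test is an illustration of the idea, not a formal standard):

```python
# Rough rule of thumb from the questions above: countable results are
# discrete, divisible measurements are continuous.
def classify(value):
    """Illustrative only: whole counts -> discrete, measurements -> continuous."""
    return "discrete" if isinstance(value, int) else "continuous"

defect_count = 7       # you count defects -> discrete
cycle_time = 12.75     # a time can always be divided further -> continuous

print(classify(defect_count), classify(cycle_time))
```

In practice the distinction comes from what the value represents, not its Python type, but the rule captures the count-versus-measure intuition.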
As with all of these blogs, this is just to get you started; statistical data clearly has more to it than one paragraph. But information is the first step, and once you know what type of data you have, you have a better idea of what you need to know. Give us a call and we can help you chart where you need to go next.