
Designing Robust Products (2020-US-30MP-539)

Level: Intermediate

 

Kevin Gallagher, Scientist, PPG Industries

 

During the early days of Six Sigma deployment, many companies realized that there were limits to how much variation can be removed from an existing process. To get beyond those limits would require that products and processes be designed to be more robust and thus inherently less variable. In this presentation, the concept of product robustness will be explained, followed by a demonstration of how to use JMP to develop robust products through case study examples. The presentation will illustrate JMP tools to: 1) visually assess robustness, 2) deploy Design of Experiments and subsequent analysis to identify the best product/process settings to achieve robustness, and 3) quantify the expected capability (via Monte Carlo simulation). The talk will also highlight why Split Plot and Definitive Screening Designs are among the most suitable designs for developing robust products.

 

 

Auto-generated transcript...

 


Kevin Gallagher: Hello, my name is Kevin Gallagher.
I'll be talking about designing robust products today.
I work for PPG Industries, which is headquartered in Pittsburgh, Pennsylvania, and our corporate headquarters is shown on the right-hand side of the slide. PPG is a global leader in the development of paints and coatings for a wide variety of applications, some of which are shown here.
And I personally work in our Coatings Innovation Center in a northern suburb of Pittsburgh, where we have a strong focus on developing innovative new products.
In the last 10 years the folks at this facility have developed over 600 US patents and we've received several customer and industry awards.
I want to talk about how to develop robust products using design of experiments and JMP. So first question is, what do we mean by a robust product?
And that is a product that delivers consistent results. And the strategy of designing a robust product is to purposely
set control factors for inputs to the process, that we call X's, to desensitize the product or process to noise factors that are acting on the process. So noise factors are
factors that are inputs to the process that can potentially influence the Y's, but for which we generally have little control, especially in the design of the product or process phase.
When thinking about robust design, it's good to start with a process map that's augmented with the variables associated with the inputs and outputs of each process step.
So if we think about an example of developing a coating for an automotive application, we first start with designing that coating formulation, then we manufacture it.
Then it goes to our customers and they apply our coating to the vehicle and then you buy it and take it home and drive the vehicle.
So when we think about robustness, we need to think about three things. We need to think about the output that's important to us. In this example, we're thinking about developing a premium appearance
coating for an automotive vehicle. We need to think about some of the noise variables that could cause variation in the Y.
And in this particular case, I want to focus on variables that are really in our customers' facilities. Not that they can't control thickness and booth temperature and
applicator settings, but there's always some natural variation around all of these parameters. And for us, we want to be able to focus on
factors that we can control in the design of the product to make the product insensitive to those variables in our customers' process so they can consistently get a good appearance.
So one way to evaluate robustness is to run a designed experiment around some of the factors that are known to cause that variability.
In this particular example, we could set up a factorial design around booth humidity, applicator setting, and thickness.
This assumes, of course, that you can simulate those noise variables in your laboratory, and in this case, we can. So we can run this same experiment
on each of several prototype formulations; it could be just two as a comparison or it could be a whole design of experiments looking at different formulation designs.
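For readers following along outside of JMP, here is a minimal Python sketch of that idea: a two-level factorial over the three noise factors, replicated for each prototype formulation. The specific factor names and levels are assumptions for illustration only.

```python
# Minimal sketch: a 2^3 factorial over three noise factors (illustrative levels),
# replicated for each prototype formulation so robustness can be compared.
from itertools import product

import pandas as pd

noise_levels = {
    "booth_humidity_pct": [40, 70],   # assumed low/high levels
    "applicator_setting": [30, 60],   # assumed low/high levels
    "thickness_um": [15, 25],         # assumed low/high levels
}

runs = []
for formulation in ["Prototype A", "Prototype B"]:
    for humidity, applicator, thickness in product(*noise_levels.values()):
        runs.append(
            {
                "formulation": formulation,
                "booth_humidity_pct": humidity,
                "applicator_setting": applicator,
                "thickness_um": thickness,
            }
        )

design = pd.DataFrame(runs)
print(design)  # 2 formulations x 8 noise combinations = 16 runs
```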
Once we have
the data for this,
one of the best ways to visualize the robustness of a product is to create a box plot. So I'm going to open up
the data set comparing two prototype formulations tested over a range of
application conditions, and in this case the appearance is measured so that
higher values of appearance are better. So ideally we'd like high values of appearance that are consistently good over all of the different noise conditions. So to
look at this, we can go to the Graph Builder.
And we can take the appearance and make that our Y value and the prototype formulas our X values. And if we turn on the box plot and then add the points back,
you can clearly see that one product has much less variation than the other and is thus more robust; on top of that, it has a better average.
Now the box plots are nice because the box plots capture the middle 50% of the data and the whiskers go out to the maximum and minimum values, excluding the outliers. So it makes a very nice visual display of
the robustness of a product.
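Here is a minimal Python sketch of the same view: box plots with the raw points overlaid, comparing two prototypes. The appearance values below are invented purely for illustration.

```python
# Minimal sketch: box plots with overlaid points, mirroring the Graph Builder view.
# The appearance values are made-up placeholders, not the case-study data.
import matplotlib.pyplot as plt

appearance = {
    "Prototype A": [78, 80, 81, 79, 82, 80, 77, 81],  # tight spread, higher mean
    "Prototype B": [65, 74, 70, 60, 79, 68, 72, 58],  # wide spread, lower mean
}

fig, ax = plt.subplots()
ax.boxplot(list(appearance.values()))
ax.set_xticks([1, 2])
ax.set_xticklabels(appearance.keys())
for i, values in enumerate(appearance.values(), start=1):
    ax.plot([i] * len(values), values, "o", alpha=0.5)  # add the raw points back
ax.set_ylabel("Appearance (higher is better)")
ax.set_title("Robustness comparison across application conditions")
plt.show()
```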
So now we want to talk about how do we use design of experiments to find settings that are
best for developing a product that is robust. So as you know, when you design an experiment, the best way to analyze it is to build a model. Y is a function of x, as shown in the top right.
And then once we have that model built, we can explore the relationship between the Y's and the X's with
various tools in JMP, like the contour plot in the bottom right-hand corner and, also down there, the prediction profiler. These allow us to explore what's called the response surface, or how the response varies as a function of the changing values of the X factors.
The key to finding a robust product is to find areas of that response surface where the surface is relatively flat.
In that region it will be very insensitive to small variations in those input variables. An example here is a very simple example where there's just one y and one x
and the relationship shown here is sort of a parabolic function. If we set the X at a higher value here, where the function is a little bit flatter,
and we have some sort of common cause variation in the input variable, that variation will be translated into a smaller amount of variation in the Y than if we had that X setting at a lower value, as shown by the dotted red lines.
In a similar way, we can have interactions that transmit more or less variation. In this example, we have an interaction between a noise variable and a control variable X.
And in this scenario, if there's again some common cause variation associated with that noise variable, if we have the X factor set at the low setting, that will transmit less variation to the y variable.
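To make the variation-transmission idea concrete, here is a minimal Python sketch, assuming an arbitrary quadratic stand-in for the response: the same noise in X produces much less variation in Y at a setting where the curve is flat.

```python
# Minimal sketch of variation transmission: the same common-cause noise in x
# produces less variation in y where the response is flat.
# The parabola below is an arbitrary stand-in, not a fitted model.
import numpy as np

rng = np.random.default_rng(0)

def y(x):
    return -(x - 8.0) ** 2 + 70.0  # illustrative parabola; flat near x = 8

noise = rng.normal(0.0, 0.5, 10_000)  # common-cause variation in x

flat_setting = 7.5    # near the flat top of the curve
steep_setting = 3.0   # on the steep flank

print("SD of y at flat setting :", round(y(flat_setting + noise).std(), 3))
print("SD of y at steep setting:", round(y(steep_setting + noise).std(), 3))
```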
So now I want to share a second case study with you where we're going to demonstrate how to build a model, explore the response surface for flat areas where we could make our settings to have a robust product, and finally to evaluate the robustness using some predictive capability analysis.
In this particular example, a chemist is focused on finding the variables that are contributing to unacceptable variation in the yellowness of the product, and that yellowness is measured with a spectrophotometer using the metric b*.
The team did a series of experiments to identify the important factors influencing yellowing, and the two most influential factors that they found were the reaction temperature
and the rate of addition of one of the important ingredients. So they decided to develop a full factorial design with some replicated center points, as shown in the right-hand corner.
Now, the team would like to have the yellowness value (b*) to be
set to a target value of 2 but within a specification of 1 to 3.
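Here is a minimal Python sketch of what such a design table might look like; the factor ranges used below (temperature roughly 70 to 90, rate roughly 1.5 to 4) are assumptions taken loosely from the discussion, not the team's actual settings.

```python
# Minimal sketch of a two-factor full factorial with replicated center points.
# Factor ranges are assumed for illustration only.
import pandas as pd

corner_runs = [(70, 1.5), (70, 4.0), (90, 1.5), (90, 4.0)]  # 2x2 factorial corners
center_runs = [(80, 2.75)] * 3                              # replicated center points

design = pd.DataFrame(corner_runs + center_runs, columns=["temperature", "rate"])
design["point_type"] = ["factorial"] * 4 + ["center"] * 3
print(design)
```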
I'm going to go back into JMP
and open up the second case study example.
It's a small experiment here, where the factorial runs are shown in blue and the center points in red. And again, the metric of interest (b*) is listed here as well.
Now the first thing we would normally do is fit the experiment to the model that is best for that design.
And in this particular case, we find a very good R square between the yellowness
and the factors that we're studying, and all of the factors appear to be statistically significant.
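As a rough parallel outside of JMP, a minimal Python sketch of fitting that factorial model (main effects plus interaction) by ordinary least squares might look like the following; the b* responses here are invented placeholders, not the team's data.

```python
# Minimal sketch: fit main effects plus interaction by ordinary least squares.
# The b* values are placeholders, not the experimental results from the talk.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame(
    {
        "temperature": [70, 70, 90, 90, 80, 80, 80],
        "rate":        [1.5, 4.0, 1.5, 4.0, 2.75, 2.75, 2.75],
        "b_star":      [1.9, 2.0, 3.9, 2.2, 2.5, 2.4, 2.5],  # placeholder responses
    }
)

model = smf.ols("b_star ~ temperature * rate", data=data).fit()
print(model.summary())  # R-squared and p-values for each term
```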
So given that's the case, we can begin to explore the response surface using some other tools within JMP.
One of the tools that we often like to use is the prediction profiler, because with this tool, we can explore different settings and look to find settings where we're going to get the yellowness
predicted to be where we want it to be, a value of 2.
But when it comes to finding robust settings, a really good tool to use is the contour profiler; it's under Factor Profiling.
And I'm going to put a contour right here at 3, because we said specification limits were 1 to 3 and at the high end (3), anywhere along this contour here the predicted value will be 3
and above this value, in the shaded area, the prediction will be above 3, outside our specification range. That means that anything in the white is expected to be within our specification limits.
So right now, the way we have it set up, any temperature less than 80 and a rate anywhere between 1.5 and 4 should give us product that meets specifications on average.
But what if the temperature in the process, when we scale this product up, is something that we can't control completely accurately? There's going to be some amount of variation in the temperature. So how can we develop the product and come up with these set points so that
the product will be
insensitive to temperature variation?
So in order to do that, or to think about that, it's often useful to add some contour grid lines to the contour plot overlay here.
And I like to round off the low value and the increment, so that the contours are at nice even numbers: 1.5, 2, 2.5, and 3, going from left to right. So anywhere along this contour here should give us
a predicted value of 2.
But do we want to be down here, where the contours are close together, or up here, where they're further apart with respect to temperature?
As the contours get further apart, that's an indication that we're nearing a flat spot in the
response surface. So to be most robust to temperature, that's where we want to be, near the top here. So a setting of near 75 degrees and a rate of about 4 might be most ideal.
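Here is a minimal Python sketch of that kind of contour view. The prediction function is a hypothetical surface with a temperature-by-rate interaction, chosen only to illustrate how widely spaced contours flag a flatter, more robust region; it is not the presenter's fitted model.

```python
# Minimal sketch of a contour view over temperature and rate. The surface is a
# hypothetical stand-in; the b* = 3 upper spec limit contour is highlighted.
import numpy as np
import matplotlib.pyplot as plt

def predicted_b_star(temp, rate):
    # flat with respect to temperature near rate = 4, steep near rate = 1.5
    return 2.0 + 0.01 * (temp - 75) - 0.15 * (rate - 4) - 0.036 * (temp - 75) * (rate - 4)

temp = np.linspace(70, 90, 200)
rate = np.linspace(1.5, 4.0, 200)
T, R = np.meshgrid(temp, rate)
Z = predicted_b_star(T, R)

fig, ax = plt.subplots()
grid = ax.contour(T, R, Z, levels=[2.0, 2.5, 3.0, 3.5], colors="gray")
ax.clabel(grid, fmt="%.1f")
ax.contour(T, R, Z, levels=[3.0], colors="red", linewidths=2)  # upper spec limit
ax.set_xlabel("Temperature")
ax.set_ylabel("Rate of addition")
ax.set_title("Widely spaced contours (top) mark a flatter, more robust region")
plt.show()
```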
And we can see this also in the prediction profiler when we have these profilers linked, because in this setting,
we're predicting the b* to be 2, and the relationship between b* and temperature is relatively flat. But if I click down to this lower level, now even though the b* is still 2, the relationship between b* and temperature is very steep.
So if we know something about how much variation is likely to occur in temperature when we scale this product up, we can actually use a model that we've built from our DOE to simulate the process capability into the future.
And the way we can do that with JMP is to open up
the simulator option.
And it allows us to input random variation into the model in a number of different ways. And then use the model to calculate the output for those
selected input conditions. We could add random noise, like common cause variation that could be due to measurement variation and such, into the design.
We can also put random variation into any of the factors. In this case we're talking about maybe having trouble controlling the temperature in production, so we might want to
make that a random variable.
And it sets the mean to wherever I have it set, so I'm just going to drag it down a little bit to the very bottom, so it's at a mean of about 70.
And then JMP has a default of a standard deviation of 10. You can change that to whatever makes sense for the process that you're studying. But for now, I'm just going to leave that at 10 and you can choose to randomly select from any distribution that you want.
And I'm going to leave it at the normal distribution.
I'm going to leave the rate fixed. So maybe in this scenario, we can control the rate very accurately, but the temperature, not as much. So we want to make sure we're selecting our set points for rate and temperature so that there is
as little impact of temperature variation on the yellowness as possible.
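A minimal Python sketch of that Monte Carlo step might look like this, reusing the same hypothetical surface as above: temperature drawn from a Normal distribution with mean 70 and standard deviation 10, rate held fixed (here at 1.5 as an assumed setting), and 5,000 simulated runs pushed through the model.

```python
# Minimal sketch of the Monte Carlo step (illustration only): temperature drawn
# from Normal(70, 10), rate held fixed, 5,000 runs pushed through a hypothetical
# model standing in for the one built from the DOE.
import numpy as np

rng = np.random.default_rng(42)

def predicted_b_star(temp, rate):
    # same hypothetical surface as in the contour sketch above
    return 2.0 + 0.01 * (temp - 75) - 0.15 * (rate - 4) - 0.036 * (temp - 75) * (rate - 4)

n_sim = 5_000
temperature = rng.normal(loc=70.0, scale=10.0, size=n_sim)  # hard-to-control factor
rate = np.full(n_sim, 1.5)                                  # tightly controlled factor (assumed setting)

b_star = predicted_b_star(temperature, rate)
print("simulated b* mean:", round(b_star.mean(), 2), "SD:", round(b_star.std(), 2))
```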
So we can evaluate the results of this simulation by clicking the Make Table button under Simulate to Table.
Now, what we have is 5,000 rows that have been simulated; every row has a random selection of temperature from the distribution shown here.
And then the rate stays fixed. Now we want to compare these simulated results to the specification limits that we have for this product, and we can do that with a process capability analysis.
And since I already have the specification limits as a column property, they're automatically filled in, but if you didn't have them filled in, you can type them in here.
And simply click OK, and now it shows us the capability analysis for this particular product. It shows us the lower spec limit, the upper spec limit, the target value, and
and overlays that over the distribution of responses from our simulation.
In this particular case, the results don't look too promising because there's a large percentage of the product that seems to be outside of the specification. In fact 30% of it is outside. And if we use the capability index Cpk, which compares the specification range to the range in
process variation, we see that the Cpk is not very good, at 0.3.
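To show the arithmetic behind that capability number, here is a minimal self-contained Python sketch. The simulated b* values are a placeholder normal distribution chosen so the results land in the same ballpark as the talk (Cpk around 0.3 and roughly 30% out of spec); Cpk is computed as min(USL - mean, mean - LSL) / (3 * sigma).

```python
# Minimal sketch of the capability calculation on simulated b* values.
import numpy as np

rng = np.random.default_rng(42)
# Placeholder for the simulated b* column; a wide spread is assumed so the
# poor capability described in the talk is roughly reproduced.
b_star = rng.normal(loc=1.9, scale=1.0, size=5_000)

def cpk(values, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu, sigma = values.mean(), values.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

lsl, usl = 1.0, 3.0  # specification limits from the case study
print("Cpk:", round(cpk(b_star, lsl, usl), 2))
print("Fraction out of spec:", round(((b_star < lsl) | (b_star > usl)).mean(), 3))
```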

 

Comments

Very nice presentation Kevin. I have been a big proponent of pursuing robustness using DOE for almost 3 decades. Great job!