The DFSS approach that SigmaPro utilizes at the development task level consists of five major steps: Define, Measure, Analyze, Design, and Validate (DMADV). At first glance, the first three phases might appear very similar to those of the DMAIC process. This is true to a degree for the Define phase, but the Measure and Analyze phases are markedly different.
Let us assume that we have executed a successful portfolio management process and identified the risky development tasks for the programs we wish to pursue. The Define Phase involves several important high-level objectives. First, we must define the scope, schedule, required resources, goals, deliverables, and the team for the development task. These items are typically consolidated into a project charter. The charter may include further refinement of the probability of technical success and DECV as discussed above, but the Define Phase is focused on risk mitigation, not financial benefits.
The Measure Phase is focused much more on qualitative assessments than on physical measurement systems. The first step is to develop a clear understanding of the customer requirements for the task-level design activity. The customer may be defined to include internal and external stakeholders, and consideration should also be given to business financial goals such as development cost and schedule. The needs and wants are collectively known as CTX's, where CTX stands for "Critical to _______". These CTX's are then translated into architecture requirements, specifications, performance criteria, or other objective measures for the activity. The next step is to refine the resources we need to achieve the required result; technology, manpower, supplier, process, and business constraints are all considered. The project plan is then updated to achieve the CTX's. Common deliverables for this stage are: a feasibility study, a definition of the customer, a needs analysis, a financial or cost analysis, system operational requirements, functional requirements, and advance product planning. Brainstorming techniques are used to identify all possible design solutions, and Pugh concept selection methods and enhanced quality function deployment (EQFD) matrices are used to select one or more design solutions for development. Concept selection is a qualitative technique that ranks design performance against the customer requirements. We would also construct a schematic or architecture block diagram of the proposed design. Once we have an architecture, we must also identify all design parameters (inputs) and responses (outputs). Common tools employed in this stage include Architecture Block Diagrams, Functional Flow Diagrams, IDEF models, Specification Trees, Functional Hierarchy Diagrams, Process Mapping, and SIPOC.
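To make the concept selection step concrete, here is a minimal Python sketch of a Pugh-style selection matrix, in which each candidate concept is rated against a reference (datum) concept on weighted customer criteria. The criteria, weights, concept names, and ratings below are purely hypothetical illustrations, not values from the text.

```python
# Minimal sketch of a Pugh concept selection matrix.
# All criteria, weights, and ratings below are hypothetical.

criteria = ["Meets flow spec", "Unit cost", "Time to market", "Serviceability"]
weights = [3, 2, 2, 1]  # relative importance of each customer requirement

# Each concept is rated against a datum (reference) concept per criterion:
# +1 = better than the datum, 0 = same, -1 = worse.
concepts = {
    "Concept A": [+1, 0, -1, +1],
    "Concept B": [0, +1, 0, 0],
    "Concept C": [-1, +1, +1, 0],
}

for name, ratings in concepts.items():
    score = sum(w * r for w, r in zip(weights, ratings))
    print(f"{name}: weighted score vs. datum = {score:+d}")
```

Concepts scoring above zero beat the datum on balance; in practice the matrix is iterated as the strongest concepts are refined or hybridized.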
If there are physical measurement systems issues that need to be addressed, they should be resolved during this phase. Once we have several competing concepts, we should determine the measurement systems that will allow us to test the performance of the design attributes. For product design attributes, we would use statistical measurement capability methodologies, such as gauge R&R, together with risk analysis. However, a common mistake DFSS practitioners make is to assume that this important step refers only to hardware testing. In fact, subjective testing methods, such as choice modeling, focus groups, and customer interviews, are just as important as hardware testing. For services, we need measurement systems that allow us to measure performance subjectively, capturing customer satisfaction, recommendation rates, repurchase rates, and perceptions of value. Therefore, for service design, tools such as surveys and process mapping coupled with FMEA's prove extremely useful. Once we have determined the measurement system(s) that will give us the information we need regarding the performance of the design, we implement the concept testing process and evaluate our risk with regard to the CTX's. We then make a decision to proceed or to revisit our concepts.
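For the hardware side of this step, a variance-components gauge R&R study is one standard way to quantify measurement capability. Below is a minimal Python sketch for a crossed study (parts x operators x replicates) using ANOVA-style sums of squares; the data are simulated and every numeric value is an illustrative assumption.

```python
import numpy as np

# Minimal variance-components gauge R&R sketch for a crossed study.
# Data shape: (parts, operators, replicates). All values are simulated
# for illustration only.
rng = np.random.default_rng(0)
p, o, r = 10, 3, 2
part_effect = rng.normal(0, 2.0, size=(p, 1, 1))  # true part-to-part variation
oper_effect = rng.normal(0, 0.3, size=(1, o, 1))  # operator (reproducibility)
noise = rng.normal(0, 0.5, size=(p, o, r))        # repeatability
x = 10 + part_effect + oper_effect + noise

grand = x.mean()
cell = x.mean(axis=2)                             # part-by-operator cell means
ss_part = o * r * ((x.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_oper = p * r * ((x.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_cell = r * ((cell - grand) ** 2).sum()
ss_inter = ss_cell - ss_part - ss_oper
ss_err = ((x - cell[:, :, None]) ** 2).sum()

ms_part, ms_oper = ss_part / (p - 1), ss_oper / (o - 1)
ms_inter = ss_inter / ((p - 1) * (o - 1))
ms_err = ss_err / (p * o * (r - 1))

# Standard crossed random-effects variance component estimates.
repeatability = ms_err
interaction_var = max(0.0, (ms_inter - ms_err) / r)
operator_var = max(0.0, (ms_oper - ms_inter) / (p * r))
part_var = max(0.0, (ms_part - ms_inter) / (o * r))
grr = repeatability + operator_var + interaction_var
total = grr + part_var
print(f"%GRR (of total variation): {100 * (grr / total) ** 0.5:.1f}%")
```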
In the Analyze Phase we utilize a process called qualitative decomposition to determine the critical design parameters so that the designer or team can address them to optimize the system. The SigmaPro qualitative decomposition process begins with the Architecture Importance Matrix (AIM), which determines the importance of each design element relative to the customer requirements. The designer or team then creates a detailed fault tree, which serves as the basis for a design Failure Modes and Effects Analysis (FMEA). The design team then estimates severity, occurrence, and detection criteria for the failure modes found in the fault tree. With the importance of the design elements from the AIM and the risk-based RPN for each element from the FMEA, we can construct a Risk vs. Importance plot. Such a plot is crucial in simplifying the problem and focusing attention on the key design parameters.
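Here is a minimal sketch of how such a plot could be assembled, assuming importance scores from the AIM and severity, occurrence, and detection ratings from the design FMEA (RPN = severity x occurrence x detection). All element names and ratings below are hypothetical.

```python
import matplotlib.pyplot as plt

# Minimal sketch of a Risk vs. Importance plot.
# name: (importance from AIM, severity, occurrence, detection) -- hypothetical.
elements = {
    "Power supply":  (9, 8, 4, 3),
    "Sensor module": (7, 6, 5, 6),
    "Housing":       (3, 4, 2, 2),
    "Firmware":      (8, 7, 6, 7),
}

for name, (imp, s, o, d) in elements.items():
    rpn = s * o * d  # risk priority number from the FMEA ratings
    plt.scatter(imp, rpn)
    plt.annotate(name, (imp, rpn))

plt.xlabel("Importance (AIM score)")
plt.ylabel("Risk (FMEA RPN)")
plt.title("Risk vs. Importance: focus on the upper-right quadrant")
plt.show()
```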
In the Design Phase we ensure that we have the optimal design to fulfill customer and business requirements. For each CTX, the designer or team should identify the key product or process output variables (KPOV's) that relate to the desired performance. Once the KPOV's are identified, the next step is to find the key process or product input variables (KPIV's) that we can control in order to optimize the KPOV's. To quantify the impact of the KPIV's on the KPOV's, we develop transfer function models. Once these models are in place, we can manipulate the KPIV's to optimize the performance of the design.
For the construction of transfer function models, we have two basic alternatives. If we have a physics-based equation, we can use calculus methods to determine the mean, variance, potential specifications, and the resulting capability for a KPOV based on certain assumptions regarding the means, variances, and capabilities of the KPIV's. Unfortunately, this method, sometimes practiced as Axiomatic Design, is extremely difficult even for simple problems and is rarely time- or cost-effective.
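For reference, the usual first-order (delta-method) form of this calculus-based approach, for a response $Y = f(X_1, \ldots, X_n)$ with independent inputs, is

$$\mu_Y \approx f(\mu_{X_1}, \ldots, \mu_{X_n}), \qquad \sigma_Y^2 \approx \sum_{i=1}^{n} \left( \left. \frac{\partial f}{\partial X_i} \right|_{\boldsymbol{\mu}} \right)^2 \sigma_{X_i}^2 .$$

Even for a modest $f$, every partial derivative must be derived and evaluated analytically, which is why the approach scales poorly.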
The second method is to build empirical prediction equations using regression or other mathematical techniques based on results from simulation or experimentation. Using such models, we can predict the mean, variance, and capability of a KPOV based on the means, variances, and capabilities of the KPIV's. Monte Carlo simulation is typically used for hardware scenarios, and discrete event simulation for service scenarios. Engineers often have simulation engines that are adept at modeling physical systems, and these tools have been standard practice for some time. With the development of integrated process mapping and simulation tools, similar methods are becoming available to service designers.
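The following Python sketch illustrates the empirical route, assuming a designed experiment has already yielded a hypothetical fitted transfer function relating two KPIV's to one KPOV. We propagate assumed KPIV distributions through the model via Monte Carlo and estimate the KPOV mean, variance, capability, and out-of-spec rate; all coefficients, distributions, and specification limits are illustrative.

```python
import numpy as np

# Minimal Monte Carlo sketch: propagate assumed KPIV distributions through a
# hypothetical empirical transfer function to predict KPOV capability.
rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(5.0, 0.20, n)  # KPIV 1: assumed mean and std dev
x2 = rng.normal(3.0, 0.15, n)  # KPIV 2: assumed mean and std dev

# Hypothetical fitted transfer function from a designed experiment.
y = 2.0 * x1 - 0.5 * x2 + 0.1 * x1 * x2

usl, lsl = 11.5, 8.5           # hypothetical specification limits
mu, sigma = y.mean(), y.std(ddof=1)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
ppm = ((y > usl) | (y < lsl)).mean() * 1e6
print(f"KPOV mean={mu:.3f}, std={sigma:.3f}, Cpk={cpk:.2f}, ~{ppm:.0f} PPM out of spec")
```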
Having developed transfer functions and optimized the system, our next task in the Design Phase is to establish realistic performance criteria for the product or process, that is, the criteria we will use to assess whether the process or product fulfills the customer's requirements. We then assess the risk to the CTX's using knowledge of our KPIV's, the transfer functions, and the performance of the KPOV's. A key question is how well the system fulfills the customer's wants and needs. If we can demonstrate that the optimized design fulfills customer and business requirements, then we will also want to assess whether the design possesses adequate quality levels and an appropriate cost. Common tools employed in the Design stage are: EQFD, FMEA, Cause and Effect Matrices, Statistical Decision Making, Statistical Tolerancing, Risk Analysis, Designed Experiments, Simulation and Optimization, and Probabilistic Design. SigmaPro uses a step-by-step approach to probabilistic design that involves standard designed experiments, Taguchi designs, and modern robust design techniques. The net result of the Design Phase should be an unvalidated design that is ready to be prototyped.
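As a small illustration of the designed-experiments portion of this toolkit, the sketch below fits an empirical transfer function from a hypothetical 2^2 full-factorial experiment with a center point; the coded design and the measured responses are invented for illustration.

```python
import numpy as np

# Minimal sketch: fit a transfer function from a hypothetical 2^2
# full-factorial designed experiment. Factor levels are coded -1/+1.
X_coded = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
    [0, 0],    # center point
], dtype=float)
y = np.array([7.1, 11.2, 6.0, 10.4, 8.7])  # illustrative KPOV measurements

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
A = np.column_stack([np.ones(len(y)), X_coded[:, 0], X_coded[:, 1],
                     X_coded[:, 0] * X_coded[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b12 = coef
print(f"fitted: y = {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2 + {b12:.2f}*x1*x2")
```

A fitted equation of this kind is exactly the input the Monte Carlo propagation shown earlier requires.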
In the Validate Phase, we test and validate the optimized design. In this step, we confirm the performance, capability, and reliability of our product, process, or service. The key is to ensure that we have good test systems with which to validate the design, some of which may be subjective in nature. Upon validation of the system through objective and subjective testing, the designer or team should establish control and action plans. This step extends risk management from the conceptual design into the production or operations environment. Statistical process control, error proofing, reliability, maintenance validations, sampling, test plans, test coverage, process FMEA's, and measurement capability all become important tools for completing the Validate stage. Lastly, the design team should make an overall assessment of the risks and potential impact to the CTX's.
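As one concrete example of carrying the design into statistical process control, the sketch below computes Xbar-R control limits from simulated subgroup data, using the standard control chart constants for subgroups of size five; all data and target values are illustrative.

```python
import numpy as np

# Minimal sketch of Xbar-R control limits for ongoing process control.
rng = np.random.default_rng(2)
subgroups = rng.normal(10.0, 0.5, size=(25, 5))  # 25 subgroups of size 5

xbar = subgroups.mean(axis=1)                    # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbarbar, rbar = xbar.mean(), ranges.mean()

A2, D3, D4 = 0.577, 0.0, 2.114                   # standard constants for n = 5
print(f"Xbar chart: CL={xbarbar:.3f}, "
      f"UCL={xbarbar + A2 * rbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:    CL={rbar:.3f}, UCL={D4 * rbar:.3f}, LCL={D3 * rbar:.3f}")
```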
It is important to note that the majority of the tools involved in the DMADV methodology have been employed for a number of years, as have the DMAIC tools, many of which are used in both approaches. Thus, DFSS is a strategy for deploying tools that are not necessarily new. This does not make DFSS any less powerful: the main obstacle to improving commercialization processes has been that most organizations have not invested the time to determine when and where to employ the tools, even though they may have trained personnel in how to use them. The power of DFSS lies in the organization of the tools into a coherent strategy that aligns well with the NPD process, not in the individual tools themselves. In other words, the DMADV strategy provides a structure for the application of risk management tools, and this structured application allows a much higher rate of success than current approaches. SigmaPro's DMADV strategy has been proven in hundreds of task-level design scenarios, and the methodology has been built into NextSigma's RiskWizard software for effective DFSS project management.