SigmaPro Design for Six Sigma (DFSS)

Design for Six Sigma (DFSS) is an extremely effective methodology for improving the development of products, processes and services.  It brings tremendous value to design, manufacturing and service organizations that successfully deploy the program. SigmaPro’s DFSS deployment strategy and training curricula provide a significant competitive advantage to our clients, and this simple fact can be verified by our client references.  The SigmaPro DFSS training curricula consist of training and certification for all organizational roles required to support successful Design for Six Sigma (DFSS) deployment.

SigmaPro has developed and refined separate curricula for application of DFSS to industrial as well as to service or transactional design scenarios.  Technical and process design personnel receive a thorough grounding in probabilistic design, requirements definition, qualitative decomposition, risk mitigation techniques, and test & validation tools and methods.  Product and process design managers and executives receive hands-on training on tools to assist in implementing, maintaining and managing a DFSS-based development process.


Portfolio optimization, identification and selection of Design for Six Sigma projects, the DMADV methodology for DFSS projects, as well as a full suite of qualitative and quantitative tools supported by software round out the SigmaPro DFSS program.  We have published several papers in various journals that detail selected components of our Design for Six Sigma methodology. SigmaPro offers exceptional value in both on-site and public training and certification courses.  Please call or email us for a quotation or if you have further questions.


SigmaPro Design for Six Sigma (DFSS) Training and Certification Curricula

The Design for Six Sigma (DFSS) Context

When Motorola deployed its initial Six Sigma model in 1987, it had not yet defined the traditional Six Sigma roles that we are accustomed to today such as Champion, Master Black Belt, Black Belt, Green Belt, etc.   Instead, Motorola’s initial Six Sigma deployment model was a top-down approach that focused on training all employees on a simple problem-solving strategy, the “Six Steps to Six Sigma,” and the tools to support the strategy.  By today’s standards Motorola made all employees Yellow Belts.   The first few years of Motorola’s deployment were primarily aimed at improvement in operations.  However, Motorola recognized that a focus on internal improvement in manufacturing and operations was not necessarily effective at improving the commercialization of new products and services.  The Six Sigma Research Institute was formed in 1990 to rectify this problem.


While the Six Sigma Research Institute developed the Black Belt model and the DMAIC improvement methodology, Motorola and others have found that the road to improving the design of products and services is not without pitfalls (“How Motorola Lost Its Way,” Business Week, May 4, 1998).  Many organizations will experience certain impediments to exceptional performance and improvement in the design of new products and services, such as:

  • Marketing and sales personnel may focus on “selling” to the detriment of “development.”
  • Marketing personnel may not understand their role in the actual design and development phases.
  • Marketing personnel may tend to rely on subjective rather than quantitative analyses of customer wants and needs.
  • Product or service development processes are managed using an ad-hoc approach.
  • The management team may tend to rely on the judgment of the technical team when in fact, the design personnel may have little or no understanding of customers’ wants and needs.
  • The design team may fail to make appropriate tradeoffs early in the design process.
  • The design & development team may be trying to accomplish too much.
  • In the scramble to meet time to market constraints, the design team may forego quality criteria.
  • Insufficient effort may be put into the development and testing of prototypes.
  • The development team may spend too much time trying to make infeasible designs viable.
  • Design and development personnel may rely too much on tightening component specifications in order to increase the reliability of a product.
  • The development team may not properly test the product or pilot the service prior to introduction.
  • Engineers may assume that they already “know” what the customer needs are, thereby introducing their own technical biases in the design.
  • Engineers may disdain soft skills such as “listening to the customer.”


The question before us is how the implementation of “Design for Six Sigma” can help us to overcome constraints to performance and improvement in the commercialization of new products and services.


The Portfolio Management Process

Portfolio management is the process by which we manage new product development and the existing portfolio of products in order to minimize risk and meet the strategic plan.  As internal and external challenges come up, a business’s list of new product (and R&D) projects is constantly updated and revised.  In this process new projects are selected, prioritized, accelerated, killed, or deprioritized.  Resources are constantly allocated and reallocated to adapt to uncertain and changing information.  Problems with an ineffective portfolio management process include:

1.  The reluctance to kill new product projects.

2.  Too many projects and the resulting lack of focus.

3.  Too many mediocre projects in the pipeline.

4.  The wrong projects have been selected.

5.  Strategic criteria to align resource allocation to the strategic plan are lacking.


There are several requirements for an effective portfolio management process.  First, corporate goals, objectives, strategies, and measures must be the basis for new product (or R&D) portfolio selection.  Second, senior management must be the driver of strategy and hence must be closely involved in new product (or R&D) project selection decisions.  Next, portfolio management methods must mesh with the decision framework of the business.  These program selection methods must accommodate change and the interaction of goals and players.  Lastly, the portfolio selection method must accommodate decision making at different levels in the organization.


When we have an effective portfolio management process, we can pursue any combination of three main goals:

1.  Maximization of Value:  Most organizations will seek to allocate resources to maximize the value of the portfolio in terms of some company objective such as long-term profitability or return on investment.

2.  Balance:  Some organizations seek to achieve some desired balance in the portfolio in terms of long-term vs. short-term projects, high-risk vs. low-risk projects, market mixes, differing technologies, or project types.

3.  Strategic Direction:  The final portfolio must truly be aligned with the organization’s strategic plan. 


Common considerations that are used in managing the portfolio include: cost reduction and process improvement, product enhancements that benefit the customer, new products that utilize existing technologies, and new systems that are based on new technologies.  Since we rarely have ample resources to pursue all available opportunities, we must choose wisely how we allocate our development and capital resources.  A key concept is that risk must be accounted for in the program selection technique.  Classical risk management techniques have been built around classical scoring and sorting models, mapping approaches and bubble diagrams, and mathematical programming techniques.


Classical Portfolio Valuation Methods

A common method used to prioritize new product development opportunities (both ongoing and future programs) is the Adjusted NPV method.  The Adjusted NPV method estimates the value of an NPD program as the net present value of all future cash flows (NPV) and then adjusts for the development (D) and commercialization (C) costs.


NPVadj = NPV – (D + C)


The adjusted NPV is easy to understand, but unfortunately it does not consider the risk associated with each NPD program.  A metric that does consider risk along with future cash flows, development cost, and commercialization cost is the Expected Commercial Value (ECV).  The model is:


ECV = [(NPV x PCS – C) x PTS – D]


where we discount NPV by the probability of commercial success PCS before subtracting the commercialization cost C, and then we discount the result by probability of technical success PTS before adjusting for the development cost D.  The ECV method seeks to maximize the value or commercial worth of the portfolio subject to budget constraints while minimizing risk and is therefore one of the better financial models.  The ECV model will favor programs that are closer to launch, programs that have little money left to be spent on them, programs that have a higher likelihood of success, and programs that utilize less scarce resources.  ECV is a probability weighted stream of cash flows from the program, discounted to the present and assuming technical success.  Some might argue that the major weakness of the method is the dependency on estimates of probabilities and other quasi-quantitative data.  This is entirely true, but then again, there aren’t any methods that are NOT subject to these same limitations.
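The two metrics can be contrasted with a minimal sketch.  The figures below (all in $M) describe an invented example program, not one from this article:

```python
# A minimal sketch of the two valuation metrics, with hypothetical figures
# (all in $M); the example program is invented, not taken from the article.

def adjusted_npv(npv, dev_cost, comm_cost):
    # Adjusted NPV: future cash flows less development and commercialization costs.
    return npv - (dev_cost + comm_cost)

def ecv(npv, p_cs, p_ts, dev_cost, comm_cost):
    # ECV: discount NPV by the probability of commercial success, subtract the
    # launch cost, discount by the probability of technical success, then
    # subtract the remaining development cost.
    return (npv * p_cs - comm_cost) * p_ts - dev_cost

# A program with a healthy adjusted NPV but shaky technology looks far worse under ECV:
print(adjusted_npv(100, dev_cost=20, comm_cost=10))             # 70
print(ecv(100, p_cs=0.8, p_ts=0.3, dev_cost=20, comm_cost=10))  # about 1.0
```

Note how the same program that clears a 70 adjusted-NPV hurdle barely breaks even once its risk is priced in.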


To illustrate these ideas, let us consider the portfolio for a hypothetical company which only has enough development resources to develop and launch two NPD programs.  Engineering and marketing personnel have identified six potential new product development programs along with the projected NPV, commercialization cost, and development cost as listed below.  The engineering team has constructed estimates of the probabilities of technical success, and the marketing team has created the associated probabilities for commercialization success.  If the organization were to take a relatively naïve approach and simply maximize adjusted NPV, it would select the Leia and Deathstar programs.  However, a better approach might be to include risk in the assessment and compute ECV by program.  Doing so using NextSigma’s RiskWizard software product yields the Luke and Leia programs as the two best choices.  After considering development risk to the Deathstar program, it might not be such a viable opportunity.
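The ranking logic can be sketched as follows.  The figures below (all in $M) and the two extra program names are invented placeholders (the original Table 1 values are not reproduced here); they are chosen only to show how a risk-blind adjusted-NPV ranking and a risk-adjusted ECV ranking can disagree:

```python
# Hypothetical six-program portfolio; all figures are invented placeholders.
portfolio = {
    # name: (NPV, Pcs, Pts, commercialization cost C, development cost D)
    "Luke":      ( 50, 0.8, 0.9,  5,  5),
    "Leia":      ( 80, 0.9, 0.8, 10, 10),
    "Han":       ( 40, 0.7, 0.7,  5,  8),
    "Deathstar": (200, 0.6, 0.2, 30, 60),
    "Obiwan":    ( 30, 0.6, 0.8,  4,  6),   # invented program name
    "Chewie":    ( 25, 0.5, 0.6,  3,  5),   # invented program name
}

def adjusted_npv(npv, p_cs, p_ts, c, d):
    return npv - (d + c)            # probabilities are ignored (risk-blind)

def ecv(npv, p_cs, p_ts, c, d):
    return (npv * p_cs - c) * p_ts - d

def top_two(metric):
    ranked = sorted(portfolio, key=lambda name: metric(*portfolio[name]), reverse=True)
    return ranked[:2]

print(top_two(adjusted_npv))  # naive choice: Deathstar and Leia
print(top_two(ecv))           # risk-adjusted choice: Leia and Luke
```

With these placeholder inputs the Deathstar program tops the adjusted-NPV ranking but drops out entirely once its low probability of technical success is priced in.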


Table 1:  Hypothetical Product Portfolio

The question then becomes how we can intelligently and consistently create estimates for the probabilities of commercial and technical success.  There are two common approaches:


1.  Delphi Consensus Approach:  Individuals (senior technical leaders) are asked to independently and anonymously estimate the probabilities for success.  The results are then discussed as a group until consensus is achieved.

2.  Matrix Approach:  Probability of commercial success is decided based on two measures, market newness and degree of competitive advantage.  Probability of technical success is based on the newness of the technology to the company in terms of the product and the process.


Having done Design for Six Sigma deployments for a number of organizations, we at SigmaPro have found that combining the two methods tends to work best.  We will construct criteria matrices for our clients so that they have something to work from during the Delphi Consensus discussion.


To construct a matrix for the probability of commercial success (Pcs), we consider two key criteria – our competitive advantage were we to bring the product to market and the maturity or relative “newness” of the product to our organization and to the market.


Table 2: Matrix of Probabilities for Commercial Success

                              Competitive Advantage
                           Low     Moderate     High     Very High
  Mature/Current Product    –         –          –           –
  New/Current Product       –         –          –           –
  New to Company            –         –          –           –
  New to Market             –         –          –           –


For the competitive advantage, a “Low” selection indicates a minor cost reduction, a catch-up product, or perhaps a product in which the benefits will not overcome switching costs.  A “Moderate” selection indicates a product for which the benefits are about the same as customer switching costs.  A “High” selection indicates a product for which the benefits to the customer outweigh the switching costs.  A “Very High” selection indicates a product that will fundamentally change customers’ business models in a positive way.
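In practice, a criteria matrix like Table 2 reduces to a simple lookup.  The probability values below are invented placeholders (the article does not publish the actual matrix entries); only the row and column labels follow the text:

```python
# Placeholder Pcs criteria matrix; rows follow product newness, columns follow
# competitive advantage.  The numeric entries are illustrative assumptions.
P_CS = {
    "Mature/Current Product": {"Low": 0.70, "Moderate": 0.80, "High": 0.90, "Very High": 0.95},
    "New/Current Product":    {"Low": 0.50, "Moderate": 0.60, "High": 0.75, "Very High": 0.85},
    "New to Company":         {"Low": 0.30, "Moderate": 0.40, "High": 0.55, "Very High": 0.70},
    "New to Market":          {"Low": 0.10, "Moderate": 0.20, "High": 0.35, "Very High": 0.50},
}

def p_commercial_success(newness: str, advantage: str) -> float:
    return P_CS[newness][advantage]

# Used as a starting point for the Delphi Consensus discussion:
print(p_commercial_success("New to Company", "High"))  # placeholder value 0.55
```

A matrix like this gives the Delphi participants a common anchor, which is the combined approach described above.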


We can construct a similar matrix for the probability of technical success (Pts) by considering two key development activities – developing a successful manufacturing process and associated supply chain and developing the product itself.  In both cases, we would examine the degree of difficulty in meeting our own or the customers’ requirements.


Table 3: Matrix of Probabilities for Technical Success

                        Product Development
  Process Dev.    Very Low    Low    Moderate    High    Very High
  Very High           –        –        –         –          –
  High                –        –        –         –          –
  Moderate            –        –        –         –          –
  Low                 –        –        –         –          –
  Very Low            –        –        –         –          –


A rating of “Very Low” indicates that the solution is probably beyond the company’s current level of sophistication and/or that the development path is unknown.  This would involve development of new technology which may or may not be successful.  “Low” indicates that the technology is feasible but that the company may not have the expertise to properly develop it.  A “Moderate” rating indicates that the technology is feasible and needs to be prototyped.  A “High” rating indicates that the prototype meets key requirements although it may need certain enhancements.  A “Very High” rating indicates that the solution is demonstrated and that it meets all requirements to proceed.

Design for Six Sigma and Your New Product Development Process

The term “Design for Six Sigma” is a misnomer.  It is a holdover from the early days of Motorola’s Six Sigma program in which technical people focused extensively on the quality and reliability of hardware, and very often with minimal attention to customer needs, product architecture and product features.  A more apt name for DFSS might be “Six Sigma Design,” since the intention of DFSS is to introduce a level of excellence in the design process in order to achieve many competing goals, of which product quality is only one consideration. 


DFSS is not simply an extension of the quality improvement activity into the design organization.  Most practitioners do not take Motorola’s stated performance goal of 3.4 PPM defective literally, as was done in the early 1990’s.  There is ample statistical evidence that the 1.5 sigma shift is a crude approximation that is only true for a certain sample size and confidence level under statistical sampling scenarios (to date the Central Limit Theorem has not been disproven). Customer wants and needs will dictate a level of performance based on the product or service, the market, the cost, and possibly other considerations.  Therefore, the focus of DFSS is to provide a mechanism for new product development (NPD) that helps us to achieve customer expectations and business performance goals in a repeatable way. 


A typical new product development (NPD) process might include several high-level phases such as those in the 7-step systems engineering model:  Needs Assessment, Concept Design, Preliminary Design, Detail Design, Process Design/Construction, Manufacturing, and End of Life.  Most organizations will have instituted management reviews called “checkpoints” or “toll gates” at various points within these phases to monitor progress and ensure that transition from one phase to the next is warranted based on a risk assessment. 


Figure 1: Typical New Product Development Process



Key deliverables that are expected at these checkpoints are:


Needs Assessment:

  • Define the opportunity or problem
  • Define the systems goals and objectives and identify the constraints that must be met.

Concept Design:

  • Feasibility study
  • Needs analysis
  • System operational requirements
  • System maintenance requirements
  • Functional requirements
  • Advance product planning

Preliminary Design

  • System Functional Analysis
  • Preliminary Synthesis and Allocation of performance factors, design factors, support requirements
  • System Optimization and Tradeoffs
  • System Synthesis and Definition of Preliminary Design and Detail Specifications

Detail Design

  • Detail design of functional system
  • Develop prototype model(s) and refine requirements
  • System Prototype Test & Evaluation
  • System analysis and evaluation; design review

Process Design/Construction

  • System assessment – analysis and evaluation
  • Modifications for corrective action and/or for product improvement

Manufacturing

  • System assessment, analysis and evaluation
  • Modifications for corrective action or for product improvement

End of Life

  • Returns, repair, maintenance, support
  • End of life disposal


Simply put, the goal of DFSS is to help companies dramatically improve performance in these design and development phases in order to create a repeatable, robust delivery mechanism for new products and services. To do this, we must focus on the quantification and mitigation of risks with regard to the deliverables at each of the NPD checkpoints.


Unfortunately, many organizations attempt to execute the overall NPD process without formal checkpoint criteria and rely solely on technical judgment.  Many of these same organizations have little or no structure that pertains to how individual contributors and development teams should approach individual or team-oriented design activities.  Hence, we have a need for the integration of DFSS criteria at the program-level as well as a need for designers to execute at the task-level using probabilistic design.   This is exactly what recent implementations of DFSS consist of -- the integration of DFSS deliverables into the checkpoint criteria for the overall NPD process, training the management team on how to resource and guide the effort, and a structured approach to the application of DFSS principles at the project level.  At the program-level, there is no consistent standard for integration of DFSS with an existing NPD process because NPD processes are as widely varied as the products and processes they generate.  SigmaPro works with its clients to map the NPD process and develop a customized approach that utilizes best practices within the organization as well as incorporates DFSS deliverables into the checkpoint criteria.


Many executives also believe they need to radically change their current design process in order to implement Design for Six Sigma (DFSS), when in fact this is not the case.  It is true that many engineers, while highly educated in their technical disciplines, rely on judgment and expect management to do so as well when a team develops a new product or service.  A culture change will be required, since the execution and management of a repeatable NPD process requires certain quantitative deliverables at the various checkpoints in the process.


Many organizations have seen a high return on investment from the traditional Six Sigma improvement methodology known as DMAIC (Define, Measure, Analyze, Improve, Control).  While this methodology is powerful and can often yield impressive results on existing products and processes, it is not well-aligned with the typical NPD process.  The reason is that the DMAIC process works best when we have a system that we can experiment with, take data from, and optimize.  We often don’t have such a system in a development process until well into the detail design phase. 


Another common mistake in the application of DFSS is to deploy it only in R&D, when marketing and finance play a key role in the initial phases of the commercialization process.  Marketing, sales, finance and other support organizations should also be trained on their respective roles in the support of DFSS and the NPD process.  If we consider the financial indicator ECV as mentioned previously, we can map the leverage or control variables directly to functions in the organization. 


In order to understand how to best utilize the Design for Six Sigma methodology known as DMADV (Define, Measure, Analyze, Design, Validate) or some similar approach, we must differentiate between design programs and design projects.  Design programs are those which involve the design and development of an entirely new product or process or the major redesign of an existing one.  Design projects are smaller in nature and pertain to the execution of a subtask within a design program opportunity.  To manage the NPD process during development, most organizations have instituted their own NPD process based on the 7-step systems engineering model mentioned above.


Figure 2: NPD Process with Design Tasks



Unfortunately, all too many organizations believe that DFSS is a design process in and of itself, when it is not. DFSS is an enhancement to an existing NPD process that provides more structure and a better way to manage the deliverables, resources and tradeoffs.  DFSS is the means by which we employ strategies, tactics and tools to enhance the existing NPD process to achieve entitlement performance. It integrates qualitative and quantitative tools as well as key performance measures that allow progressive organizations to manage the NPD process more effectively and optimize several competing key drivers such as cost, quality, time to market, etc.  We focus on risk analysis in new product development because defects, processes, prototypes, etc. may not exist until the later design phases.  Put another way, all that exists early in the commercialization process is the risk of future problems.


Mature organizations recognize that the objective of Lean Six Sigma is to maximize EVA, and that expense reduction is only one aspect of improved EVA.  DMAIC is a method that we implement in operations in order to decrease expenses and drive profitability up.  Lean includes a set of tools and techniques that help us to minimize the amount of capital tied up in the organization in terms of inventory, land, buildings and machines.  DMAIC and Lean collectively offer problem-solving strategies for the manufacturing and distribution stages of the product lifecycle. Product quality, however, is highly dependent on the decisions made during the research, design and development process — manufacturing cannot make a technology, product, or process that measures up to customer requirements if it is inherently flawed. 


Unlike Lean and Lean Six Sigma, Design for Six Sigma is not driven by financial results.  Rather, Design for Six Sigma is intended to minimize risk in development. If we do a better job of defining our customer requirements (including our business requirements), we will design what the customer wants and thus sell more and increase revenue.  DFSS is a method that we will want to implement primarily in the early phases of development in order to help develop new products and processes that are more saleable because they better meet customer wants and expectations.  Even so, the financial impact of DFSS goes beyond increased revenue. It actually decreases costs as well through proper definition of models and tolerances to help operations avoid future problems.


DFSS offers the tools to both assess what the customer wants and to measure whether the design delivers on these expectations. The key to successful DFSS implementation is, first, that managers accept the need for rigor in the design process and, second, that managers understand how to implement the DFSS tools.  While DFSS does not replace current NPD processes, it is not simply a matter of applying DMAIC to an existing NPD process — another mistaken notion held by some executives. Even though the DMAIC strategy is highly effective for products and processes that exist or do not require radical redesign, in general, the DMAIC improvement strategy is not well suited to the activities involved in an NPD process.  It is well accepted among practitioners of Six Sigma that DFSS is the more appropriate approach when we are considering a new opportunity or one that requires radical improvement.


The link between DFSS and the voice of the customer is linear and logical.  If a customer is dissatisfied, it is because the organization did not meet his or her expectations.  If we did not meet customer expectations, it is either because we did not understand the expectations or because we failed to deliver on the expectations.  If we did not understand the customer expectations, we have not properly addressed the “voice of the customer” in the early design cycle.  If we failed to deliver on the expectations, we have not properly addressed the development of the product, process, or service.  For any problem that the customer perceives, the risk associated with the problem must have existed before the problem manifested itself.  The areas of opportunity for Design for Six Sigma lie in the quantification and mitigation of risk in the definition of customer expectations and in the delivery of products, processes, and services.  The bottom line is that if we define the customer expectations and manage risks well in the development of our product, process, or service, we stand a good chance of being successful in the marketplace.


Sometimes, customers want to play suppliers against each other, and thus are not very forthcoming about their expectations. In addition, evaluating whether a supplier has met expectations can be very subjective on the part of the customer. The supplier may feel that expectations have been met, while the customer may have a different opinion.  Another problem is that the customer may not know what he or she wants. Thus, the supplier begins the product development process with a set of expectations that may change in the middle of the process. However, at the beginning of the process, the supplier organization should help its customers figure out what they want. A rigorous requirements definition process, run by Marketing, will help in this regard.


In terms of resources, DFSS implies the expenditure of more money up front in order to spend less money over the life of the product or service. As Figure 3 illustrates, the overall cost of a design process includes the original budgeted costs (the solid yellow curve), plus the extra costs generated by unforeseen problems (the red curve). DFSS spending at the beginning of the process (the green curve) should eliminate most of these unforeseen problems as well as shorten the development cycle. In other words, if we are designing this system correctly, setting good specifications, identifying the risks a priori, and so forth, we will cut our costs and make the line of business more profitable.


Figure 3: Typical Resourcing Levels for DFSS vs. Traditional NPD Processes




If we “multiply” time by resources (integrate the curves for the mathematically inclined) we arrive at a result that is proportional to the dollars expended for development.  Therefore, the intended application of DFSS to a development process is no more expensive than our typical “planned” project, and it will even save money by eliminating many of the unplanned problems.  Modern scheduling techniques including stochastic scheduling, simulation, and sensitivity analysis are used to determine leverage activities in a schedule in order to cut time to market in a manner that is more effective than the traditional critical path approach.


Selecting Development Projects for the Application of Design for Six Sigma

 If we limit our attention for the moment to what we can do differently during development, the effect of DFSS can be estimated using the Expected Commercial Value (ECV) method for one or more projects in the portfolio.  The modified model is:


Delta ECV = [(NPV x PCS – C) x (PTS-DFSS – PTS-NODFSS) – D]

where:

      Delta ECV =        the change in the expected commercial value for a program
      NPV =              net present value of future cash flows discounted to the present
      PCS =              probability of commercial success
      C =                commercialization or launch costs
      PTS-DFSS =         probability of technical success under the DFSS model
      PTS-NODFSS =       probability of technical success under the current design model
      D =                development costs remaining in the project
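A minimal sketch of this screen, implementing the formula exactly as written above.  The input figures ($M and probabilities) are placeholders chosen to reproduce the qualitative pattern of Table 4 (Leia gains nothing while Han and Deathstar benefit), not the actual published values:

```python
# Delta ECV screen; input figures are illustrative placeholders only.

def delta_ecv(npv, p_cs, p_ts_dfss, p_ts_nodfss, c, d):
    return (npv * p_cs - c) * (p_ts_dfss - p_ts_nodfss) - d

candidates = {
    # name: (NPV, Pcs, Pts with DFSS, Pts without DFSS, C, D)
    "Leia":      ( 80, 0.9, 0.85, 0.80, 10, 10),  # mature design: little to gain
    "Han":       ( 60, 0.7, 0.80, 0.50,  8,  5),
    "Deathstar": (200, 0.6, 0.60, 0.20, 30, 20),  # risky tasks: large upside
}

for name, inputs in candidates.items():
    gain = delta_ecv(*inputs)
    verdict = "apply DFSS" if gain > 0 else "skip DFSS"
    print(f"{name}: Delta ECV = {gain:+.1f} -> {verdict}")
```

Programs whose probability of technical success barely moves under DFSS cannot repay the remaining development spend, while programs with risky tasks show a large positive Delta ECV.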


To select the design tasks that can benefit most from the application of the DMADV methodology, we must re-estimate the probability of technical success while assuming that risky design tasks will become DFSS projects in and of themselves.  Using our previous hypothetical example, let us suppose that the portfolio management team has found the resources to undertake three development projects, and based on ECV and adjusted NPV, the team has elected to pursue the Leia, Han, and Deathstar programs.  The engineering team has identified the development tasks that involve significant risk and determined a new probability of technical success assuming that the risky tasks are addressed with a rigorous DMADV approach.


Table 4: Delta ECV Analysis for Hypothetical Product Portfolio


The table was constructed using NextSigma's RiskWizard portfolio analysis tool.  For further information see www.nextsigma.com.


As we can see from Table 4, the Leia program would not benefit from the application of the Design for Six Sigma methodology.  This is evident by comparing the ECV for the current NPD process to the ECV for the NPD process with a DFSS supplement.  On the other hand, we expect to add $4.12M in value to the Han program and $31M to the Deathstar program through the application of DFSS to the risky development tasks.


We can summarize the DFSS project selection methodology in six easy steps.

1.  Make a list of the design programs that you are considering.  The list should contain both the current development opportunities as well as potential future programs.

2.  Estimate the net present value, development cost, commercialization cost, probability of technical success under the standard NPD process, probability of technical success under the NPD process as supplemented by DFSS, and the probability of commercial success.

3.  Based on your portfolio programs and the actual and potential value of each, choose the programs that the organization plans to pursue.

4.  For the selected programs, compute Delta ECV to determine which programs are the best candidates for DFSS implementation.

5.  For the programs that are good candidates for DFSS, if you haven’t already done so, determine which development tasks entail the most risk with respect to failing to meet customer requirements. 

6.  For the risky development tasks, define a DMADV project for each and launch it.


Bubble diagrams are often used to depict the risk and reward for multiple DFSS projects.  Other criteria include:  fit with corporate strategy, strategic importance, competitive advantage, financial reward, competitive impact, probability of success, R&D cost, time to market, capital and marketing investment, market segments, product category or line, product type, and platform. 


The DMADV Methodology for Design for Six Sigma

The DFSS approach that SigmaPro utilizes at the development task level consists of five major steps: Define, Measure, Analyze, Design, Validate.  At first glance it might appear that the first three phases are very similar to the DMAIC process.  This is true to a degree for the Define phase, but the Measure and Analyze phases are markedly different.


Let us assume that we have executed a successful portfolio management process and identified the risky development tasks for the development programs that we wish to pursue.  The Define Phase involves several important high-level objectives.  First, we must define the scope, schedule, required resources, goals, deliverables, and the team for the development task.  These items are typically consolidated into a project charter.  The charter may include further refinements of the probability of technical success and Delta ECV as discussed above, but its focus is risk mitigation, not financial benefit.


The Measure Phase is much more focused on qualitative assessments than on physical measurement systems.  The first step is to develop a clear understanding of the customer requirements for the task-level design activity.  The customer may be defined to include internal and external stakeholders, and consideration should also be given to business financial goals such as development cost and schedule.  The needs and wants are collectively known as CTX's, where CTX stands for "Critical to _______".  These CTX's are then translated into architecture requirements, specifications, performance criteria or other objective measures for the activity.

The next step is to refine the resources we need to achieve the required result.  Technology, manpower, supplier, process and business constraints are all considered.  The project plan is then updated to achieve the CTX's.  Common deliverables for this stage are: a feasibility study, a definition of the customer, a needs analysis, a financial or cost analysis, system operational requirements, functional requirements, and advance product planning.

Brainstorming techniques are used to identify all possible design solutions, and Pugh concept selection methods and enhanced quality function deployment matrices (EQFD) are used to select one or perhaps more than one design solution for development.  Concept selection is a qualitative technique that ranks design performance against the customer requirements.  We would also construct a schematic or architecture block diagram of the proposed design.  Once we have an architecture, we must also identify all design parameters (inputs) and responses (outputs).  Common tools employed in this stage are:  Architecture Block Diagrams, Functional Flow Diagrams, IDEF models, Specification Trees, Functional Hierarchy Diagrams, Process Mapping, and SIPOC.


If there are physical measurement system issues that need to be addressed, they should be resolved during this phase.  Once we have several competing concepts, we should determine the measurement systems that will allow us to test the performance of the design attributes.  For product design attributes, we would use statistical measurement capability methodologies, such as gauge R&R and risk analysis.  However, a common mistake that DFSS practitioners make is to assume that we are only referring to hardware test in this important step.  In fact, subjective testing methods, such as choice modeling, focus groups, and customer interviews, are just as important as hardware testing.  For services, we need measurement systems that allow us to measure performance subjectively, through metrics such as customer satisfaction, recommend rates, repurchase rates, and perceptions of value.  Therefore, for service design, tools such as surveys and process mapping coupled with FMEA’s are extremely useful.  Once we have determined the measurement system(s) that will give us the information we need regarding the performance of the design, we implement the concept testing process and evaluate our risk with regard to the CTX’s.  We then make a decision to proceed or to revisit our concepts.
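
For the hardware side, the gauge R&R check mentioned above boils down to comparing measurement-system variation against total observed variation.  A minimal sketch, assuming the variance components have already been estimated from a crossed gauge study (the figures below are made up for illustration):

```python
import math

# Variance components from a hypothetical crossed gauge R&R study
var_repeatability   = 0.04   # equipment variation
var_reproducibility = 0.01   # appraiser variation
var_part_to_part    = 0.95   # true part-to-part variation

var_grr   = var_repeatability + var_reproducibility
var_total = var_grr + var_part_to_part

# Percent of total observed variation consumed by the measurement system
pct_grr = 100 * math.sqrt(var_grr / var_total)
print(f"%GRR = {pct_grr:.1f}%")
```

Under the common rule of thumb, a %GRR below 10% is acceptable and above 30% is not; the system sketched above lands in the marginal band and would warrant improvement before concept testing.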


In the Analyze Phase we utilize a process called qualitative decomposition to determine the critical design parameters so that the designer or team can address them to optimize the system.  The SigmaPro qualitative decomposition process begins with the Architecture Importance Matrix (AIM), which determines the importance of each design element versus the customer requirements.  Then, the designer or team creates a detailed fault tree, which is used as the basis for a design Failure Modes and Effects Analysis (FMEA).  The design team then estimates severity, occurrence, and detection criteria for the failure modes found in the fault tree.  Having the importance of the design elements from the AIM and the risk-based RPN for each element from the FMEA, we can construct a Risk vs. Importance plot.  Such a plot is crucial in simplifying the problem and focusing on key design parameters.
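
A minimal sketch of how the two outputs combine, assuming 1-10 importance scores from the AIM and standard severity x occurrence x detection RPN's from the FMEA (the element names, scores, and cutoffs are all illustrative):

```python
# (design element, AIM importance 1-10, severity, occurrence, detection)
elements = [
    ("seal",     9, 8, 6, 7),
    ("housing",  3, 5, 2, 3),
    ("firmware", 8, 7, 5, 4),
]

def rpn(severity, occurrence, detection):
    """Risk priority number from the design FMEA."""
    return severity * occurrence * detection

# Flag elements falling in the high-importance, high-risk quadrant
IMPORTANCE_CUT, RPN_CUT = 6, 120
key_parameters = [name for name, imp, sev, occ, det in elements
                  if imp >= IMPORTANCE_CUT and rpn(sev, occ, det) >= RPN_CUT]
print(key_parameters)
```

The flagged elements are exactly the ones that would fall in the upper-right quadrant of the Risk vs. Importance plot and thus receive design attention first.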


In the Design Phase we ensure that we have the optimal design to fulfill customer and business requirements.  For each CTX, the designer or team should identify the key product or process output variables (KPOV’s) that relate to the desired performance.  Once the KPOV’s are identified, the next step is to find the key process or product input variables (KPIV’s) that we can control in order to optimize the KPOV’s.  To quantify the relationship between the KPOV’s and their associated KPIV’s, we develop transfer function models.  With these models in hand, we can manipulate the KPIV’s to optimize the performance of the design.


For the construction of transfer function models, we basically have two alternatives.  If we have a physics-based equation, we can use calculus methods to determine the mean, variance, potential specifications, and the resulting capability for a KPOV based on certain assumptions regarding the means, variances and capabilities of the KPIV’s.  Unfortunately, this method, sometimes practiced as Axiomatic Design, is extremely difficult even for simple problems and is rarely time- or cost-effective.
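
For the rare case where a closed-form transfer function does exist, the calculus route usually amounts to first-order propagation of error: expand the function in a Taylor series about the input means and sum the squared sensitivity terms.  A sketch for a hypothetical KPOV P = V^2 / R with independent inputs:

```python
# Hypothetical KPIV means and standard deviations
mu_V, sd_V = 12.0, 0.2   # volts
mu_R, sd_R = 8.0, 0.4    # ohms

# First-order (delta method) approximation for P = V^2 / R
mu_P  = mu_V**2 / mu_R               # approximate mean of the KPOV
dP_dV = 2 * mu_V / mu_R              # partial derivatives at the means
dP_dR = -(mu_V**2) / mu_R**2
var_P = (dP_dV * sd_V)**2 + (dP_dR * sd_R)**2

print(mu_P, var_P ** 0.5)            # predicted KPOV mean and standard deviation
```

Even this toy case requires hand-derived partial derivatives, which illustrates why the analytic approach scales so poorly to realistic designs.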


The second method is to build empirical prediction equations using regression or other mathematical techniques based on results from simulation or experimentation.  Using such models, we can predict the mean, variance and capability of a KPOV based on the means, variances and capabilities of the KPIV’s.  Monte Carlo simulation is typically used for hardware scenarios, and discrete event simulation is used for service scenarios.  Engineers often have simulation engines that are adept at modeling physical systems, and these tools have been standard practice for some time.  With the development of integrated process mapping and simulation tools, similar methods are now available to service designers.
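
A minimal Monte Carlo version of the same idea: sample the KPIV distributions, push each draw through the transfer function, and read the KPOV's mean and standard deviation off the simulated output.  The transfer function and input distributions below are illustrative:

```python
import random
import statistics

random.seed(1)  # reproducible sketch

def transfer(v, r):
    """Hypothetical transfer function for the KPOV: P = V^2 / R."""
    return v**2 / r

# Sample the KPIV distributions and propagate each draw through the model
samples = [transfer(random.gauss(12.0, 0.2), random.gauss(8.0, 0.4))
           for _ in range(100_000)]

mean = statistics.fmean(samples)
sd   = statistics.stdev(samples)
print(round(mean, 2), round(sd, 2))
```

The simulated mean and spread can then be compared directly against the CTX specifications, exactly as with the analytic approach, but without requiring a closed-form derivation.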


Having developed transfer functions and optimized the system, our next task in the Design Phase is to establish realistic performance criteria for the product or process, that is, the criteria we will use to assess whether our process or product fulfills the customer’s requirements.  We will then assess the risk to the CTX’s using knowledge of our KPIV’s, the transfer functions, and the performance of the KPOV’s.  A key question is how well the system fulfills the customer wants and needs.  If we can demonstrate that the optimized design fulfills customer and business requirements, then we will also want to assess whether the design possesses adequate quality levels and an appropriate cost.  Common tools employed in the Design Phase are:  EQFD, FMEA, Cause and Effect Matrices, Statistical Decision Making, Statistical Tolerancing, Risk Analysis, Designed Experiments, Simulation and Optimization, and Probabilistic Design.  SigmaPro uses a step-by-step approach to probabilistic design that involves standard designed experiments, Taguchi designs, and modern robust design techniques.  The net result of the Design Phase should be an unvalidated design that is ready to be prototyped.
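
One common way to make such a performance criterion objective is a capability index computed against the CTX specification limits.  A sketch with illustrative figures; Six Sigma performance is often operationalized as a Cpk of 1.5 or better:

```python
def cpk(mean, sd, lsl, usl):
    """Process capability: distance from the mean to the nearer
    specification limit, in units of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sd)

# Predicted KPOV performance vs. hypothetical CTX spec limits
print(round(cpk(mean=18.0, sd=1.1, lsl=14.0, usl=22.0), 2))
```

A result below the target tells the team the optimized design still carries too much risk to the CTX and the KPIV settings or tolerances need another pass.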


In the Validate Phase, we test and validate the optimized design.  In this step, we confirm the performance, capability, and reliability of our product, process, or service.  The key is to make sure that we have good test systems to validate the design, some of which may be subjective in nature.  Upon validation of the system through objective and subjective testing, the designer or team should establish control and action plans.  This step involves the extension of risk management from the conceptual design into the production or operations environment.  Statistical process control, error proofing, reliability, maintenance validations, sampling, test plans, test coverage, process FMEA’s, and measurement capability all become important tools to complete the Validate stage.  Lastly, the design team should make an overall assessment of the risks and potential impact to the CTX’s.
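
As one concrete piece of the control plan, KPOV's are typically monitored with control charts.  A minimal X-bar limit calculation on illustrative subgroup data (a production implementation would estimate sigma from within-subgroup ranges rather than from the spread of the subgroup means, as done in this sketch):

```python
import statistics

# Hypothetical subgroup means for a validated KPOV
subgroup_means = [18.1, 17.9, 18.3, 18.0, 17.8, 18.2]

grand_mean = statistics.mean(subgroup_means)
sigma_xbar = statistics.stdev(subgroup_means)  # naive spread estimate for the sketch

ucl = grand_mean + 3 * sigma_xbar  # upper control limit
lcl = grand_mean - 3 * sigma_xbar  # lower control limit
print(round(lcl, 2), round(ucl, 2))
```

Future subgroup means falling outside these limits would trigger the action plan established during validation.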


It’s important to note that the majority of the tools involved in the DMADV methodology have been employed for a number of years.  The same can be said for the DMAIC tools, many of which are used in both approaches.  Thus, DFSS is a strategy for the deployment of tools that are not necessarily new.  This doesn't make DFSS any less powerful.  The main problem with improving commercialization processes has been that most organizations have not invested the time to determine when and where to employ the tools, even though they may have trained personnel in how to use them.  The power of DFSS is in the organization of the tools into a coherent strategy that aligns well with the NPD process, not in the individual tools themselves.  In other words, the DMADV strategy provides a structure for the application of risk management tools.  This structured application of the tools allows for a much higher rate of success when compared to current approaches.  SigmaPro’s DMADV strategy has been proven in hundreds of task-level design scenarios.  The methodology has been built into NextSigma’s RiskWizard software for effective DFSS project management.


Successfully Deploying Design for Six Sigma in Your Organization

It is important to reiterate that DFSS is not intended to replace an organization’s current design process.  Instead, the DFSS methodology should be used at the program level as a framework of deliverables and performance criteria for the design process already in place.  In other words, we determine when estimates for profitability, suitability, marketability, capability, reliability, etc. should be called for in the NPD process and add those criteria to the appropriate checkpoints.  We shouldn’t tell our engineers that we’re discontinuing the process that they’ve been working with for 10 years and replacing it with DFSS.  Instead, we should integrate DFSS deliverables into the current development process and ask project managers to commit to providing them.


DMAIC-oriented Black Belts (commercial and industrial) require a working knowledge of DFSS because they will likely be redesigning existing products, processes and services to achieve the desired performance levels as a result of improvement projects. Traditional design teams (commercial and industrial), on the other hand, require a more in-depth knowledge of DFSS. The development of a new product, process or service requires much more in-depth analysis and management of risk to successfully meet time to market, quality, cost and schedule constraints.  SigmaPro offers a full suite of DFSS-focused training and certification courses.


In the book The Innovator’s Dilemma, author Clayton Christensen discusses “disruptive technologies,” technologies that change the market so profoundly that customers have no reason to buy anything other than the new product.  One of the common mistakes that practitioners make is to assume that DFSS is a disruptive technology.  It is not.  DFSS as it is typically practiced relies heavily on the voice of the customer to determine the appropriate design approach and level of performance that is required.  The reality is that customers often don’t know what the next leap in development will or can be, and therefore an organization may be eternally destined to incremental improvements if it relies solely on the voice of the customer to dictate product development strategies.  This is why SigmaPro always couples the concept of “planned innovation” with voice of the customer training and activities.


Because there is no standard approach for DFSS, many corporate executives will attempt to deploy DFSS in their organizations on their own.  In many cases, these executives will hire good people with statistical and design backgrounds, ask them to develop some training, then cycle their engineers through that training.  This author has personally seen this occur in three organizations, and each time the results were mixed at best.  Six or nine months into the deployment, the organization has not only spent significant money on training, it has also delayed R&D projects by diverting engineers into that training, and yet there is no significant change to show for this money and time.  The problem is that none of the design managers participated in the training and therefore were not able to ask or resource for the appropriate DFSS deliverables.  Based on my experience with a number of organizations, I have found that we can prevent this type of failure in a DFSS deployment by following certain key guidelines.


First, we must train managers before engineers. Many organizations focus on training people at the tactical level on DFSS tools and processes before they’ve brought the people who are managing the process on board. If we're going to make a difference with DFSS, the people who are managing the projects — the cross functional teams (core teams) that lead our development processes — have to be trained in the methodology first. Then, these teams should put together a plan for the DFSS implementation on a project by project basis. Then we can focus on training engineers and designers on specific tools and deliver the training at the appropriate time in the development process.


Second, we should not train in waves unless it really makes sense. When an organization introduces DMAIC-based Six Sigma, it often follows a standard training approach involving waves of employees using a schedule that is typically one week of training per month for four months. DFSS, however, needs to be applied at the project level.  An organization might fill a room with laptops and software and bring in waves of employees to go through DFSS training. However, introducing waves of engineers to DFSS over a period of four months serves no purpose if the development process takes two years and the designers have forgotten most of what they were taught in wave training before they have applied the learning.  For example, the design process of a computer scanner might take 15 months, while designing an automobile takes anywhere from three to five years.  SigmaPro recommends that we train people on a project by project basis — or, more accurately, we integrate the training with the NPD schedule and provide it in discrete chunks at the appropriate time in the project.


As with DMAIC projects, we must remember to link DFSS to policy deployment — the concept of taking top-level objectives and pushing them to all levels of the organization.  This means that Executives, Senior Management, Middle Management and cross functional project managers must all understand their roles in DFSS deployment.  The initiative is not simply training on technical tools.  If DFSS is not driven by top-level objectives, it is very difficult to make things happen in the lower levels of the organization.


Another problem is that the financial objectives that drive DMAIC-based policy deployment cannot be applied to DFSS. DFSS is a cost-avoidance and revenue enhancement approach, not a cost-reduction approach. With DMAIC, a cost-saving objective is set for the entire company, which then translates into cost-saving objectives for specific functions and, in turn, specific departments within those functions. With DFSS, however, it is much more difficult to set financial objectives, since the goal of DFSS is to avoid costs in the first place. While we can guess how much more expensive the development of a new product would have been without DFSS, we cannot truly quantify this cost-avoidance. Therefore, attempting to track savings for DFSS projects is somewhat less than useful.


Another way to ensure that DFSS is applied successfully in the organization is to link DFSS activities to high-impact design projects where DFSS skills can be nurtured both among the technical contributors and among the design management team. In other words, go for the low-hanging fruit. Pick out some low-risk, high-impact projects, turn them into successes, and publicize that success extensively.  If we can leverage that success into support and acceptance, then we can tackle the more difficult tasks. A client recently wanted us to implement DFSS on the toughest project in their portfolio to see if DFSS worked.  Our response was that we were 100% certain that would be the best way to kill the program!


DFSS is not limited to the design and development of products and processes. The great majority of the tools, concepts and methods in the DFSS approach can be applied to service industries and processes equally as well as to industrial products and processes.  We may encounter greater resistance in the service sector, however, because we are asking non-analytical people to apply analytical tools.  Even so, typical service applications can be very successful through the use of simple tools.


In service DFSS, customer and business requirements are organized and linked to the attributes of the new service based on a process map.  Then the process is modeled and optimized using a simulation engine.  FMEA’s are generated for each node on the process map, and root causes are identified using the cause and effect matrix.  Lastly, corrective actions, control plans, and management scorecards are developed to mitigate the risks.

