Six Sigma is a set of principles and techniques for improving the quality of business processes in a rigorous, focused, and efficient way. By pursuing "zero defects," it drives a significant reduction in quality costs, ultimately improving financial performance and achieving a breakthrough in corporate competitiveness. What tools are needed when learning Six Sigma? Let's take a look.
1. Lean Production
Lean Production (LP) is a manufacturing model proposed in 1990 by MIT, based on its research into and summary of Toyota's production system during its "International Motor Vehicle Program." Its core is the pursuit of eliminating all "waste," including inventory; a series of specific methods has been developed around this goal, gradually forming a distinctive production and management system.
Lean production transforms the system structure, personnel organization, operating methods, and market supply and demand so that the production system can adapt quickly to ever-changing user needs. It strips all useless and redundant elements out of the production process, ultimately achieving the best results in every aspect of production, from market supply through sales.
2. Kano model
The Japanese quality expert Noriaki Kano categorizes quality into three types based on customer perception and the degree to which customer needs are met: must-be quality, expected quality, and attractive quality.
A: Must-be (taken-for-granted) quality. When its characteristics are insufficient (do not meet customer needs), customers are very dissatisfied; when they are sufficient (meet customer needs), customers take them for granted, so at best they are not dissatisfied rather than positively satisfied.
B: Expected quality is also known as one-dimensional quality. When its characteristics are insufficient, customers are very dissatisfied; when they are sufficient, customers are satisfied. The less sufficient, the more dissatisfied; the more sufficient, the more satisfied.
C: Attractive quality. When its characteristics are insufficient, customers are indifferent and feel no dissatisfaction; when they are sufficient, customers are very satisfied.
Must-be quality is the baseline quality, the most basic requirement to be met.
Expected quality is a common form of quality.
Attractive quality is a competitive element of quality. It typically has the following characteristics:
1. It features entirely new functions that have never appeared before;
2. Performance is greatly improved;
3. It introduces a new mechanism that customers had never seen or even imagined, greatly improving customer loyalty;
4. A very novel style.
The Kano model's classification into three quality levels points Six Sigma improvement in the right direction:
For must-be quality, the basic quality characteristics must be guaranteed to meet the specifications (standards) and satisfy the customer's basic requirements. The project team should focus on how to reduce the failure rate.
For expected quality, the project team should focus not on whether the specifications (standards) are met, but on improving the specifications (standards) themselves, continuously enhancing the quality characteristics and raising customer satisfaction.
For attractive quality, the aim is to achieve unexpected new qualities in products or services by meeting customers' latent needs. The project team should focus on how to explore customer needs, create new products, and add unexpected new qualities while maintaining the first two types of quality.
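In practice, Kano categories are usually assigned from paired "feature present / feature absent" survey questions. The following is only a minimal sketch of that idea; the answer scale and lookup rules are simplifying assumptions, not part of the model description above.

```python
# Minimal Kano classification sketch: map a pair of survey answers
# (reaction when the feature is present vs. absent) to a category.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

def kano_category(functional: int, dysfunctional: int) -> str:
    """functional: reaction when the feature is present;
    dysfunctional: reaction when it is absent."""
    if functional == LIKE and dysfunctional == DISLIKE:
        return "expected (one-dimensional) quality"
    if functional in (EXPECT, NEUTRAL, TOLERATE) and dysfunctional == DISLIKE:
        return "must-be quality"
    if functional == LIKE and dysfunctional in (EXPECT, NEUTRAL, TOLERATE):
        return "attractive quality"
    return "indifferent"

print(kano_category(LIKE, NEUTRAL))  # -> attractive quality
```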
3. POKA-YOKE
POKA-YOKE means "mistake-proofing," an error-prevention system. Mr. Shigeo Shingo, a Japanese quality management expert and one of the renowned creators of the Toyota Production System, pioneered the POKA-YOKE concept from his extensive experience in shop-floor quality improvement, and developed it into a tool for achieving zero defects and ultimately eliminating the need for quality inspection.
The basic principles of POKA-YOKE mainly consist of the following three aspects:
(1) We must never allow even the slightest defective product to appear. To become a world-class enterprise, we must achieve "zero" defects not only in our mindset but also in reality.
(2) The production site is a complex environment where anything can happen every day. Mistakes lead to defects, and defects lead to customer dissatisfaction and waste of resources.
(3) We cannot eliminate errors, but we must detect and correct them in a timely manner to prevent them from becoming defects.
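The same principle has a direct software analogue: make the wrong action fail at the point of entry, before it can become a defect downstream. A minimal illustrative sketch (the connector/socket example is hypothetical, not from the text):

```python
# A software analogue of POKA-YOKE: reject an incompatible part the
# moment it is used, instead of inspecting for defects afterwards.
from enum import Enum

class Connector(Enum):
    TYPE_A = "A"
    TYPE_B = "B"

class Socket:
    def __init__(self, accepts: Connector):
        self.accepts = accepts

    def plug_in(self, connector: Connector) -> None:
        # The "guide pin": the error is detected and stopped immediately.
        if connector is not self.accepts:
            raise ValueError(f"{connector} does not fit a {self.accepts} socket")
        print(f"{connector} seated correctly")

Socket(Connector.TYPE_A).plug_in(Connector.TYPE_A)    # ok
# Socket(Connector.TYPE_A).plug_in(Connector.TYPE_B)  # raises at once
```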
4. Quality Function Deployment (QFD)
Quality Function Deployment (QFD) is a multi-level deductive analysis method that transforms customer or market requirements into design requirements, component characteristics, process requirements, and production requirements. It embodies a market-oriented approach, with customer requirements as the sole basis for product development. In the robust design methodology, QFD plays a crucial role. It is a preliminary step in robust design, identifying key stages, components, and processes in product development, thus providing direction and defining the target for stability optimization design. It closely links all product development activities with meeting customer requirements, thereby enhancing the product's market competitiveness and ensuring successful product development from the outset.
According to literature reports, using the QFD method can shorten product development cycles by one-third, reduce costs by half, significantly improve quality, and multiply output. Quality Function Deployment (QFD) is widely used in the US civilian and defense industries, not only for specific product development and quality improvement, but also by major companies for developing quality policies and setting engineering management objectives.
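A common quantitative step in QFD is the "house of quality" relationship matrix, which propagates customer importance weights into technical importance scores. A minimal sketch, assuming the widely used 9/3/1 strength scale and wholly illustrative data:

```python
import numpy as np

# Customer needs with importance weights (1..5), related to technical
# characteristics on the common 9/3/1 relationship-strength scale.
needs = ["easy to hold", "long battery life", "rugged"]
weights = np.array([5, 4, 3])

tech = ["grip diameter", "cell capacity", "case material"]
# relationship[i, j]: strength of need i vs. characteristic j
relationship = np.array([
    [9, 0, 1],
    [0, 9, 0],
    [1, 0, 9],
])

# Technical importance = customer weights propagated through the matrix.
importance = weights @ relationship
for name, score in sorted(zip(tech, importance), key=lambda t: -t[1]):
    print(f"{name:15s} {score}")
```

The highest-scoring characteristics are the ones the design effort should concentrate on first.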
5. SOW
The Statement of Work (SOW) is an appendix to the contract and has the same legal effect as the main text of the contract. The SOW details the work to be completed by both parties during the contract period, such as feasibility studies, design, analysis, testing, quality control, reliability, maintainability, supportability, standardization, and metrological assurance.
The SOW specifies the items to be provided to the other party, such as interface control documents, hardware, computer software, technical reports, drawings, and materials, as well as the timing and type of reviews. In the form of a contractual document, the SOW thus further clarifies the customer's requirements and the work the contractor must perform to meet them. It puts product management and quality assurance on a legal basis, becoming a powerful tool for the ordering party (the customer) to monitor the quality of the contractor's work. Detailed requirements for the SOW can be found in GJB2742A-2017. The content of the SOW is an important input for Quality Function Deployment (QFD).
6. WBS
A Work Breakdown Structure (WBS) is a hierarchical system that breaks down, from top to bottom, the tasks to be completed during the development and production of a weapons and equipment project. The hierarchy centers on the product to be developed and produced and consists of product (hardware and software) items, service items, and documentation items.
The Work Breakdown Structure (WBS) is formed through systems engineering work. It shows and defines the work of a weapon system project, and illustrates the relationships between various tasks and between them and the final product, fully reflecting the system's integrity, orderliness (hierarchy), and relevance. GJB2116-94 provides a typical development process and basic requirements for the compilation of a WBS, and provides outline WBSs for seven types of weapon systems in the appendix.
Applying the hierarchical structure of Work Breakdown Structure (WBS) in Quality Function Deployment (QFD) and system design, and referring to the outline of WBS given in GJB2116-94, will greatly facilitate the conceptualization of product functions, structures, and development work, aid in the completion of QFD and system design, and also contribute to the preparation of Statement of Work (SOW). WBS is an effective tool for implementing systems engineering management in the development of weaponry and equipment, and a guarantee of design integrity. The principles and concepts of WBS are also applicable to various large-scale, complex, and high-tech civilian products.
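The hierarchy a WBS describes maps naturally onto a tree data structure, with effort (or cost) rolled up from the leaves. A minimal sketch; the element names and hours are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    code: str
    name: str
    effort_hours: float = 0.0          # only leaves carry direct effort
    children: list["WBSElement"] = field(default_factory=list)

    def total_effort(self) -> float:
        # Roll up effort from all subordinate elements.
        return self.effort_hours + sum(c.total_effort() for c in self.children)

project = WBSElement("1", "Guidance unit", children=[
    WBSElement("1.1", "Hardware", children=[
        WBSElement("1.1.1", "Sensor assembly", effort_hours=400),
        WBSElement("1.1.2", "Processor board", effort_hours=250),
    ]),
    WBSElement("1.2", "Software", effort_hours=600),
    WBSElement("1.3", "Documentation", effort_hours=120),
])

print(project.total_effort())  # 1370.0
```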
7. Concurrent Engineering
Concurrent engineering is a systematic and comprehensive approach to designing products and related processes (including manufacturing and support processes) in parallel. It requires developers to consider all elements of the entire product lifecycle (from concept formation to end-of-life disposal) from the outset, including quality, cost, schedule, and customer requirements. Concurrent engineering places particular emphasis on early design, striving to integrate all information needed for product development from the initial design phase, and bringing together the experience and wisdom of experts from multiple disciplines.
In robust design, especially in quality function deployment and system design, the principles and guiding ideology of concurrent engineering must be implemented.
8. Parameter Design
Parameter design follows system design. Its basic idea is to select the optimal combination of all parameters in the system (raw materials, parts, components, etc.) so as to minimize the impact of external, internal, and product-to-product interference, ensuring that the designed product has minimal fluctuation in its quality characteristics and good stability. Furthermore, during the parameter design phase, components of the lowest quality grade that still meet the demands of the operating environment, and economical machining precisions, are generally selected, improving product quality while controlling cost.
Parameter design is a multi-factor optimization problem. Because it must consider the impact of the three types of interference on the fluctuation of product quality characteristics and search for designs with good anti-interference performance, parameter design is much more complex than ordinary orthogonal experimental design. Dr. Taguchi used the direct product of inner and outer orthogonal arrays to arrange the experimental scheme, and used the signal-to-noise ratio as a stability index of the product quality characteristic for statistical analysis.
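For a nominal-the-best characteristic, one common form of Taguchi's signal-to-noise ratio is 10·log10(ȳ²/s²), computed over the replicated measurements of one inner-array run across the outer (noise) array. A minimal sketch with illustrative values:

```python
import numpy as np

def sn_nominal_the_best(y) -> float:
    """One common Taguchi S/N form: 10*log10(mean^2 / var).
    Larger means less relative fluctuation, i.e. more stable."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Measurements of one inner-array run across the noise conditions.
print(round(sn_nominal_the_best([9.8, 10.1, 10.0, 9.9]), 2))  # ~37.74
```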
Why can the system's function remain very stable after parameter design even when low-quality components with large fluctuations are used? Because parameter design exploits nonlinear effects.
Typically there is a nonlinear relationship between the product quality characteristic y and the level of some component parameter. Suppose the product's output characteristic is y with target value m, and a selected component parameter x fluctuates over a range Δx (generally following a normal distribution). If x is set at level x₁, the fluctuation Δx produces a fluctuation Δy₁ in y. Parameter design moves the operating point from x₁ to x₂, where the same fluctuation range Δx causes the fluctuation of y to shrink to Δy₂. Because of the strong nonlinear effect, Δy₂ is far smaller than Δy₁.
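A quick numerical illustration of this nonlinear effect (the function and numbers are hypothetical): propagate the same parameter scatter through y = f(x) at two operating points and compare the output spread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear characteristic: the curve flattens as x grows.
f = lambda x: 10.0 * (1.0 - np.exp(-x))

dx = rng.normal(0.0, 0.1, size=10_000)  # identical scatter at both points

for x0 in (0.5, 2.5):                   # x1 on the steep part, x2 on the flat part
    y = f(x0 + dx)
    print(f"x = {x0}: std(y) = {y.std():.4f}")
# The spread of y at x = 2.5 is several times smaller than at x = 0.5,
# even though the scatter of x is exactly the same.
```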
9. Divergent thinking
Divergent thinking, also known as radiant or radial thinking, refers to thinking that starts from a single goal and explores multiple solutions along various different paths, in contrast to convergent thinking. Many psychologists believe that divergent thinking is the most important characteristic of creative thinking and one of the main indicators for measuring creativity.
American psychologist Guilford believed that divergent thinking has three main characteristics: fluency, flexibility, and originality.
Fluency refers to the speed and smoothness of intellectual activity, with few obstacles, and the ability to express a large number of ideas in a short period of time. It is a quantitative indicator of divergent thinking.
Flexibility refers to thinking that has multiple directions, can draw inferences from one instance to another, can adapt to changing circumstances, and is not constrained by functional fixedness or set patterns, thus enabling the generation of extraordinary ideas and the proposal of remarkable new concepts.
Originality refers to thinking that possesses extraordinary novelty, thus it better reflects the essence of divergent thinking.
Divergent thinking skills can be cultivated by thinking about the same problem from different perspectives, such as "multiple solutions to one problem", "multiple ways to write about one thing", and "multiple uses for one object".
10. Analysis of variance and regression analysis
Analysis of Variance (ANOVA) is a commonly used data processing method in mathematical statistics, and an effective tool for analyzing experimental data in industrial and agricultural production and scientific research. It also forms the mathematical foundation for experimental design, parametric design, and tolerance design. A complex phenomenon often involves many interdependent and mutually restrictive factors. The purpose of ANOVA is to identify, through data analysis, the factors that significantly influence the phenomenon, the interactions between these factors, and the optimal levels of the significant influencing factors. ANOVA is a technique that decomposes the total variation of data in a comparable dataset according to specified sources of variation. The sum of squared deviations is used to measure the variation. The ANOVA method decomposes the total sum of squared deviations into partial sums of squared deviations traceable to specified sources. This is a very important concept.
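A minimal one-way ANOVA sketch using scipy, with illustrative data: does the factor level (say, three process settings) significantly affect a measured quality characteristic?

```python
from scipy import stats

# Measurements of one quality characteristic at three factor levels.
level_a = [20.1, 19.8, 20.3, 20.0]
level_b = [20.9, 21.2, 20.8, 21.0]
level_c = [20.2, 20.0, 20.4, 20.1]

f_stat, p_value = stats.f_oneway(level_a, level_b, level_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say < 0.05) marks the factor as a significant
# source of variation in the sense described above.
```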
Regression analysis is a mathematical tool used to study the correlation between a variable Y and several other variables X. Based on a set of experimental or observational data, it seeks to identify dependencies between variables that are masked by randomness. Roughly speaking, it can be understood as approximating a complex correlation with a deterministic functional relationship; this function is called the regression function, or in practical problems, an empirical formula. The main problem studied in regression analysis is how to use the observed values (samples) of variables X and Y to make statistical inferences about the regression function, including estimating it and testing related hypotheses.
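As a small concrete case, fitting an empirical formula y ≈ b₀ + b₁x to observations by least squares (data illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# np.polyfit returns coefficients highest degree first: [b1, b0].
b1, b0 = np.polyfit(x, y, deg=1)
residuals = y - (b0 + b1 * x)
print(f"y ≈ {b0:.2f} + {b1:.2f}·x, residual std = {residuals.std(ddof=2):.3f}")
```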
11. Customer Satisfaction Assessment
The ISO 9000 series of standards requires companies to measure and monitor information about customers’ perceptions of whether the organization has met their requirements.
There are various methods for assessing customer satisfaction. In recent years, countries such as the United States and Sweden have adopted the Customer Satisfaction Index (CSI) for evaluation, with great success. CSI is a parameter used to evaluate the degree to which products (hardware, software, services, and processed materials) meet customer needs; it is also a comprehensive index for evaluating product quality. Suppose a customer places n requirements on a product and the degree to which each is met is q_i (i = 1, 2, …, n); then the customer satisfaction index is a function of these values, CSI = f(q_1, q_2, …, q_n).
To obtain the CSI, market development personnel should conduct random sampling surveys of the customer base, combined with analysis and statistics of the customer complaints and product quality problems collected through after-sales service. Evaluating the customer satisfaction index is a rather complex matter. Businesses, social organizations, and government agencies can, as needed, commission neutral professional organizations to evaluate the CSI of products, services, and industries to guide quality improvement.
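One common simple choice for the function f above is an importance-weighted average of the q_i. A sketch with hypothetical weights and scores:

```python
# Weighted-average form of a customer satisfaction index
# (one possible choice of CSI = f(q_1, ..., q_n); data hypothetical).
requirements = ["reliability", "delivery time", "after-sales service"]
weights      = [0.5, 0.3, 0.2]     # relative importance, sums to 1
q            = [0.90, 0.75, 0.80]  # degree each requirement is met, 0..1

csi = sum(w * qi for w, qi in zip(weights, q))
print(f"CSI = {csi:.2%}")          # -> 83.50%
```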
12. FMEA and FTA Analysis
Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) are both widely used analytical techniques in reliability engineering, and these techniques have been successfully applied abroad to solve various quality problems.
Through FMEA and FTA, the various potential quality problems and failure modes affecting product quality and reliability are identified, along with their causes (including design defects, process problems, environmental factors, aging, wear, and processing errors). Corrective measures in design and process then improve product quality and its ability to resist various disturbances.
According to literature reports, a world-class automobile company achieves approximately 50% of its quality improvements through FMEA and FTA/ETA.
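A standard quantitative step in FMEA is ranking failure modes by Risk Priority Number, RPN = severity × occurrence × detection (each rated 1–10), so corrective action goes first to the highest-risk modes. A minimal sketch with hypothetical failure modes:

```python
# (name, severity, occurrence, detection), each rated 1..10
failure_modes = [
    ("solder joint cracks", 8, 4, 6),
    ("connector corrosion", 6, 3, 4),
    ("firmware watchdog miss", 9, 2, 7),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s*o*d:4d}  {name}")
# Corrective measures target the modes with the highest RPN first.
```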
13. Uniform Design
Orthogonal experimental design selects experimental points with two key properties: uniform dispersion and neat comparability. Uniform dispersion makes the experimental points representative; neat comparability facilitates data analysis. To guarantee neat comparability, however, an orthogonal design requires at least q² trials for q factor levels. To reduce the number of trials, the comparability requirement must be dropped; uniform design therefore considers only the uniform spread of experimental points within the experimental range. Like orthogonal design, uniform design uses a carefully constructed table, the uniform design table, to lay out the experiment, and analyzes the results by regression analysis.
Each uniform design table is denoted by a symbol U_n(q^s) or U*_n(q^s), where U stands for uniform design, n for the number of trials, q for the number of levels of each factor, and s for the number of columns. The presence or absence of the superscript "*" distinguishes two types of uniform design tables; generally, tables with "*" have better uniformity. A notable characteristic of uniform design is that as the number of factor levels grows, the number of trials required increases far more slowly than in orthogonal design.
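Uniform design tables are commonly built by the good-lattice-point method: column j of U_n holds (i·h_j) mod n for run i = 1…n, with 0 mapped to n, where each generator h_j is coprime to n. A small sketch of that construction (the generators chosen are illustrative):

```python
import numpy as np

def uniform_design(n: int, generators: list[int]) -> np.ndarray:
    """Good-lattice-point construction of a uniform design table U_n:
    entry (i, j) is (i * h_j) mod n, with 0 mapped to n.
    Each generator h_j must be coprime to n."""
    runs = np.arange(1, n + 1).reshape(-1, 1)
    table = (runs * np.array(generators)) % n
    table[table == 0] = n
    return table

# A small 7-run, 3-column table with generators 1, 2, 3 (illustrative).
print(uniform_design(7, [1, 2, 3]))
```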
14. Pareto chart
A Pareto chart, also known as a primary-secondary factor chart, is a method used to identify the primary factors among various factors affecting product quality, thus determining directions for quality improvement. This is because most problems in reality are usually caused by a few factors.
The 80/20 rule from economics, applied to management, distinguishes the "vital few" from the "trivial many," helping to identify key factors and solve major problems. For visual clarity, this is drawn as a Pareto chart.
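A Pareto chart is straightforward to produce: sort the categories by count and overlay the cumulative percentage. A minimal matplotlib sketch with illustrative defect counts:

```python
import matplotlib.pyplot as plt

defects = {"scratches": 52, "misalignment": 31, "porosity": 9,
           "discoloration": 5, "other": 3}
items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels, counts = zip(*items)
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                 # sorted counts: the "vital few" first
ax1.set_ylabel("count")
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o", color="tab:red")  # cumulative %
ax2.set_ylabel("cumulative %")
ax2.set_ylim(0, 100)
plt.title("Pareto chart of defect causes")
plt.show()
```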
15. Balanced Scorecard
Robert S. Kaplan (Professor of Leadership Development at Harvard Business School) and David P. Norton (founder and president of Renaissance Global Strategy Group and head of the Nolan Norton Institute) developed a new organizational performance management approach, the Balanced Scorecard, after a year-long study of 12 companies at the leading edge of performance measurement. It was first published in the January/February 1992 issue of the Harvard Business Review.
The core of the Balanced Scorecard lies in its departure from traditional performance management methods that solely focus on financial metrics. The Balanced Scorecard argues that traditional financial accounting models can only measure past events. While this approach was effective in the industrial age, it is incomplete in the information society. Organizations must invest in areas such as customers, suppliers, employees, organizational processes, technology, and innovation to achieve sustainable growth. Based on this understanding, the Balanced Scorecard method posits that organizations should examine their performance from four perspectives: customers, business processes, learning and growth, and finance. The objectives and performance indicators in the Balanced Scorecard originate from organizational strategy, translating the organization's mission and strategy into tangible goals and metrics.
16. Tolerance Design
Tolerance refers to the allowable range of fluctuations in quality characteristic values from an economic perspective. Tolerance design studies the relationship between tolerance range and quality cost, achieving a comprehensive balance between quality and cost. It is performed after system design is completed and the optimal combination of controllable factors is determined through parametric design; at this stage, the quality level of each component (parameter) is relatively low, and the parameter fluctuation range is relatively wide.
The purpose of tolerance design is to determine appropriate tolerances for each parameter based on the optimal conditions established during the parameter design phase. The basic idea of tolerance design is as follows: Based on the magnitude of the contribution (impact) of each parameter's fluctuations to product quality characteristics, consider from an economic perspective whether it's necessary to grant smaller tolerances to parameters with significant impacts (e.g., replacing lower-quality components with higher-quality ones). Doing so can, on the one hand, further reduce fluctuations in quality characteristics, improve product stability, and reduce quality losses; on the other hand, increasing the quality grade of components will increase product costs. Therefore, the tolerance design phase must consider both further reducing quality losses that still exist after parameter design and the increased costs that reducing the tolerances of some components will incur. A balance must be struck between these two considerations to make the optimal decision.
In summary, tolerance design aims to determine the optimal tolerance for each parameter so as to minimize the total loss (the sum of quality loss and cost). Reducing the tolerance of certain parameters increases cost, but it improves quality and reduces the losses caused by functional fluctuation; the goal is therefore to find the tolerance design scheme that minimizes the total loss. The main tools used in tolerance design are the quality loss function and orthogonal polynomial regression.
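The trade-off can be written down directly with Taguchi's quadratic loss function L(y) = k(y − m)², whose expected value per unit is k(σ² + (μ − m)²). A sketch comparing two component grades; all numbers are hypothetical:

```python
# Expected quality loss per unit: k * (sigma^2 + (mu - m)^2).
k = 50.0          # loss coefficient, cost per unit^2 of deviation
m = 10.0          # target value of the quality characteristic

grades = {
    # name: (unit cost, process mean, process standard deviation)
    "standard grade": (2.00, 10.0, 0.20),
    "precision grade": (3.50, 10.0, 0.05),
}

for name, (cost, mu, sigma) in grades.items():
    loss = k * (sigma**2 + (mu - m) ** 2)
    print(f"{name:16s} cost {cost:.2f} + expected loss {loss:.2f} "
          f"= total {cost + loss:.2f}")
# Here the tighter tolerance pays for itself: 3.50 + 0.125 < 2.00 + 2.00.
```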
Parametric design and tolerance design are complementary. According to the principles of parametric design, each level of the product (system, subsystem, equipment, component, part), especially the final product delivered to the customer, should minimize quality fluctuations and reduce tolerances to improve product quality and enhance customer satisfaction. However, on the other hand, each level of the product should have a strong ability to withstand various disturbances (including processing errors), meaning that its subordinate components should be allowed a relatively large tolerance range. For subordinate components, a scientifically reasonable tolerance is determined through tolerance design, serving as the basis for conformity control during the manufacturing stage. However, it should be noted that this conformity control differs from traditional quality management conformity control in two ways:
First, the inspection process should not only record whether a product passes or fails, but also record the specific values of the quality characteristics; it should not only give the non-conformance rate, but also formulate scientific statistical methods based on the theory of quality loss to give data on the quality level.
Second, adopt online quality control methods adapted to robust designs (such as advanced SPC methods) to monitor product quality fluctuations in real time, provide feedback, and adjust process parameters. For existing problems, continuously take measures to improve process design and enhance product quality, making quality characteristics closer and closer to the target value while reducing total losses. When conditions permit, the tolerance range should be reduced.
17. Design of Experiments (DOE)
Design of Experiments (DOE) is a mathematical theory and method that studies how to formulate appropriate experimental plans for effective statistical analysis of experimental data. Orthogonal experimental design utilizes a standardized table—an orthogonal array—to rationally arrange experiments and scientifically analyze the results using the principles of mathematical statistics. It is a scientific method for handling multi-factor experiments. The advantage of this method is that it can clarify the influence of each factor on the experimental index through a highly representative number of experiments, determine the order of importance of the factors, and find better production conditions or optimal parameter combinations. Experience has shown that orthogonal experimental design is a highly effective method for solving multi-factor optimization problems. An orthogonal array is a table constructed using combinatorial mathematics theory based on Latin squares and orthogonal Latin squares. It is a fundamental tool of orthogonal design, possessing the characteristics of balanced distribution and neat comparability.
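As a concrete miniature, a 2³ full factorial (the smallest orthogonal layout) lets each main effect be estimated as the difference of means between the two levels of that factor. The response values below are illustrative:

```python
import itertools
import numpy as np

# All 8 runs of a 2^3 design for factors A, B, C coded as -1 / +1.
runs = list(itertools.product([-1, 1], repeat=3))
design = np.array(runs)
response = np.array([54, 62, 51, 60, 70, 79, 68, 77], dtype=float)

for j, factor in enumerate("ABC"):
    high = response[design[:, j] == 1].mean()
    low = response[design[:, j] == -1].mean()
    print(f"main effect of {factor}: {high - low:+.2f}")
# The factors can then be ranked by the magnitude of their effects.
```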
Experimental design should follow three principles: randomization, local control, and replication. Experimental designs can be broadly categorized into four types: factorial design, block design, regression design, and uniform design. Factorial design is further divided into full implementation (full factorial) and partial implementation (fractional factorial); the partial implementation method is what we commonly call orthogonal experimental design.
Experimental design has a history of over 70 years and is widely used in the United States and Japan in almost all industrial fields, including agriculture, pharmaceuticals, chemicals, machinery, metallurgy, electronics, automobiles, aviation, and aerospace, to improve product quality. The US automotive industry standard QS9000, "Requirements for Quality Systems," lists experimental design as a mandatory technique. The well-known parametric design method is also developed based on orthogonal experimental design. Furthermore, experimental design not only finds optimal parameter combinations but, in many cases, can also qualitatively determine the impact of various error factors, such as environmental factors and processing errors, on desired product characteristics by setting error columns and conducting variance analysis, and then taking improvement measures to eliminate these errors. Therefore, for some simple engineering problems, direct application of experimental design can yield satisfactory and robust design solutions. Experimental design can also be applied to improve enterprise management, adjust product structure, and develop more efficient production plans.
18. Horizontal comparison method
Benchmarking, also known as horizontal comparison, is a continuous process of comparing and measuring a product's performance, quality, and after-sales service against the strongest competitors or industry leaders, and then taking corrective measures.
The horizontal comparison method comprises two important aspects. Firstly, it involves developing a plan to continuously identify and establish benchmarks at the domestic and international advanced levels, discovering gaps in one's own products through comparison and comprehensive thinking. Secondly, it involves continuously implementing improvement measures in design, processes, and quality management, learning from others' strengths and compensating for one's own weaknesses to continuously improve the product's technical and quality levels, surpassing all competitors and reaching and maintaining world-class standards. Employing the horizontal comparison method is not simply imitation, but rather creative learning from others.
Through in-depth thinking and research, and by drawing on the strengths of various sources, we can carry out technological innovation and achieve breakthroughs in product performance. Only by mastering groundbreaking technologies can we potentially lead the world. To better implement the horizontal comparison method, a relevant database should be established and continuously updated. The horizontal comparison method has been widely applied and has yielded significant results in the United States.
19. Statistical Process Control (SPC)
Statistical Process Control (SPC) was proposed by Dr. Shewhart in the 1920s. Since World War II, SPC has gradually become a fundamental method of online quality control in Western industrialized countries, and SPC methods have continued to develop. For example, Boeing, in order to implement robust design principles, introduced a new supplier quality assurance specification, D1-9000, whose main change was the requirement to establish an Advanced Quality System (AQS). The AQS incorporates Taguchi's concept of quality loss into the quality management of the manufacturing stage, proposing a complete set of manufacturing quality control requirements adapted to robust design.
According to SPC theory, fluctuation in product quality characteristics is the root cause of quality problems. Quality fluctuations exhibit statistical regularity, and anomalies can be detected with control charts; Statistical Process Control and Diagnosis (SPCD) theory can then be used to identify and eliminate the causes of those anomalies. Commonly used Shewhart control charts include the mean–range (x̄-R) chart, the mean–standard deviation (x̄-s) chart, the median–range (x̃-R) chart, the individuals–moving range (x-Rs) chart, the fraction nonconforming (p) chart, the number nonconforming (pn) chart, the count of defects (c) chart, and the defects per unit (u) chart. SPC is a powerful tool for keeping a production line stable and reducing quality fluctuation.
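For the x̄-R chart, the control limits follow from the grand mean and average range via the standard tabulated constants (for subgroups of size 5: A2 = 0.577, D3 = 0, D4 = 2.114). A minimal sketch with simulated measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# 25 rational subgroups of 5 measurements each (simulated data).
subgroups = rng.normal(loc=10.0, scale=0.1, size=(25, 5))

xbar = subgroups.mean(axis=1)
r = subgroups.max(axis=1) - subgroups.min(axis=1)
xbarbar, rbar = xbar.mean(), r.mean()

A2, D3, D4 = 0.577, 0.0, 2.114          # constants for subgroup size n = 5
print(f"x-bar chart: CL={xbarbar:.3f}  UCL={xbarbar + A2*rbar:.3f}  "
      f"LCL={xbarbar - A2*rbar:.3f}")
print(f"R chart:     CL={rbar:.3f}  UCL={D4*rbar:.3f}  LCL={D3*rbar:.3f}")
# Points outside these limits signal an assignable cause to diagnose.
```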
The AQS (Advanced Quality System) first requires identifying the critical characteristics of the product during the manufacturing stage. For these critical characteristics and their related components, robust process design is required to determine a robust process. During manufacturing, monitoring measures for these critical characteristics must be established. In addition to using standard SPC (Statistical Process Control) control charts, AQS provides three types of small-batch control charts: individual moving range control chart, target control chart, and proportional control chart; two improved control charts: moving average control chart and geometric moving average control chart; and measures to improve the sensitivity of control chart monitoring. Based on monitoring results and actual needs, process parameters or process design are improved to correct any human, machine, material, method, and environmental factors that cause quality fluctuations, thereby achieving continuous quality improvement.
20. Brainstorming
Brainstorming, also known as the intellectual stimulation method, was proposed by Osborn, the founder of modern creativity studies, and is a group training method for creative abilities. It has four basic principles:
First, avoid critical commentary; comments on proposed ideas should be made later.
Second, encourage "free imagination." The more absurd the proposed ideas, the more valuable they may be.
Third, a certain number of ideas must be proposed. The more ideas proposed, the more likely it is to yield more valuable ideas.
Fourth, explore the combination and improvement of ideas. In addition to the ideas proposed by the participants themselves, participants are asked to point out how to combine several ideas to introduce another new idea; or participants are asked to elaborate on the topic and improve the ideas proposed by others.