Research on Online Detection of Damaged Instant Noodles Using Machine Vision

Abstract: This paper proposes a method for real-time online detection of instant noodle breakage. A computer vision system acquires images of the noodle blocks; an "encirclement algorithm" tailored to the characteristics of instant noodles segments each image, and a "shaving algorithm" then removes the edge burrs to obtain the "true boundary" of the noodle block. Finally, the ratio of the block's area to that of its circumscribed rectangle is computed to quickly determine whether the block is broken. Experiments show that the method achieves a high recognition rate at high speed.

Keywords: instant noodles; shape recognition; machine vision; image segmentation

0. Introduction

In recent years, fierce competition in the instant noodle market has driven manufacturers to reduce costs, improve product quality, build brand image, and strengthen their competitiveness. Without compromising product quality, this requires large-scale operation, whose core goal is reducing costs through scale: expanding production, tapping the enterprise's potential, and seeking efficiency from both scale and management so as to lower production and administrative costs.
Therefore, the level of automation in instant noodle production has become an issue that can no longer be ignored. The downstream end of current production lines already uses automatic noodle conveyors, noodle injectors, bagging machines, bag-folding machines, and carton-packing machines (such as those used by Ting Hsin International Group), but the sorting of defective products still relies on manual visual inspection, which is slow, strongly subjective, and prone to false positives and false negatives. An efficient automated inspection system for instant noodles would therefore improve product quality, free up labor, save costs, and meet the needs of modern industry.

Machine vision allows machines to replace the human eye in measurement and judgment, and the technology has been applied successfully to the quality inspection of many products both domestically and internationally. Compared with manual inspection, machine vision offers a high degree of automation, strong recognition capability, and high measurement accuracy, and it has broad application prospects. With image processing techniques maturing, computer hardware costs falling, and processing speeds rising, machine vision has become increasingly attractive for the automatic quality inspection and grading of food and agricultural products.

Defective products sorted from an instant noodle line by appearance mainly comprise broken blocks, over- or under-fried blocks, oversized blocks, clumped blocks, and piled-up blocks; over 80% are broken blocks. This paper takes square noodle blocks as an example for rapid online breakage detection.
Because the edges of a noodle block are uneven and often carry burrs, conventional shape recognition methods are difficult to apply. This paper uses a "shaving algorithm" to remove the burrs and obtain the block's true boundary, then uses the ratio of the block's area to that of its circumscribed rectangle to decide whether the block is defective. Experiments show that the method offers a high recognition rate and fast speed, making it suitable for online detection.

1. Experimental Materials and Apparatus

The experimental samples came from the Baixiang instant noodle series of Henan Zhenglong Food Co., Ltd.: Big Bone Noodles, a representative fried instant noodle produced by Workshop 3 of the Zhenglong Group Xinzheng Branch. A total of 128 samples were collected, of which 70 were defective.

The apparatus consisted of a computer, a CCD camera, an image acquisition card, a light source, and a conveyor line (Figure 1). The CCD camera is an American Uniq-uc610, and the acquisition card is a Canadian Matrox Meteor-II with an external trigger function. The system uses an enclosed lighting chamber with two symmetrical 30 W fluorescent lamps mounted on either side of its upper section. The conveyor line is black; the noodle blocks are carried to the vision inspection section by an automatic arranging conveyor that aligns the blocks and keeps the spacing between them uniform. Images are captured on trigger: the detection element is an OMRON E3C-DS10 reflective sensor used with an E3C-3C amplifier. When a noodle block arrives on the belt, the reflected signal changes and the sensor outputs a pulse that triggers the camera. The images captured in this experiment are 640×474 pixels, stored in 24-bit BMP format.
The image processing algorithms were implemented and tested on the Visual C++ 6.0 platform.

2. Image Processing

The acquired images first undergo preprocessing (noise filtering, image segmentation, and edge burr removal) to facilitate the subsequent shape judgment.

2.1 Image Noise Removal

This experiment uses a fast median filter to remove noise. It suppresses noise effectively while keeping the image's contour boundaries from blurring, and its processing speed meets the requirements of online detection.

2.2 Image Segmentation

The image background is black while the noodle blocks are relatively light. Experiments show that when the I component of the HSI color space is used as the decision variable, the histogram has an ideal bimodal shape (Figure 3), and choosing the valley as the grayscale threshold yields a reasonable object boundary. The conversion from RGB to HSI is nonlinear; only the intensity component is needed here, and it is given by

I = (R + G + B) / 3

Image segmentation is usually done by thresholding. In this study, however, the grayscale values of deep-set areas and holes in the noodle blocks differ little from the background, so simple thresholding easily removes parts of a block as background, affecting later processing (Figure 4b). This paper therefore segments the image with an "encirclement algorithm". The basic idea is to scan the image for the edge points around each noodle block and set the pixels outside those edges to white (R, G, and B all set to 255) while leaving the block region unchanged. The method first scans vertically, starting from the leftmost column of the image and proceeding column by column. Scanning begins at the top of a column and stops when a block pixel is encountered, which is recorded as point 1 (Figure 4a).
(If no block pixel is encountered before the bottom boundary, the whole column consists of background pixels; they are set to white and the next column is scanned.) Scanning then begins from the bottom of the same column, again stopping at the first block pixel, which is recorded as point 2. The segment between points 1 and 2 is the block region, whose grayscale values are left unchanged; the pixels outside points 1 and 2 are background and are set to white. After the vertical scan is complete, the same method is applied horizontally, cleanly removing the background while fully preserving the block region (Figure 4c).

2.3 Removing Edge Burrs

Noodle block edges are often uneven, with irregular serrated burrs. During manual inspection people simply ignore these burrs when judging the overall shape, but they hinder computer shape recognition. This study uses a "shaving algorithm" to remove the burrs and obtain the block's "true boundary". The steps are as follows. First, a vertical scan starts from the leftmost column of the image and proceeds column by column (Figure 5). For each column, the total number of pixels N<sub>total</sub> between the topmost block pixel (a) and the bottommost block pixel (b), and the number of block pixels N<sub>block</sub> among them, are recorded, and the proportion of block pixels in the interval is computed as Ratio = N<sub>block</sub> / N<sub>total</sub>. If Ratio < 0.70, all pixels in that column are set to white (R, G, and B set to 255). The scan ends at the first column with Ratio >= 0.70 and N<sub>block</sub> > 30 (because the burrs are jagged and taper gradually, experiments show that at this point they are essentially eliminated without damaging the block's shape).
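As an illustration, the left-edge scan just described might be sketched as follows. This is a minimal sketch, not the authors' original program: it assumes the image has already been segmented into a boolean mask (true marks a noodle-block pixel), and the function and constant names are invented for clarity.

```cpp
#include <cassert>
#include <vector>

// Thresholds quoted in the paper (Section 2.3); names are illustrative.
const double RATIO_MIN  = 0.70;  // minimum share of block pixels between a and b
const int    MIN_PIXELS = 30;    // minimum block pixels for a true edge column

// Scans columns left to right. Burr columns (Ratio < 0.70 or too few block
// pixels) are whitened, i.e. cleared to background. Returns the index of the
// first column that qualifies as the true left boundary, or -1 if none does.
int findLeftBoundary(std::vector<std::vector<bool> >& img) {
    int rows = (int)img.size();
    int cols = (int)img[0].size();
    for (int x = 0; x < cols; ++x) {
        int top = -1, bottom = -1, nBlock = 0;
        for (int y = 0; y < rows; ++y) {
            if (img[y][x]) {
                if (top < 0) top = y;   // topmost block pixel (a)
                bottom = y;             // bottommost block pixel (b)
                ++nBlock;
            }
        }
        if (top >= 0) {
            int nTotal = bottom - top + 1;           // pixels between a and b
            double ratio = (double)nBlock / nTotal;  // Ratio = Nblock / Ntotal
            if (ratio >= RATIO_MIN && nBlock > MIN_PIXELS)
                return x;                            // true boundary reached
        }
        // burr or background column: set the whole column to background
        for (int y = 0; y < rows; ++y) img[y][x] = false;
    }
    return -1;
}
```

The right, top, and bottom scans would follow the same pattern with the loop directions swapped.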
The boundary value `left` is recorded at this point. The top, right, and bottom edges are processed the same way, recording the corresponding boundary values `up`, `right`, and `bottom`, which completes the shaving process (the result is shown in Figure 6).

3. Defect Judgment

3.1 Feature Extraction

The two features extracted in this study are the initial area of the noodle block, A<sub>initial</sub>, and the area ratio of the shaved block to its circumscribed rectangle, R<sub>r</sub>. A<sub>initial</sub> is obtained by counting the block pixels in the scanned image. R<sub>r</sub> is given by

R<sub>r</sub> = A<sub>block</sub> / [(right − left + 1)(bottom − up + 1)]

where A<sub>block</sub> is the area of the shaved block, counted the same way as A<sub>initial</sub>, and (right − left + 1)(bottom − up + 1) is the area of the shaved block's circumscribed rectangle, with `up`, `bottom`, `left`, and `right` being the four boundary values obtained in Section 2.3.

3.2 Judging Whether a Block Is Defective

Extensive experimental statistics show that R<sub>r</sub> for a normal block lies between 0.95 and 1.00, while R<sub>r</sub> for a defective block is generally below 0.95; a block with R<sub>r</sub> < 0.95 is therefore judged defective. The experiments also revealed that a very small number of defective blocks were broken parallel to an edge. Even after shaving, these retain a rectangular shape, and their R<sub>r</sub> is close to that of normal blocks, causing misjudgment. Their common characteristic, however, is a large missing area, giving a significantly smaller initial area (A<sub>initial</sub> < 8000, versus about 10000 for a normal block). Blocks that pass the R<sub>r</sub> test are therefore checked again against A<sub>initial</sub>: if A<sub>initial</sub> < 8000, the block is judged defective.
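The two-stage test above can be sketched as a small decision function. This is a minimal sketch using the thresholds quoted in the paper (R<sub>r</sub> < 0.95 and A<sub>initial</sub> < 8000); the function name and parameter layout are assumptions, not the original code.

```cpp
#include <cassert>

// Two-stage defect test: the area-ratio test catches broken edges, and the
// initial-area test catches blocks broken parallel to an edge, which stay
// rectangular but are noticeably smaller.
bool isDefective(int aInitial,          // area before shaving (pixel count)
                 int aBlock,            // area of the shaved block
                 int left, int right,   // shaved block bounds (inclusive)
                 int up, int bottom) {
    // area of the circumscribed rectangle of the shaved block
    int rect = (right - left + 1) * (bottom - up + 1);
    double rr = (double)aBlock / rect;  // area ratio Rr
    if (rr < 0.95) return true;         // broken edge: ratio test fails
    if (aInitial < 8000) return true;   // rectangular break: block too small
    return false;                       // normal block
}
```

For example, a block with A<sub>initial</sub> = 10000, a shaved area of 9600, and a 100×100 circumscribed rectangle gives R<sub>r</sub> = 0.96 and passes both tests.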
This paper conducted experiments on 128 noodle blocks (70 of them defective) and achieved an accuracy of 96.8%.

4. Conclusion

This paper simulated an instant noodle production line with an image acquisition device that uses trigger control to capture high-contrast images of the noodle blocks, and developed an effective software system on the Visual C++ 6.0 platform. An "encirclement algorithm" removes the background, and a "shaving algorithm" processes the block edges, removing the interference of the surrounding burrs. Finally, two feature parameters are extracted, the area ratio R<sub>r</sub> of the shaved block to its circumscribed rectangle and the initial area A<sub>initial</sub> of the block, to determine whether the block is damaged. This transforms a complex shape recognition problem into an area calculation. Although the method looks simple, its effectiveness is difficult to match with conventional shape recognition algorithms; it takes a novel approach and offers a high recognition rate, fast speed, and strong practicality, fully meeting the requirements of online detection. The authors' innovations are:

1. Designing an image acquisition device that simulates an instant noodle production line and uses trigger control to capture high-contrast images;
2. Using the I value of each pixel in the HSI color space as the decision variable and segmenting the image with an "encirclement algorithm";
3. Processing the block edges with a "shaving algorithm" to remove the interference of the surrounding burrs, solving the interference problem in subsequent computer shape recognition;
4. Extracting two feature parameters, the area ratio R<sub>r</sub> of the shaved block to its circumscribed rectangle and the initial area A<sub>initial</sub> of the block, to determine whether a block is damaged, achieving a high recognition rate.