Research and Application of Computer Vision System Design for Virtual Instruments
Abstract: This paper uses the LabVIEW virtual instrument development platform and the IMAQ Vision image processing toolkit to perform fruit edge detection with computer vision technology. The software implements image processing procedures such as median filtering, thresholding, image segmentation, and morphological filtering. The thinning results obtained after erosion and dilation meet the design requirements. This research demonstrates that applying a virtual instrument computer vision system to fruit image processing is feasible and has broad application prospects.

Keywords: virtual instrumentation; computer vision; LabVIEW; IMAQ Vision; image processing

0 Introduction

With the continuous development of computer technology, machine vision has advanced rapidly over the last thirty years, and its applications now span fields such as industry, agriculture, scientific research, and the military. Traditional image processing software, however, is usually written in a procedural language, and users must spend considerable effort developing programs for each specific task. The development cycle is therefore long, the resulting program is tied to particular hardware (the image acquisition card), and its portability is poor.
In recent years, PCs have continued to develop: Pentium processors with the multimedia-oriented MMX instruction set, stable operating systems, the PCI local bus, and user-friendly interfaces have laid a solid hardware foundation for applying virtual instruments to image processing and computer vision [1]. In computer-based inspection of external fruit quality, research institutes at home and abroad have proposed stem discrimination and fruit-axis determination methods that use image morphology and the boundary shape features of apples; shape feature parameters are extracted with respect to the fruit axis, and fruit shape is graded with a genetic neural network. For color detection, a Wigger transformation is first applied to the RGB color space, and the colored area is then obtained by accumulating the pixels of the hue object. For defect detection, defects are segmented using color-ratio features: bruises (brown) and sunburn (white) are first detected by color, feature parameters are then extracted from the remaining suspected defect areas, and black or gray suspected defect regions are classified by a genetic neural network. This study focuses on color and shape in the fruit grading process and provides a method for detecting fruit contour edges.

1 Composition of the Virtual Instrument Computer Vision System

1.1 Hardware Configuration

The virtual instrument computer vision system consists of a light source, a CCD camera, an image acquisition card, and a PC. To improve acquisition accuracy and speed, this design uses a Panasonic WV-CP240/G color camera, an NI IMAQ PCI/PXI-1411 high-speed, flexible image acquisition card, and a PC.

1.2 Software Configuration

Digital image processing is the core of a computer vision system, and in this virtual instrument system it is implemented in software.
The software component is therefore the system's core, consisting of a development platform, application software packages, and device drivers. This system uses LabVIEW 7.1 as its development platform, because NI's IMAQ Vision software integrates machine vision and image processing functions into LabVIEW and makes full use of its graphical interface for rapid display, analysis, and processing, covering numerical analysis, signal processing, and device driving. This meets the system's functional requirements and improves work efficiency. IMAQ Vision provides a complete image processing function library and functional modules for the platform, including a series of MMX-optimized functions. It offers a large number of image acquisition and processing functions commonly used in research and engineering, such as various edge detection algorithms, automatic thresholding, morphological algorithms, filters, and the FFT.

2 Image Acquisition and Processing Program Design

The program is divided into two main modules: an image acquisition and storage module and an image processing module. The acquisition and storage module passes the image signal captured by the CCD through an A/D converter and stores the result in the computer in the required format. The image processing module comprises several parts, including image preprocessing, image segmentation, feature extraction, and filtering.

2.1 Digital Image Acquisition

Using the LabVIEW 7.1 platform and the driver for the PCI/PXI-1411 image acquisition card, the image acquisition and storage module was designed as shown in Figure 1. The acquisition board performs A/D conversion on the standard video signal (PAL or NTSC) from the CCD, and the quantized data is transferred to the computer's RAM over the PCI bus.
The image acquisition card is controlled through the functions provided by NI-IMAQ, and the image is stored in file formats such as BMP, JPEG, and PNG using the sub-VIs of the Quick VI in LabVIEW 7.1 [2].

[align=center] Figure 1 System image acquisition program module[/align]

2.2 Image Processing

2.2.1 Median Filtering

Because of various noise sources during acquisition, isolated pixels often appear in the image. These pixels differ markedly from their neighbors and degrade the acquired image; if they are not filtered out, they affect subsequent region segmentation, analysis, and processing [3]. Nonlinear filters are well suited to removing this kind of noise. This design adopts median filtering, which effectively suppresses noise, removes impulse interference and image scanning noise, avoids the blurring of image detail caused by linear filters, and preserves edge information. The median is defined as follows [4]: given n numbers X1, X2, X3, ..., Xn sorted so that X1 ≤ X2 ≤ X3 ≤ ... ≤ Xn,

y = Med{X1, X2, ..., Xn} = X((n+1)/2) if n is odd, or (X(n/2) + X(n/2+1)) / 2 if n is even   (1)

where y is called the median of the sequence X1, X2, X3, ..., Xn. Denoising an image with a median filter takes the following steps: set the size of the filter module, for example 5×5; move the module over the image so that its center coincides with a pixel position; read the gray values of the pixels covered by the module; sort these gray values from smallest to largest; take the middle value as the median; and assign the median to the pixel at the module's center. The difference between this pixel and its neighbors then approaches zero, eliminating isolated noise points. IMAQ Vision can apply median filtering to color images.
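The windowed-median steps above can be sketched in pure Python. This is a minimal illustration, not the IMAQ Vision implementation; the function name and the grayscale list-of-lists representation are assumptions for the example, and border pixels are simply left unchanged.

```python
def median_filter(image, size=3):
    """Median-filter a grayscale image (list of lists), following the steps
    in the text: slide a size x size module over the image, sort the covered
    gray values, and assign the median to the module's centre pixel.
    Border pixels (where the module would leave the image) are copied as-is."""
    h, w = len(image), len(image[0])
    r = size // 2
    out = [row[:] for row in image]          # start from a copy of the input
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [image[y + dy][x + dx]  # gray values under the module
                      for dy in range(-r, r + 1)
                      for dx in range(-r, r + 1)]
            window.sort()                    # order from smallest to largest
            out[y][x] = window[len(window) // 2]  # middle value = median
    return out

# An isolated bright noise pixel in a flat region is removed:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter(img)[1][1])  # -> 10
```

Because the median of the 3×3 window (eight 10s and one 255) is 10, the impulse disappears while a genuine edge, where half the window differs, would survive, which is exactly the edge-preserving property the text describes.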
The method is as follows. Extract the red, green, and blue planes from the original 32-bit image; in IMAQ Vision, an RGB color pixel is represented by a 32-bit integer. Median filtering is performed on each 8-bit plane of the RGB image, attenuating random noise while keeping boundaries clear and preserving the size characteristics of the fruit. The processed planes are then recombined with the corresponding bitwise operations to produce a new, denoised color image. A comparison of the images before and after processing is shown in Figure 2.

[align=center] Figure 2 Comparison of images before and after median filtering[/align]

2.2.2 Color Image Thresholding

IMAQ Vision's RGB thresholding algorithm is used. With manually adjusted RGB thresholds, the RGB image is first converted to a grayscale image, and thresholding is then applied using the grayscale histogram to obtain a binary image. In the RGB color coordinate system, if only chromaticity is of interest, only the relative values of R, G, and B need be considered. The relative values r, g, and b are called chromaticity coordinates and are computed as

r = R / Rm,  g = G / Gm,  b = B / Bm

where Rm, Gm, and Bm are the maximum component values in the RGB color coordinate system. Traditional algorithms demand good lighting conditions, with a large grayscale difference between background and object. In IMAQ Vision, however, each RGB channel is thresholded as 8 bits, so a high-quality binary image can still be obtained under poor lighting. The grayscale histogram of the image processed in Figure 2(b) is shown in Figure 4(a).

2.2.3 Image Segmentation

Thresholding alone rarely yields an ideal segmentation, so morphological algorithms are also needed.
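The chromaticity coordinates and per-channel thresholding above can be illustrated with a small pure-Python sketch. The function names, the manually chosen threshold bands, and the tuple-per-pixel image layout are assumptions for the example; in the paper itself the thresholds are set interactively in IMAQ Vision.

```python
def chromaticity(pixel, maxima):
    """Chromaticity coordinates as defined in the text: each channel is
    divided by the maximum value of that channel (Rm, Gm, Bm)."""
    R, G, B = pixel
    Rm, Gm, Bm = maxima
    return (R / Rm, G / Gm, B / Bm)

def threshold_rgb(image, lo, hi):
    """Per-channel RGB thresholding: a pixel maps to 1 in the binary image
    only if every 8-bit channel lies inside its [lo, hi] band."""
    return [[1 if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else 0
             for px in row]
            for row in image]

img = [[(200, 40, 30), (20, 20, 20)],
       [(180, 60, 50), (10, 200, 10)]]
# Keep "reddish" fruit pixels: high R, low-to-moderate G and B.
binary = threshold_rgb(img, lo=(150, 0, 0), hi=(255, 100, 100))
print(binary)  # -> [[1, 0], [1, 0]]
```

Thresholding each channel separately, rather than a single gray level, is what lets the method tolerate the poor lighting conditions mentioned above: a dark but still distinctly red pixel can pass the R band even when its overall brightness is low.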
Image segmentation divides a digital image into non-overlapping regions. To avoid damaging the original image, edge detection is performed before segmentation to obtain complete boundaries. Erosion is carried out first, with 8-connectivity as the connectivity criterion, since it is close to human perception. A 7×7 matrix template serves as the structuring element, with the center of the matrix as its origin. As shown in Figure 3, shifting the structuring element B to a point a gives Ba. If Ba is entirely contained in X, the point a is recorded; the set of all such points a is the erosion of X by B:

E(X) = {a | Ba ⊆ X} = X ⊖ B

[align=center] Figure 3 Schematic diagram of the erosion and dilation algorithms[/align]

In Figure 3(a), X is the object being processed and B is the structuring element. For any point a in the shaded area, Ba is contained in X, so the erosion of X by B is that shaded area, which lies inside X and is smaller than X. Starting from the complete edge detection result, multiple erosions can be performed in IMAQ Vision. Similarly, shifting B to a point a gives Ba; if Ba hits X (overlaps it in at least one point), the point a is recorded, and the set of all such points is the dilation of X by B:

D(X) = {a | Ba ↑ X} = X ⊕ B

In Figure 3(b), X is the object being processed and B is the structuring element. For any point a in the shaded area, Ba hits X, so the dilation of X by B is that shaded area. After multiple erosions, dilation is applied again, expanding back out to the edge to complete the segmentation [5]. The processing results are shown in Figure 4(b).
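The "contained in" and "hits" definitions above translate directly into code. Below is a minimal pure-Python sketch of binary erosion and dilation on a list-of-lists image; the function names are illustrative, pixels outside the image are treated as background, and a 3×3 structuring element stands in for the paper's 7×7 template (built the same way, just larger).

```python
def erode(X, B):
    """Erosion E(X) = {a | Ba contained in X}: keep point a only if the
    structuring element B, shifted to a, lies entirely inside the object."""
    h, w = len(X), len(X[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = all(0 <= y + dy < h and 0 <= x + dx < w and X[y + dy][x + dx]
                         for dy, dx in B)
            out[y][x] = 1 if inside else 0
    return out

def dilate(X, B):
    """Dilation D(X) = {a | Ba hits X}: keep point a if B shifted to a
    overlaps the object in at least one pixel."""
    h, w = len(X), len(X[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            hit = any(0 <= y + dy < h and 0 <= x + dx < w and X[y + dy][x + dx]
                      for dy, dx in B)
            out[y][x] = 1 if hit else 0
    return out

# 3x3 structuring element centred on its origin (8-connectivity neighbourhood).
B = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

X = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]
# Eroding the 3x3 square leaves only its centre pixel; dilating that
# centre with the same B grows it back to the 3x3 square, mirroring the
# erode-then-dilate sequence described in the text.
print(sum(map(sum, erode(X, B))))             # -> 1
print(sum(map(sum, dilate(erode(X, B), B))))  # -> 9
```

The erode-then-dilate round trip shown in the usage is exactly the sequence the text applies to the fruit boundary: erosion removes thin protrusions and small spots, and dilation then restores the surviving regions to their original extent.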
[align=center] Figure 4 Related processing effect diagram[/align]

2.2.4 Morphological Filtering

After segmentation, spots of various sizes may remain on the edges and in the background, as shown in Figure 4(b), and these affect the results, so morphological filtering is required [6]. A mathematical-morphology thinning algorithm is used for filtering: points are removed from the image while its original shape is preserved. Whether a point can be deleted is judged from the configuration of its 8 neighbors, as shown in Figure 5.

[align=center]Figure 5 Determining whether a point can be deleted from its 8 neighboring points[/align]

In the figure: (a) cannot be deleted, as it is an internal point and the skeleton must be preserved; (b) cannot be deleted, as it belongs to the boundary skeleton; (c) can be deleted, as it is not a skeleton point; (d) cannot be deleted, since deleting it would disconnect the originally connected parts; (e) can be deleted, as it is not a skeleton point; (f) cannot be deleted, as it is the endpoint of a straight line. The thinned and filtered image is saved as shown in Figure 4(c), preserving its edge information, and the result is then restored to the shape it had before erosion. The final result is shown in Figure 4(d).

3 Conclusion

A virtual instrument computer vision system makes full use of the platform's powerful functions and high extensibility. Practice has shown that during development, developers can concentrate on image processing and analysis without spending large amounts of time writing source files, interface management programs, and low-level image processing routines. This greatly shortens development time and improves efficiency. With the rapid development of PC technology, computer vision systems based on virtual instruments have broad application prospects.

References:
[1] Jin Hao. Research on computer vision system based on virtual instrument [J]. Electronic Technology Application, 2000, (4): 10-12.
[2] Mao Yimei. Design and implementation of virtual instrument vision system [J]. Journal of Instrumentation, 2002, 23(3): 192-193.
[3] Wang Sihua, Chen Lifeng. New computer vision technology and its application in IC mark quality inspection [J]. Electronic Technology Application, 2000, (9): 25-27.
[4] Huo Hongtao, Lin Xiaozhu, He Wei, et al. Digital Image Processing [M]. Beijing: Beijing Institute of Technology Press, 2002.
[5] Xu Guili, Mao Hanping, Hu Yongguang. Measurement of leaf area based on the reference object method of computer vision technology [J]. Transactions of the Chinese Society of Agricultural Engineering, 2002, 18(1): 154-158.
[6] Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing [M]. Beijing: Publishing House of Electronics Industry, 2003.