Applications of Machine Vision - Design of an Image Positioning System for Drilling Machines
2026-04-06 04:52:00
Preface

Computer vision applications can be broadly categorized into four main types: localization, measurement, recognition, and defect detection, with localization being the most widespread. Machine vision systems can be used to inspect electronic components on motherboards and to guide robotic arms. By mounting a CCD on a robotic arm and using image recognition for localization, the arm can carry out high-risk medical work such as virus research and drug mixing. Besides precision, this approach also keeps people out of harm's way.

Coordinate Transformation After Image Localization

Many image-matching libraries are available on the market, and users can choose the one that suits them. The system described below uses eVision EasyMatch from Euresys, an image-matching library based on grayscale correlation. It is very fast, achieves sub-pixel matching accuracy, and can locate the template image (Golden Image) reliably even under rotation, scaling, and translation. This paper therefore discusses only the "displacement" and "rotation" of the two-dimensional coordinates after image localization.

● Coordinate displacement formula:
X2 = X1 + ΔX
Y2 = Y1 + ΔY
[align=center] Figure 1 Schematic diagram of coordinate displacement[/align]
● Coordinate rotation:
(1) Convert (X1, Y1) to polar coordinates → (X1, Y1) = (R1, θ1), where
R1 = √(X1² + Y1²)
θ1 = arctan(Y1 / X1), i.e. the arctangent function
(2) θ2 = θ1 + θ, where θ is the rotation angle. Then
X2 = cos(θ2) × R1 = cos(arctan(Y1/X1) + θ) × √(X1² + Y1²)
Y2 = sin(θ2) × R1 = sin(arctan(Y1/X1) + θ) × √(X1² + Y1²)
[align=center] Figure 2 Schematic diagram of coordinate rotation[/align]
● When coordinate displacement and rotation occur simultaneously, calculate the displacement first, then apply the rotation formula to obtain the final result.

The following sections describe how to design an automated positioning system that combines "mechanical motion" with "computer vision".
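The combined transform above (displacement first, then the polar-coordinate rotation) can be sketched in a few lines of C++. The function name and signature here are illustrative rather than taken from the original system, and std::atan2 stands in for arctan(Y/X) so that all four quadrants are handled correctly:

```cpp
#include <cmath>
#include <cassert>

// Sketch: apply the displacement (dx, dy) first, then rotate about the
// origin by 'theta' radians via polar coordinates, as in the formulas
// above. Names are illustrative, not from the original source code.
void transformPoint(double x1, double y1,
                    double dx, double dy,   // displacement (dX, dY)
                    double theta,           // rotation angle in radians
                    double& x2, double& y2)
{
    // Step 1: coordinate displacement.
    double xs = x1 + dx;
    double ys = y1 + dy;

    // Step 2: rotation. R1 = sqrt(x^2 + y^2), th2 = th1 + theta.
    double r  = std::sqrt(xs * xs + ys * ys);
    double th = std::atan2(ys, xs) + theta;

    x2 = std::cos(th) * r;
    y2 = std::sin(th) * r;
}
```

With zero rotation this reduces to the plain displacement formula; with zero displacement it reduces to the polar-coordinate rotation.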
Basic Architecture

● GEME-3000 main controller: includes an HSL control card and runs Windows XP.
● 3-axis positioning platform: Mitsubishi servo motors + ball screws.
● Motion controller: HSL-4XMO control module.
● Computer vision component: an IEEE 1394 CCD acquires the images, and Euresys eVision's EasyMatch performs the pattern match used to calculate the positioning offset correction.

The complete system is shown in Figure 3.
[align=center] Figure 3 System Architecture Diagram[/align]

System Calibration

● Mitsubishi driver calibration: 10,000 pulses/rev, meaning the motor rotates once for every 10,000 pulses sent by the motion control card.
● Ball screw pitch vs. pulses/rev: for example, with a pitch of 10 mm/rev and 10,000 pulses/rev, the resolution is 1 μm/pulse; the screw advances 1 μm for each pulse.
● FOV (field of view) selection: the FOV should be larger than the positioning point. Too small a FOV shrinks the "preliminary positioning" error the system can tolerate; too large a FOV makes the positioning point small in the image and increases the image-positioning error.
● CCD working distance selection: the working distance should be longer than the drill pin so that the pin cannot strike the workpiece during focusing. Once the FOV and working distance are fixed, the required lens and extension ring can be calculated.

Teaching Procedure

● Start the system and home all three axes. After homing, manually place the workpiece on the positioning platform for "preliminary positioning".
● Manually jog the Z-axis slowly downward until it is close to the top of the positioning platform (approximately 0.5–1.0 mm).
● Manually jog the X/Y axes so that the punch pin sits just above the first hole on the workpiece, then slowly lower the Z-axis to insert it into the first hole.
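The screw/driver calibration above reduces to a simple linear scaling, sketched below. The constant and function names are illustrative, and the values are the example numbers from the text (10,000 pulses/rev, 10 mm/rev pitch), not properties of every setup:

```cpp
#include <cmath>
#include <cassert>

// Example calibration constants from the text: 10,000 pulses per motor
// revolution and a 10 mm/rev ball-screw pitch give 1 um of travel per
// pulse. Names are illustrative, not from the original system.
const double kPulsesPerRev  = 10000.0;
const double kPitchMmPerRev = 10.0;

// Travel (in mm) produced by a given pulse count.
double pulsesToMm(long pulses)
{
    return pulses * kPitchMmPerRev / kPulsesPerRev;
}

// Pulse count needed for a target travel, rounded to the nearest pulse.
long mmToPulses(double mm)
{
    return std::lround(mm * kPulsesPerRev / kPitchMmPerRev);
}
```

A 0.5 mm correction, for example, maps to mmToPulses(0.5) = 500 pulses.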
If the insertion shows the positioning is off, manually adjust the workpiece until it is accurate.
● After precise positioning, raise the Z-axis until the complete "positioning point" is visible in the CCD's live image, then execute the flowchart shown in Figure 4.
[align=center] Figure 4 Image Processing Software Flowchart[/align]

Automatic Positioning

● The workpiece is manually placed on the 3-axis positioning platform for "preliminary positioning", and the system is started.
● The system drives the 3-axis positioning platform to move the CCD above the positioning point (at two different positions), captures an image at each, and compares it against the "taught" standard image.
● It then calculates the offset (Shift X/Y) and rotation angle of the "preliminary positioning":

    tx = GoldeXY[CCD_Find][1] - m_Find.GetCenterX();   // X error in pixels
    ty = GoldeXY[CCD_Find][0] - m_Find.GetCenterY();   // Y error in pixels
    if (CCD_Find == 0) {
        // First positioning point: machine coordinate minus the pixel
        // error converted to distance (Calibration: distance per pixel)
        shiftx = ZeroX - tx * Calibration;
        shifty = CCD_Y - ty * Calibration;
    } else {
        // Second positioning point
        dx = CCD_Locate[1][0] - tx * Calibration;
        dy = CCD_Y - ty * Calibration;
        angle = atan2(dy - shifty, shiftx - dx);   // workpiece rotation
        CalNewLocate(angle, shiftx, shifty);
    }

● Finally, it recalculates the new coordinates (Point Table) of all holes on the workpiece through the "polar coordinate transformation":

    void CalNewLocate(F64 angle, F64 shiftx, F64 shifty)
    {
        int i;
        F64 P[TOTAL_POINT * 2];
        F64 t;

        for (i = 0; i < TOTAL_POINT; i++) {
            // Rotate each taught hole coordinate by the measured angle
            // using the polar-coordinate formula, then add the measured
            // offset. PointTable[] stands for the taught coordinates;
            // the original listing is truncated, so this loop body is a
            // reconstruction from the rotation formulas given earlier.
            t = sqrt(PointTable[i*2]   * PointTable[i*2] +
                     PointTable[i*2+1] * PointTable[i*2+1]);        // R1
            P[i*2]   = cos(atan2(PointTable[i*2+1], PointTable[i*2]) + angle) * t + shiftx;
            P[i*2+1] = sin(atan2(PointTable[i*2+1], PointTable[i*2]) + angle) * t + shifty;
        }
    }

Conclusion

Machine vision systems have not only significantly improved industrial productivity but have also extended what operators can do. They spare human eyes, improve inspection accuracy, can operate 24/7, inspect at high speed, and maintain relatively stable accuracy.
Furthermore, machine vision systems show great promise in hazardous work environments, in the rapid handling of military weapons, on real-time high-volume production lines, and in high-precision tasks such as measurement, positioning, and object identification.