With the advent of Industry 4.0, machine vision is playing an increasingly important role in the field of intelligent manufacturing. In order to enable more users to acquire basic knowledge about machine vision, we have prepared this introductory learning material on machine vision for you.
Application areas of machine vision:
Identification
Decoding standard 1D and 2D barcodes
Optical character recognition (OCR) and optical character verification (OCV)
Detection
Color and defect detection
Inspection of the presence or absence of parts or components
Target position and orientation detection and measurement
Size and capacity inspection
Measurement of preset markers, such as the distance between holes.
Robotic arm guidance
Output spatial coordinates to guide the robot arm to precise positioning
Classification of machine vision systems
Smart camera
Embedded-system-based
PC-based
Composition of machine vision system
Image acquisition: light source, lens, camera, acquisition card, mechanical platform
Image processing and analysis: industrial control host, image processing and analysis software, graphical user interface.
Judgment execution: electrical control unit, mechanical unit
Light source --- optical path principle
A camera cannot see objects; it sees light reflected from the surface of an object.
Specular reflection: a smooth surface reflects light in a single direction, at an angle equal to the angle of incidence.
Diffuse reflection: a rough surface scatters light in all directions.
Divergent reflection: most real surfaces combine smooth and rough features, so reflected light spreads over a range of angles around the specular direction.
Light source --- function and requirements
Its role in machine vision
Illuminate the target and increase brightness
To create an effect that is beneficial to image processing
Overcome the influence of ambient light and ensure image stability
Used as a tool or reference for measurement
Good light field design requirements
Clear contrast, with a well-defined boundary between the target and the background.
The background should be as plain and uniform as possible, so that it does not interfere with image processing.
For color applications, colors should be reproduced accurately, with moderate brightness and no overexposure or underexposure.
Light source --- light field construction
Bright field: Light reflected into the camera
Dark field: Light reflected away from the camera
Light source --- constructing a light source
Different lighting techniques produce different effects on the target being measured; ball bearings, for example, look very different under bright-field and dark-field illumination.
Camera
Types: line scan & area scan, interlaced/progressive, monochrome/color, digital/analog, low/high speed, CCD/CMOS
Specifications: pixel size, resolution, sensor (target) size, spectral response curve, dynamic range, sensitivity, speed, noise, fill factor, size, weight, operating environment, etc.
Working modes: free-run, triggered (multiple modes), long exposure, etc.
Transmission methods: GigE, Camera Link, analog
Cameras -- Classified by Image Sensor
CCD camera: A camera that uses a CCD image sensor as its image sensor. It integrates photoelectric conversion, charge storage, charge transfer, and signal readout, and is a typical solid-state imaging device.
CMOS camera: A camera using a CMOS image sensor, which integrates the photosensitive element array, image signal amplifier, signal readout circuit, analog-to-digital converter, image signal processor, and controller on a single chip. It also offers the advantage of programmable random access to local pixel regions.
Cameras -- differentiated by the color of the output image
Monochrome camera: A camera that outputs monochrome images.
Color camera: A camera that outputs color images.
Cameras -- differentiated by output signal
Analog signal camera: The signal transmitted from the sensor is converted into an analog voltage signal, i.e., a normal video signal, and then transmitted to the image acquisition card.
Digital signal cameras: The sensor's pixel output is digitized inside the camera and transmitted as digital data. Digital cameras include IEEE 1394 (FireWire) cameras, USB cameras, GigE cameras, Camera Link cameras, etc.
Cameras -- Classified by Sensor Type
Area scan camera: A camera whose pixels are distributed in an area on the sensor, and the image it produces is a two-dimensional "area" image.
Line scan camera: A camera whose sensor is arranged in a line (one or three rows), and the image it produces is a one-dimensional "line" image.
Camera -- Sensor Size
The size of the photosensitive area of the image sensor. This size directly determines the physical magnification of the entire system. Examples include 1/3" and 1/2". Most analog camera sensors have an aspect ratio of 4:3 (H:V), while digital camera sensors have various aspect ratios such as 1:1, 4:3, and 3:2.
Camera -- Pixels
The pixel is the smallest unit of the image formed on the camera chip. Taking a 2-megapixel camera as an example, the full frame is 1600 × 1200 pixels, formed on a 1/1.8-inch CCD chip.
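Dividing the sensor's active-area width by the horizontal pixel count gives the pixel pitch. A minimal sketch of that arithmetic using the 2-megapixel example above; the nominal 1/1.8" active area of roughly 7.2 mm x 5.4 mm is an assumption, since exact dimensions vary by sensor:

```python
# Estimate the pixel pitch from the sensor's active-area width and the
# horizontal pixel count. The ~7.2 mm width assumed for a 1/1.8" sensor
# is nominal; check the sensor datasheet for exact values.

def pixel_pitch_um(sensor_width_mm: float, h_pixels: int) -> float:
    """Pixel pitch in micrometers, assuming square pixels with no gaps."""
    return sensor_width_mm / h_pixels * 1000.0

print(f"pixel pitch: {pixel_pitch_um(7.2, 1600):.2f} um")  # ~4.50 um
```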
Camera -- Resolution
The resolution of an area scan camera is determined by the resolution of the chip used in the camera, which is the number of pixels arranged on the chip's target surface. Typically, the resolution of an area scan camera is expressed by two numbers: horizontal and vertical resolution, such as 1920 (H) x 1080 (V). The first number indicates the number of pixels per row, i.e., a total of 1920 pixels, and the second number indicates the number of rows of pixels, i.e., 1080 rows.
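Resolution becomes meaningful for measurement once it is related to the field of view: dividing the field-of-view width by the horizontal pixel count gives the object-side size of one pixel. A minimal sketch, where the 100 mm field of view is a hypothetical example value:

```python
# Object-side spatial resolution: how much of the scene one pixel covers.
# The 100 mm field-of-view width is a hypothetical example value.

def mm_per_pixel(fov_width_mm: float, h_resolution: int) -> float:
    """Size of one pixel projected onto the object plane."""
    return fov_width_mm / h_resolution

print(f"{mm_per_pixel(100.0, 1920):.4f} mm/pixel")  # ~0.0521 mm/pixel
```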
Camera -- Frame Rate and Line Frequency
The frame rate/line frequency of a camera indicates how frequently it captures images. Area scan cameras are usually rated by frame rate, in fps (frames per second); for example, 30 fps means the camera can capture at most 30 frames per second. Line scan cameras are usually rated by line frequency, in kHz; for example, 12 kHz means the camera can capture at most 12,000 lines of image data per second.
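For a line scan camera, the required line frequency follows from the object's speed and the object-side pixel size: each scan line must be captured in the time the surface moves one pixel. A minimal sketch reproducing a 12 kHz figure, with the speed and pixel size as hypothetical example values:

```python
# Required line rate for a line scan camera viewing a moving surface:
# each scan line must be captured in the time the surface travels one
# object-side pixel. Speed and pixel size are hypothetical example values.

def required_line_rate_khz(speed_mm_per_s: float, mm_per_pixel: float) -> float:
    """Minimum line frequency (kHz) that avoids gaps between scan lines."""
    return speed_mm_per_s / mm_per_pixel / 1000.0

rate = required_line_rate_khz(1200.0, 0.1)  # 1.2 m/s web, 0.1 mm per pixel
print(f"required line rate: {rate:.1f} kHz")  # 12.0 kHz
```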
Camera -- Shutter Speed
Most CCD/CMOS cameras use electronic shutters, which control the sensor's light integration (exposure) time by controlling the width of electrical signal pulses. For general-performance cameras, shutter speeds can reach 1/10000 to 1/100000 of a second.
Rolling shutter: the shutter used on most CMOS image sensors. Exposure proceeds line by line, so each line starts and ends its exposure at a slightly different time.
Global shutter: a shutter used in CCD sensors and some CMOS sensors, in which all pixels on the sensor are exposed simultaneously.
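The shutter speeds quoted above matter because exposure time bounds motion blur: an object moving during the exposure smears across pixels. A minimal sketch of that blur budget, with all numbers as hypothetical example values:

```python
# Longest exposure that keeps motion blur below a given pixel budget.
# All numeric inputs below are hypothetical example values.

def max_exposure_s(speed_mm_per_s: float, mm_per_pixel: float,
                   max_blur_pixels: float = 1.0) -> float:
    """Exposure time at which a moving object smears by max_blur_pixels."""
    return max_blur_pixels * mm_per_pixel / speed_mm_per_s

t = max_exposure_s(500.0, 0.05)  # object at 0.5 m/s, 0.05 mm per pixel
print(f"max exposure: {t * 1e6:.0f} us")  # 100 us, i.e. 1/10000 s
```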
Camera -- Smart Camera
Intelligent industrial cameras are highly integrated, miniature machine vision systems. They integrate image acquisition, processing, and communication functions into a single camera, providing a multifunctional, modular, highly reliable, and easy-to-implement machine vision solution. Intelligent industrial cameras typically consist of an image acquisition unit, an image processing unit, image processing software, and network communication devices. Utilizing the latest DSP, FPGA, and high-capacity storage technologies, their intelligence level is continuously improving, meeting the diverse application needs of machine vision.
Lens - Key Parameters
Industrial lenses are mostly composed of multiple lens elements. In calculations, the lens's thickness is ignored and it is treated as an equivalent thin convex lens, i.e., an ideal convex lens.
Parameters: Focal length/Field of view/Object distance/Image distance/Aperture/Depth of field/Resolution/Magnification/Distortion/Interface
Resolution: the lens's ability to resolve fine detail on the object, often specified in line pairs per millimeter.
Distortion: the magnification at the center of the image differs from that at the edges.
Distortion correction is typically performed using a black-and-white grid image, and the process is not complicated. Generally, if the distortion is less than 2%, it is not noticeable to the human eye; if the distortion is less than one pixel of the CCD, it is also invisible to the camera.
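Under the ideal thin-lens model introduced above, the parameters are linked by 1/f = 1/u + 1/v and magnification m = sensor width / field-of-view width, which lets one estimate the focal length needed for a given setup. A minimal sketch; the object distance, sensor width, and field of view are hypothetical example values:

```python
# Ideal thin-lens estimate of the focal length for a given field of view:
# magnification m = sensor width / FOV width, and 1/f = 1/u + 1/v with
# v = m * u. All numeric inputs below are hypothetical example values.

def focal_length_mm(object_distance_mm: float,
                    sensor_width_mm: float,
                    fov_width_mm: float) -> float:
    """Focal length from the thin-lens equation: f = u * m / (1 + m)."""
    m = sensor_width_mm / fov_width_mm
    return object_distance_mm * m / (1.0 + m)

f = focal_length_mm(300.0, 6.4, 100.0)  # 1/2" sensor width, 100 mm FOV
print(f"focal length: {f:.1f} mm")  # ~18 mm
```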
Lens --- Telecentric Lens
In a measurement system, the object distance often changes, which in turn changes the image height, resulting in a change in the measured object size, thus introducing measurement error. Even when the object distance is fixed, measurement error can still occur because the CCD sensitive surface is not easily and precisely aligned with the image plane. Using an image-side telecentric objective can eliminate measurement errors caused by changes in object distance, while an object-side telecentric objective can eliminate measurement errors caused by inaccurate CCD positioning.