Abstract: This paper explores the CMOS image sensor technology and architecture requirements for machine vision applications. Furthermore, it analyzes in detail the basic components of machine vision, the conditions required for cameras to meet application needs, and how to design cameras that can satisfy various machine vision application requirements while achieving a balance between image quality and cost.
In general, machine vision involves connecting an electro-optical system (camera) to a processing unit such as a computer to perform image processing and control related systems. In other words, a machine vision system is a system or computer capable of visually identifying target objects. Computer-controlled systems can include production units, product quality control systems, and gripping/releasing equipment.
What are the requirements for machine vision?
A machine vision system includes an image sensor and a lens system, which together are usually referred to as the camera system. The camera typically connects to a computer via an electrical interface such as FireWire, USB, or Ethernet, and the computer in turn connects to a control device. The basic components are:
a) Camera
b) Computer (mainframe)
c) Frame grabber
d) Application software
Machine vision applications require a combination of hardware and software to operate successfully. While choosing appropriate hardware is crucial, the vision inspection software forms the core of every machine vision system. The sensor is typically driven by a pixel clock; its resolution, operating speed, gain, exposure time, and integration time are configured by the user through registers written over an SPI or I2C interface. The sensor outputs frame- and line-synchronization pulses along with the digital data to be processed. The sensor's electrical interface is CMOS, supporting frequencies up to 200 MHz; for higher speeds, an LVDS interface is required to ensure signal integrity.
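As an illustration of this register-style configuration, the sketch below converts an exposure time into the line-count value many sensors expect in their exposure registers. The register addresses, 16-bit field layout, and clock numbers are hypothetical examples, not taken from any specific sensor datasheet.

```python
# Sketch of register-based sensor configuration over SPI/I2C.
# All register addresses and field layouts are hypothetical.

REG_GAIN = 0x35         # hypothetical analog-gain register
REG_EXPOSURE_HI = 0x3B  # hypothetical 16-bit exposure register (high byte)
REG_EXPOSURE_LO = 0x3C  # hypothetical 16-bit exposure register (low byte)

def exposure_registers(exposure_s, pixel_clock_hz, line_length_pixels):
    """Convert an exposure time in seconds to a line count, the unit
    most sensors use for their exposure registers, split into bytes."""
    line_time_s = line_length_pixels / pixel_clock_hz
    lines = round(exposure_s / line_time_s)
    return (lines >> 8) & 0xFF, lines & 0xFF  # high byte, low byte

# 10 ms exposure at a 40 MHz pixel clock, 1500-pixel line length:
hi, lo = exposure_registers(0.010, 40_000_000, 1500)
# the host would then write these over I2C/SPI, e.g.:
#   bus.write_byte_data(SENSOR_ADDR, REG_EXPOSURE_HI, hi)
```

The actual unit conversion and register map must come from the datasheet of the sensor in use; the point is that exposure, gain, and windowing all reduce to a handful of register writes.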
The typical system architecture of a machine vision camera is as follows:
1. Camera with offline processing
This configuration uses a standalone camera with an industry-standard electrical interface such as FireWire, USB, or Gigabit Ethernet. The camera can be powered independently and transmits raw data to the host computer over that interface. Video can be transmitted as continuous frames or as single frames, depending on the application. Single-frame capture and transmission is called trigger mode: an external system sends an electronic pulse, typically at CMOS levels, to the camera, whose logic then initiates a frame integration and sends the scanned data to the host over the electrical interface. In some cases, the raw data is sent over a bus, together with synchronization signals and a clock, to a terminal data acquisition system such as a frame grabber. The frame grabber stores the data in its memory, where it can be accessed, processed, and acted on by the host application software.
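The trigger-mode flow described above can be sketched as follows; the `Camera` class is a hypothetical stand-in for a real driver or SDK, and the frame contents are placeholders.

```python
# Minimal simulation of trigger-mode capture: each external pulse
# starts one frame integration and returns one frame of raw data.
# Camera is a hypothetical stand-in for real driver/SDK calls.

class Camera:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.frames_captured = 0

    def on_trigger(self):
        """One CMOS-level trigger pulse -> integrate and read one frame."""
        self.frames_captured += 1
        # placeholder frame: a flat mid-gray image as nested lists
        return [[128] * self.width for _ in range(self.height)]

cam = Camera(640, 480)
frame = cam.on_trigger()  # an external system would generate this pulse
```

In a real system the pulse arrives on a dedicated trigger input, and the frame is delivered over the electrical interface rather than returned in memory.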
The electrical interfaces connecting the camera to the host include:
1. FireWire (IEEE 1394)
2. USB
3. GigE Vision, defined by the Automated Imaging Association (AIA)
4. Composite analog video
5. LVDS
One major advantage of offline processing is that a single host can handle both camera control and system control. However, because transmitting video data from the camera frame by frame introduces delay, this approach is unsuitable for real-time applications such as inspecting products on a conveyor belt during manufacturing.
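A rough calculation of the frame-transfer delay illustrates the point; this counts payload bits only and ignores protocol overhead, so real links are somewhat slower than this estimate.

```python
# Back-of-the-envelope frame-transfer latency over the host link,
# showing why frame-by-frame transfer adds delay before processing
# can even begin. Payload-only estimate; protocol overhead is ignored.

def transfer_time_ms(width, height, bits_per_pixel, link_bps):
    frame_bits = width * height * bits_per_pixel
    return frame_bits / link_bps * 1000.0

# 1280 x 1024 pixels at 8 bits over Gigabit Ethernet (1 Gb/s):
t = transfer_time_ms(1280, 1024, 8, 1_000_000_000)  # ~10.5 ms per frame
```

Roughly 10 ms of transfer time per megapixel frame, before any host-side processing, is significant when parts pass the camera many times per second.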
2. Camera with online processing
DSP processors have advanced rapidly in recent years and now have the computational power to execute complex algorithms in real time, enabling online processing inside the camera. Such a camera consists of a sensor and a DSP processor, connected either gluelessly or through some form of glue logic. DMA transfers the video data scanned by the sensor directly into DSP memory for frame-by-frame processing. The resulting control action is either initiated directly by the processor in the controlled system or issued as a command to the host.
The advantage of performing video processing in the camera is that data is processed in real time, with no packet-handling burden placed on the FireWire, USB, or Gigabit Ethernet interface. On DSP processors clocked above 300 MHz, byte-optimized assembly code can further accelerate real-time processing.
Real-time image processing is crucial for inspection applications, such as detecting fast-moving devices on conveyor belts. The next image frame can only be transmitted to the system after the computation of one frame has been completed and appropriate action taken.
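This constraint can be expressed as a simple per-frame time budget; the part rate and the processing and transfer times below are assumed example values, not measurements.

```python
# Real-time processing budget: on a conveyor, per-frame compute plus
# transfer time must fit within the part arrival period, since the
# next frame cannot be handled until the current one is finished.
# All numbers below are illustrative assumptions.

def frame_budget_ms(parts_per_second):
    """Maximum time available per frame before the next part arrives."""
    return 1000.0 / parts_per_second

budget = frame_budget_ms(20)      # 20 parts/s -> 50 ms per frame
processing_ms, transfer_ms = 30.0, 10.0
ok = (processing_ms + transfer_ms) <= budget  # fits the real-time budget
```

If the sum exceeds the budget, either the algorithm must be accelerated (e.g., in camera-side DSP code) or the line rate reduced.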
Crucial specifications:
For machine vision systems, image quality is a key factor directly affecting the final image processing result. Especially under natural lighting conditions, image quality varies significantly with changes in light source conditions. Adjusting camera settings such as gain and exposure time can compensate for unstable ambient light conditions, thereby improving image quality.
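One common compensation scheme is a proportional auto-exposure loop that nudges the exposure time toward a target mean brightness; the target level and exposure limits below are illustrative values, not from any particular camera.

```python
# Sketch of a simple proportional auto-exposure loop: scale the
# exposure time so the mean image brightness approaches a target.
# Target level and exposure limits are illustrative assumptions.

TARGET_MEAN = 128                        # desired mean gray level (8-bit)
EXPOSURE_MIN_MS, EXPOSURE_MAX_MS = 0.1, 40.0

def adjust_exposure(exposure_ms, pixels):
    mean = sum(pixels) / len(pixels)
    # scale exposure toward the target; clamp to sensor limits
    new_exposure = exposure_ms * TARGET_MEAN / max(mean, 1)
    return min(max(new_exposure, EXPOSURE_MIN_MS), EXPOSURE_MAX_MS)

# dark scene (mean gray level 64) -> exposure doubles toward the target
e = adjust_exposure(10.0, [64] * 1000)   # -> 20.0 ms
```

Real implementations damp the correction and also fold in gain, since long exposures blur moving objects.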
Depending on the end application and the distance between the sensor and the object being scanned, the light source can be a separate device or be mounted around the periphery of the camera lens. A light source mounted around the lens moves together with the camera. Commonly used light sources include halogen bulbs, fluorescent bulbs, and light-emitting diodes (LEDs).
Factors affecting image quality include:
1. Light intensity
2. Light direction
3. Target distance
4. Focal length
5. Sampling rate
6. Exposure time and gain
7. Dark leakage current
8. Resolution (number of pixels)
Lens selection and requirements:
High-quality lenses are just as important as sensor quality. A camera is an electro-optical system in which optical and electronic components work together to form an image, and image blur is often caused by an inappropriate lens choice. The optimal lens size and shape depend on the focal length; for smaller object distances, a C-mount lens is generally used. If the camera must operate in a highly reflective environment, a lens with an anti-reflective coating is preferable. The overall coverage of a camera depends on the required field of view, the working distance, and the lens used.
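The coverage can be estimated with the thin-lens approximation, valid when the working distance is much larger than the focal length; the sensor width, distance, and focal length below are example values, not from a specific lens datasheet.

```python
# Thin-lens approximation of field of view: for working distances
# much larger than the focal length, FOV ~ sensor size x distance / f.
# The numbers below are illustrative example values.

def field_of_view_mm(sensor_size_mm, working_distance_mm, focal_length_mm):
    return sensor_size_mm * working_distance_mm / focal_length_mm

# 8.6 mm-wide sensor, 500 mm working distance, 25 mm lens:
fov = field_of_view_mm(8.6, 500, 25)  # -> 172.0 mm of horizontal coverage
```

Run the same formula with the vertical sensor dimension to get the vertical coverage; together they define the area of view at that working distance.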
Another key parameter in lens design and selection is the final object resolution, expressed in millimeters or in mils (thousandths of an inch).
If the camera is used to measure the dimensions of objects in a production environment, the following important parameters need to be considered:
1. Field of view
2. Sensor resolution (number of pixels)
3. Image quality
4. Accuracy of the vision tools
For example, with an IBIS5-1300 sensor (1.3 megapixels, 1280 (H) x 1024 (V)) and a vision tool accurate to one-tenth of a pixel, a 5-inch-wide, 4-inch-high object imaged within a 6-inch horizontal field of view can be measured to an accuracy of roughly 0.0005 inches (6 in / 1280 pixels / 10).
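The arithmetic behind this example is simply the pixel size (FOV divided by pixel count) times the tool's sub-pixel accuracy; note that 6 in / 1280 / 10 works out to about 0.00047 in.

```python
# Measurement accuracy = (FOV / pixel count) x tool accuracy in pixels.

def measurement_accuracy_in(fov_in, pixels, tool_accuracy_px):
    pixel_size = fov_in / pixels           # inches per pixel on the object
    return pixel_size * tool_accuracy_px

# 6-inch horizontal FOV on the 1280-pixel-wide IBIS5-1300, with a
# vision tool accurate to 0.1 pixel:
acc = measurement_accuracy_in(6.0, 1280, 0.1)  # -> 0.00046875 in
```

The same formula shows the trade-off directly: halving the FOV or doubling the pixel count doubles the achievable measurement accuracy.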
Resolution: Depending on the field of view and the image granularity of the final scanned object, VGA to megapixel array standards are generally used.
Monochrome or color sensitivity: Most inspection applications can use monochrome sensors, which produce grayscale images. Typical applications include barcode readers, fingerprint scanners, and dimensional measurement in manufacturing equipment.
If color information about the object is required for quality and production control, such as grading and sorting peppers or apples, a color sensor is used. The sensor's 24-bit color data can represent approximately 16.7 million distinct colors.
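The color count follows directly from the bit depth: 24 bits means 8 bits per R, G, and B channel.

```python
# 24-bit color: 8 bits for each of the R, G, and B channels.
per_channel = 2 ** 8         # 256 levels per channel
colors = per_channel ** 3    # 16,777,216 distinct colors (~16.7 million)
```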
Sensor parameters and selection:
In machine vision applications, sensors and cameras need to support a variety of resolutions and frame rates. A programmable sensor allows a single camera design to serve a wider range of machine vision applications. Commonly supported features include:
* Window and resolution selection
* User-programmable high frame rates
* Standard CMOS electrical interface
* Low sensor leakage current
* Wide dynamic range
Reliability and sensor performance must be ensured in industrial operating environments. The equipment should be industrial-grade and typically operate between 0 and 80 degrees Celsius.
Applications:
* Guidance: systems using robotic pick-and-place
* Inspection: texture, surface, trademark, and assembly checks
* Measurement: physical dimensions of product components and assembled parts
* Recognition: pick-and-place devices, robotics, character reading, code reading
Cypress Semiconductor offers image sensors with high frame rates and user-selectable parameters, while operating within industrial temperature ranges, making them ideal for machine vision camera designs. For the IBIS and LUPA series sensors, frame rates range from 30 to 500 frames per second.