Image sensors are core components in various industrial and surveillance cameras, portable VCRs, digital cameras, scanners, and more. Currently, this rapidly growing market has expanded into fields such as toys, mobile phones, PDAs, automobiles, and biotechnology.
Image sensor definition and types
An imaging lens projects an image of a scene illuminated by external light (or its own light) onto its image plane, forming a two-dimensional light intensity distribution (optical image). A sensor capable of converting this two-dimensional optical image into a one-dimensional time-series electrical signal is called an image sensor. Image sensors are a crucial component of digital cameras.
Depending on the components used, image sensors can generally be divided into two main categories: CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide Semiconductor).
Besides the two common types above, there is a third, the CIS (Contact Image Sensor), generally used in scanners. Because it scans by contact (it must sit very close to the original document), it can only use LED light sources. Its depth of field, resolution, and color performance are currently inferior to those of CCD sensors, and it cannot scan transmissive film.
With the rapid development of solid-state imaging applications in the 1970s and 1980s, CCD technology and manufacturing processes were optimized for optical performance and imaging quality. For the last quarter of the 20th century, CCDs dominated image sensing, providing high-resolution, high-quality sensors that could be integrated onto a very small chip.
CMOS image sensors have seen rapid development in recent years, showing a strong potential to surpass CCDs. In mid-range and low-end applications, CMOS offers performance comparable to CCDs, while maintaining a clear price advantage. With further technological advancements, CMOS is poised to secure a place in high-end applications as well.
How Image Sensors Work
An image sensor is a semiconductor device that converts an optical image into digital signals. The tiny photosensitive elements embedded in the sensor are called pixels: the more pixels a sensor contains, the higher the resolution it provides. It plays the role of film, converting the light falling on each pixel into a digital signal.
The development history and characteristics of CCD and CMOS
The CCD was invented in 1969 by Willard S. Boyle and George E. Smith of Bell Labs in the United States.
At the time, Bell Labs was developing videophones and semiconductor bubble memory. Combining these two new technologies, Boyle and Smith came up with a device they named the “charge bubble device.”
Because the device could transfer charge along the surface of a semiconductor, it was first tried as a memory device; at the time, charge could only be "injected" into it from a temporary register. It was soon discovered, however, that the photoelectric effect itself generates charge on the device's surface, which meant the device could capture a digital image.
By the 1970s, researchers at Bell Labs were able to capture images using simple linear devices, and the CCD was born. CCDs are still widely used in digital cameras and astronomy.
We all know that the CCD is the medium that replaced traditional film in the digital age, and its working principle echoes the way the light-sensitive chemicals on film respond to light.
It is made of a highly sensitive semiconductor material that converts light into electrical charge, which an analog-to-digital converter chip then turns into a digital signal. After compression, the digital signal is stored in the camera's internal flash memory or on a memory card, so the data can easily be transferred to a computer, where the image can be edited as needed.
A CCD is composed of many photosensitive units, typically counted in megapixels. When light strikes the surface of a CCD, each photosensitive unit generates an electrical charge proportional to the light falling on it. The signals from all the photosensitive units together form a complete image.
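The light-to-number pipeline described above can be sketched as a toy model. The quantum-efficiency, full-well, and 8-bit A/D figures below are illustrative assumptions, not values from any real CCD:

```python
# Toy model of a CCD's light-to-number pipeline (illustrative values only).
light = [[0.2, 0.8],
         [0.4, 1.0]]     # relative illuminance at each photosite
QE = 0.6                 # assumed quantum efficiency (fraction of photons converted)
FULL_WELL = 10_000       # assumed full-well capacity (max electrons per photosite)

def digitize(frame, bits=8):
    """Convert photosite charge to digital codes via an ideal A/D converter."""
    max_code = 2 ** bits - 1
    out = []
    for row in frame:
        # each photosite collects charge in proportion to light, up to full well
        electrons = [min(v * FULL_WELL * QE, FULL_WELL) for v in row]
        # the A/D converter maps the charge onto the available digital codes
        out.append([round(e / FULL_WELL * max_code) for e in electrons])
    return out

print(digitize(light))   # each number is one pixel of the digital image
```

The clamp at `FULL_WELL` mirrors a real photosite saturating in bright light.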
The most significant characteristics of CCD are:
1. Mature technology;
2. High image quality;
3. High sensitivity, low noise, and wide dynamic range;
4. Fast response, self-scanning capability, minimal image distortion, and no afterimages;
5. Manufactured with very large-scale integration (VLSI) technology, giving high pixel integration and precise dimensions.
There are many metrics for evaluating the quality of a CCD sensor, such as pixel count, CCD size, and signal-to-noise ratio. Among these, pixel count and CCD size are the most important metrics. Pixel count refers to the number of photosensitive elements on the CCD.
We can think of the images we capture as being composed of many small dots, each dot being a pixel. Obviously, the more pixels there are, the clearer the image will be. If the CCD doesn't have enough pixels, the clarity of the captured image will be greatly affected.
Therefore, the more pixels a CCD has, the better. However, increasing the number of pixels in a CCD to achieve better image quality inevitably leads to a problem: increased manufacturing costs and decreased yield.
Therefore, in response to a series of issues such as cost, a CMOS sensor with lower cost, lower power consumption, and higher integration has emerged.
The term CMOS may be familiar from computing, where a small CMOS memory chip stores the basic settings a system needs to boot; in imaging, however, CMOS refers to the semiconductor process used to build the sensor itself.
The manufacturing technology of CMOS sensors differs little from that of ordinary computer chips. They are built mainly on silicon, with negatively doped N-type and positively doped P-type semiconductors coexisting on the same chip. The current generated by these complementary transistors as light strikes each pixel can be recorded and converted into an image by the processing chip. It was later realized that CMOS devices could also serve as image sensors for digital photography.
CMOS image sensors are a typical type of solid-state imaging sensor, sharing a common historical origin with CCDs. A CMOS image sensor typically consists of several parts, including a pixel array, row drivers, column drivers, timing control logic, an analog-to-digital converter (AD converter), a data bus output interface, and a control interface. These parts are usually integrated onto a single silicon chip. Its operation generally involves reset, photoelectric conversion, integration, and readout.
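The reset, photoelectric conversion, integration, and readout steps can be sketched as a simple model of one pixel's operating cycle. The reset voltage and sensitivity figures are illustrative assumptions, not specs of any real sensor:

```python
def pixel_cycle(illuminance_lux, integration_s, sensitivity=5.0, v_reset=1.0):
    """One operating cycle of an idealized CMOS active pixel.

    sensitivity is in V/(lux*s) and v_reset in volts; both are made-up
    illustrative values.
    """
    # 1. Reset: the photodiode node is charged to the reset voltage.
    v_node = v_reset
    # 2-3. Photoelectric conversion + integration: photocurrent discharges the
    #      node in proportion to illuminance x time, down to 0 V at most.
    v_node -= min(sensitivity * illuminance_lux * integration_s, v_reset)
    # 4. Readout: the signal is the voltage drop from the reset level.
    return v_reset - v_node

print(pixel_cycle(1.0, 0.02))    # dim scene, 1/50 s exposure: small signal
print(pixel_cycle(100.0, 0.02))  # bright scene: the pixel saturates
```

The saturation case is why bright highlights clip: once the node is fully discharged, extra light adds nothing.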
The photoelectric information conversion function of CMOS is basically similar to that of CCD. The difference lies in the way the information is transmitted after photoelectric conversion between the two sensors.
CMOS offers simple signal readout, a fast output rate, low power consumption (only about one tenth that of a CCD chip), small size, light weight, high integration, and a low price.
Because CMOS sensors are cheaper to manufacture and achieve higher yields than CCD sensors, several well-known manufacturers have been increasing their R&D investment in CMOS since 2000. The CMOS market is now growing several times faster than the CCD market.
Even Nikon's early digital SLRs included models that used CCD sensors, but in the digital cameras launched by Nikon, Sony, and Canon in recent years, CCDs have all but disappeared.
Although a CMOS sensor saves on camera cost, image quality remains the most important factor for a camera, and image quality was long CMOS's biggest drawback compared to CCD. Early CMOS sensors had a significant weakness: rapidly switching, high-frequency currents inevitably generate heat, which produces noise in the image and degrades quality.
If we compare CCD and CMOS sensors, the biggest advantage of CCD is its high image quality, while the biggest advantage of CMOS is its low cost and ease of mass production. However, the disadvantages of CMOS are being continuously improved.
Currently, some medium format digital cameras or digital backs still use CCD sensors because different products have different requirements for image quality. Therefore, the price of those medium format digital products is often much higher than that of ordinary digital cameras.
Therefore, it can be said that the main development direction of the future camera market will still be based on CMOS, and on this basis, the resolution and sensitivity of CMOS will be continuously improved.
As times progress, cost-saving is a business principle that every company adheres to. The future of CCDs is not necessarily in the camera market; in other fields, CCDs will also be widely used due to their own advantages.
With the continuous development of technology, I believe that one day in the future, there will definitely be more types of sensors. It's only a matter of time. When we look back at the film era, the CCD era, and the CMOS era that we have experienced, we will be truly amazed by the rapid development of technology.
Comparison of CCD and CMOS
1. Imaging process
CCD and CMOS use the same photosensitive material, so the basic principle of generating electrons when exposed to light is the same. The readout process, however, differs: a CCD transfers data frame by frame or line by line with the help of synchronization and clock signals, so the circuitry is complex and the readout rate low. A CMOS sensor reads signals much as DRAM does: the circuitry is simple and the readout rate high.
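The two readout styles can be illustrated with a small sketch; the frame data and function names here are hypothetical:

```python
def ccd_readout(frame):
    """Serial CCD-style readout: charge packets are shifted row by row into a
    serial register, then clocked out one pixel at a time -- every pixel must
    pass through the single output stage, in order."""
    out = []
    for row in frame:          # vertical shift: one row into the serial register
        for charge in row:     # horizontal shift: one pixel out per clock
            out.append(charge)
    return out

def cmos_readout(frame, row, col):
    """DRAM-style CMOS readout: any pixel can be addressed directly by its row
    and column, with no need to shift out the pixels before it."""
    return frame[row][col]

frame = [[10, 20],
         [30, 40]]
print(ccd_readout(frame))         # the whole frame, strictly in shift order
print(cmos_readout(frame, 1, 0))  # a single pixel, accessed directly
```

The direct addressing in `cmos_readout` is what makes windowed or region-of-interest readout cheap on CMOS sensors.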
2. Integration level
Because a CCD's readout circuitry is built with a specialized process, it is quite complex, and it is difficult to integrate A/D conversion, signal processing, automatic gain control, precision amplification, and storage onto a single chip. A CCD camera generally requires 3 to 8 chips, plus multiple non-standard supply voltages.
Thanks to large-scale integrated manufacturing processes, CMOS image sensors can easily integrate the above functions onto a single chip, and most CMOS image sensors have both analog and digital output signals.
3. Power supply, power consumption, and size
CCDs require multiple power supplies, resulting in high power consumption and a relatively large size. CMOS, on the other hand, only requires a single power supply (3V~5V), and its power consumption is only 1/10 that of CCDs. Highly integrated CMOS chips can be made quite small.
4. Performance Indicators
CCD technology is quite mature, while CMOS is developing rapidly. Although the image quality of today's CMOS sensors still trails that of high-end CCDs, some indicators (such as transfer rate) have already surpassed CCD. Thanks to its many advantages, organizations worldwide have developed numerous products using CMOS image sensors.
Six Key Hardware Specifications of CCD and CMOS Image Sensors
People sometimes wonder why two high-definition network cameras of the same class differ in image quality, or why nighttime results differ even with the same accessories. This comes down to the hardware specifications of the image sensor. Whether CCD or CMOS, an image sensor has six main hardware specifications: pixel count, sensor size, sensitivity, electronic shutter, frame rate, and signal-to-noise ratio.
Pixels:
A sensor contains many photosensitive units that convert light into electrical charge, forming an electronic image that corresponds to the scene. Each photosensitive unit corresponds to one pixel, so more pixels mean the sensor can resolve finer object detail and produce a clearer, sharper image.
Relating this to our Zhongwei Century products: a 1-megapixel network camera has a resolution of 1280×720; multiplying the two values gives roughly 920,000 pixels, i.e. nearly 1 million. A 1.3-megapixel camera has a resolution of 1280×960, close to 1.3 million pixels. In image quality, the 1.3-megapixel camera is slightly better than the 1-megapixel one.
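The arithmetic is simply width × height; a quick check, using the resolution figures from the text:

```python
# Pixel count = horizontal resolution x vertical resolution.
resolutions = {
    "1 MP (720p)":   (1280, 720),
    "1.3 MP (960p)": (1280, 960),
}

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {w}x{h} = {pixels:,} pixels ({pixels / 1e6:.2f} MP)")
```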
Sensor (target-surface) size:
The size of an image sensor's photosensitive area is usually quoted in inches. As with televisions, the figure nominally refers to the sensor's diagonal, such as 1/3 inch. A larger sensor area gathers more light, while a smaller one makes a greater depth of field easier to achieve.
For example, a 1/2-inch sensor gathers more light, while a 1/4-inch sensor more easily achieves a greater depth of field. In our Zhongwei Century lineup, the 1-megapixel product uses a 1/4-inch sensor, the 1.3-megapixel a 1/3-inch sensor, and the 2-megapixel a 1/2.7-inch sensor. The difference in image quality caused by these different sensor sizes is visible on screen.
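Note that these nominal "inch" sizes follow the old vidicon-tube convention rather than a literal diagonal measurement. The active-area dimensions below are typical representative figures (assumptions, not exact specs), used only to show how sensor size and pixel count together determine the size of each photosite:

```python
# Typical active-area dimensions (width, height) in mm for common nominal
# optical formats. Representative figures only -- real sensors vary.
FORMATS_MM = {
    '1/4"':   (3.6, 2.7),
    '1/3"':   (4.8, 3.6),
    '1/2.7"': (5.37, 4.04),
}

def pixel_pitch_um(fmt, horizontal_pixels):
    """Approximate photosite pitch in micrometres: sensor width / pixel count."""
    width_mm, _ = FORMATS_MM[fmt]
    return width_mm / horizontal_pixels * 1000

# A 1280-pixel-wide image on a 1/3" sensor:
pitch = pixel_pitch_um('1/3"', 1280)
print(f"{pitch:.2f} um")  # 4.8 mm / 1280 pixels = 3.75 um per photosite
```

Larger photosites collect more light per pixel, which is one reason a bigger sensor at the same resolution tends to perform better at night.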
Sensitivity:
Sensitivity describes how strongly the CCD or CMOS sensor and its supporting circuitry respond to incident light. The higher the sensitivity, the less light the sensor needs, and the faster the shutter speed can be, which matters especially when shooting moving vehicles or for nighttime surveillance.
This explains why cameras differ so much in night-vision capability. Sensitivity is expressed in V/(lux·s): V (volts) is the unit of signal voltage, and lux-seconds is the unit of exposure (illuminance multiplied by time). The higher this ratio, the better the night-vision performance.
Electronic shutter:
The electronic shutter is a term borrowed from the mechanical shutter of a film camera; it controls how long the image sensor is exposed to light. Because the sensor's response is the accumulation of signal charge, a longer exposure means a longer accumulation time and a larger output signal. A faster electronic shutter means a shorter exposure and less accumulated signal, making it suitable for shooting in strong light.
Frame rate:
This is the number of images recorded or played per unit of time; playing a series of images in rapid succession produces the illusion of motion. For human vision, at playback speeds above 15 frames per second the eye can hardly perceive discontinuity between images, and between 24 and 30 frames per second flicker becomes essentially imperceptible.
Frames per second (fps), or frame rate, indicates how many times a graphics sensor can update a scene per second. A higher frame rate results in a smoother, more realistic visual experience.
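A higher frame rate also means a tighter time budget per frame; the relationship is simply the reciprocal of the rate:

```python
def frame_interval_ms(fps):
    """Time available per frame in milliseconds: exposure, readout, and
    processing must all fit inside this window."""
    return 1000.0 / fps

for fps in (15, 25, 30):
    print(f"{fps} fps -> {frame_interval_ms(fps):.1f} ms per frame")
```

This is why a fast electronic shutter and a fast readout path go hand in hand with high frame rates.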
Signal-to-noise ratio:
Signal-to-noise ratio (SNR) is the ratio of signal voltage to noise voltage, and its unit is dB. Generally, the SNR values given by cameras are those with AGC (Automatic Gain Control) off. This is because when AGC is on, it amplifies small signals, thus increasing the noise level accordingly.
Typical SNRs are 45–55 dB. An SNR of 50 dB indicates a small amount of noise but good image quality; 60 dB indicates excellent image quality with virtually no visible noise. The higher the SNR, the better the noise control: the image looks cleaner, with fewer speckle-like noise points in night-vision footage.
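Since SNR here is a ratio of voltages, the decibel figure is 20·log10(Vsignal/Vnoise). A quick sketch (the voltage values are made-up examples):

```python
import math

def snr_db(v_signal, v_noise):
    """Signal-to-noise ratio in dB for voltage quantities (hence the factor 20)."""
    return 20 * math.log10(v_signal / v_noise)

print(f"{snr_db(1.0, 0.01):.0f} dB")   # 100:1 voltage ratio -> 40 dB
print(f"{snr_db(1.0, 0.003):.0f} dB")  # ~333:1 voltage ratio -> about 50 dB
```

Note that every extra 20 dB corresponds to a tenfold improvement in the voltage ratio, so 60 dB is ten times cleaner than 40 dB.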
Conclusion:
Currently, CCDs still outperform CMOS sensors overall. However, as CMOS image sensor technology advances, CMOS has kept its inherent advantages of high integration, low power consumption, and low cost while improving markedly in noise and sensitivity, narrowing the gap with CCDs. Some industry insiders even believe the future sensor market will belong to CMOS. So which sensor is better suited to the industrial camera market, or to future needs?
The answer is not clear-cut: choosing a particular chip involves many trade-offs.
CCD and CMOS image sensors each have their strengths and weaknesses. In the overall image sensor market they are both competitors and complements, often suited to different applications. Whichever proves more capable, advances in both technologies will undoubtedly drive the development of the image sensor market and the machine vision industry.