
On Machine Vision and Image Analysis Technology

2026-04-06 03:32:28 · #1
You might still prefer expert advice, but shrink-wrapped development kits now let developers with limited experience take on more vision-based projects.

Key takeaways:

● Not all vision-related projects require expert advice; with help from hardware and development-tool vendors, developers without vision-system experience can often complete most, if not all, of the development work themselves, saving their companies money.
● Before starting vision-system development, you must answer five or six fundamental questions; your answers largely determine the system's hardware cost.
● Beginning development in a menu-driven environment and then refining the program through graphical or syntax-based programming can significantly improve efficiency.
● Get used to the idea that vision systems require careful maintenance after installation; you often cannot anticipate the reasons an algorithm will need adjustment once the system has been running for a while.

Successfully developing a vision-based device can require a great deal of expertise, so many developers are reluctant to attempt the task and instead turn to consultants who have built their careers mastering the nuances of the technology. A good consultant can often save you several times the consulting fee, along with a significant amount of valuable time. Even so, compact, packaged software for vision-system development increasingly enables projects that even those without machine-vision or image-analysis experience can confidently undertake. If you lack the appropriate experience, a good first step is to determine which tasks require external assistance and which you can probably complete quickly yourself with pre-packaged software. Vendors of development tools and hardware can often help you make this judgment; in many cases, their websites offer tools to aid the decision.
Calling such a vendor will usually connect you with an application engineer who can gather information about your equipment. When appropriate, most vendors will recommend consultants familiar with their products. Often, the most economical approach is to use consulting help only for certain parts of a project, such as lighting.

Image analysis and machine vision are related yet distinct fields. In one sense, image analysis is a part of machine vision; in another, it is the broader discipline. In practice, the line between the two is often blurred. Machine-vision applications are usually commercially driven: machine vision is a critical part of many manufacturing processes, for example. "Image analysis," as most people understand it, is more likely to be applied in scientific research laboratories. Some experts say that image analysis often deals with less precisely defined operations than machine vision. Characterizing or classifying images of unknown objects, such as animal-tissue cells in academic or clinical pathology laboratories, is one example.

A research team at Cold Spring Harbor Laboratory (New York) and the Howard Hughes Medical Institute used MATLAB and its image-acquisition and image-processing toolboxes to study how the mammalian brain works. With the image-acquisition toolbox, researchers could stream microscope images directly from a camera into MATLAB and use the image-processing toolbox to analyze images over time. To make capture and analysis as easy as pressing a button, the researchers built a simple graphical user interface in MATLAB.

In machine vision, you usually have a general idea of the object a camera or image sensor is observing, but you need more specific information. Product-inspection equipment falls into the category of machine vision.
For example, you know which printed-circuit-board model an image depicts, but you must determine whether all the components are of the correct type and in the correct position. Determining component correctness and placement certainly involves image analysis, but the analysis is far more constrained than that used in clinical laboratories.

Classification of Machine Vision Tasks

Several experts categorize the major machine-vision tasks as follows:

● Counting components such as washers, nuts, and bolts, and extracting visual information from a noisy background.
● Measuring (also known as gauging) angles, dimensions, and relative positions.
● Reading, including operations such as extracting information from barcodes, OCR (optical character recognition) of characters etched on semiconductor chips, and reading 2-D DataMatrix codes.
● Comparing objects, for example comparing units on a production line with KGUs (known-good units) of the same type to identify manufacturing defects such as missing components or labels. The comparison may be a simple pattern subtraction or may involve geometric or vector-graphics matching algorithms; the latter is necessary when the objects being compared differ in size or orientation. Types of comparison include detecting the presence or absence of objects, matching colors, and comparing print quality. The objects being inspected may be as simple as aspirin tablets whose labeling must be verified before packaging.

Because this list is quite specific, you may be able to create machine-vision devices using menu-driven graphical development tools instead of writing code in a text-based language such as C++. Developers with a long history of programming machine-vision devices in text-based languages often prefer to stick with the tools they have succeeded with over the years, but you can indeed use one of several menu-driven graphical application-development packages.
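The pattern-subtraction style of comparison described in the task list can be sketched in a few lines. The following is only a minimal illustration; the threshold and pixel-count tolerance are made-up values, not taken from any vendor's tool:

```python
import numpy as np

def compare_to_kgu(uut, kgu, threshold=30, max_bad_pixels=50):
    """Flag a UUT image as defective when too many pixels differ
    from the known-good-unit (KGU) reference image.

    uut, kgu: same-shape 2-D uint8 grayscale arrays.
    threshold: per-pixel gray-level difference counted as significant.
    max_bad_pixels: how many significant pixels are tolerated.
    Returns (passed, bad_pixel_count).
    """
    diff = np.abs(uut.astype(np.int16) - kgu.astype(np.int16))
    bad = int(np.count_nonzero(diff > threshold))
    return bad <= max_bad_pixels, bad

# Toy data: an identical unit passes; a bright 10x10 blob
# (say, a missing component exposing the bare board) fails.
kgu = np.full((100, 100), 128, dtype=np.uint8)
good = kgu.copy()
bad_unit = kgu.copy()
bad_unit[10:20, 10:20] = 255
print(compare_to_kgu(good, kgu))      # (True, 0)
print(compare_to_kgu(bad_unit, kgu))  # (False, 100)
```

As the text notes, plain subtraction assumes the two images are registered; geometric or vector-matching algorithms are needed instead when units can appear at different sizes or orientations.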
Some in the industry criticize this reluctance to change, but ask yourself how you would feel if a consultant you hired for a specialized device used your project to try out a new software package for the first time.

Even among graphical tools, vendors distinguish between those that offer true programmability and those that only let users configure the device. The configurable approach gets the device up and running faster and provides much of the flexibility developers need. Programmable tools offer still greater flexibility but increase development time, especially for those using a tool for the first time. In some cases, the configurable and programmable approaches produce output in the same language, letting you use the programmable features to modify or improve a device you created with the configurable approach. The potential benefits of such flexibility are enormous: you can quickly get a device working at a rough level with the basic tools and then refine it with the more powerful ones. This approach reduces the likelihood of wasting time on refinements that you later discover have fundamental flaws.

Data Translation's Vision Foundry, a leading package for device development, exemplifies this toolkit approach: you quickly validate concepts using configurable, menu-based, interactive tools, then improve the device through its programming capabilities. In Vision Foundry, most programming tasks are accomplished by writing intuitive scripts.

The Adjustments Underway

Perhaps even more important is how easy interchange between the two approaches simplifies the inevitable adjustments that many machine-vision devices undergo. For example, in AOI (automated optical inspection), you might want to reject any UUT (unit under test) that does not match the KGU.
Alas, with this strategy the inspection process would likely reject most of the units you produce, even though most of them perform acceptably. A simple example of an AOI system rejecting a good unit over a minor difference is when the date code on a component of the UUT differs from the date code on the equivalent component of the KGU. In this case, you can anticipate the date-code issue during the design phase and ensure the system ignores image differences in areas containing the date code. Unfortunately, other minor differences are harder to predict, and you must expect to modify the equipment when you discover them. In fact, some AOI software can make such modifications almost automatically: if you inform the system that it has rejected a good unit, the software compares that unit's image with the original KGU and stops inspecting the discrepant area on subsequent units.

However, this approach can sometimes produce unsatisfactory results. Suppose the inspection system is installed in a room where outside light enters through a window, changing the illumination of the UUT. Human inspectors might adapt to the change without thinking, but it can cause the vision system to classify images of the same object as different objects, leading to unpredictable inspection failures. Although covering the window would keep outside light out, adjusting the test procedure so that the KGU passes under both lighting extremes may be more cost-effective. Either way, this example highlights the importance of lighting in machine vision and image analysis. Lighting is both a science and an art: various lighting techniques have different advantages and disadvantages, and the right way of lighting the UUT can solve or mitigate common machine-vision problems (Reference 1).

Project Costs and Timelines

Machine-vision projects vary widely in cost.
Some projects cost less than $5,000, including hardware, pre-packaged software development tools, and the labor of the equipment developers, although such low figures probably exclude the cost of adjusting and debugging the equipment until it performs satisfactorily. At the other end of the range, project costs exceed one million dollars; the most common examples are probably major upgrades to automated production lines in the automotive and aerospace industries. According to some suppliers, the most common projects run from tens of thousands to slightly over one hundred thousand dollars. The timeline from management approval to a vision system operating in production is usually under six months, and often only one or two.

Unsurprisingly, almost all vision projects begin with a handful of fundamental questions, whose answers largely determine the cost of the vision-system hardware:

● How many cameras are needed?
● What image resolution is required?
● Is color imaging necessary?
● How many frames per second must be acquired?
● Is a camera that produces analog output required? If so, you must select a frame grabber to convert the analog signal into digital form and, if necessary, synchronize frame acquisition with external trigger events (Reference 2). Although some frame grabbers for analog cameras can accept input from multiple cameras simultaneously, boards that provide one interface per camera are more common.
● If you choose a camera with a digital interface, will you use a "smart" camera capable of both acquiring and processing images, or will the camera send raw (unprocessed) image data to a host PC for processing?
● What interface standard or bus does the digital camera use to communicate with the host PC? Digital cameras on certain buses also require frame grabbers.
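The resolution, color, and frame-rate questions above translate directly into a required camera-to-host data rate, which in turn drives the interface choice. A back-of-envelope calculation (the pixel depths are illustrative assumptions):

```python
def raw_data_rate_mbps(width, height, bytes_per_pixel, fps):
    """Uncompressed camera-to-host data rate in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# A 1280x960 8-bit monochrome camera at 15 frames/sec and a
# 640x480 16-bit-per-pixel (e.g., YUV 4:2:2) color camera at
# 30 frames/sec each need roughly 147 Mbps of raw bandwidth.
print(raw_data_rate_mbps(1280, 960, 1, 15))  # 147.456
print(raw_data_rate_mbps(640, 480, 2, 30))   # 147.456
```

Running the numbers early like this shows quickly whether a given bus, frame grabber, or smart-camera approach is even plausible for the cameras you have in mind.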
However, unlike frame grabbers for analog cameras, frame grabbers for digital cameras do not perform analog-to-digital conversion. Hardware considerations may extend beyond these questions. Moreover, some of the questions rest on the usually correct default assumption that the vision system's host computer is a PC running a standard version of Windows (www.microsoft.com). Machine-vision systems sometimes run under real-time operating systems, while image-analysis software often runs under Unix or Linux. Furthermore, like other real-time systems, many real-time vision systems use CPUs other than Pentium (www.intel.com) or Athlon (www.amd.com) devices.

[B]Camera Interface[/B]

The interface between the camera and the host computer remains a critical issue in vision-system design. Despite the emergence of cameras with digital interfaces, and of imaging systems that use IEEE 1394 (also known as FireWire and i.LINK) to connect to them, the choice of camera interface still warrants careful consideration. (USB 2.0, which is rapidly becoming the mainstream high-speed PC peripheral interface, is not a factor in industrial imaging, primarily because, although its 480-Mbps data rate is nominally higher than original FireWire's, USB 2.0's host-centric protocol is slower for imaging than FireWire's peer-to-peer protocol.)

FireWire is a high-speed serial bus popular in consumer video and home-entertainment systems. This plug-and-play bus uses a multidrop architecture and a peer-to-peer communication protocol. The initial specification included data rates up to 400 Mbps; rates will eventually reach 3.2 Gbps. In January 2003, the IEEE released 1394b, and its proponents expected an 800-Mbps version to appear in vision hardware soon.
However, despite the reasonable cost of industrial FireWire cameras, their growing presence in consumer devices (where the required resolution, and sometimes frame rate, is more modest than in industrial applications), the convenience of their thin, flexible serial cables, and the noise immunity of their digital bus, their adoption remains limited. Cost may restrict the widespread use of FireWire in industrial imaging: industrial FireWire cameras are more expensive than industrial analog-output cameras of the same frame rate and resolution. On the other hand, cost comparisons between FireWire and analog cameras can be misleading. In a system with built-in FireWire ports, a FireWire camera typically needs no additional interface hardware, because the camera includes an ADC (analog-to-digital converter), whereas an analog camera requires a frame grabber to perform the ADC function.

National Instruments' Celeron-based CVS-1454 Compact Vision System exemplifies machine-vision hardware designed for factory environments. Although it is not a standard office PC, it includes three FireWire ports, eliminating the need for special camera-interface hardware. The system works with National Instruments' LabVIEW graphical development environment, which allows rapid program development with interactive graphical tools, followed, if necessary, by full graphical programming for debugging and refinement.

FireWire cameras use IEEE 1394's isochronous protocol, which guarantees bandwidth and ensures that data packets arrive in the order they are sent (if they all arrive). The standard's other, asynchronous protocol guarantees message delivery but not the order in which packets arrive. Each isochronous device can issue a bandwidth request every 125 μs, that is, at a rate of up to 8 kHz. The device acting as bus manager grants each requesting device the authority to send a predetermined number of data packets within the subsequent 125-μs cycle.
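Under the 125-μs cycle scheme just described, a frame-rate ceiling follows from how many payload bytes a camera is granted per cycle. A sketch of that arithmetic, assuming the commonly cited 4096-byte maximum isochronous payload at 400 Mbps (an assumption for illustration; actual IIDC video modes allocate less):

```python
CYCLES_PER_SECOND = 8000         # one isochronous cycle every 125 us
PAYLOAD_BYTES_PER_CYCLE = 4096   # assumed per-cycle grant at S400

def frame_rate_ceiling(width, height, bytes_per_pixel,
                       payload=PAYLOAD_BYTES_PER_CYCLE):
    """Upper bound on frames/sec for a single camera that receives
    `payload` bytes in every 125-us isochronous cycle."""
    bytes_per_frame = width * height * bytes_per_pixel
    return payload * CYCLES_PER_SECOND / bytes_per_frame

# 1280x960 8-bit mono: ceiling of about 26.7 frames/sec; the video
# modes discussed in the text actually deliver about 15 frames/sec.
print(round(frame_rate_ceiling(1280, 960, 1), 1))  # 26.7
```

The gap between the theoretical ceiling and the delivered rate is the point the article makes next: pixel depth and the camera's data-formatting scheme, not just raw bus bandwidth, set the maximum frame rate.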
The more isochronous devices on the bus, the less bandwidth each receives. With only one camera on the FireWire bus, a 1280×960-pixel monochrome camera can send approximately 15 frames/sec, and a 640×480-pixel FireWire color camera approximately 30 frames/sec. Although neither example seems to use the bus's full transfer capacity, the number of bits per pixel and the way the camera formats the data affect the maximum frame rate. Incidentally, higher resolution is not always better: higher-resolution cameras are not only more expensive and typically slower in frame rate than lower-resolution ones, but they also more readily reveal subtle differences between UUTs and KGUs, increasing the rate at which an AOI system falsely detects faults.

More Camera Interfaces

In addition to FireWire, interface options for digital-output cameras include RS-422 parallel interfaces and Camera Link (Table 1). RS-422 camera interfaces are not fully standardized, so a dedicated camera-interface card is usually required. These cards are not frame grabbers in the sense of the interface cards used for analog-output cameras, but they typically plug into the host PC's PCI bus just the same. Parallel interfaces can prove unwieldy because they sometimes require more than 50 connections, yet RS-422 digital cameras remain popular and widely used.

AIA's Camera Link is the highest-performing digital-output camera-interface standard. Unlike FireWire, Camera Link allows only one camera per bus, but many PCs can accommodate multiple Camera Link buses. Camera Link can transmit data at up to 4.8 Gbps using SERDES (serializer/deserializer) technology over parallel combinations of unidirectional, serial, point-to-point links. Each link carries data from seven channels and uses LVDS (low-voltage differential signaling), requiring two wires per link.
The number of channels determines the maximum data rate of a Camera Link bus. A fully configured bus can have 76 channels, comprising 11 links and 22 wires, although the standard also provides for buses with 28 and 56 channels (4 and 8 links, and 8 and 16 wires, respectively). Each Camera Link bus typically requires a separate interface card in the PC.

Choosing Camera Link currently means writing additional software. Because cards that implement Camera Link buses in PCs are scarce and not fully standardized, shrink-wrapped application-development packages often lack Camera Link support. Nevertheless, if you need Camera Link's impressive speed, you do not have many options. Sometimes you can instead use a smart camera to reduce the amount of data your vision system must handle, because smart cameras can process or compress the data they acquire before sending it to the host PC. Such cameras can reduce the data rate between camera and host, and also the processing load within the host, but at greater expense. If you rely on compression, you must make sure it is truly lossless, or that any loss it introduces is acceptable for your application.
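The "truly lossless" requirement is easy to verify mechanically: whatever a smart camera's compressor does, decoding must reproduce the input bit for bit. A toy run-length encoder illustrates the round-trip check (real smart cameras use vendor-specific schemes; this is only a sketch):

```python
def rle_encode(data: bytes) -> list:
    """Encode a byte string as a list of (value, run_length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((b, 1))              # start a new run
    return runs

def rle_decode(runs) -> bytes:
    """Invert rle_encode exactly."""
    return bytes(v for v, n in runs for _ in range(n))

# One image row: dark background with one bright feature.
row = bytes([0] * 50 + [255] * 10 + [0] * 40)
runs = rle_encode(row)
assert rle_decode(runs) == row                   # lossless round trip
print(len(row), "bytes ->", len(runs), "runs")   # 100 bytes -> 3 runs
```

Run-length coding compresses well only on flat, low-noise image regions; the point here is the exactness test, not the particular scheme.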