
Machine Vision System Design: Fundamentals

2026-04-06 05:10:25

Design of machine vision systems

Machine vision system integration is the process of combining components and subsystems so that they function as a single, unified system.

A machine vision system includes light sources, lenses, cameras, camera interfaces, and image-processing software, among other components. You might be wondering how to design and implement a complete, successful machine vision system.

In this article, we will outline the broad tasks that make up machine vision system integration, and then focus on the step-by-step design process, using basic real-world machine vision applications as examples.

The steps, stages, and related terminology for implementing different vision systems can vary significantly.

Generally, the entire system integration process in machine vision may include the following steps:

Part 1 – Preparation: preliminary analysis and project requirements specification

Part 2 – Design: detailed final technical/system specifications

Part 3 – Implementation: assembly, build, and initial testing

Part 4 – Deployment: delivery, installation, commissioning, and acceptance testing

However, in this discussion we will focus on the critical, and sometimes complex, "design" phase and how it transitions into the "implementation" phase. The basic design sequence, in order of execution, is:

■ Choose a camera

■ Choose a lens

■ Select a light source

■ Collect sample images to verify the imaging

■ Select a computer (not required if using a smart camera)

■ Develop software (image processing, operator interface, etc.)

■ Assemble everything and begin testing and optimization

Before you begin designing, you need a fully reviewed "requirements specification" that identifies the operations the system must perform and how it will operate. An "acceptance testing" document details how to validate the system and conduct functional demonstration tests. Appropriate assessments should be performed beforehand to confirm that the proposed machine vision design and components will meet the required specifications.

It is equally important to recognize that integration in any discipline is a team activity. In machine vision, it begins with the customer and the systems integration team (inside or outside the company). The systems integration team must have skills in optics, lighting, electronics, controls, programming, mechanical design, and project management, and may also have other skills such as robotics, motion control, documentation, and training.

With all of this in place, it's time to start the system design by selecting components.

System Design: Component Selection and Specifications

Camera selection

We will use the term "camera" broadly to describe components in machine vision systems that perform image acquisition. The basic specifications of a camera are driven by the requirements for object feature detection, recognition, positioning, or measurement, as well as processing speed (and several other considerations). The required spatial resolution, image resolution, and frame rate are determined based on the application's needs.

● Spatial resolution

Spatial resolution is determined by the number of pixels needed to cover the smallest feature being processed, by the measurement accuracy/repeatability that must be achieved, or by both.

For example, suppose the specification requires detecting a small hole 0.3 mm in diameter. Theoretically, two pixels would be sufficient, but experience tells us that two pixels alone are unreliable; three, four, or more pixels are needed to cover the feature. If you require four pixels, your spatial resolution is 0.3 mm divided by four pixels, or 0.075 mm/pixel.

In measurement applications, you use an algorithm that achieves subpixel repeatability. How fine a subpixel resolution is practically achievable depends on several factors, including the size of the measured feature in the image (larger features can be measured with higher precision), the contrast between the feature and its background, and camera noise. Practical experience suggests that the lower limit of subpixel accuracy is approximately one-tenth of a pixel.

Suppose you need to measure with an accuracy of 0.01mm (repeatability), and based on experience or some experiments, you believe the algorithm's repeatability is one-fifth of a pixel. Then, your required spatial resolution is 0.01mm divided by 1/5 of a pixel, meaning the system requires a spatial resolution of 0.05mm/pixel.

If you have requirements for both the minimum number of pixels per feature and the measurement accuracy/repeatability, calculate the spatial resolution both ways and select the smaller (more demanding) result.
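The two sizing rules above can be captured in a short calculation. A minimal sketch, using the example numbers from the text (the helper name is ours, not from any library):

```python
def spatial_resolution(feature_mm=0.3, pixels_per_feature=4,
                       accuracy_mm=0.01, subpixel_fraction=0.2):
    """Required spatial resolution (mm/pixel).

    Two criteria apply: cover the smallest feature with enough pixels,
    and meet the measurement repeatability given the algorithm's
    subpixel performance. The stricter (smaller) value wins.
    """
    by_feature = feature_mm / pixels_per_feature   # 0.3 / 4 = 0.075 mm/px
    by_accuracy = accuracy_mm / subpixel_fraction  # 0.01 / (1/5) = 0.05 mm/px
    return min(by_feature, by_accuracy)

print(round(spatial_resolution(), 3))  # 0.05 mm/pixel: the accuracy criterion dominates
```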

● Image resolution

The required image resolution is the number of columns and rows of pixels needed to achieve the calculated spatial resolution. Define the imaging region (field of view, FOV) and divide the FOV by the spatial resolution. For example, if we specify a field of view of 133 × 133 mm and the spatial resolution (per our calculation) is 0.075 mm/pixel, then the required image resolution is 133 mm divided by 0.075 mm/pixel, or approximately 1,774 pixels (rounding up) in both width and height.

Choose a camera with pixel row and column counts equal to or greater than your calculated count. In this example, you might consider using a camera with an image sensor pixel size of 3.45µm and a resolution of 2448x2448.
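Translating the arithmetic above into a quick check (a sketch with hypothetical helper names; the 133 mm FOV and 0.075 mm/pixel come from the example):

```python
import math

def required_pixels(fov_mm=133.0, spatial_res_mm_per_px=0.075):
    """Pixels needed along one axis: FOV / spatial resolution, rounded up."""
    return math.ceil(fov_mm / spatial_res_mm_per_px)

side = required_pixels()
print(side)          # 1774 pixels per axis for the 133 mm FOV
print(2448 >= side)  # the suggested 2448 x 2448 sensor has margin: True
```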

● Imaging frame rate

The final essential step in camera selection is verifying whether candidate cameras can achieve the image frame rate (frames per second) suitable for the given application, and selecting a suitable image acquisition interface (not required for "smart cameras"). Most basic general-purpose machine vision applications have relatively low workpiece throughput (10-15 parts per second, and often much slower). The details of achieving high imaging rates in applications are beyond the scope of this discussion, but it's important to understand that higher resolutions generally result in slower imaging frame rates.
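When comparing candidate cameras and interfaces, a useful sanity check is the raw data rate the sensor will generate. A rough sketch, assuming the 2448 × 2448 sensor from the earlier example at 8 bits per pixel and 15 frames per second (the helper name is ours):

```python
def data_rate_mb_per_s(width_px, height_px, fps, bytes_per_px=1):
    """Raw image data rate in MB/s (here 1 MB = 1e6 bytes)."""
    return width_px * height_px * bytes_per_px * fps / 1e6

rate = data_rate_mb_per_s(2448, 2448, 15)
print(round(rate, 1))  # ~89.9 MB/s, near the practical limit of a single GigE link
```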

Lens selection

The fundamental factors in lens selection include the lens format, the required field of view, the distance from the lens to the object (working distance, WD), and the required optical resolution. In our discussion, we will consider a "fixed focal length" lens. (Other, more specialized lenses, such as "telecentric" lenses, are useful in many machine vision applications, but those are discussed in another section.) Lens specification involves three related calculations: optical resolution, magnification (also known as PMAG), and focal length.

● Lens Attributes

Basic lens attributes include the manufacturer, the lens mount type, and the largest compatible image sensor size.

● Optical resolution

This metric describes a lens's ability to clearly reproduce fine detail and is an important parameter for evaluating lens quality. (It is a good starting point, although many additional measures can be used to compare lenses.) The lens should at least match the sensor's Nyquist-limited resolution of 1 / (2 × pixel size in mm) line pairs per mm; in our example, 1 / (2 × 0.00345) ≈ 145 lp/mm.
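The rule just quoted is the sensor's Nyquist limit (one line pair spans two pixels). As a quick sketch (hypothetical helper name):

```python
def nyquist_lp_per_mm(pixel_size_um=3.45):
    """Sensor-limited optical resolution: 1 / (2 * pixel size in mm)."""
    pixel_size_mm = pixel_size_um / 1000.0
    return 1.0 / (2.0 * pixel_size_mm)

print(round(nyquist_lp_per_mm()))  # ~145 lp/mm for 3.45 um pixels
```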

● Magnification

The lens "magnifies" the desired field of view (FOV) and projects it onto the camera sensor. The required magnification for a given FOV is the shorter side of the sensor's physical dimensions divided by the shorter side of the FOV. For example, with a sensor shorter side of 7.07 mm and an FOV of 133 mm, the required magnification (M) is 7.07/133 ≈ 0.053×.

Finally, to obtain an estimate of the lens focal length (f), you can use a lens parameter reference table, lens calculator, or formula provided by the supplier.

For our example, assuming a working distance (WD) of approximately 500 mm, the estimated focal length is approximately 25 mm. Our final choice would then likely be a 25 mm C-mount lens with an optical resolution of 160 line pairs/mm, comfortably above the required 145 lp/mm.
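The focal-length figure above can be estimated with the thin-lens approximation f ≈ M · WD / (1 + M). A sketch using the article's example values (7.07 mm sensor side, 133 mm FOV, 500 mm working distance; the helper names are ours):

```python
def magnification(sensor_side_mm, fov_mm):
    """Required magnification: sensor short side / FOV short side."""
    return sensor_side_mm / fov_mm

def focal_length_mm(mag, working_distance_mm):
    """Thin-lens estimate: f = M * WD / (1 + M)."""
    return mag * working_distance_mm / (1.0 + mag)

m = magnification(7.07, 133.0)
f = focal_length_mm(m, 500.0)
print(round(m, 3), round(f, 1))  # ~0.053x and ~25.2 mm -> a standard 25 mm lens
```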

Lighting selection

The goal of machine vision lighting is to create reliable contrast between the workpiece features of interest and their background. There is a wealth of literature on machine vision lighting techniques and the conditions under which each works well.

Evaluation and final design

Image evaluation and testing

Proving your design is crucial before committing to building the complete system. Design flaws are often uncovered during feasibility testing; correcting these flaws quickly before investing more time and money is essential, as changes later will be far more expensive and time-consuming.

For testing, use the actual camera, lens, and light source from the design. Image multiple physical samples that represent the variation expected in actual production. If real parts are unavailable, use samples whose appearance is equivalent to production parts, including anticipated appearance changes.

The basic assessment should achieve the following objectives:

■ Confirm that the lens and camera produce the correct FOV at the required WD.

■ Confirm that the imaging system (camera, lens, lighting) creates high-quality (contrast and feature sharpness) images relative to application requirements.

■ Verify that the system resolution matches the calculated requirement and is adequate for the application.

■ Evaluate basic processes (defect detection, measurement, etc.) to confirm expected system functionality.

It may be necessary to revisit component selection throughout testing and evaluation; where needed, re-evaluate the choices to find components that provide higher-quality images.

Computer/Processor

If you chose a smart camera, you have already chosen a processor. Otherwise, you can select a dedicated vision processor (which integrates a processor, operating system, and machine vision software), an industrial PC, or, in less demanding environments, a standard PC. In any case, make sure the processor has the necessary interfaces for the camera and other external devices, as well as the computing power to deliver results at the target rate.

System Design: Documentation

The final step in the design process is to document it. The data collected regarding the imaging design will help with troubleshooting and replication.

Other design considerations

Machine vision systems are rarely integrated as standalone components. At a minimum, mechanical design is required for mounting; often a complete automated system must be designed and built. Other automation components may also play a role, including PLCs, robots, and other equipment.

Conclusion

The design phase naturally evolves into the implementation phase: assembly, construction, and testing. This is an iterative process with overlap between the design and construction phases. During integration, consider this overarching rule: test, optimize/tune, repeat. Hopefully, this basic introduction to machine vision integration design will help you succeed in your next machine vision project.
