Firstly, technical specifications are the objective basis for characterizing the performance of a product. Understanding technical specifications helps in the correct selection and use of the product. Sensor technical specifications are divided into two categories: static specifications and dynamic specifications. Static specifications mainly assess the sensor's performance under static conditions, specifically including resolution, repeatability, sensitivity, linearity, hysteresis error, threshold, creep, and stability. Dynamic specifications mainly examine the sensor's performance under rapidly changing conditions, primarily including frequency response and step response.
Because sensors have numerous technical specifications and various documents describe them from different perspectives, different people may have different understandings, even leading to misunderstandings and ambiguities. Therefore, the following is an interpretation of several key technical specifications of sensors:
1. Resolution and Resolving Power:
Definition: Resolving power refers to the smallest change in a measurand that a sensor can detect. Resolution is the ratio of the resolving power to the full-scale value.
Interpretation 1: Resolving power is the most basic indicator of a sensor, characterizing its ability to distinguish between closely spaced values of the measurand. All other technical specifications of a sensor are described using the resolving power as the smallest unit.
For sensors and instruments with digital displays, the resolving power determines the minimum number of digits in the displayed measurement result. For example, an electronic digital caliper has a resolving power of 0.01 mm, and its indication error is ±0.02 mm.
Interpretation 2: Resolving power is an absolute value with units. For example, the resolving power of a temperature sensor is 0.1℃, and that of an accelerometer is 0.1g, etc.
Interpretation 3: Resolution is closely related to resolving power; both characterize a sensor's ability to distinguish values of the measurand.
The main difference between the two is that resolution expresses the resolving power as a percentage of the full-scale value; it is a relative, dimensionless number. For example, if the temperature sensor mentioned above has a resolving power of 0.1℃ and a full-scale range of 500℃, then its resolution is 0.1/500 = 0.02%.
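The relationship between resolving power and resolution can be captured in a few lines of Python; this minimal sketch uses the temperature-sensor figures from the example above:

```python
def resolution(resolving_power, full_scale):
    """Resolution: the dimensionless ratio of resolving power to full-scale value."""
    return resolving_power / full_scale

# Temperature sensor: resolving power 0.1 deg C over a 500 deg C full-scale range
r = resolution(0.1, 500.0)
print(f"{r:.2%}")  # 0.02%
```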
2. Repeatability:
Definition: Sensor repeatability refers to the degree of difference between measurement results when the same measurand is measured repeatedly under the same conditions and in the same direction. It is also known as repeatability error.
Interpretation 1: The repeatability of a sensor must be the degree of difference between multiple measurements obtained under the same conditions. If the measurement conditions change, the comparability between the measurement results disappears, and they cannot be used as a basis for assessing repeatability.
Interpretation 2: The repeatability of a sensor characterizes the dispersion and randomness of its measurement results. This dispersion and randomness arise because various random disturbances inevitably exist both inside and outside the sensor, causing the final measurement results to exhibit the characteristics of a random variable.
Interpretation 3: Repeatability can be quantified using the standard deviation of the measurement results, treated as a random variable.
Interpretation 4: For repeated measurements, using the average of all measurements as the final result yields higher accuracy, because the standard deviation of the mean of n measurements is smaller than that of a single measurement by a factor of √n.
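Interpretations 3 and 4 can be sketched with Python's standard library; the readings below are hypothetical repeated measurements of the same measurand under identical conditions:

```python
import statistics

# Hypothetical repeated measurements of the same measurand, same conditions
readings = [100.02, 99.98, 100.05, 99.97, 100.01, 100.03]

mean = statistics.mean(readings)
s = statistics.stdev(readings)        # sample standard deviation: the repeatability
s_mean = s / len(readings) ** 0.5     # standard deviation of the mean: s / sqrt(n)

print(f"mean = {mean:.3f}, s = {s:.4f}, s_mean = {s_mean:.4f}")
```

As expected, the standard deviation of the mean is smaller than that of any single reading by a factor of √n.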
3. Linearity:
Definition: Linearity refers to the degree of deviation between the sensor's input/output curve and an ideal straight line.
Interpretation 1: An ideal sensor's input-output relationship is linear, i.e., its input-output curve is a straight line.
Real sensors, however, exhibit errors to varying degrees, so the actual input-output curve is not an ideal straight line but a curve.
Linearity is the degree of difference between the sensor's actual characteristic curve and the fitted straight line; it is also called nonlinearity or nonlinear error.
Interpretation 2: Because the difference between the actual characteristic curve of the sensor and the ideal straight line varies depending on the size of the measurand, linearity is often expressed as the ratio of the maximum difference over the entire range to the full-scale value. Clearly, linearity is also a relative quantity.
Interpretation 3: Since the ideal straight line of a sensor is unknown and cannot be obtained in general measurement situations, a compromise is often adopted: directly using the sensor's measurement results to calculate a fitted straight line that is close to the ideal straight line. Specific calculation methods include the endpoint connection method, the optimal straight line method, and the least squares method.
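The least squares method mentioned above can be sketched in plain Python. This example fits a straight line to hypothetical calibration points and reports the maximum deviation as a fraction of the output span:

```python
def linearity(x, y):
    """Nonlinearity: maximum deviation from the least-squares fitted line,
    expressed as a fraction of the full-scale output span."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Least-squares slope b and intercept a of the fitted line y = a + b*x
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    dev_max = max(abs(yi - (a + b * xi)) for xi, yi in zip(x, y))
    return dev_max / (max(y) - min(y))

# Hypothetical calibration data: input values vs. measured sensor output
x = [0, 1, 2, 3, 4, 5]
y = [0.00, 1.02, 1.98, 3.05, 3.97, 5.00]
print(f"nonlinearity = {linearity(x, y):.2%}")
```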
4. Stability:
Definition: Stability refers to the ability of a sensor to maintain its performance over a period of time.
Interpretation 1: Stability is the main indicator for assessing whether a sensor operates stably within a certain time range. Factors leading to sensor instability mainly include temperature drift and internal stress release. Therefore, measures such as adding temperature compensation and aging treatment can help improve stability.
Interpretation 2: Depending on the length of the time period, stability can be divided into short-term stability and long-term stability. When the observation period is too short, stability is similar to repeatability. Therefore, stability indicators mainly examine long-term stability. The specific length of time depends on the usage environment and requirements.
Interpretation 3: The quantitative representation of stability indicators can use either absolute error or relative error. For example, the stability of a certain strain gauge force sensor is 0.02%/12h.
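A stability figure like the one above can be computed as a relative drift over the observation period; the readings here are hypothetical, for a force sensor under constant load:

```python
def stability(first, last, full_scale):
    """Relative drift of the reading of a constant measurand over the observation period."""
    return abs(last - first) / full_scale

# Hypothetical: a force sensor under constant load reads 200.00 N at the start
# and 200.04 N after 12 hours, with a 500 N full-scale range.
drift = stability(200.00, 200.04, 500.0)
print(f"stability: {drift:.3%} / 12 h")
```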
5. Sampling frequency:
Definition: Sampling frequency refers to the number of measurement results that a sensor can sample per unit of time.
Interpretation 1: Sampling frequency reflects the sensor's rapid response capability and is one of the most important dynamic characteristic indicators. For situations where the measured quantity changes rapidly, sampling frequency is a crucial technical indicator that must be fully considered. According to Shannon's sampling theorem, the sensor's sampling frequency should be no less than twice the highest frequency of change of the measured quantity.
Interpretation 2: The accuracy of a sensor varies depending on the sampling frequency. Generally speaking, the higher the sampling frequency, the lower the measurement accuracy.
The highest accuracy provided by a sensor is often obtained at the lowest sampling rate or even under static conditions. Therefore, both accuracy and speed must be considered when selecting a sensor.
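The sampling-theorem rule of thumb above can be expressed as a one-line helper; the default factor of 2 is the theoretical minimum, and the larger margin shown is a common practical choice, not a requirement from the source:

```python
def min_sampling_rate(signal_bandwidth_hz, margin=2.0):
    """Minimum sampling rate per the sampling theorem (factor >= 2).
    In practice a larger margin (e.g. 5-10x) is often used."""
    return margin * signal_bandwidth_hz

# A measurand changing at up to 50 Hz needs at least a 100 Hz sampling rate
print(min_sampling_rate(50.0))        # 100.0
print(min_sampling_rate(50.0, 10.0))  # 500.0, with a comfortable margin
```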
Next, let's look at five key design techniques for working with sensors.
1. Start with bus tools.
The first step engineers should take when first interfacing with a sensor is to limit the unknowns by using a bus tool. A bus tool connects to a personal computer (PC) and then to the sensor via I2C, SPI, or whatever protocol the sensor "talks." The PC application that accompanies the bus tool provides a known, working source for sending and receiving data, rather than an unknown, unproven embedded microcontroller (MCU) driver. Within the bus tool's environment, developers can send and receive messages to understand how the part works before attempting to operate at the embedded level.
2. Write the transmission interface code in Python.
Once developers have experimented with the sensor using a bus tool, the next step is to write application code for it. Instead of jumping directly to microcontroller code, write the application code in Python. Many bus tools come with plug-ins and sample code for scripting, and Python is often one of the supported languages alongside the .NET languages. Writing the application in Python is quick and easy, and it provides a way to exercise the sensor from application-level code without the complexity of testing in an embedded environment. Having high-level code also makes it easy for non-embedded engineers to dig into the sensor scripts and run tests without needing an embedded software engineer.
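The payoff of this approach is that the register logic can be kept separate from the bus transport. The sketch below is purely illustrative: the sensor (a 16-bit temperature value in register 0x00, scaled 1/256 ℃ per LSB), the `read_register` method, and the `FakeBus` backend are all assumed names, not any real device's API:

```python
class TempSensor:
    """Hypothetical temperature sensor: signed 16-bit raw value in register 0x00,
    scaled 1/256 deg C per LSB. The bus object only needs a
    read_register(reg, nbytes) method, so a bus-tool backend, a mock,
    or later a MicroPython I2C wrapper can all be plugged in."""

    TEMP_REG = 0x00

    def __init__(self, bus):
        self.bus = bus

    def read_celsius(self):
        raw = self.bus.read_register(self.TEMP_REG, 2)
        value = int.from_bytes(raw, "big", signed=True)
        return value / 256.0

class FakeBus:
    """Stand-in for a real bus-tool backend, used for offline testing."""
    def read_register(self, reg, nbytes):
        # Pretend the die temperature is exactly 25.0 deg C
        return (25 * 256).to_bytes(nbytes, "big")

print(TempSensor(FakeBus()).read_celsius())  # 25.0
```

Swapping `FakeBus` for a real bus-tool or MicroPython backend leaves `TempSensor` untouched.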
3. Test the sensor using MicroPython
One advantage of writing the first application code in Python is that the application's calls to the bus tool can easily be swapped for MicroPython equivalents. MicroPython runs as real-time embedded software and supports numerous sensors out of the box, letting engineers quickly see a sensor's value. It runs on Cortex-M4-class processors, providing an excellent environment for debugging application code. Not only is it simple, but there is also no need to write I2C or SPI drivers, as they are already included in MicroPython's libraries.
4. Utilize sensor supplier code
Any sample code obtained from a sensor manufacturer can save engineers a great deal of time in understanding how the sensor works. Unfortunately, many sensor vendors are not experts in embedded software design, so don't expect to find a beautiful, production-ready architecture or elegant examples. Start with the vendor's code, learn how the part works, and then refactor, however frustrating that may be, until it integrates cleanly into the embedded software. It may start out as "spaghetti," but leveraging the manufacturer's understanding of its own sensors will save many wasted weekends before product launch.
5. Use a sensor fusion function library
The odds are that a given sensor interface isn't new, and someone has implemented it before. Well-known libraries, such as the "sensor fusion" libraries provided by many chip manufacturers, can help developers quickly master the technology while avoiding redevelopment or significant modifications to the product architecture. Many sensors fall into general types or categories, which makes driver development straightforward and, if handled properly, nearly universal or at least largely reusable. Explore these sensor fusion libraries and learn their strengths and weaknesses.
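To give a flavor of what such libraries do, here is a minimal sketch of one classic fusion technique, a complementary filter that blends a gyroscope's integrated rate (trusted short-term) with an accelerometer-derived tilt angle (trusted long-term). This is an illustrative algorithm, not the API of any specific vendor library:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: weight the gyro-integrated angle by alpha and the
    accelerometer-derived angle by (1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# The gyro reads 0 deg/s while the accelerometer reports a 10 deg tilt:
# over repeated updates the estimate converges toward 10 deg.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
print(round(angle, 2))
```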
When integrating sensors into embedded systems, there are many ways to improve design timelines and ease of use. Developers who begin their design process with a high-level abstraction and learn how sensors work before integrating them into a lower-level system are unlikely to "go astray." Numerous resources available today can help developers "get off to a good start" without having to begin from scratch.