Remember the "blue and black, or white and gold?" dress debate? That disagreement stemmed from differences in how the cone cells in our eyes distinguish colors. A similar question arises in human-computer interfaces (HCIs): how should a machine represent color? This article reveals the answer and introduces HCI design from a color perspective.
This is how we see the world
Everyone knows that putting an elephant into a refrigerator requires three steps, and the human eye's process of storing the world in the brain can also be simply divided into three steps:
The eye senses the image (which is collected by sensors and converted into digital signals).
It is converted into neural signals and transmitted to the brain (and then transmitted to the processor via a communication system).
The brain processes and stores (the processor converts it into a format that can be displayed and stored on the screen).
In other words, the image changes format along the way: it enters the eye as a light signal and travels through the nerves as electrical and chemical signals, so the transmission format differs from the capture format!
Machines, likewise, must convert images between formats.
1.1 The format in the brain—RGB image format
First, a screen is made up of individual pixels, and the vibrant colors within it originate from the three primary colors of red, green, and blue on those pixels. This method of representing colors is called the RGB color space (which is also the most widely used color space representation method in multimedia computer technology), as shown in the following figure:
According to the principle of three primary colors, any color of light F can be matched by mixing suitable amounts of R, G, and B:

F = r[R] + g[G] + b[B]

Formula 1.1 The three-primary-color principle (r, g, and b are the amounts of the red, green, and blue primaries)
White light is a mixture of multiple types of light. So when the three primary color coefficients are at their maximum, it is white; when they are zero, it is black; and what lies between the two are the myriad colors of the world.
Each pixel is like a paint box. The larger the box, the more colors it can hold, and the richer the colors that pixel can express. The size of this box is the pixel's storage space (its bit depth) in a computer, and colors are adjusted by changing the amounts of the three primaries. The more bits allocated per pixel, the more storage is required, but the more accurately each pixel can describe a color, and the more realistic the on-screen image becomes.
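As a concrete sketch of the "box size": one common choice is 24 bits per pixel (RGB888), 8 bits for each primary. The helper names below (pack_rgb888 and the accessors) are illustrative, not from any particular library:

```c
#include <stdint.h>

/* Pack 8-bit R, G, B amounts into one 24-bit RGB888 pixel value. */
uint32_t pack_rgb888(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

/* Read the individual primaries back out of a packed pixel. */
uint8_t rgb888_r(uint32_t px) { return (uint8_t)((px >> 16) & 0xFF); }
uint8_t rgb888_g(uint32_t px) { return (uint8_t)((px >> 8) & 0xFF); }
uint8_t rgb888_b(uint32_t px) { return (uint8_t)(px & 0xFF); }
```

With all three amounts at their maximum (255, 255, 255) the packed value is 0xFFFFFF, white; with all three at zero it is 0x000000, black.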
1.2 Format on the Eyeball—YUV Image Format
To save space and simplify packaging during storage, we encode the luminance signal Y and two color-difference signals, B-Y (blue minus luminance, i.e., U) and R-Y (red minus luminance, i.e., V), separately, and then send them out. When they arrive at the display terminal, they are converted back to RGB format. This method of color representation is called the YUV color space.
At this point, you might ask, "Where did G (green) go?" In fact, G is not transmitted at all: since the luminance Y already contains a weighted share of green, the receiver can recover G algorithmically from Y and the two color differences.
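To make the recovery of G concrete, here is a minimal C sketch of the round trip. It uses the standard BT.601 luma weights and treats U and V as the raw differences B-Y and R-Y, without the scaling and offsets that real stored formats apply; the function names are illustrative:

```c
#include <math.h>

/* Forward conversion: BT.601 luma plus raw color differences. */
void rgb_to_yuv(double r, double g, double b,
                double *y, double *u, double *v)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b; /* luminance */
    *u = b - *y;                            /* U = blue minus luma */
    *v = r - *y;                            /* V = red minus luma  */
}

/* Inverse conversion: note that G is never received directly. */
void yuv_to_rgb(double y, double u, double v,
                double *r, double *g, double *b)
{
    *b = u + y;
    *r = v + y;
    /* Recover G by solving the luma equation for it. */
    *g = (y - 0.299 * *r - 0.114 * *b) / 0.587;
}
```

Note how yuv_to_rgb reconstructs G purely from Y and the two recovered primaries.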
Compared to RGB video signal transmission, the biggest advantage of YUV is that it only requires a small amount of bandwidth (RGB requires three independent video signals to be transmitted simultaneously). The difference in bandwidth usage between the two formats is shown in the figure below, where RGB requires much more bandwidth.
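As a rough back-of-the-envelope comparison (an illustration, not a measurement): RGB888 needs 3 bytes per pixel, while the packed 4:2:2 YUV sampling described below averages 2 bytes per pixel, a one-third saving. The function names are illustrative:

```c
#include <stdint.h>

/* Bytes per frame for 24-bit RGB: 3 bytes for every pixel. */
uint32_t frame_bytes_rgb888(uint32_t w, uint32_t h)
{
    return w * h * 3u;
}

/* Bytes per frame for packed YUV 4:2:2: 4 Y + 2 U + 2 V samples
 * per 4 pixels, i.e. 16 bits (2 bytes) per pixel on average. */
uint32_t frame_bytes_yuv422(uint32_t w, uint32_t h)
{
    return w * h * 2u;
}
```

For a 1920x1080 frame that is about 6.2 MB in RGB888 versus about 4.1 MB in YUV 4:2:2, per frame, before any compression.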
Is it really just to save bandwidth that we chose the YUV format without hesitation?
Of course not! While low bandwidth is crucial, color is also of paramount importance.
The YUV color space matters because its luminance signal Y and chrominance signals U and V are separate. This separation not only avoids mutual interference but also allows the chrominance sampling rate to be reduced without significantly impacting image quality. If U and V are zero, there is no color information and the image is black and white; this is exactly how color broadcasts stayed compatible with black-and-white televisions. Y is an important parameter in its own right: how deep a color is perceived to be depends largely on the luminance Y.
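The claim that zero U and V leaves no color can be checked directly with the BT.601 luma weights: when both color differences are zero, R and B equal Y, and solving the luma equation then forces G to equal Y as well, so the pixel is a pure gray. A minimal sketch (gray_from_luma is a hypothetical helper):

```c
#include <math.h>

/* With zero color differences (U = V = 0), B = Y and R = Y, and
 * solving the BT.601 luma equation for G yields G = Y too. */
void gray_from_luma(double y, double *r, double *g, double *b)
{
    double u = 0.0, v = 0.0;  /* no chroma at all */
    *b = u + y;
    *r = v + y;
    *g = (y - 0.299 * *r - 0.114 * *b) / 0.587;
}
```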
Let me first introduce a YUV format, and you can apply that knowledge to other formats.
YUV4:2:2:
“4” indicates that there are 4 Ys in the stored stream code;
"2" indicates that there are 2 U color difference values in the stored stream code;
The second "2" indicates that there are 2 V color difference values in the stored stream code.
The following four pixels are: [Y0U0V0][Y1U1V1][Y2U2V2][Y3U3V3]
The stored bitstream is: Y0U0Y1V1Y2U2Y3V3
The mapped pixels are: [Y0U0V1][Y1U0V1][Y2U2V3][Y3U2V3]
The image above shows a YUV4:2:2 sampling grid. Luma samples (Y) are represented by crosses, and chroma samples (U, V) by circles. Every pixel position has a cross, but only every other position has a circle. This is why the stored bitstream above contains all four Y values but only half of the U and V values.
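Under these assumptions, the packing and the pixel mapping above can be sketched in a few lines of C; yuv422_pack and yuv422_unpack are illustrative names, not a real driver API:

```c
#include <stdint.h>

/* Pack 4 pixels' samples into the stream Y0 U0 Y1 V1 Y2 U2 Y3 V3.
 * u[0], v[0] belong to the first pixel pair, u[1], v[1] to the second. */
void yuv422_pack(const uint8_t y[4], const uint8_t u[2],
                 const uint8_t v[2], uint8_t out[8])
{
    out[0] = y[0]; out[1] = u[0];
    out[2] = y[1]; out[3] = v[0];
    out[4] = y[2]; out[5] = u[1];
    out[6] = y[3]; out[7] = v[1];
}

/* Expand the stream back to 4 full [Y, U, V] pixels: each pair of
 * neighboring pixels shares one U sample and one V sample. */
void yuv422_unpack(const uint8_t in[8], uint8_t px[4][3])
{
    for (int i = 0; i < 4; i++) {
        int pair = i / 2;             /* pixels 0-1 share, 2-3 share */
        px[i][0] = in[i * 2];         /* Y at even offsets */
        px[i][1] = in[pair * 4 + 1];  /* shared U */
        px[i][2] = in[pair * 4 + 3];  /* shared V */
    }
}
```

Unpacking 8 stored bytes yields 12 bytes of full-resolution pixels, which is where the bandwidth saving comes from.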
1.3 Development of the Interactive Interface
The development of graphics serves primarily the aesthetics of the human-computer interaction interface (HCI). Currently, HCI designs mainly use emWin and Qt. Using Qt/E usually requires running an embedded operating system on the microcontroller, which places certain performance demands on the MCU; moreover, if you are unfamiliar with Qt/E, adopting it carries a significant time cost. In contrast, emWin is better suited to rapid, streamlined UI development, but its interaction effects and visual polish are more limited.
ZLG's next-generation embedded development platform, AWorks, developed over 12 years, integrates the GUI programming framework AWUI. AWUI currently supports Qt and emWin, allowing developers to edit the interface using Designer and develop ViewModel/Model using C++. This eliminates the need for developers to learn the Qt and emWin APIs, enabling the final application to run on both Qt and emWin (provided that the control is supported on emWin).
Based on AWUI, ZLG plans to launch AWTK, a more widely applicable and easier-to-use GUI toolkit, within the year. AWTK includes a rich set of GUI components and innovatively introduces a drag-and-drop GUI programming mode, significantly improving the efficiency of GUI development. Coupled with a robust design architecture, it combines the low memory footprint and smooth operation of emWin with the high-quality interface effects of Qt, ensuring a smooth and stable interactive interface. This allows embedded UI development to be integrated into the AWorks platform as components, enabling rapid development of interactive interfaces on that platform.
The ZLGM1052 crossover core board supports the AWorks embedded development platform, combining the powerful processing performance of an MPU with the ease of use and real-time advantages of an MCU! It comes pre-installed with the AWorks real-time operating system and is designed for smart hardware and industrial IoT applications.