
A general solution for embedded multitasking GUIs

2026-04-06
Abstract: This paper addresses the need for high flexibility, portability, and scalability in embedded multi-tasking GUI systems and proposes a general solution. Employing hierarchical, modular, and object-oriented design principles, it presents the GUI architecture and studies key technologies such as the multi-tasking scheduling strategy and management, the message-driven mechanism, desktop and window management, and object trees. A prototype of this solution has been successfully applied to DeltaOS, a real-time operating system with independent intellectual property rights in China.

Keywords: Embedded system, Embedded GUI, Graphical User Interface, Multi-tasking

An embedded GUI (Graphical User Interface) system is a graphical user interface system designed for the specific hardware devices or environments of an embedded system. Surveys show that increasingly flexible, efficient, and portable embedded GUI systems are widely used in fields such as office automation, consumer electronics, communication equipment, and intelligent instruments. Furthermore, as hardware technology develops, the functions required of GUIs are becoming richer, and GUI systems are becoming more complex and diverse than ever before. Most embedded GUI systems support only single-task operation. Single-task GUIs are inefficient and cannot meet the future development needs of GUIs; multi-task GUIs are therefore the direction of embedded GUI development. Currently, the most successful embedded multi-tasking GUI systems are MiniGUI, MicroWindows, and Qt/Embedded. These are designed primarily for embedded Linux and rely on the pthread library for multi-threading; however, pthread itself is quite complex, making it difficult to port these GUI systems to target platforms whose interfaces do not conform to the POSIX standard.
Therefore, these GUI systems share a common drawback: excessive dependence on a specific platform, and hence poor portability. To ensure compatibility with a variety of embedded environments, this paper proposes a general, effective, and portable embedded GUI architecture and studies the key technologies in multi-tasking GUI design.

1. Architecture

Given the high flexibility, portability, and scalability required of GUIs, the architecture adopts hierarchical, modular, and object-oriented design principles. Hierarchical architecture is used in many software systems and is widely recognized as a sound structure; the most important question, however, is how to divide the layers so that the system structure is rational and clear. The design adopts the following partitioning strategy: strive for relative independence between layers, ensuring that changes to any layer do not affect its interface with the layer above, and that the upper layer is not affected by changes to the lower layer. In this hierarchical structure, the bottom and top layers may change according to specific needs, so sufficient room for variation should be left for these two layers, while the intermediate layers remain independent and stable. The hierarchy between the hardware environment, operating system, and user application in an embedded environment is shown in Figure 1. In Figure 1, the GUI is partially isolated from the hardware through drivers, and the core is isolated from the specific operating system through the operating system abstraction layer. This gives the GUI good platform independence, making it convenient to port between different operating systems and hardware platforms. Based on this design concept, the GUI is divided into the hierarchical model shown in Figure 2.
In the figure, the GUI is divided into three layers, each of which is further divided into several modules by function.

1.1 Input/Output Layer

This layer shields the GUI from the specific details of the devices and the operating system platform. The device layer is defined in the BSP and gives the GUI the ability to manipulate the display characteristics of the devices. It is divided into two sub-layers: device logic and hardware abstraction. The device logic sublayer uses the concept of device classes to describe the external devices supported by the GUI and the logical operations on them, providing a unified device operation interface to the upper layer. The hardware abstraction sublayer operates the actual device controllers and implements the interfaces defined by the device logic sublayer on top of the hardware drivers of each platform.

1.2 Window Core Layer

The window core layer implements the key functions of the GUI and can be divided by function into message management, buffer pool management, drawing management, timers, resource management, object management, sub-screen management, and memory heap management. Since the GUI adopts a message-driven communication method, message management is the soul of the GUI, connecting the various parts of the system. During application execution, messages carry the information exchanged between different parts of the system.

Memory heap management: avoids the storage fragmentation caused by dynamically allocating and releasing memory during system operation. The two most frequent dynamic memory operations are message space allocation/release and screen object clipping area refresh.

Drawing management: completes drawing operations such as drawing points, lines, and circles.
To improve portability, this layer mainly performs hardware-independent drawing. For platforms with special display capabilities, it can also call hardware-provided functions (wrapped as interfaces by the input/output layer) to achieve special drawing effects; the GUI structure provides this flexibility. The drawing management module exposes its calling interface to applications as a set of drawing primitives.

Timer: provides counting information to applications based on the system clock.

Resource management: primarily manages fonts, images, and color palettes, implementing two main functions: resource storage and appropriate interfaces for applications.

Object management: organizes the objects displayed on the screen through a reasonable mechanism. The GUI refers to all GUI elements that can be displayed on the screen as "objects" and manages them through mechanisms such as object trees, Z-order, and clipping domains. The window core layer also provides applications with interface functions for adding, deleting, and hiding objects.

1.3 Application Interface Layer

The application interface layer encapsulates all the interfaces the GUI provides to the user. The GUI seen by the application consists entirely of the interface functions provided by this layer, in three parts: the toolbox, the set of drawing primitives, and the set of object operations.

Toolbox: the set of controls the GUI provides to the user. Its size can be adjusted to the needs of the application, which also greatly affects the size of the GUI library. Commonly used controls include buttons, scroll bars, windows, and edit boxes.

Drawing primitives set: the drawing function interface provided by the drawing management module; the toolbox is implemented on top of it.

Object operation set: mainly implements the functions for adding and deleting GUI objects.
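To make the idea of a hardware-independent drawing primitive concrete, the following sketch shows what such a layer might look like. The paper does not give its actual interface; the names (`gui_surface_t`, `gui_draw_point`, `gui_draw_line`) and the one-byte-per-pixel surface are illustrative assumptions. The primitives operate on an abstract surface, so only the surface implementation needs to change per platform:

```c
#include <assert.h>   /* for the usage check below */
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical abstract drawing surface: 1 byte per pixel for simplicity. */
typedef struct {
    int width, height;
    uint8_t *pixels;
} gui_surface_t;

/* Point primitive with edge clipping; all higher primitives build on it. */
static void gui_draw_point(gui_surface_t *s, int x, int y, uint8_t color)
{
    if (x >= 0 && x < s->width && y >= 0 && y < s->height)
        s->pixels[y * s->width + x] = color;
}

/* Line primitive using Bresenham's algorithm: integer-only arithmetic,
 * suitable for small embedded targets without an FPU. */
static void gui_draw_line(gui_surface_t *s, int x0, int y0,
                          int x1, int y1, uint8_t color)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        gui_draw_point(s, x0, y0, color);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

Because the primitives never touch hardware directly, porting the drawing layer reduces to reimplementing the surface over the platform's framebuffer or display controller.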
2. Analysis of Key Technologies of the Multi-task GUI

The "tasks" discussed in this article execute in the same address space and can directly access all shared resources without constraint. The key technologies in multi-task GUI design are analyzed below.

2.1 Multi-task Scheduling Strategy and Management

A multi-task system needs a reasonable task scheduling strategy to manage all tasks. After the GUI starts, a system task, an event task, and a timer task are created by default; application tasks are created according to the specific needs of the user.

(1) System task. The task that runs the desktop object is called the system task. It is the core of the entire graphical user system: it continuously retrieves messages from the system's main message queue and dispatches them to the corresponding target task according to each message's destination and purpose; at the same time, it is responsible for managing and maintaining all application tasks and for desktop management. There is only one system task in a system.

(2) Event task. The event task collects external events, interprets them as the corresponding GUI messages, and puts them into the system's main message queue. User input is passed to the GUI core from here. Generally, there is only one event task in a system.

(3) Timer task. The timer task generates GUI timers through the operating system's system calls.

(4) Application task. Tasks other than the system task that run windows are called application tasks. An application task is the basic unit of user program execution: the application logic runs in it, it has its own message queue, it receives messages from the GUI core, and it runs its own message loop according to certain rules.
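The per-task message loop described above can be sketched as follows. This is not the paper's implementation; the message types, the fixed-size ring-buffer queue, and the function names are illustrative assumptions, and the loop is written so it can run without a real RTOS underneath:

```c
#include <assert.h>   /* for the usage check below */

/* Illustrative message and per-task queue types. */
typedef enum { MSG_NONE, MSG_PAINT, MSG_KEY, MSG_QUIT } msg_type_t;
typedef struct { msg_type_t type; int param; } gui_msg_t;

#define QUEUE_CAP 16
typedef struct {
    gui_msg_t buf[QUEUE_CAP];
    int head, tail, count;      /* fixed-size ring buffer */
} msg_queue_t;

static int queue_put(msg_queue_t *q, gui_msg_t m)
{
    if (q->count == QUEUE_CAP) return -1;      /* queue full */
    q->buf[q->tail] = m;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    return 0;
}

static int queue_get(msg_queue_t *q, gui_msg_t *out)
{
    if (q->count == 0) return -1;  /* a real task would suspend here */
    *out = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    return 0;
}

/* Skeleton of an application task's message loop: take messages from the
 * task's own queue and dispatch them until MSG_QUIT arrives. */
static int app_task_loop(msg_queue_t *q, int *paint_count)
{
    gui_msg_t m;
    while (queue_get(q, &m) == 0) {
        if (m.type == MSG_QUIT) return 0;
        if (m.type == MSG_PAINT) (*paint_count)++;
        /* other message types would be routed to window procedures here */
    }
    return -1;  /* queue drained without MSG_QUIT */
}
```

In the real system, the `queue_get` failure path would not return; the task would suspend on the empty queue and be awakened by the GUI core when a message arrives, which is how the scheduling strategy keeps the CPU free for other tasks.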
Application tasks interact with the system task through messages and are managed by the system task; they use the system's hardware and software resources through the application interface layer. The number of application tasks is limited only by platform resources. In the embedded GUI, the system task is given the highest priority, and other tasks may use any priority lower than it; tasks with the same priority run in a time-slice round-robin manner. In short, when necessary, the embedded GUI system adopts a combined scheduling strategy of time-slice round-robin and priority preemption, as shown in Figure 3. When no message has arrived, or while waiting for an event, a task suspends itself; once a message enters its message queue, the task is awakened to process it. In this way the limited CPU resources are fully utilized. Furthermore, the system task maintains a list for tracking and managing application tasks. Each application task corresponds to a task information block containing its attributes, including a message queue pointer, task handle, and task entry point, which together describe the task in detail. The creation and destruction of an information block must be synchronized with the creation and destruction of its application task; the system task maintains this list to keep the information blocks correct. The multi-task management mechanism is shown in Figure 4. The user only needs to specify the task entry point and, if necessary, a priority; all other work is completed automatically by the system task. This scheduling and management method makes the system more user-friendly and efficient.

2.2 Message-Driven Mechanism

The message mechanism was originally proposed to solve the problem of event handling based on hardware interrupts in early program design.
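A task information block and the system task's tracking list might look like the sketch below. The field set (message queue pointer, task handle, entry point) follows the attributes listed above; the struct layout, the linked-list representation, and the function names are illustrative assumptions:

```c
#include <assert.h>   /* for the usage check below */
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical task information block, one per application task. */
typedef struct task_info {
    void  *msg_queue;             /* pointer to the task's message queue */
    int    handle;                /* task handle from the OS */
    void (*entry)(void);          /* task entry point */
    int    priority;
    struct task_info *next;       /* link in the system task's list */
} task_info_t;

/* Creating a block is synchronized with task creation: the system task
 * allocates it and links it at the head of its tracking list. */
static task_info_t *task_register(task_info_t **list, int handle,
                                  void (*entry)(void), int priority)
{
    task_info_t *t = calloc(1, sizeof *t);
    if (!t) return NULL;
    t->handle = handle;
    t->entry = entry;
    t->priority = priority;
    t->next = *list;
    *list = t;
    return t;
}

static task_info_t *task_find(task_info_t *list, int handle)
{
    for (; list; list = list->next)
        if (list->handle == handle) return list;
    return NULL;
}

/* Destroying a task removes and frees its block, keeping the list correct. */
static void task_unregister(task_info_t **list, int handle)
{
    for (; *list; list = &(*list)->next)
        if ((*list)->handle == handle) {
            task_info_t *dead = *list;
            *list = dead->next;
            free(dead);
            return;
        }
}
```

With this structure, the user-visible API can indeed reduce to "entry point plus optional priority": `task_register` fills in everything else, and the system task consults the list whenever it needs to dispatch a message or tear a task down.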
Interrupt events are unpredictable and sudden, so problems arise when multiple applications wait to process them. The message mechanism effectively solves the design problems of event-driven multi-application systems and provides a concise, reliable way to handle the relationships between systems, within a system, and between components. In a GUI that supports only a single task, there is one serialized message queue and messages are processed strictly in sequence, leading to slow response and low efficiency. It is therefore necessary to adopt parallel message queues: while one queue's task is busy with a lengthy job, the input focus can switch to another queue. The system maintains one system message queue and multiple task message queues, each corresponding to an application task, as shown in Figure 5. The event task converts input into messages and places them in the system message queue. After retrieving an input message, the system task first examines it and then either mails it to the target application task or processes it directly. Each application task removes messages from its own queue and sends them to the appropriate window procedure for processing. An application task can mail messages to its own queue or to the queues of other application tasks. Furthermore, to serve different purposes, the system provides two basic message types: synchronous and asynchronous.

2.3 Desktop and Window Management

Windows on the screen typically overlap, and their relative positions change constantly. These windows may belong to different tasks but share the same screen, so it is important to be able to calculate and maintain them conveniently and efficiently. First, we introduce two concepts: the global clipping domain and the window clipping domain.
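The system task's routing step described above (pull from the system queue, mail to the target task's queue, or handle directly) can be sketched as follows. The types, the array-of-queues layout, and the negative-target convention for system messages are illustrative assumptions, not the paper's actual design:

```c
#include <assert.h>   /* for the usage check below */

#define MAX_TASKS 4
#define QCAP 8

/* Illustrative message: target < 0 means "handled by the system task". */
typedef struct { int target; int type; } sys_msg_t;
typedef struct { sys_msg_t buf[QCAP]; int n; } queue_t;   /* toy queue */

static int q_put(queue_t *q, sys_msg_t m)
{
    if (q->n == QCAP) return -1;
    q->buf[q->n++] = m;
    return 0;
}

/* One pass of the system task: drain the system queue, mailing each
 * message to its target application task's queue, or counting it as
 * handled directly by the system task. Returns the directly-handled count. */
static int dispatch(queue_t *sysq, queue_t task_queues[MAX_TASKS])
{
    int handled = 0;
    for (int i = 0; i < sysq->n; i++) {
        sys_msg_t m = sysq->buf[i];
        if (m.target >= 0 && m.target < MAX_TASKS)
            q_put(&task_queues[m.target], m);  /* mail to application task */
        else
            handled++;                         /* system task handles it */
    }
    sysq->n = 0;
    return handled;
}
```

Because each application task drains only its own queue, a slow handler in one task never blocks input destined for another, which is exactly the advantage of parallel queues over a single serialized one.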
Both are related to application tasks: the former indicates which areas of the screen each task occupies, and the latter gives the clipping relationships of all objects within one application task. In addition to maintaining its own clipping domain, the system task also manages the global clipping domains of all application tasks, as shown in Figure 6. Once a window's position changes, the system task must update this information and notify the affected application tasks to make the corresponding changes. When calculating its window clipping domain, on the other hand, an application task only needs to consider itself and is unaffected by other tasks, as if it were the only task running on the screen. The final clipping result is obtained by an AND operation between the global clipping and window clipping results.

2.4 Z-Order and Object Tree

The Z-order defines the stacking relationship between the set of objects (displayable windows) on the screen. GUI users change the Z-order by selecting the window to bring to the foreground. Many GUI systems implement the Z-order as an explicit list with a defined set of operations. The embedded GUI does not use this approach; instead, it uses an object tree to represent both the hierarchical relationships between GUI objects and the Z-order. Conceptually, every GUI object has a parent, children, and siblings, so all objects displayed on the screen form an inverted tree with the desktop as the root node. The Z-order can be obtained simply by performing a post-order traversal of the tree. Figure 7 illustrates the process of building the object tree. The object tree greatly simplifies desktop management, enabling convenient composition of objects and Z-order management without extra work.

3. Summary

Future GUI systems will become increasingly complex and will require richer functionality, necessitating a more open and scalable architecture.
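Since every object has a parent, children, and siblings, a natural representation is a first-child/next-sibling tree, and the Z-order then falls out of a post-order traversal with no explicit list to maintain. The node layout and function name below are illustrative assumptions:

```c
#include <assert.h>   /* for the usage check below */
#include <string.h>

/* Hypothetical object-tree node: first-child / next-sibling form,
 * with the desktop object as the root of the inverted tree. */
typedef struct gui_obj {
    const char *name;
    struct gui_obj *first_child;
    struct gui_obj *next_sibling;
} gui_obj_t;

/* Post-order traversal: visit every child subtree before the node itself,
 * so the desktop root is emitted last. The visit sequence is the Z-order. */
static void zorder(const gui_obj_t *o, const char **out, int *n)
{
    for (const gui_obj_t *c = o->first_child; c; c = c->next_sibling)
        zorder(c, out, n);
    out[(*n)++] = o->name;
}
```

For a desktop holding two windows, the first of which contains a button, the traversal emits button, win1, win2, desktop: each container appears after its contents, so the same walk that yields the Z-order also groups composed objects together, which is why the object tree gives Z-order management for free.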
The embedded GUI architecture proposed in this paper is highly flexible and portable, making it well-suited for various embedded environments.