
The development history, composition, and working principle of human-computer interfaces

2026-04-06 06:20:09

I. Introduction to Human-Computer Interface

A Human-Machine Interface (HMI), also known as a user interface, is the medium and dialogue surface for transmitting and exchanging information between humans and computers, and an important component of a computer system. It mediates interaction and information exchange between the system and the user, converting between the internal representation of information and a form that humans can understand. HMIs exist in every field that involves human-computer information exchange.

II. The Development History of Human-Computer Interfaces

1. Command Language User Interface

Early human-computer interfaces (HCIs) used command-language interfaces: interaction was limited to commands and queries, conducted entirely in text form, with the user typing commands and queries to the system. This required an astonishing amount of memorization and extensive training, demanding a high level of expertise from the operator. For the average user, command-language interfaces were error-prone, unfriendly, difficult to learn, and offered weak error handling. This period is therefore considered the era of human-computer confrontation.

2. Graphical User Interface

With advances in hardware and in software technologies such as computer graphics, software engineering, and window systems, graphical user interfaces (GUIs) emerged, became widely used, and are now the mainstream human-computer interface. Mature commercial systems include Apple's Macintosh, IBM's Presentation Manager (PM), Microsoft's Windows, and the X Window System running in Unix environments.

GUIs are also known as WIMP interfaces, in which windows, icons, menus, and a pointing device form a unified desktop. A window is the basic interactive area, typically comprising a title bar, elements that support moving and resizing, a menu bar, toolbars, and a working area. Windows are usually rectangular, though many programs now use irregular shapes for a more dynamic, personalized look. Icons are graphic symbols that identify objects. Some derive from technical concepts and must be memorized on first encounter, such as minimize and close; others derive from everyday life, are more pictographic, and require less memorization, such as a speaker for volume control, a house for home, and an envelope for mail. Menus present the action commands a user can select; all of a program's user commands are contained in its menus. Menus are usually displayed through windows, and common forms include toolbars (including graphical toolbars), drop-down menus, pop-up (right-click) menus, and cascading (multi-level) menus. A pointer is a graphic that visually marks the position reported by a pointing device (such as a mouse or trackball). Common pointer shapes include the arrow, the crosshair, the text-input "I" beam, and the hourglass.

GUIs can display different types of information simultaneously, allowing users to switch between several environments without losing the connection between tasks. Users can conveniently perform tasks through drop-down menus, greatly improving interaction efficiency while reducing keyboard input. This period is considered the human-computer coordination period.

3. Multimedia User Interface

The rapid development of multimedia technology has provided an opportunity for the advancement of human-computer interfaces. Multimedia technology has introduced dynamic media such as animation, audio, and video into user interfaces that previously only used static media. In particular, the introduction of audio media has greatly enriched the forms in which computers present information and broadened the bandwidth of computer output. At the same time, the introduction of multimedia technology has also improved people's ability to select and control information presentation formats, enhanced the integration of information presentation with human logic and creativity, and expanded human information processing capabilities. With the help of multimedia, users can improve the efficiency of information reception; therefore, multimedia information is more attractive than single-media information and is more conducive to people's active exploration of information.

Unfortunately, while multimedia user interfaces have become richer in terms of information output, they still force users to use conventional input devices (keyboard, mouse, and touchscreen) for information input. This means input is single-channel, resulting in a significant imbalance between input and output, which limits its application. Although the combination of multimedia and artificial intelligence technologies will change this situation, today's multimedia user interfaces are still in the exploratory and improvement stage. At this juncture, the rise of research into multi-channel user interfaces undoubtedly brings greater hope for solving the input-output imbalance in human-computer interfaces.

4. Multi-channel user interface

Since the late 1980s, multi-channel user interfaces have emerged as a new research field in human-computer interaction (HCI) technology, attracting significant international attention. Research on multi-channel interfaces arose to address a shortcoming of current WIMP/GUI and multimedia user interfaces: their unbalanced communication bandwidth between input and output. Multi-channel interfaces integrate new interaction channels, devices, and technologies, including eye gaze, voice, and gestures, enabling users to engage in natural, parallel, and collaborative human-computer dialogue through multiple channels. The machine, in turn, captures the user's interactive intent by integrating precise and imprecise inputs from multiple channels, improving the naturalness and efficiency of the interaction. Research primarily focuses on input channels other than the keyboard and mouse, including voice and natural language, gestures, handwriting, and eye movements, with particular emphasis on studies of concrete systems.

Multi-channel user interfaces, along with multimedia user interfaces, jointly improve the naturalness and efficiency of human-computer interaction. Multimedia user interfaces primarily focus on the efficiency of users' understanding and acceptance of computer-output information, while multi-channel user interfaces primarily focus on the methods of user input and the computer's understanding of that input. The goals of the multi-channel human-computer interfaces studied today can be summarized as follows: enabling users to interact with computers using their existing everyday skills as much as possible; increasing the throughput and variety of human-computer communication information, leveraging the different cognitive potentials of humans and computers; and incorporating the achievements of existing human-computer interaction technologies, ensuring compatibility with traditional user interfaces, especially the widely popular GUIs, so that the knowledge and skills of experienced and expert users can be utilized.

In the process of human-computer interaction, people are no longer satisfied with simply displaying or printing information on a screen; they further demand interaction through senses such as sight and hearing, leading to the development of multimedia user interfaces. People are also no longer satisfied with single-channel input, wanting to utilize more of their senses of smell, touch, and body language, gestures, or verbal commands, resulting in multi-channel user interfaces. Furthermore, people want to more naturally "enter" the environmental space, forming a "direct dialogue" between humans and computers and achieving an "immersive" experience; for this purpose, virtual reality human-computer interfaces have emerged.

5. Virtual Reality Human-Computer Interface

Virtual Reality (VR), also known as a Virtual Environment, aims to provide users with an immersive experience. In traditional human-machine systems, the human is the operator and the machine passively reacts; in current computer systems, the human is a user and the interaction between human and computer is a dialogue. Since the 1990s, however, a new interface theory has emerged which argues that dialogue-based human-computer interfaces are a flawed development model: it easily misleads inexperienced users, and even experienced software designers can be led into creating unusable interface systems. This theory posits that human-computer interaction should not be a dialogue but the human's exploration of another world; the job of programming is to create a world that can be explored and traversed, in which even a beginner can wander, accumulating experience through ingenuity and hands-on operation. On this view, the computer should be seen as a world rather than a conversation partner, and the research goal for human-computer interfaces is to shorten the distance between humans and that world. Future interaction models should therefore break through the limits of the screen, allowing users to enter the computer's virtual space directly and interact directly with 3D objects. This is the starting point of virtual reality human-computer interface theory.

III. How to Use the Human-Computer Interface

1) Define the monitoring task requirements and select an appropriate HMI product;

2) Edit the "project file" on a PC using the screen configuration software;

3) Test and save the edited "project file";

4) Connect the PC to the HMI hardware and download the "project file" to the HMI;

5) Connect the HMI to industrial controllers (such as PLCs and instruments) to enable human-machine interaction.
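The "project file" edited in the steps above is essentially a description of screens and the controller tags they display. The sketch below shows a hypothetical project-file structure and a simple consistency check; the field names (`tags`, `screens`, `plc_address`) are illustrative assumptions, not any vendor's actual format.

```python
# Hypothetical HMI "project file" structure with a basic validity check.
# All field names are illustrative assumptions, not a real vendor format.

PROJECT_FILE = {
    "device": {"model": "HMI-7", "resolution": (800, 480)},
    "tags": {
        "motor_speed": {"plc_address": "D100", "type": "int16"},
        "run_cmd":     {"plc_address": "M0",   "type": "bool"},
    },
    "screens": [
        {"name": "main", "widgets": [
            {"kind": "numeric_display", "tag": "motor_speed"},
            {"kind": "button", "tag": "run_cmd", "label": "Start"},
        ]},
    ],
}

def validate(project: dict) -> list[str]:
    """Return a list of problems, e.g. widgets bound to undefined tags."""
    errors = []
    tags = project.get("tags", {})
    for screen in project.get("screens", []):
        for widget in screen.get("widgets", []):
            if widget["tag"] not in tags:
                errors.append(f"{screen['name']}: undefined tag {widget['tag']!r}")
    return errors
```

A check like this corresponds to the "test the project file" step: configuration tools typically refuse to download a project whose widgets reference tags that do not exist.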

IV. Composition and Working Principle of HMI (Human Machine Interface) Products

Human-Machine Interface (HMI) products consist of two parts: hardware and software. The hardware includes a processor, display unit, input unit, communication interface, and data storage unit. The processor's performance determines the overall performance of the HMI product and is its core component. Depending on the product level, 8-bit, 16-bit, or 32-bit processors can be selected. HMI software generally consists of two parts: system software running on the HMI hardware and screen configuration software running on a PC with a Windows operating system. Users must first create a "project file" using the HMI's screen configuration software, and then download the created "project file" to the HMI's processor for execution via the serial communication port between the PC and the HMI product.
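The serial download step can be pictured as framing the project bytes so the HMI side can detect transmission errors. This is a minimal sketch under an assumed framing scheme (length header plus CRC32 trailer); real vendors use their own proprietary protocols.

```python
# Sketch of serial "project file" transfer: the PC side frames the payload,
# the HMI side validates it before storing. The framing scheme (4-byte
# big-endian length + CRC32 trailer) is an assumption for illustration.
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """PC side: prefix the payload with its length, append a CRC32 trailer."""
    header = struct.pack(">I", len(payload))
    trailer = struct.pack(">I", zlib.crc32(payload))
    return header + payload + trailer

def unframe(data: bytes) -> bytes:
    """HMI side: check length and CRC32, return the payload or raise."""
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + length]
    (crc,) = struct.unpack(">I", data[4 + length:4 + length + 4])
    if zlib.crc32(payload) != crc:
        raise ValueError("checksum mismatch - retransmit the frame")
    return payload
```

On a checksum failure the receiver would request retransmission rather than execute a corrupted project, which is why configuration tools verify the download before the HMI switches to run mode.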

V. Basic Functions and Selection Criteria of HMI (Human Machine Interface) Products

Basic functions of human-computer interface:

1) Equipment operating status display;

2) Data and text input operations, and printing output;

3) Production formula storage and equipment production data recording;

4) Simple logic and numerical operations;

5) Network connections to a variety of industrial control devices.

Human-computer interface selection criteria:

1) Display screen size, color, and resolution;

2) HMI processor speed performance;

3) Input method: Touchscreen or membrane keyboard;

4) Screen storage capacity: Note whether the manufacturer specifies the unit of capacity as bytes or bits;

5) Types and number of communication ports, and whether printing function is supported.

VI. How to control a PLC using a human-machine interface

Human-Machine Interface (HMI) Control: the PLC's analog input module converts the thermocouple signal into a digital value, which is scaled and displayed on the touchscreen. Temperature control is the crucial part. A simpler method is on/off control using comparison instructions, but this produces significant temperature fluctuations; a better method is closed-loop PID control, using the PLC's PID instructions to regulate the temperature. For switching the heating circuit, contactless devices such as solid-state relays are typically chosen because, having no mechanical contacts, they last longer.
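A discrete PID loop of the kind a PLC's PID instruction implements can be sketched as follows. The gains, the output clamp, and the first-order heater model are illustrative assumptions, not tuned values for any real furnace.

```python
# Minimal discrete PID temperature loop against a crude first-order heater
# model. Gains (kp, ki, kd) and plant constants are illustrative assumptions.

def pid_step(setpoint, measured, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    """One PID update; `state` carries the integral and the previous error."""
    error = setpoint - measured
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    out = kp * error + ki * state["integral"] + kd * derivative
    if 0.0 <= out <= 100.0:
        # simple anti-windup: only integrate while the output is unsaturated
        state["integral"] += error * dt
    return max(0.0, min(100.0, out))  # clamp to 0-100 % heater power

def simulate(setpoint=200.0, steps=400):
    """Run the loop on a toy plant: heating ~ power, loss toward ambient."""
    temp = 25.0
    state = {"integral": 0.0, "prev_error": 0.0}
    for _ in range(steps):
        power = pid_step(setpoint, temp, state)
        temp += 0.05 * power - 0.02 * (temp - 25.0)
    return temp
```

Compared with bare comparison-instruction (on/off) control, the integral term removes the steady-state offset and the derivative term damps the oscillation, which matches the text's point about reduced temperature fluctuation.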

1. Input sampling phase

During the input sampling phase, the programmable logic controller (PLC) sequentially reads all input states and data in a scanning manner and stores them in the corresponding cells of the I/O image area. After input sampling is completed, the process transitions to the user program execution and output refresh phases. During these two phases, even if the input states and data change, the states and data of the corresponding cells in the I/O image area will not change. Therefore, if the input is a pulse signal, the width of the pulse signal must be greater than one scan cycle to ensure that the input can be read under any circumstances.

2. Output refresh phase

After the user program scan is complete, the programmable logic controller (PLC) enters the output refresh phase. During this period, the CPU refreshes all output latch circuits according to the corresponding states and data in the I/O image area, and then drives the corresponding peripherals through the output circuits. This is when the PLC actually outputs.
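The two phases above, with the user-program execution between them, can be sketched as a minimal scan-cycle simulation. The single "rung" here (output Q0 follows input I0) is a stand-in for a real user program, and the names I0/Q0 are illustrative.

```python
# Sketch of the PLC scan cycle described above: inputs are latched into an
# I/O image area, the user program runs against that frozen image, and the
# output latches are refreshed only afterwards.

class MiniPLC:
    def __init__(self):
        self.input_image = {"I0": False}    # I/O image area, input side
        self.output_image = {"Q0": False}   # I/O image area, output side
        self.output_latch = {"Q0": False}   # drives the actual peripherals

    def scan(self, physical_inputs):
        # 1. Input sampling: copy the physical inputs into the image area.
        #    Terminal changes after this point are invisible until the next
        #    scan - a pulse must outlast one scan cycle to be seen at all.
        self.input_image.update(physical_inputs)
        # 2. User program execution, reading only the frozen input image.
        #    Stand-in program, one rung: Q0 follows I0.
        self.output_image["Q0"] = self.input_image["I0"]
        # 3. Output refresh: copy the output image into the output latches.
        #    Only now does the PLC actually output.
        self.output_latch.update(self.output_image)
        return dict(self.output_latch)
```

Because each scan latches inputs once and refreshes outputs once, an input that toggles and reverts between two sample points never reaches the program, which is exactly the pulse-width requirement stated in the input sampling phase.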
