
Development of Service Robots for Smart Home and Intelligent Living

2026-04-06 06:21:54

In service robot systems for smart homes and smart living, a reliable network environment is essential for real-time, seamless communication. This network spans three levels: communication between the home/community network and the external cloud; data sharing and management between the information center inside the home/community environment and the service robot, home sensor systems, smart appliances, and other devices; and the exchange of internal data between the service robot and its sensor systems. This paper focuses on two key technologies for smart-home service robot systems: cloud-based integration technology for service robots and highly user-friendly human-robot interaction technology. Both are integrated into a home service robot that supports human-robot interaction through WeChat and a voice cloud, validating the feasibility of a cloud-based home service robot architecture.

Cloud-based integration technology for service robots

The rapid evolution of computer and network technologies has produced a family of cloud technologies, represented by cloud computing and cloud storage. For different service needs, cloud technology can provide service robots with common technical solutions and platforms. Such a solution can centralize computation on the server side, in the spirit of the "thin client" model, while still balancing server-side centralization against ubiquitous connectivity at the client.

Human reaction speed depends primarily on the sensitivity of receptors (vision, hearing, etc.), the efficacy of the central nervous system, and the excitability of effectors (muscle fibers), with the fastest reactions on the order of 100 ms. In human-robot coexistence environments, the required reaction speed of service robots can be taken to lie in the range of 10 to 1000 ms. The main bottleneck for robot reaction speed is information processing and synthesis, the robot's counterpart of the human central nervous system. Building on recent advances in machine learning, deep learning, artificial intelligence, and brain science, common service robot capabilities such as speech recognition, image recognition, video interpretation, task coordination, and path planning can be integrated in the cloud to meet the different speed requirements of service robots.
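
The local-versus-cloud trade-off implied by these reaction-speed ranges can be sketched as a latency-budget dispatcher. The task names, budgets, and cloud round-trip figure below are illustrative assumptions, not measurements from this article:

```python
# Assumed round-trip overhead for a cloud call, in milliseconds (illustrative).
CLOUD_ROUND_TRIP_MS = 150

# Hypothetical latency budgets drawn from the 10-1000 ms reaction range.
TASK_BUDGETS_MS = {
    "collision_stop": 10,       # safety reflex: must stay local
    "speech_recognition": 500,  # tolerant enough for a cloud service
    "path_planning": 1000,      # batch-style work: fine in the cloud
}

def dispatch(task: str) -> str:
    """Return 'local' or 'cloud' depending on whether the task's latency
    budget can absorb the cloud round trip."""
    budget = TASK_BUDGETS_MS[task]
    return "cloud" if budget > CLOUD_ROUND_TRIP_MS else "local"
```

A 10 ms safety reflex must run locally, while speech recognition, with hundreds of milliseconds of slack, can be offloaded to the cloud.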

On the other hand, the key to service robots is service itself, and the key to service is service skills. Regarding the question of how to express, evolve, and optimize service skills, we can leverage the "Internet+" approach with the support of big data technology to achieve preliminary results and establish a service skills loading and upgrading system for specific service robots.

The integration of cloud technology and smart homes ultimately boils down to the connection and data interaction between external cloud servers and the home's internal network. The home information center, which supports service robots within a certain scope, acts as a communication hub between the cloud server and the internal network. It is responsible for uploading and downloading local tasks and resources, managing and maintaining data on the service robot itself and the status data of various sensors and smart appliances in the home. It can also plan various tasks for the service robot, thereby further reducing the task difficulty and hardware configuration requirements of the robot. Therefore, the home information center will play a crucial bridging role in service robot systems for smart homes/smart living.
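
The bridging role of the home information center can be sketched as a minimal status hub. The class, device names, and status fields below are hypothetical illustrations, not the article's actual design:

```python
class HomeInformationCenter:
    """Minimal sketch of a home information center: local devices upload
    their latest status, and cloud-side queries read it back."""

    def __init__(self):
        self.status = {}  # device name -> latest reported status

    def report(self, device: str, state: dict) -> None:
        # Called by the robot, sensors, or smart appliances to upload status.
        self.status[device] = state

    def query(self, device: str) -> dict:
        # Called on behalf of the cloud server to read local status;
        # unknown devices yield an empty record.
        return self.status.get(device, {})
```

Keeping device state in this hub lets the cloud query local status without every sensor or appliance being directly reachable from outside the home.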

Highly user-friendly human-robot interaction technology

User experience evaluates a service in terms of user satisfaction. In human-robot interaction, it refers to the degree to which a user, in a given objective environment, approves of the service received while interacting with the robot.

The richness of human-robot interaction modes is also a crucial indicator of the user experience of service robots. Since typical users of service robots include the elderly, children, and people with disabilities who cannot fully care for themselves, highly user-friendly human-robot interaction technology is key to the robots' success. In practice, human-service robot interaction involves two aspects: active control by the user, and information exchange between the service robot and the user.

For active control by the user, next-generation interaction modes based on mobile smart terminals, EEG signals, Kinect/Leap Motion sensors, force and haptic devices, and virtual reality devices ensure that service robots can accept control and task commands from diverse groups of people, truly becoming effective helpers that understand their users. For information exchange between the service robot and the user, the robot detects the user's status through motion-sensing peripherals and wearable sensors, supporting care functions such as health assessment and fall detection. On top of human feature recognition and positioning using these sensors, a low-cost mobile robot system built from general-purpose hardware and software modules can provide autonomous patrolling, dynamic mapping, user tracking, and cloud communication in homes, communities, and large elderly-care institutions.
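
Fall detection from wearable sensors, mentioned above, is often approached with simple threshold heuristics. The sketch below uses one common pattern, a near-free-fall window followed by an impact spike in accelerometer magnitude; the thresholds are illustrative assumptions, and real systems tune them and add posture or inactivity checks:

```python
import math

FREE_FALL_G = 0.4   # magnitude well below 1 g suggests free fall (assumed)
IMPACT_G = 2.5      # a large spike suggests impact (assumed)

def detect_fall(samples):
    """samples: list of (ax, ay, az) accelerometer tuples in g.
    Returns True if a free-fall window is later followed by an impact spike."""
    falling = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag < FREE_FALL_G:
            falling = True          # candidate free-fall phase
        elif falling and mag > IMPACT_G:
            return True             # impact after free fall: likely a fall
    return False
```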

Smartphones, tablets, and other mobile devices have gradually become an integral part of people's daily lives through various apps. As a highly portable and user-friendly interactive medium, smart mobile terminals will play an increasingly important role in service robots, with typical applications in real-time monitoring, service robot control, and smart home appliance control.

With the support of a cloud architecture, cloud resources can be exploited easily: for example, the intelligent voice interaction services (speech synthesis, speech recognition, and speech dictation) provided by voice cloud platforms such as iFlytek. Users do not need to maintain a huge speech recognition library locally, which greatly expands the capabilities of the service robot system.

Safety mechanisms for human-robot interaction

As humans and robots increasingly share workspaces, ensuring human safety during interaction is particularly crucial for human-centric service robots. Robot safety means that a robot, whether operating normally or in an abnormal situation, must not directly or indirectly harm people within its workspace. It encompasses two aspects: active safety technology and passive safety technology.

Active safety technology for robots, also known as pre-collision control, involves assessing the severity of a collision before it occurs and taking measures to prevent it. By designing lightweight robotic arms and novel compliant joints, and by employing active force compliance control and variable stiffness mechanism control based on the hardware design, both the performance of the robot system and the robot's user-friendly interactive safety can be ensured.
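
The active-compliance idea above can be sketched as a joint-space impedance law whose stiffness drops when a human is detected nearby, limiting potential contact forces. The gains and the binary proximity rule are illustrative assumptions, not the controllers described in the article:

```python
def joint_torque(q, q_des, dq, human_near,
                 k_stiff=100.0, k_soft=20.0, d=5.0):
    """Impedance law tau = K * (q_des - q) - D * dq for one joint.
    The stiffness K is switched to a low value when a human is near,
    so position errors translate into smaller forces."""
    k = k_soft if human_near else k_stiff
    return k * (q_des - q) - d * dq
```

A variable-stiffness mechanism achieves the same effect in hardware; this software switch is only a minimal stand-in for the idea.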

Passive safety technology for robots, also known as post-collision control, involves strictly limiting the impact force during a collision to prevent actual injury to humans. This is achieved by designing passively compliant structures in robots or wrapping links with viscoelastic materials to suppress or buffer the impact of collisions during human-robot interactions. Developing passive robot systems, such as the deformable arm shown in Figure 1, as the operating arm of a service robot, allows for safe and reliable interactive tasks while ensuring the functionality of the operating arm system.

Furthermore, a multi-layered human-robot safety mechanism also deserves attention. Safety, as an essential attribute of human-robot interaction, draws on theories and technologies from multiple disciplines and fields, including artificial intelligence, information fusion, control methods, and mechanical design. Designing and implementing a multi-layered safety mechanism that combines several safety technologies is crucial for harmonious human-robot coexistence. Key issues in implementing service robot systems include real-time collision-avoidance behavior optimization and control, prediction of robot motion outcomes with environmental early warning, and monitoring of environmental status information. Trajectory planning and control alone cannot fully solve the safety problems of human-robot interaction; it should be combined with other safety strategies to improve overall safety performance.
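
A multi-layered mechanism of this kind can be sketched as a safety policy keyed on human-robot separation distance. The zone boundaries below are illustrative assumptions; real systems derive them from robot speed and stopping distance (in the spirit of ISO/TS 15066-style speed and separation monitoring):

```python
def safety_action(distance_m: float) -> str:
    """Map the measured human-robot distance (meters, assumed thresholds)
    to a layered safety response."""
    if distance_m > 2.0:
        return "normal"     # full-speed operation, monitoring only
    if distance_m > 1.0:
        return "warn_slow"  # warn the user and reduce speed
    if distance_m > 0.3:
        return "creep"      # minimal speed, prepare to stop
    return "stop"           # protective stop
```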

Case Studies of Home Service Robots

The home service robot "Xiao Nan" is a prototype completed with the support of the National "863 Program," as shown in Figure 2. This case study uses "Xiao Nan" as an experimental platform to design a cloud-based home service robot architecture and a ROS-based software architecture for the robot system. It implements human-robot interaction via WeChat and a voice cloud, verifying the feasibility of the cloud-based home service robot architecture and further exploring cloud-based integration technology and highly user-friendly human-robot interaction technology for service robots.

Taking the home information center in the "Xiao Nan" home service robot system as an example, the cloud-based home service robot architecture is shown in Figure 4. The system pioneers a new product model for home service robots using a mobile internet approach. By combining cloud computing with robots and leveraging the elastic resources of ubiquitous cloud infrastructure to overcome the limitations of traditional robots, significant weight reduction in home service robots becomes possible. Following the design concept of Robot as a Service (RaaS), the system can expose local robot resources as cloud services for users to access, and can also draw on cloud resources to provide services to the robot.

The human-computer interaction process of "Xiao Nan," based on the RaaS cloud architecture, consists of three steps. First, the user's intent is identified through touch interaction on a mobile terminal client. Second, the intent is decomposed into several service requests with a sequential relationship, and these requests are issued by calling the WebService interface provided by the "Information Center Layer." Third, the devices in the "Execution and Perception Layer" respond to the service requests, return the results to the human-computer interaction terminal, and present the interaction outcome to the user in a visualized 3D virtual simulation scene.
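
The decomposition step can be sketched as a table mapping an intent to an ordered list of service requests that are dispatched in turn. The intent name, its plan, and the call_webservice stub below are hypothetical; the real system calls the WebService interface of the "Information Center Layer":

```python
# Hypothetical decomposition of a single user intent into ordered requests.
INTENT_PLANS = {
    "fetch_water": ["navigate_to_kitchen", "grasp_cup", "return_to_user"],
}

def call_webservice(request: str) -> str:
    """Stand-in for the Information Center Layer's WebService call."""
    return f"{request}:ok"

def handle_intent(intent: str):
    """Decompose the intent and issue its service requests in sequence,
    collecting each response for the interaction terminal to display."""
    return [call_webservice(req) for req in INTENT_PLANS[intent]]
```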

The system comprises four parts: the user terminal, the cloud, the information center, and the command execution terminal. Users first follow the "Lingjie Home Service Robot" WeChat official account and send interactive commands. The Tencent server forwards these commands to the cloud, where text keyword recognition and filtering are performed before the user commands are passed on to the home information center. Through a dedicated interface, the home information center sends the commands to the command execution terminal, which returns the execution results to the information center; the user then receives a response in text and image form.
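
The cloud-side keyword recognition and filtering step can be sketched as a scan of the free-form WeChat message for a known command keyword. The keyword list is a hypothetical example, not the system's actual vocabulary:

```python
# Hypothetical command vocabulary recognized by the cloud filter.
COMMAND_KEYWORDS = ["patrol", "photo", "temperature"]

def filter_command(message: str):
    """Return the first recognized keyword in the message, or None if the
    message contains no actionable command (and is therefore dropped
    before reaching the home information center)."""
    text = message.lower()
    for kw in COMMAND_KEYWORDS:
        if kw in text:
            return kw
    return None
```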

The system leverages a voice cloud platform for human-computer dialogue. Supported by cloud technology and built on the iFlytek voice cloud platform, it combines local and online voice recognition to enable voice control of the home service robot. In recognizing a voice command, local recognition is tried first. The user creates a command word library on the mobile terminal and uploads it to the voice cloud platform, which selects matching voice segments from the uploaded library to form a local voice library that is downloaded back to the terminal. During local recognition, the mobile terminal matches the captured voice segments against the local voice library and outputs the result. If local recognition fails, the terminal uploads the captured segments to the voice cloud platform, which matches them against its voice library and returns the result. After receiving the recognition result, the home information center parses the returned string and invokes the corresponding execution device through a WebService interface function, achieving voice control of the robot.
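
The local-first flow above can be sketched as a two-stage lookup: try the downloaded local library, and fall back to the cloud platform only on a miss. Both "libraries" are stand-in string sets here; the real system matches audio segments via the voice cloud platform, not text:

```python
# Stand-in local voice library (the real one holds user-defined voice segments).
LOCAL_LIBRARY = {"forward", "back", "stop"}

def cloud_recognize(utterance: str) -> str:
    """Stand-in for an online recognition request to the voice cloud."""
    return utterance.lower()

def recognize(utterance: str):
    """Return (result, source), where source records whether the local
    library or the cloud fallback produced the recognition."""
    if utterance in LOCAL_LIBRARY:
        return utterance, "local"       # local hit: no network round trip
    return cloud_recognize(utterance), "cloud"  # miss: defer to the cloud
```

Keeping frequent commands local avoids the cloud round trip on the common path while still covering open-ended utterances.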
