Underlying intelligence
The cobot community is no stranger to AI. However, embedding AI into cobot-based applications using standard teach pendants or graphical user interfaces has always been a time-consuming challenge, even for the most dedicated engineers.
In recent years, many companies have partnered with Universal Robots (UR) to address this challenge. The latest partner is MathWorks, developer of the mathematical computing software MATLAB and Simulink. Earlier this year, MATLAB and Simulink received UR+ certification, meaning the certified software can be used seamlessly with UR cobots and offered through the UR+ ecosystem, helping engineers develop advanced cobot applications.
YJ Lim, Head of Technical Robotics at MathWorks, said: “MathWorks has been looking at the cobot space for many years, but this is our first formal collaboration with a cobot manufacturer. This shows that we recognize the potential of cobots in areas that require AI, offline simulation, motion planning, and computer vision capabilities.”
The collaboration between Universal Robots and MathWorks goes beyond a symbolic pairing of cobots and AI. It allows robotics engineers to incorporate the full capabilities of MATLAB and Simulink into cobot-based industrial applications and to embed AI into their underlying system designs. It also lets engineers deploy algorithms and AI on cobots by generating C++ code that runs directly on embedded targets such as GPU boards.
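The generated C++ itself is beyond the scope of a short example, but the end result of any such deployment is code that commands the arm. As a rough, hypothetical analogue, the sketch below sends a single URScript joint move to a UR controller over its secondary interface (TCP port 30002); the IP address and joint angles are placeholder values.

```python
import socket

# Minimal sketch: command a UR cobot by sending URScript over the
# controller's secondary interface (TCP port 30002). The IP address and
# the joint angles (in radians) below are placeholders for illustration.
ROBOT_IP = "192.168.0.10"
SECONDARY_PORT = 30002

# movej(q, a, v): joint move to target q with acceleration a and velocity v.
urscript = "movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.2, v=0.25)\n"

with socket.create_connection((ROBOT_IP, SECONDARY_PORT), timeout=5) as sock:
    sock.sendall(urscript.encode("utf-8"))
```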
Many cobot application suites now incorporate AI capabilities, so the combination of cobots and AI is not new. However, the collaboration between MathWorks and Universal Robots is distinctive and offers a model for other cobot solution providers, because it gives engineers the tools they need to build advanced industrial automation systems on affordable cobot hardware.
Lim said, "Traditional automation technology has largely been limited to large enterprises. Bringing MATLAB and Simulink into the cobot field will allow more emerging and medium-sized enterprises to enjoy the benefits of AI and automation."
Human-like perception
After observing a jumble of objects, such as parts on a rack, humans can immediately work out how to pick up the item they want without bumping into the other parts. Our hands naturally trace paths that avoid collisions with the surrounding environment, and we can even pick up several objects and place them together with high precision.
Automation engineers know that robots cannot always do the same. Picking unstructured objects from racks has therefore long been considered a difficult problem that requires significant investment to solve.
Apera AI's "4D Vision" technology has also received UR+ certification, giving cobots "human-like perception capabilities." That claim may sound exaggerated at first, but it has been proven at several levels and can markedly improve the speed and efficiency of robots, especially in rack picking.
Eric Petz, marketing director at Apera AI, said, "Our system's total vision cycle time is as short as 0.3 seconds (about 3 Hz), which means it can analyze a disordered scene and issue instructions to the robot in roughly the time it takes the human brain to process the same problem. Our vision system has to be faster than the robot."
Generally, the target rate for high-speed automated rack picking is 2,000 picks per hour, which leaves a cycle time of only 1.8 seconds per pick. Because the robot's movement speed is limited, the time spent on visual recognition must be kept to an absolute minimum.
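A quick back-of-the-envelope sketch of that budget, using only the figures quoted above, shows how little time is left for motion once vision is accounted for:

```python
# Cycle-time budget for high-speed rack picking, using the figures quoted
# in the article (2,000 picks per hour; 0.3 s total vision cycle time).
picks_per_hour = 2000
cycle_time = 3600 / picks_per_hour        # 1.8 s available per pick
vision_time = 0.3                         # Apera AI's quoted vision cycle time
motion_budget = cycle_time - vision_time  # time left for the robot to move

print(f"Cycle time per pick:   {cycle_time:.1f} s")
print(f"Left for robot motion: {motion_budget:.1f} s")
```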
The first step in this process is to train the AI neural network using CAD drawings or 3D-scanned models of the products to be picked. Two 2D cameras then capture images of the factory-floor scene (for example, a cluttered rack), which are combined into a 3D scene. Next, the 4D vision system identifies the "most needed" objects and tells the cobot the fastest and safest path to pick them. Apera Vue software, embedded in the controller, provides the robot with pose and path-planning data, allowing it to complete the pick along a collision-free, safe path.
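The sequence described above can be summarized schematically as follows. Every function and attribute name in this sketch is a hypothetical placeholder standing in for the corresponding stage, not Apera AI's actual API.

```python
# Schematic sketch of one vision-guided pick cycle as described above.
# All names are hypothetical placeholders; helpers such as reconstruct_3d()
# and plan_collision_free_path() would be supplied by the vision and
# motion-planning software.

def pick_cycle(camera_left, camera_right, model, robot):
    # 1. Capture the cluttered scene with two 2D cameras.
    img_left = camera_left.capture()
    img_right = camera_right.capture()

    # 2. Fuse the two views into a 3D representation of the scene.
    scene_3d = reconstruct_3d(img_left, img_right)

    # 3. Let the trained neural network find candidate parts and rank them
    #    by how quickly and safely they can be picked.
    candidates = model.detect(scene_3d)
    target = max(candidates, key=lambda c: c.pick_score)

    # 4. Plan a collision-free path to the chosen part and execute it.
    path = plan_collision_free_path(scene_3d, robot.current_pose(), target.grasp_pose)
    robot.execute(path)
```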
Petz said, "Identifying and prioritizing pickable objects is something humans excel at, and it's also what we train AI neural networks to do. This can reduce the time required from recognizing an object to issuing motion commands to the robot."
The first step of the process, covering the CAD or 3D-scanned models and the AI training, has the robot learn from approximately one million permutations and combinations how to pick the required products as it would in a real-world environment. The training is carried out in a digital twin environment and includes ambient light levels ranging from full sunlight to near darkness.
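The lighting variation in particular is a classic domain-randomization step. A minimal sketch of what generating such synthetic training scenes might look like is shown below; the scene parameters, ranges, and the commented-out renderer call are all invented for illustration and are not Apera AI's tooling.

```python
import random

NUM_SCENES = 1_000_000  # "approximately one million permutations and combinations"

def random_scene_params():
    """Sample one randomized synthetic scene: clutter level, part poses,
    and ambient lighting from near darkness to full sunlight."""
    num_parts = random.randint(1, 30)
    return {
        "num_parts": num_parts,
        "part_poses": [
            (random.uniform(-0.3, 0.3),    # x (m)
             random.uniform(-0.3, 0.3),    # y (m)
             random.uniform(0.0, 0.2),     # z (m)
             random.uniform(-3.14, 3.14))  # yaw (rad); full 6-DoF in practice
            for _ in range(num_parts)
        ],
        "ambient_lux": random.uniform(1, 100_000),  # near darkness to direct sunlight
    }

# for _ in range(NUM_SCENES):
#     params = random_scene_params()
#     left, right, labels = digital_twin.render(params)  # hypothetical renderer
#     dataset.add(left, right, labels)
```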
Petz added, "If humans can see an object, we can help robots see it. Most traditional systems rely on structured light technology, lasers, or sensors to identify objects and issue commands to the robot."
Enhanced flexibility
More and more manufacturers are looking for flexible automation solutions to quickly adapt to product customization and portfolio changes. The synergy of AI and cobots can help design engineers build a system that supports high-mix, low-volume (HMLV) production.
Cobots are highly mobile, flexible, and easy to program, allowing them to switch seamlessly between different applications such as palletizing, inspection, polishing, and machine tending. Combined with the learning capabilities of AI, this creates a highly flexible pairing of automation and intelligence that lets cobots take on an even wider range of tasks.
Petz of Apera AI said, "Just as humans can analyze whether manufacturing steps are correct, our AI can be trained to understand whether parts are placed or assembled correctly."
Apera AI's vision system has a total vision cycle time of just 0.3 seconds (about 3 Hz). This means that robotic work cells using the company's vision software can reach previously unattainable productivity levels: the vision system moves faster than the robot, rather than the robot waiting on the vision system. Prior to production deployment, the vision solution goes through millions of AI simulation cycles, giving the system an in-depth understanding of the objects from all directions and of how it will integrate with the specific robot, end-of-arm tooling, and operating environment. (Apera AI)
In a deployment at a Fortune 500 manufacturer, a Universal Robots cobot and Apera AI's vision intelligence worked together to complete the high-precision task of applying sealant to the edges of metal valve structures of various sizes and shapes.
The integrated system can flexibly identify parts and automatically dispense material in specific patterns. These capabilities confirm that each workpiece is in the correct position during dispensing and eliminate the need to build a dedicated fixture for every part type.
Another Apera AI customer is Pennsylvania-based Precision Cobotics, which has combined Universal Robots cobots with Apera AI's technology to develop standardized machine-tending solutions for CNC machining and laser marking.
The solution can pick randomly placed parts and load unfinished workpieces into the machine with great precision. The cobot can then transfer finished parts to another production line or place them in designated locations, such as a conveyor or pallet.
Petz explained, "The current practice is to place unprocessed parts into gridded trays, which requires operator intervention or additional automation. If parts are picked directly from the rack instead, none of these fixturing devices are needed, enabling flexible, high-mix production and more efficient use of labor."
"Simple"
AI makes it easier for robotics engineers to create advanced cobot-based applications, and end users also stand to benefit from easy-to-use intelligent automation. None of this, however, has to complicate the core user experience.
For a small or medium-sized enterprise with little robotics experience and a premium on production speed, the vendor can keep all of the AI in the background, ensuring a smooth end-user experience and faster deployment. Labor-constrained companies, meanwhile, want a solution that can be quickly and easily adapted to their specific application, ideally with AI optimizing that process behind the scenes.
This is the premise behind Rapid Robotics' Rapid Machine Operator, a flexible collaborative automation system built for rapid deployment. Before deployment, Rapid Robotics uses third-party AI software to run the customer's product through millions of permutations and combinations in a digital twin environment, teaching the robot how to choose the optimal "pick-up time" and plan better paths. End users, however, never need to see or handle any of this complexity.
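As a rough illustration of how such offline simulation can feed back into deployment choices, the sketch below runs candidate pick strategies through many simulated trials and keeps the best-scoring one. The strategies, scoring, and trial count are invented for illustration; this is not Rapid Robotics' actual software.

```python
from statistics import mean

def choose_best_strategy(strategies, simulate, trials=10_000):
    """Run each candidate pick/path strategy through many simulated trials
    and return the one with the highest average score (e.g. pick success
    weighted against cycle time)."""
    results = {}
    for strategy in strategies:
        results[strategy] = mean(simulate(strategy) for _ in range(trials))
    return max(results, key=results.get)
```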
As John Novak, director of computer vision at Rapid Robotics, put it, “Customers don’t care what’s going on inside the black box; they just need automation because they don’t have enough staff, but the machines need to run.”
Novak makes an important point: not every cobot-based application needs deep learning or machine learning capabilities, and it is essential to shield end users from that complexity.