Adaptive Robots and The Future of Industrial Automation

Flexiv - Dexterous and Intelligent
15 min read · Oct 29, 2020

INTRODUCTION

Robotic technologies have evolved rapidly over the past few decades. Today, collaborative robots work alongside traditional industrial robots in manufacturing plants across the world, and both continue to be widely adopted for their ability to increase the productivity and efficiency of production lines. Despite these technological advancements, a large set of tasks that humans complete easily remains challenging for these systems. This article reviews the status quo of robotics technology, evaluates the current state of industrial robots, and finally sheds light on the concept of adaptive robots and how these next-generation systems can revolutionize industrial automation.

IS THERE A LIMITATION OF EXISTING ROBOTS?

The rise of industrial robots was triggered by technological progress meeting the requirements of the manufacturing industry in the early 90s, and today industrial robots continue to play an integral role in production lines. Designed for fast and precise position control, these systems are ideal for tasks that require repeated trajectory following: picking and placing an object from one known position to another, cutting a particular section of a metal sheet, or spray painting a workpiece. For such repetitive tasks, these robots outperform humans in both speed and accuracy, thanks to their well-developed hardware and control systems. They are, however, not without their limitations.

· They are designed to execute preprogrammed paths and are unaware of the potential hazards they pose to the human operators who may be around them. These industrial robots usually must be guarded by safety cages while working, yet despite all these protective measures, accidents can still occur.

· It is very difficult to “teach” them anything new. Doing so typically requires experienced robotic application engineers to program, in a specific language, the sequence and trajectory of the desired motions.

· By design, they can complete only a limited set of tasks that require nothing beyond position control and pre-defined trajectories. Many tasks remain too challenging for these robots, such as polishing complex surfaces, assembling complicated parts with tight tolerances, or performing interactive tasks in open environments.

Meanwhile, the manufacturing industry is shifting into a new phase to adapt to dynamic developments on the demand side. Lean manufacturing, flexibility, and digitalization are becoming the keywords of the industry, and it is apparent that traditional industrial robots can no longer satisfy the next generation of automation.

To address these issues, the concept of collaborative robots (“cobots”) was introduced in the late 90s. Cobots have become a trend in recent years: traditional industrial arm companies like KUKA, ABB, and Fanuc have launched their own cobots, while dedicated collaborative robot companies like UR, Rethink Robotics, and Franka are gaining popularity. Unlike traditional industrial systems that must be placed in safety cages and separated from human operators, cobots were designed to work alongside people, and were therefore developed with safety features and simplified programming interfaces.

While cobots address the limitations of traditional industrial robots, they come with their own set of issues in the face of current and future automation needs. To achieve safety, cobots typically sacrifice the payload, velocity, and force limits that are crucial for productivity. They cannot be easily deployed by non-professionals and still rely on integrators with more advanced skills, and they must collaborate with humans to accomplish complex tasks. To many roboticists, collaborative robots appear to be a transitional phase between traditional robots and more advanced ones, and there are opportunities to further improve these systems. In particular, they should be more flexible, intelligent, and adaptive.

WHAT DEFINES AN ADAPTIVE ROBOT?

With this context in mind, the next-generation robot must evolve beyond the concept of “collaborative” to tackle the root of the problem. It should:

· Have intrinsic safety without compromising performance

· Be able to learn and accomplish new tasks, just like an apprentice

· Be capable of performing tasks that traditional robots cannot, for which demand is increasing due to labor shortages and harmful work environments

A robotic arm that meets all these requirements must be able to adapt to complicated environments and complex tasks, so we call this next generation of robots “adaptive robots.”

“Adaptive” precisely describes the character of this new generation of robots. Traditional robots work without adaptivity: parts must be at a fixed position and orientation, and no disturbance is allowed while at work. Adding computer vision brings some flexibility, but the need for highly accurate object detection and well-designed lighting conditions makes deploying vision systems on each production line labor-intensive and time-consuming. With collision detection implemented, the robot immediately stops whatever it is doing when a collision occurs, instead of adapting to the unmodeled interaction.

To meet the challenges of industrial automation in the near future, the next generation of robots would need to evolve beyond the current state of collaborative robots. At Flexiv, we believe future robotic systems should be designed with the following characteristics:

· High tolerance for position variation

· Superior disturbance rejection

· Transferrable intelligence

High Tolerance for Position Variation

Unlike robots, human arms perform poorly in terms of positioning. If a human closes their eyes and uses only proprioception to reach a target position, the hand can be off by up to 50 mm. Yet even with eyes closed, most people can easily insert a key into a keyhole, which suggests that positioning accuracy is not the major contributor to this process. Most people can also wash cookware without staring at it the whole time, using their senses to decide where the hands go and which direction to push and scrub. An adaptive robot should have a similar capability to adjust its position and force output under uncertain conditions, such as an inaccurately detected workpiece position, position error accumulated over a number of procedures in the production line, or a change in the geometry, position, or orientation of the workpiece during the process.
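One way to tolerate position variation is a Cartesian impedance law: instead of demanding an exact position, the controller commands a force proportional to the position error, so a few millimeters of detection error produce a gentle corrective push rather than a hard fault. A minimal sketch, with purely illustrative gains (not values from any real arm):

```python
import numpy as np

def impedance_force(x_des, x, v, stiffness=500.0, damping=40.0):
    """Cartesian impedance law: command a force proportional to the
    position error, plus a damping term on velocity. The gains here
    are illustrative only, not tuned for any particular robot."""
    x_des, x, v = np.asarray(x_des), np.asarray(x), np.asarray(v)
    return stiffness * (x_des - x) - damping * v

# A 5 mm error in the detected workpiece position yields a small
# corrective force (~2.5 N in x) instead of a position fault.
f = impedance_force(x_des=[0.305, 0.0, 0.2],
                    x=[0.300, 0.0, 0.2],
                    v=[0.0, 0.0, 0.0])
```

The spring-damper analogy is the point: the arm behaves like a compliant mechanism around its target, which is what lets it absorb accumulated position error.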

Superior Disturbance Rejection

Though weak in positioning, human arms are incredibly good at rejecting disturbances. A human can carry a cup of water steadily even if someone pushes the arm. People who clean the exterior walls of skyscrapers work effectively under the disturbance of strong winds and unstable body support. An adaptive robot should be able to accomplish tasks under similar disturbances, so that there are fewer restrictions on its work environment and more possibilities for the robot to take on contact applications that cause vibration and disturbance.

Transferrable Intelligence

There is no doubt that human beings are amazing at transferring skills. The skill of inserting one type of connector transfers to inserting all kinds of connectors (USB, mini USB, micro USB, Type-C, Lightning, etc.), and further to assembling circuit boards for a computer. Opening the cap of a water bottle transfers to opening caps of all sorts of bottles, and further to tightening a nut. Pressing a button transfers to operating different types of buttons and switches, and further to playing a keyboard. Likewise, an adaptive robot with polishing skills for a specific cellphone should be able to transfer that skill to polishing the exterior of a car or sanding wood furniture with little additional training. The ability to obtain new skills through learned skills is the core functionality of an adaptive robot.

HOW TO ACHIEVE ADAPTIVITY?

As stated above, the concept of the cobot is a compromise forced by the insufficiency of existing hardware and software technology. So what technology is needed to achieve the goal of an adaptive robot? There are two key factors:

I. Force control technology with high accuracy and fast response

II. Hierarchical intelligence based on vision and force sensing technology

They are equally important and require bottom-up innovations in every aspect of a robotic system.

Force Control

Whether you realize it or not, the vast majority of our daily activities rely on our force sensing and force control capabilities. Examples include mopping the floor, pushing a button, or inserting a plug into an electrical outlet. By contrast, force control has long been absent from robotic arms, making them incapable of many force-guided tasks.

Force control on robotic arms is not a new topic in academia; research on robot force control has been going on for more than 30 years. In industry, however, this technology has still not been broadly applied, mainly because existing hardware and software have not evolved enough for the cost and long-term reliability to meet industry standards.

Position-based Force Control

A common approach to implementing force control is to attach a 6-DOF force/torque sensor to the end effector of an existing position-controlled robot and implement an “outer force loop, inner position loop” strategy, which essentially converts force control into position control. Imagine an end effector compressing a spring: the distance the end effector travels is proportional to the force it applies to the spring. However, if the spring is extremely stiff, a tiny change in position results in a large fluctuation in force, which can easily destabilize the controller. Typically, a robot using this approach covers its end effector with soft materials to deal with stiff impacts, which makes it very challenging to handle tasks that naturally require stiff contact between the end effector and the environment.
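This outer-force/inner-position scheme is often called admittance control. A minimal sketch, with an idealized spring environment and illustrative gains, shows both the idea and why stiff contact is problematic: stability depends on the product of the admittance gain, the environment stiffness, and the control period.

```python
def admittance_step(f_meas, f_des, x_cmd, dt=0.001, gain=1e-5):
    """One cycle of position-based force control: the force error is
    converted into a small position correction, which the inner
    position controller is assumed to track perfectly."""
    return x_cmd + gain * (f_des - f_meas) * dt

# Pressing on a spring of stiffness k: iterate the 1 kHz loop until
# the contact force settles at the 10 N target.
k = 1e5                   # stiff environment (N/m)
x_cmd, f_des = 0.0, 10.0
for _ in range(20000):
    f_meas = k * x_cmd    # idealized contact force at the commanded position
    x_cmd = admittance_step(f_meas, f_des, x_cmd)
```

With `gain * k * dt` above 2 the same loop diverges instead of settling, which is exactly the instability the text describes for very stiff contact.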

Current-based Force Control

Position-based force control is sometimes the only available option, as most robotic arm manufacturers do not grant access to anything beyond position control. If a current control interface is provided, another common force control framework uses a 6-DOF force/torque sensor at the end effector together with current control at each joint. In theory, the ratio between motor output torque and input current is a known constant, so joint torque can be approximately controlled by controlling motor current. Unfortunately, due to friction and other non-linear effects in the motor and transmission, this approximation is usually very rough. When commanding the end effector to apply a certain force to the environment, we map the end-effector force to joint torques and then approximate those torques with motor currents. Since the joint torque approximation is not perfectly accurate, extra control effort is needed to compensate for the error using the 6-DOF force/torque sensor, which compromises force control performance. The joint torque inaccuracy has an even greater negative impact on the control of other parts of the robot where no additional sensors are available for dynamics correction, such as controlling the elbow or the whole posture during interaction with the environment.
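The mapping just described, end-effector force to joint torques through the Jacobian transpose and then torques to motor currents through a nominal torque constant, can be sketched as follows. The Jacobian and torque constant below are hypothetical values chosen for illustration; in a real drivetrain, friction and transmission losses make the final torque-to-current step inexact, which is the source of the error discussed above.

```python
import numpy as np

KT = 0.11  # nominal motor torque constant (N*m/A); illustrative, not from a datasheet

def currents_for_force(jacobian, f_des):
    """tau = J^T f maps a desired end-effector force to joint torques;
    i = tau / Kt then approximates each torque with a motor current.
    Real joints add friction, so the delivered torque differs from tau."""
    tau = jacobian.T @ f_des
    return tau / KT

# Hypothetical Jacobian of a 2-joint planar arm at some configuration
J = np.array([[0.0, 0.3],
              [0.5, 0.2]])
i_cmd = currents_for_force(J, np.array([0.0, 10.0]))  # push with 10 N in y
```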

Caveat of Joint Torque Sensor

Due to the limitations of using motor current to control joint torque, some newer robotic arms implement a torque sensor in each joint. There are two major approaches. One is the “Series Elastic Actuator” (SEA), which measures the displacement of a comparatively soft spring to calculate joint torque. The other uses a stiff strain-gauge-based sensor, which measures the change in resistance of a metal flexure under strain to calculate joint torque. The SEA is perfect for legged robots, as the integrated spring absorbs and filters out impacts to the leg and stores energy efficiently for the next stride. In a robotic arm, however, the spring adds undesirable compliance to the system and reduces the accuracy and responsiveness of the control, which matters more in manipulation than in walking. Strain-gauge-based methods do not have this problem, but they carry common drawbacks of their own, including drift under varying temperature, poor overload protection and impact resistance, and high cost due to fabrication difficulties.
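For reference, the SEA measurement principle is just Hooke's law applied across the spring between the motor side and the joint side; the stiffness value below is purely illustrative:

```python
def sea_torque(theta_motor, theta_joint, spring_k=300.0):
    """Series Elastic Actuator: joint torque is inferred from the angular
    deflection of the spring between motor and joint, tau = k * delta.
    A soft spring (low k, in N*m/rad) gives fine torque resolution but
    adds the compliance the text warns about for arm control."""
    return spring_k * (theta_motor - theta_joint)

# 10 mrad of spring deflection reads as ~3 N*m of joint torque.
tau = sea_torque(theta_motor=0.51, theta_joint=0.50)
```

The trade-off is visible in the formula: a softer spring spreads a given torque over a larger, easier-to-measure deflection, at the cost of a less stiff, less responsive joint.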

There is another caveat of joint torque sensors. Even though such a sensor is supposed to measure torque in only one dimension, its measurement is usually affected by forces and torques in other directions. This cross-axis coupling is unavoidable on a serial arm, as each joint carries the load of all joints and links downstream of it. To address this problem, both the torque sensor and the joint assembly need to be optimized or redesigned.

6-DOF Force/Torque Sensor

Many robotic arms have 6-DOF force/torque sensors installed on the end effector, or even the base, to further improve force control performance. Unfortunately, existing 6-DOF force/torque sensors on the market are usually expensive and not very durable for their price. Most are built using strain gauges, capacitive elements, or optical transducers that measure the displacement of an elastic structure. These approaches usually bring negative side effects, including inaccuracy, drift, noise, hysteresis, and limited durability. A few products have very good specs, but at a much higher price, and the additional devices such sensors need in order to function properly also complicate deployment and cabling.

Whole-body Force Control

Developing a force-controlled robot with high enough performance for the targeted adaptivity is indeed very challenging. First, existing torque sensor designs are insufficient in terms of performance and cost; both joint torque sensors and 6-DOF force/torque sensors need to be redesigned. Second, each joint assembly should be specifically optimized for less cross-axis coupling and better dynamics for force control. Third, low-level joint torque control should be executed with optimized electronics and well-designed algorithms. Finally, an advanced whole-body force control software framework built on top of all these improvements can then exploit the full potential of a force-controlled arm.
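A common building block in whole-body force control frameworks (a generic textbook formulation, not necessarily Flexiv's implementation) is a joint torque command that combines a task-space wrench mapped through the Jacobian transpose, gravity compensation, and a posture term (e.g. for the elbow) projected into the null space so it cannot disturb the end-effector task. The Jacobian and torque vectors below are hypothetical:

```python
import numpy as np

def wholebody_torque(jacobian, f_des, gravity_torques, nullspace_torques=None):
    """Joint torques for whole-body force control:
    tau = J^T f_des + g(q) + N * tau_posture, where the projector
    N = I - J^T (J^T)^+ keeps posture torques out of the task space."""
    tau = jacobian.T @ f_des + gravity_torques
    if nullspace_torques is not None:
        n_joints = jacobian.shape[1]
        N = np.eye(n_joints) - jacobian.T @ np.linalg.pinv(jacobian.T)
        tau = tau + N @ nullspace_torques
    return tau

# Hypothetical 2D-task, 3-joint arm: push with 10 N while also
# requesting an elbow-shaping torque on the last joint.
J = np.array([[0.0, 0.3, 0.1],
              [0.5, 0.2, 0.0]])
g = np.array([1.0, 2.0, 0.5])       # illustrative gravity torques
tau = wholebody_torque(J, np.array([0.0, 10.0]), g,
                       nullspace_torques=np.array([0.0, 0.0, 1.0]))
```

The projection is the key property: the posture torque redistributes effort among the joints without changing the force the end effector exerts on the environment.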

Whole-body force control is under-developed in industry, and its potential has not been truly realized in mature products on the market. In academia, papers establishing the fundamental theory were published decades ago, but research in this field has since been limited by the performance of available hardware. Beyond the elegant equations that describe the fundamental framework, a great amount of scientific and engineering work remains to be done to unleash the true power of this methodology.

Hierarchical Intelligence

Force control is just the first step towards adaptivity. A robot also needs to know how to utilize force control with the integration of other information. This leads us to the concept of hierarchical intelligence.

Non-hierarchical Intelligence

What is hierarchical intelligence? Let’s look at non-hierarchical intelligence first. As deep learning has become more popular and demonstrated its power in various areas, some researchers have sought to introduce it to robotics. They developed “end-to-end” training algorithms that train a carefully designed deep neural network to map raw vision and sensor information directly to low-level control parameters for each robotic joint.

This method does quite well at “teaching” the robot simple tasks, such as hanging cloth on a rack, fitting a shaped object into a corresponding hole with loose tolerance, or opening the cap of a bottle. However, beyond the large amount of data that must be collected by running many robots in experiments or simulations over a long period, there are three fundamental drawbacks to this methodology:

· The trained model is only applicable to specific tasks under specific environmental conditions, and often even to a specific robotic arm; it is hardly transferable. The model must be partially or entirely re-trained each time the arm gets a new task in a new place.

· The model does not have the ability to deal with external disturbance or human interference while performing the task, as these are unpredictable factors and corner cases from the perspective of the training process.

· Only a limited number of simple tasks can be trained this way. Tasks like polishing a complex part, assembling a product, or wiping a window are complicated and require a longer sequence of steps; tasks like installing a bearing have a very small solution space. Such tasks are extremely challenging for end-to-end deep learning to figure out.

The Three Abstract Layers of Intelligence

Humans do not seem to learn new skills “end-to-end.” As an example, when we wipe a window, we first recognize the window glass within the frame, then move our hand with a damp rag back and forth while applying some force perpendicular to the glass to increase the rubbing friction; at the same time, we watch carefully to make sure we have covered everything. In this window-wiping process, some of our capabilities, like visual recognition and stain detection, are used consciously, while others, like moving the arm and applying force, are encoded in our subconscious. The human brain, which plans a trajectory based on visual and haptic information, never has to think about the electrical signals needed to actuate a specific muscle in order to complete a task.

We believe that the intelligence system of a robotic arm should be similar: the AI responsible for recognizing the window and the stains should not be directly involved in figuring out each joint position or motor current. It should not even need to worry about the motion primitive of “moving back and forth while applying some force.” The lowest layer of intelligence controls the basic motion of the arm and maintains stability; the middle layer encodes different sequences of motions; and the highest layer takes care of perception, understanding, planning, and other complicated cognitive tasks. This is what we call “hierarchical intelligence.”

In a hierarchical intelligence system, each layer is relatively independent. A lower layer cannot directly affect a higher one; the output of a higher layer is executed by the layer below it, and each lower layer is instructed and tuned by the layer directly above it.

Fast and Simple vs. Slow and Complicated

In a hierarchical intelligence system, the complexity of the intelligence grows from bottom to top while the computation rate decreases. A person afraid of snakes will jump away at the first glance of a rope and only realize a moment later that it is just a rope; the recognition happens much more slowly than the jump-away reaction. Similarly, a robotic arm disturbed by someone should react quickly to ensure safety, before its vision system even detects the presence of a human. A robotic arm polishing a curved object should quickly adapt to the shape and adjust its force and moment output smoothly. If we instead first model the part using depth vision and then use that model to command a trajectory, the process is slow and highly dependent on the accuracy of the 3D vision, the robot's precision, and the relative position registration. In conclusion, an adaptive robot must have a hierarchical intelligence system, whether for true intrinsic safety, for work efficiency and effectiveness, or for good skill transferability.
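The "fast and simple vs. slow and complicated" split is often realized as nested control loops running at divided rates. The layer names and rates below are hypothetical, chosen only to show the scheduling pattern:

```python
# Hypothetical loop rates: a fast low-level torque loop, a mid-level
# motion-primitive loop, and a slow perception/planning loop.
RATES_HZ = {"torque_control": 1000,
            "motion_primitives": 100,
            "perception_planning": 10}

def layers_due(tick, base_hz=1000):
    """Given a 1 kHz scheduler tick, return which layers run on it.
    The reflex layer runs every tick; slower, smarter layers run on a
    divided clock and can only retune the layer below between turns."""
    return [name for name, hz in RATES_HZ.items()
            if tick % (base_hz // hz) == 0]
```

On most ticks only the torque loop runs; perception gets a turn just once every 100 ticks, so the fast reflex layer must keep the arm safe and stable in between, which is the behavior described above.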

The Real Potential of AI in Robotics

As we separate low-level control from high-level intelligence, the top layer can focus on more advanced and complicated tasks without worrying about implementation details. For traditional industrial robots, the goal of computer vision is to recognize and locate an object precisely; the accuracy of industrial vision is key to a successful vision-integrated application. For adaptive robots, the goal of machine vision becomes perception and understanding. We can use deep learning to find a specified object and roughly estimate its position and orientation, which is sufficient for force-guided hand-eye coordination, and we can also use vision to judge whether a completed task meets its quality requirements.

In addition to vision, force perception is a topic yet to be fully explored by artificial intelligence research. Humans have tactile sensing on the skin as well as force sensing in every muscle and joint. With all these “force sensors,” humans can figure out the material, shape, weight, and other properties of a contacted object. Humans can even count the objects in a container, determine whether two objects are attached securely, or come up with the best packing strategy. The AI of an adaptive robot will be able to understand the world and solve practical problems with information in a completely new dimension.

CONCLUSION

Adaptive Robots Define the Future of Industrial Automation

Unlike traditional industrial robots and collaborative robots, adaptive robots combine high-performance force control with advanced AI, giving them the capability to work effectively in uncertain environments on a much wider range of tasks, with intrinsic safety. The three key characteristics discussed above define the meaning of adaptivity. This new category of robot is created to achieve great flexibility, automate tedious and harmful tasks, and pave the way for new opportunities ahead.

ABOUT FLEXIV

Flexiv Ltd. is a leading global robotics and AI company focused on developing and manufacturing adaptive robots that integrate force control, computer vision, and AI technologies. Flexiv provides innovative turnkey solutions and services based on Flexiv robotic systems to customers in multiple industries. Flexiv was founded in 2016, with a core team from robotics and AI laboratories at Stanford University.
