4 Reasons Why the Future of Robotics Is in Advanced Sensors and AI

by Emily Newton

Robots have come an incredibly long way in a relatively short time, largely due to developments associated with artificial intelligence (AI) and advanced sensors. Here are some eye-opening reasons why robotics’ future relies on AI and advanced sensor technology.

1. These Technologies Promote Safety

AI often excels at addressing complex problems, and the results could improve society. Consider that drivers hit and killed approximately 7,485 pedestrians in the United States in 2021. That was a 40-year high, working out to about 20 fatalities every day.

Many people think autonomous cars could reduce vehicular accidents, including those involving pedestrians. The AI algorithms, numerous high-tech sensors and other advanced technologies aren’t perfect, but they don’t get distracted or tired. Researchers from the University of Tokyo believe adding robotic eyes to autonomous vehicles could further reduce accidents by helping pedestrians anticipate cars’ movements.

The researchers mounted the robotic eyes on a self-driving golf cart and set up four scenarios to see how the eyes affected people’s decisions to stop and wait for the cart to pass or to try to cross in front of it. For safety reasons, all participants went through the scenarios in simulation. The results showed the robotic eyes led to safer and more efficient crossings for everyone in the study.

This study required researchers to manually control the robotic eyes’ movements. However, the team would like future investigations to link the eyes’ motion to the movement of the car or cart itself. Even though autonomous vehicles are not yet in widespread use, people must keep learning how to make them as safe as possible. This example shows that AI, sensors and robots could work together to reduce pedestrian deaths.

2. These Technologies Can Advance Medical Research

The reproducibility of experiments is an issue that continually hinders the scientific community. An experiment is reproducible when researchers other than those who originally did the work can get the same results by following the first team’s documentation. When they cannot, the findings are often seen as invalid. After all, if individuals can only get the desired outcomes under carefully controlled conditions and in certain locations, those results probably aren’t very applicable to the real world.

Researchers from the University of Cambridge built a robotic scientist called Eve to test reproducibility. It uses AI to perform scientific experiments. The team had Eve focus on cancer research. They exposed the robot to an initial set of more than 12,000 academic papers and used automated text mining to narrow them down to 74.

Only 22 showed significant evidence of reproducibility or robustness, meaning different scientists could achieve the same results under similar conditions. However, in 43 cases, scientists could replicate the results when working in identical settings.
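The paper-screening step is, at its core, a large-scale filtering problem. As a rough, hypothetical illustration only (the keywords, field names and filtering rule below are invented, not the Cambridge team’s actual text-mining pipeline), an automated screen might look something like this:

    # Hypothetical keyword screen: keep only papers whose abstracts mention
    # both a cancer cell line and a measurable drug response.
    CELL_LINE_TERMS = {"mcf-7", "hek293", "a549"}             # assumed search terms
    RESPONSE_TERMS = {"ic50", "dose-response", "inhibition"}  # assumed search terms

    def screen(papers):
        """Return the subset of papers worth attempting to reproduce."""
        keep = []
        for paper in papers:
            text = paper["abstract"].lower()
            if any(t in text for t in CELL_LINE_TERMS) and any(t in text for t in RESPONSE_TERMS):
                keep.append(paper)
        return keep

    corpus = [
        {"title": "Compound X slows MCF-7 growth", "abstract": "We report an IC50 of 2 uM in MCF-7 cells."},
        {"title": "A survey of lab automation", "abstract": "We review trends in laboratory robotics."},
    ]
    print([p["title"] for p in screen(corpus)])  # only the first paper passes the screen

A real pipeline would analyze full texts rather than match a handful of keywords, but the basic idea of automatically narrowing thousands of candidates down to a testable shortlist is the same.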

Knowing which experiments are reproducible could help scientists determine whether certain projects are worth pursuing. Using AI to guide those efforts is particularly valuable when miniature robots and sensors operate inside the body or could otherwise affect someone’s health.

For example, MIT researchers developed an AI-based sensor to detect when people make mistakes while giving themselves insulin. Solving that problem could lead to more consistent at-home care.

In another case, a team used AI to create swimming microrobots for potential drug-delivery applications. Such robots could be critical in helping drugs reach precise locations in the body, thereby increasing their effectiveness.

3. These Technologies Enhance Creativity and Productivity

Robots must have accurate and repeatable movements, which makes them well-suited for environments like factories, where machines perform tasks that would quickly fatigue or bore humans. For example, some pneumatic robotic arms offer precision within 2 millimeters and repeatability of 0.02 millimeters.

In one recent case of progress with robotic arms, a Carnegie Mellon University team created FRIDA, a robot that can hold a paintbrush and respond to artistic prompts. They stress that it is a machine an artist could collaborate with, not technology to replace human creators. FRIDA uses algorithms and human input to make the art. For example, a user could input a text prompt or upload an image and ask the robot to do something similar.

Sensors are also integral to FRIDA, ensuring the robot applies brush strokes correctly. The researchers defined each stroke with three shape-determining parameters: length, bend and pressure.
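To picture how a stroke parameterization like that might look in software, here is a minimal, hypothetical Python sketch. The class name, units and ranges are assumptions for illustration, not FRIDA’s actual code:

    from dataclasses import dataclass

    @dataclass
    class BrushStroke:
        """A single stroke described by the three parameters named above."""
        length: float    # how far the brush travels, assumed here to be in centimeters
        bend: float      # curvature of the stroke path, assumed range -1.0 (left) to 1.0 (right)
        pressure: float  # how hard the brush presses, assumed range 0.0 (light) to 1.0 (firm)

        def clamp(self):
            """Keep every parameter inside an assumed safe range for the robot arm."""
            return BrushStroke(
                length=min(max(self.length, 0.5), 10.0),
                bend=min(max(self.bend, -1.0), 1.0),
                pressure=min(max(self.pressure, 0.0), 1.0),
            )

    # A painting plan could then be an ordered list of strokes for the arm to execute.
    plan = [BrushStroke(4.0, 0.3, 0.6).clamp(), BrushStroke(2.5, -0.8, 0.2).clamp()]
    print(plan)

Treating each stroke as a small bundle of numbers is what would let a planning algorithm search over candidate strokes to approximate a target image or prompt.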

AI is also a productivity booster, whether used in robots or elsewhere. For example, people sometimes apply AI to 3D printing, where it boosts efficiency by helping to handle the large quantities of data involved.

Elsewhere, researchers from Sweden’s KTH Royal Institute of Technology developed a 3D printing solution to make customized sensors. The technique allowed the team to create approximately a dozen accelerometers in only a few hours. They said this approach lends itself well to prototyping and could enable the manufacture of tens of thousands of sensors per year.

4. These Technologies Create Incredible Possibilities

People have long been interested in robots and what they can do. Together, advanced sensors and AI may push the boundaries of possibility even further than anyone initially imagined.

Consider how Columbia University engineers combined sensors and AI to build a robotic hand with humanlike dexterity. It relies on motor-learning algorithms and high-tech sensors, allowing it to perform tasks even in the dark. The team tested it by having the hand grasp and rotate an unevenly shaped object without relying on visual feedback.

This was a tricky task because the robot had to use some of its fingers to hold the item steady while maneuvering the others to rotate it. The engineers clarified that this work merely demonstrated a proof of concept. However, they believe it paves the way for robots that can do much more in the real world than current models.

In another impressive advancement, researchers showed how people could someday operate robots with their minds. A University of Technology Sydney team created a biosensor that allows people to manipulate objects without using their voices or hands. They think it has future applications in industries ranging from health care to the military. It could also allow people with physical disabilities to operate their wheelchairs or prosthetic devices.

This technology uses hexagon-patterned graphene sensors worn on the back of the scalp, where they pick up brain waves from the visual cortex. The user also wears an augmented reality lens that displays flickering squares. When the person concentrates on a particular square, the sensors detect the corresponding brain-wave pattern and a decoder translates it into a command. Experiments showed this approach achieved up to 94% accuracy in controlling a robotic dog.
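The article doesn’t spell out how the decoder works, but flicker-based interfaces of this kind are commonly decoded by matching the strongest frequency in the measured brain signal to the flicker rate of the square the user is watching. The following Python sketch shows that general idea only; the flicker frequencies, command names and sampling rate are assumptions, not details of the UTS system:

    import numpy as np

    # Assumed flicker frequencies (Hz) for each on-screen square, mapped to robot commands.
    COMMANDS = {7.0: "forward", 9.0: "turn_left", 11.0: "turn_right", 13.0: "stop"}
    SAMPLE_RATE = 256  # assumed EEG sampling rate in Hz

    def decode_command(eeg_window):
        """Return the command whose flicker frequency carries the most spectral power."""
        windowed = eeg_window * np.hanning(len(eeg_window))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / SAMPLE_RATE)
        best = max(COMMANDS, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
        return COMMANDS[best]

    # Example: a synthetic two-second signal dominated by a 9 Hz component plus noise.
    t = np.arange(0, 2, 1.0 / SAMPLE_RATE)
    fake_eeg = np.sin(2 * np.pi * 9.0 * t) + 0.3 * np.random.randn(len(t))
    print(decode_command(fake_eeg))  # most likely prints "turn_left"

A production decoder would filter the signal and combine readings from several electrodes, but the core step in this style of interface, mapping a detected frequency to a command, is what turns the wearer’s focus into an instruction a robot can follow.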

AI and Advanced Sensors Are Vital for Robots

These examples show why progress in advanced sensors and artificial intelligence will result in better robots. People already use robotic machines in industries ranging from agriculture to hospitality. Continued innovation in these areas will open new possibilities for using robots in more and different ways.

 

Emily Newton is the Editor-in-Chief of Revolutionized Magazine. She has over six years’ experience covering stories in the manufacturing, logistics and construction industries.