LLMs Are Interesting, But Physical AI Is About to Reshape Our World

In all the excitement around AI over the past year, the physical world has been largely overlooked. Conversations about chatbots and other tools built on large language models (LLMs) focus primarily on digital applications and say little about the physical challenges AI can address. Physical AI technology is ready to solve real-world problems by fusing AI with physical systems to create products that mimic human cognitive, sensory and physical capabilities. Robotics is the obvious Physical AI use case, but this kind of AI has the potential to enhance lives in fields from medicine and retail to climate technology.
 
Researchers and major AI players are already working on Physical AI projects, from testing millions of robots for factory use to developing new learning models. For example, researchers have created a model that helps autonomous vehicles adapt in real time to rapidly changing road, wind and other conditions. The market for one facet of Physical AI (embedded AI for healthcare, automotive and other industries) is projected to reach $45 billion in 2029.


The path to Physical AI

The origins of Physical AI can be traced to advances in four key technologies: classical robotics, artificial intelligence, embedded computer systems and edge AI devices. Together, these fields are driving Physical AI through its own evolution, from automation to augmentation to autonomy. Each stage builds on earlier developments to deliver unique capabilities.
 
The automation stage uses Physical AI-enabled devices and systems to handle specific tasks with little to no human intervention. For example, assembly line robots can free up line workers for other tasks. At this stage, Physical AI relies on predefined rules. If the assembly line breaks down, a human has to step in to resolve the issue.
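
To make the distinction concrete, here is a minimal Python sketch of automation-stage logic: fixed, predefined rules with a human escalation path. The sensor names and rules are hypothetical illustrations, not drawn from any real system.

# Minimal sketch of automation-stage Physical AI: fixed, predefined rules.
# The sensor names and rules here are hypothetical illustrations.

def assembly_line_step(sensors: dict) -> str:
    """Apply predefined rules to one cycle of sensor readings."""
    if sensors["jam_detected"]:
        # The system cannot resolve a breakdown itself; it escalates to a human.
        return "HALT: operator intervention required"
    if sensors["part_present"]:
        return "actuate gripper"
    return "wait"

print(assembly_line_step({"jam_detected": False, "part_present": True}))
# -> actuate gripper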
 
At the augmentation stage, Physical AI enhances human capabilities by taking actions or providing recommendations based on the system’s analytics and insights. Predictive maintenance, which analyzes industrial equipment sensor data to alert humans when a machine is about to break down, is a widely adopted example of augmentation-stage Physical AI. Another example is an image-processing system that rapidly assesses patient medical scans for anomalies and flags them for review.
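
As a rough illustration of that augmentation pattern, the Python sketch below flags anomalous vibration readings with a rolling z-score and recommends a human inspection. The data, window and threshold are invented for the example, not taken from any production system.

# Sketch of augmentation-stage Physical AI: predictive maintenance via a
# rolling z-score on vibration readings. Data and threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) pairs where a reading deviates sharply
    from its recent rolling baseline."""
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            yield i, readings[i]  # recommend inspection rather than auto-shutdown

vibration = [1.0 + 0.01 * (i % 5) for i in range(100)] + [4.2]  # synthetic spike
for idx, value in flag_anomalies(vibration):
    print(f"Reading {idx}: {value} mm/s, schedule an inspection")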
 
When Physical AI reaches the autonomy stage, it can deploy advanced sensing, learning and decision-making abilities to operate without human input, even as conditions change. The best-known example in this category is the self-driving vehicle, but it is far from the only use case. Package delivery drones and energy grid management systems could optimize resources and drive efficiencies. Autonomous maintenance systems could reduce the need for workers to enter risky environments, and in healthcare, patient monitoring and treatment via wearable devices is a rapidly growing area.
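
What distinguishes this stage is the closed sense-decide-act loop, sketched below as a toy: the system continuously adapts its own behavior as sensed conditions change rather than escalating to a human. Every value in it is illustrative.

# Toy sketch of autonomy-stage Physical AI: a closed sense-decide-act loop
# that adapts to changing conditions without human input. Purely illustrative.
import random

def sense() -> float:
    """Stand-in for real sensors: a simulated road-friction estimate (0 to 1)."""
    return random.uniform(0.2, 1.0)

def decide(friction: float, current_speed: float) -> float:
    """Adapt target speed to sensed grip instead of applying a fixed rule."""
    target = 30 + 90 * friction  # lower grip -> lower target speed (km/h)
    return current_speed + 0.5 * (target - current_speed)  # smooth adjustment

speed = 60.0
for step in range(5):
    friction = sense()
    speed = decide(friction, speed)
    print(f"step {step}: friction={friction:.2f}, speed={speed:.1f} km/h")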


Real-world applications of Physical AI

In addition to supporting more efficient and safer processes, Physical AI has the potential to keep processes running when workers are scarce. Global workforce shortages are a growing challenge, with 75% of employers having trouble hiring enough people in 2023, compared to just 36% a decade ago. Robots, embedded systems and other Physical AI applications can orchestrate autonomous processes to overcome worker shortages. For example, autonomous taxis and trucks will soon be able to move people and products using Physical AI sensing, real-time data processing and navigation capabilities. Much like generative AI at the edge, which enables real-time decision-making through localized data processing, autonomous vehicles process data locally to navigate changing environments and improve efficiency. In warehouses and factories, autonomous robots already handle some assembly tasks and have the potential to take on sorting, retrieval and other work.
 
Physical AI can also enable immersive experiences for many purposes, from gaming and augmented retail to industrial training. Physics-based simulations that create realistic virtual environments are especially valuable for training AI models on specific tasks without risking damage to equipment or product quality while the system learns. These simulations resemble the digital twins already used for training, with the addition of real-world data to create a near-limitless range of scenarios. For example, a virtual simulation of a beverage factory assembly line could generate virtual cans with a huge variety of defects, training the AI system to detect as many problems as possible before it is applied to the real line. This can reduce the cost and time of training while increasing the accuracy of the model once it is deployed.
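
A toy version of that beverage-line example is sketched below: it procedurally "renders" can images and injects random synthetic defects to build a labeled training set. A real pipeline would use a physics-based simulator and rendered imagery; this only illustrates the data-generation idea.

# Toy sketch of simulation-generated training data: procedurally built "can"
# images with injected synthetic defects. A real pipeline would use a
# physics-based simulator; this only illustrates the idea.
import random

def render_can(defect: bool, size: int = 16) -> list[list[int]]:
    """Return a size x size grayscale 'image' of a can, optionally dented."""
    image = [[200 for _ in range(size)] for _ in range(size)]
    if defect:
        row, col = random.randrange(size), random.randrange(size)
        image[row][col] = 40  # a dark pixel standing in for a dent or puncture
    return image

# Build a labeled dataset spanning a wide (toy) variety of defects.
dataset = []
for _ in range(1000):
    has_defect = random.random() < 0.3
    dataset.append((render_can(has_defect), int(has_defect)))

print(f"{sum(label for _, label in dataset)} defective cans out of {len(dataset)}")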
 
We see three key industries where Physical AI offers especially promising orchestration and immersion use cases:

1. In consumer and connected spaces, AI-driven home automation has the potential to increase energy efficiency, comfort and security using in-home sensor data and other inputs such as weather and power grid demand (see the sketch after this list). Robotic vacuums and smart speakers are on the path to eventual autonomy, and they already augment daily life for many users. Augmented reality experiences like virtual try-ons or room design can be more realistic and useful with real-time object recognition and interaction.

2. In healthcare, AI-powered robotic surgical systems can enhance the precision of procedures and improve outcomes. Physical AI in wearables can help with patient monitoring as well as early detection and diagnosis of diseases. Patients undergoing rehabilitation and their physical therapists will be aided by AI-driven robots that enable and monitor personalized treatment and exercises.

3. In climate tech and the industrial space, Physical AI environmental sensors deployed in complex systems will support real-time monitoring and control of emissions and other outputs, incorporate predictive modeling of climate patterns and local environmental conditions, and assist governments and businesses with disaster-preparedness planning. Physical AI-based training and automation can address labor shortages and skills gaps while giving smart factories the predictive maintenance capabilities they need to minimize waste and run at maximum efficiency.
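
To ground the first of these use cases, here is a minimal sketch of a sensor-driven home-automation rule that weighs occupancy comfort against grid demand. The signals, prices and setpoints are invented for illustration, not drawn from any real product.

# Minimal sketch of AI-driven home automation (consumer use case above):
# choose a thermostat setpoint from occupancy plus grid-demand data.
# All signals, prices and setpoints are hypothetical illustrations.

def choose_setpoint(occupied: bool, grid_price_per_kwh: float) -> float:
    """Trade comfort against energy cost with simple weighted logic."""
    comfort_target = 21.0 if occupied else 17.0
    # Back off the target when grid demand (proxied by price) is high.
    price_penalty = min(2.0, max(0.0, (grid_price_per_kwh - 0.20) * 10))
    return comfort_target - price_penalty

print(choose_setpoint(occupied=True, grid_price_per_kwh=0.35))
# -> 19.5: the occupied comfort target, reduced during a high-price period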


Exploring possibilities with Physical AI

As powerful as current use cases like image analysis and predictive maintenance are, Physical AI’s potential to transform industries and address major global challenges is much greater than the solutions we have today.
 
Just as organizations are racing to adopt LLM-based tools to build interactive, natural interfaces, they would be wise to start thinking now about how Physical AI can add value or solve problems. The key, as with any new technology, is to start small and plan methodically: a clear problem statement, a data-informed product-market fit and a plan to develop or source the talent needed to make the product or solution a reality. With those elements in place, it's possible to run a pilot program, fine-tune it and learn from its deployment before scaling up to larger Physical AI use cases.
 
Starting to explore the possibilities of Physical AI today will give organizations advantages in the learning curve, in scaling and in the progression from basic automation to augmentation and fully autonomous Physical AI. Physical AI is the next frontier at the intersection of the digital and physical worlds, and leveraging all it has to offer is key for industry leaders who want to stay ahead, both today and tomorrow.

About the Authors


John Robins is director and head of AI & Data and Industrial Business at Synapse, part of Capgemini Invent. As a product management and business growth executive with over 18 years' experience, John specializes in bridging the physical and digital worlds. In his current role, John works with ambitious clients to build novel deep-tech products leveraging IoT and AI technologies across industrial, hi-tech, automotive, telco and food/agtech markets. Previously, John led product management at a large consumer electronics company and ran an industrial IoT startup. Additionally, he is a member of the TinyML Working Group and a Gartner Product Management Ambassador. 
 
Mat Gilbert, director, head of AI & Data at Synapse, part of Capgemini Invent, is a distinguished technology leader at the forefront of new product development, integrating advanced AI, data analytics and smart sensing technologies to create solutions that benefit both people and the planet. With a future-focused approach, Mat spearheads innovations that incorporate ethical and environmental considerations, working as a technical authority in the technology sector. His work not only expands the possibilities of technological advances but also ensures that these innovations are sustainable and human centric.

