Sensing, Planning and Acting – Path to Building Intelligent Robotic Systems

Vedansh Mishra May 23, 2025

Every day, without even thinking about it, humans perform countless complex tasks by effortlessly moving through a loop: we sense, we plan, and then we act. Take crossing a busy street, for example. We look around, assess the traffic, predict which vehicle might move next, and decide the perfect moment to walk. All this happens in seconds thanks to our brain’s incredible ability to take in sensory information, process it into a plan, and take suitable actions based on that plan. It’s so natural that we hardly notice it, yet it’s the foundation of every intelligent decision we make.

Now imagine teaching a robot to do the same. It may not have eyes, ears, or instincts like we do, but with cameras, LiDAR, microphones, and advanced algorithms, robots can begin to mimic this loop. In robotics, this process is known as the Sense-Plan-Act (SPA) architecture. It’s how we enable machines to understand their surroundings, make informed decisions, and interact with the real world in meaningful ways. Just like humans, robots need to perceive, reason, and respond, and this is where things start to get really interesting.

The SPA loop is the backbone of intelligent robotics. Without it, even the most advanced robot would be nothing more than an expensive statue. Sensing allows a robot to collect raw data from the world: images, distances, temperatures, sounds. Planning takes that data and turns it into a strategy, whether it’s avoiding an obstacle, mapping a route, or prioritizing tasks. Acting turns that plan into motion: moving arms, steering wheels, rotating joints. Together, this trio transforms machines into autonomous agents capable of adapting to uncertainty and solving real-world problems.
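As a rough illustration, the loop can be sketched in a few lines of Python. The `Robot` class, its reading names, and the command strings below are hypothetical stand-ins for illustration, not the simulator’s actual API:

```python
# A minimal sense-plan-act loop sketch. The Robot class and its
# sensor reading are illustrative stand-ins, not a real robot API.

class Robot:
    def sense(self):
        # Gather raw readings from the world (here, a fake distance).
        return {"front_distance": 5.0}

    def plan(self, observation):
        # Turn the observation into a decision about the next move.
        if observation["front_distance"] < 3.0:
            return "turn_left"
        return "move_forward"

    def act(self, action):
        # Execute the decision (here, we just report it).
        return f"executing {action}"

robot = Robot()
observation = robot.sense()
action = robot.plan(observation)
print(robot.act(action))
```

In a real system this cycle repeats continuously, so fresh sensor data keeps reshaping the plan on every iteration.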

A perfect example of this in action is Boston Dynamics’ Spot, the robotic dog that’s made waves across industries. Whether it’s inspecting dangerous sites, assisting in search and rescue missions, or even dancing to Bruno Mars, Spot operates on the SPA loop. It senses the terrain with built-in cameras and sensors, plans stable paths using real-time mapping and AI, and acts by precisely controlling its four legs to move, climb stairs, or avoid obstacles. Spot isn’t just a show-off; it’s a demonstration of robotic intelligence at its finest.

In this blog, we will gain insights into each of the three processes and understand their importance in robotics. We will be using CodeRobo’s Simulator, which provides a fun, easy-to-understand environment to program and visualize each process.

Check out CodeRobo’s Simulator and the Sense-Plan-Act lessons in the “Explore with Free Space” course on CodeRobo.ai, free of cost.

Sensing

Imagine trying to walk through a crowded room with your eyes closed, ears plugged, and no sense of touch. Sounds impossible, right? That’s exactly what a robot would be like without sensing. In robotics, sensing is the robot’s way of “seeing,” “hearing,” and “feeling” the world around it.

Sensing in robotics refers to a robot’s ability to perceive and understand its environment using various input devices like cameras, ultrasonic sensors, LiDAR, infrared sensors, or touch sensors. Just as humans rely on their senses to navigate the world, robots need sensors to gather information about their surroundings, such as distances to objects, light levels, sounds, and movements. This data is crucial because it allows the robot to make informed decisions rather than blindly executing commands. Without sensing, a robot would be unaware of obstacles, people, or changes in its environment, making it incapable of adapting or functioning safely in real-world, dynamic scenarios. In short, sensing is what transforms a robot from a pre-programmed machine into an intelligent, responsive system capable of interacting meaningfully with the world around it.

Our goal here is to guide the orange robot from its starting position to the goal, marked by a green-colored marker. But there’s a catch: a large rectangular obstacle stands directly in its path. So how do we make sure the robot doesn’t crash into it, and instead navigates safely around it? That’s where sensors, and more specifically the process of sensing, come into play!

To give the robot a sense of “awareness,” we equip it with a proximity sensor. This sensor allows the robot to measure how close objects are in its surroundings. By continuously monitoring the distance between the robot and any obstacle in front of it, we can help it make decisions like turning before it crashes. This is the first step of the Sense-Plan-Act loop in action.

Here’s a simple Blockly code snippet that demonstrates this behavior. The robot keeps checking the distance to the object in front. If the measured distance falls below a set threshold value (3 units in our case), it decides to turn left to avoid a collision. This is a basic example of how real-time sensing allows a robot to react to its environment and adjust its path dynamically.
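Since the Blockly blocks themselves aren’t reproduced in text here, a rough Python equivalent of that sensing check might look like this (the function name and command strings are illustrative, not the simulator’s API):

```python
# Python sketch of the sensing check described above; the command
# strings are placeholders for the simulator's actual movement blocks.

THRESHOLD = 3  # units, matching the Blockly example

def sensing_step(front_distance, threshold=THRESHOLD):
    """Choose the next command from the front proximity reading."""
    if front_distance < threshold:
        return "turn_left"   # obstacle too close ahead: steer away
    return "move_forward"    # path clear: keep going
```

In the simulator this check runs inside a loop, so the robot re-evaluates its distance reading on every step.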

Planning

Planning in robotics is like the robot’s brain figuring out the best next move after sensing the environment. Once a robot knows what’s around it (walls, obstacles, targets, or people), it needs to decide what to do next. Should it turn left, go straight, stop, or take a completely new route? That decision-making process is what planning is all about. It involves analyzing sensor data, predicting outcomes, weighing options, and choosing the most efficient or safest action to achieve a goal. Without planning, a robot might detect an obstacle but have no idea how to avoid it, like noticing a wall but still crashing into it. Planning gives robots the ability to think ahead, solve problems, and adapt their behavior based on the situation, making them truly autonomous and intelligent.

Let’s look at a simple Blockly code for our task that helps the robot plan its trajectory to the goal position.

In the Blockly code above, we’ve programmed our orange robot to navigate around a large obstacle using a combination of sensing and conditional movement. The robot begins by moving forward while continuously checking the distance to any object directly in front. As soon as this distance falls below a certain threshold (3 units), it identifies an obstacle and makes a left turn to avoid a collision.

Once the robot turns, it begins to follow the wall by keeping track of the distance to its right side. As long as the wall remains nearby (within 4 units), the robot keeps moving forward. But when it reaches a corner or edge, the distance to the right suddenly increases, signaling that there’s no longer a wall next to it. This triggers a right turn, aligning the robot to follow the next side of the obstacle.

This pattern continues: the robot keeps sensing its environment, adjusting its direction based on proximity readings from the right and back sides. At a certain point, the robot detects an opening or the end of the obstacle, and it turns again to resume moving toward the goal. Finally, it continues forward until it detects that the goal has been reached, at which point it stops.
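Putting those rules together, the plan described above can be condensed into a single decision function. This is an illustrative Python sketch, not the Blockly program itself; the thresholds match the text, while the reading names and commands are assumptions:

```python
# Sketch of the wall-following plan: readings are plain numbers, and
# the returned command strings stand in for the simulator's blocks.

FRONT_THRESHOLD = 3  # units: turn left when an obstacle is this close ahead
WALL_THRESHOLD = 4   # units: the wall counts as "nearby" within this range

def plan_step(front, right, at_goal):
    """Pick one move from the current proximity readings."""
    if at_goal:
        return "stop"            # goal reached: we're done
    if front < FRONT_THRESHOLD:
        return "turn_left"       # obstacle ahead: avoid it
    if right > WALL_THRESHOLD:
        return "turn_right"      # wall ended on the right: turn the corner
    return "move_forward"        # keep following the wall
```

Calling `plan_step` once per loop iteration reproduces the behavior described: avoid, follow, turn at corners, and stop at the goal.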

Action

We are able to sense our environment and are ready with a plan; now it’s time to execute that plan, or act upon it.

Acting in robotics is where all the sensing and planning come to life: it’s the moment the robot takes physical action based on its decisions. Whether it’s moving forward, turning, picking up an object, or even dancing, acting is how a robot interacts with the real world. You can think of it as the robot’s muscles responding to its brain’s commands. Without acting, all the sensing and planning would be useless, like knowing there’s a fire and planning an escape route, but never actually moving. Action turns intention into impact. It’s what allows a robot to navigate through space, manipulate objects, and complete tasks. In short, acting is the final and essential step that lets robots bring their intelligence into motion.
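To make the split between planning and acting concrete, the action layer can be imagined as a small dispatch table that converts each planned command into motor-level outputs. The wheel speeds below are invented for illustration and are not taken from the simulator:

```python
# Toy "actuator" layer: maps a planned command to differential-drive
# wheel speeds. The numbers here are illustrative, not real values.

MOTIONS = {
    "move_forward": (1.0, 1.0),   # (left wheel, right wheel) speeds
    "turn_left":    (-0.5, 0.5),  # spin counter-clockwise in place
    "turn_right":   (0.5, -0.5),  # spin clockwise in place
    "stop":         (0.0, 0.0),   # halt both wheels
}

def act(command):
    """Return the wheel speeds that carry out the planned command."""
    return MOTIONS[command]
```

The point of this separation is that the planner only reasons in high-level commands, while the actuator worries about how to move the hardware.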

Marked with black rectangles are all the commands in the Blockly code that tell the robot to take a certain action.

Now we can see our robot spring into action and reach the goal position successfully!

Conclusion

As robots become more embedded in our lives, from warehouse automation to home assistants and autonomous vehicles, the Sense-Plan-Act framework will continue to define their intelligence and autonomy. It’s the invisible engine behind every smart move a robot makes. And as we refine each part of this loop, we bring machines one step closer to moving, thinking, and responding like us, only faster, safer, and without tiring.

Check out the web-based simulator and courses on CodeRobo.ai to get started with understanding the key concepts in robotics.


