This study presents the development of an autonomous mobile robot for real-time object detection and collection that integrates a Convolutional Neural Network (CNN) with an Inertial Measurement Unit (IMU). The primary objective is to design, implement, and evaluate a sensor-fusion-based robotic system capable of detecting objects through image recognition, estimating orientation and motion via inertial sensing, and performing automated retrieval tasks in structured and semi-structured environments. The CNN is trained to recognize and localize objects from real-time video input, while the IMU provides data on the robot’s pose and dynamics. By fusing the visual and inertial data, the system achieves improved situational awareness, stability, and navigation accuracy. A closed-loop control framework translates the sensory data into motion commands for the robot’s differential drive and gripper, enabling reliable object approach, grasping, and transport. Experimental results demonstrate high classification accuracy and a grasping success rate exceeding 85% in indoor tests. The proposed approach shows strong potential for applications in logistics, smart manufacturing, and service robotics, where repetitive object-handling tasks can be automated reliably.
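As a rough illustration of the sensor-fusion and closed-loop control idea summarized above, the sketch below is not taken from the paper: all class names, signal sources, and gains are hypothetical assumptions. It fuses a gyro rate with an absolute heading reference through a complementary filter and maps a CNN bounding-box offset to differential-drive wheel speeds.

# Illustrative sketch only (not the authors' implementation): a complementary
# filter fuses a gyro rate with an absolute heading reference, and a
# proportional controller turns a CNN bounding-box offset into
# differential-drive wheel commands. Names and gains are hypothetical.

import math
from dataclasses import dataclass


@dataclass
class Detection:
    """Normalized CNN output: cx in [-1, 1], area in [0, 1]."""
    cx: float      # horizontal offset of the box centre from the image centre
    area: float    # box area as a fraction of the frame (proxy for distance)


class ComplementaryFilter:
    """Blends fast but drifting gyro integration with an absolute heading
    reference (e.g., a magnetometer reading, assumed available here)."""

    def __init__(self, alpha: float = 0.98):
        self.alpha = alpha
        self.heading = 0.0  # radians

    def update(self, gyro_z: float, heading_ref: float, dt: float) -> float:
        integrated = self.heading + gyro_z * dt                     # gyro term
        self.heading = self.alpha * integrated + (1.0 - self.alpha) * heading_ref
        return self.heading


def drive_command(det: Detection, k_turn: float = 0.8, v_max: float = 0.3):
    """Map the detection offset to (v_left, v_right) wheel speeds in m/s."""
    turn = k_turn * det.cx                  # steer toward the detected object
    forward = v_max * (1.0 - det.area)      # slow down as the object gets close
    return forward + turn, forward - turn   # positive cx -> turn right


if __name__ == "__main__":
    fused = ComplementaryFilter()
    heading = fused.update(gyro_z=0.05, heading_ref=0.02, dt=0.02)
    v_l, v_r = drive_command(Detection(cx=0.2, area=0.15))
    print(f"heading={math.degrees(heading):.2f} deg, wheels=({v_l:.2f}, {v_r:.2f})")

In such a scheme, the gripper would be actuated once the detection area crosses a grasp threshold; that step is omitted here because the paper does not specify its triggering logic.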
SUBMITTED: 04 July 2025
ACCEPTED: 10 August 2025
PUBLISHED: 9 December 2025
SUBMITTED to ACCEPTED: 38 days
DOI: https://doi.org/10.53623/amms.v2i1.758