About Me

I am currently a Ph.D. candidate at Missouri University of Science and Technology (MST) in the Department of Electrical & Computer Engineering, under the guidance of Dr. Sarangapani.

My research focuses on reinforcement learning-based optimal tracking control for nonlinear discrete-time systems, with applications in robotics and autonomous vehicles. A significant aspect of my work is lifelong learning-based optimal control, in which the controller continuously learns from past experience to improve future performance. I also emphasize safety-aware and explainable AI, ensuring the reliability and interpretability of autonomous decision-making systems.

Beyond my core research, I explore machine learning applications in cyber-physical systems, including:

  • Vision-based robotic manipulation and localization
  • Motion planning & perception
  • SLAM and mapping for real-world autonomous navigation

Research Interests

  • Reinforcement Learning & Optimal Control: Developing safe and explainable deep reinforcement learning-based controllers for nonlinear, discrete-time systems, with real-world applications in robotics and autonomous systems.
  • Navigation & Motion Planning: Designing adaptive and robust path optimization strategies for autonomous vehicles and mobile robots, enabling efficient navigation in off-road terrains (forests, deserts) and human-centered environments (sidewalks, crowded buildings).
  • Perception & Sensor Fusion: Implementing multi-sensor fusion techniques (LiDAR, GPS, IMU) to enhance state estimation, localization, and tracking in dynamic environments.
  • Artificial Intelligence in Autonomous Systems: Leveraging deep learning and AI-driven models to enhance decision-making and control in robotics, self-driving cars, and unmanned systems.
  • Machine Learning for Control & Simulation: Integrating deep learning-based controllers with traditional model-based control (MPC, PID, fuzzy logic) for improved robustness in nonlinear and uncertain systems.
  • Robotics & Autonomous Vehicles: Advancing motion control, planning, and reinforcement learning for humanoid robots, mobile manipulators, and self-driving platforms.
  • Safety & Security in Nonlinear Systems: Developing safe reinforcement learning-based controllers with performance guarantees in critical autonomous operations.

Academic Background & Research Initiatives

I earned my M.Sc. in Electrical Engineering from Amirkabir University of Technology, specializing in distributed optimal control for power systems. My thesis focused on developing advanced control strategies for power networks, optimizing system performance through mathematical modeling and distributed control techniques.

During my time at Amirkabir University, I worked on several projects on NN-based optimal control and state estimation of Surface Effect Ships, as well as the design of robust controllers for smart grids and shipboard power systems (SPS). My passion for advanced control and autonomy led me to pursue further studies at Missouri University of Science and Technology, USA, where my research focuses on reinforcement learning-based optimal control for nonlinear, multitasking systems. My work involves developing safe, explainable AI-driven controllers for:

  • Autonomous Marine Vessels
  • Unmanned Ground Vehicles (UGVs) & Unmanned Aerial Vehicles (UAVs)
  • Robotic Platforms

These controllers ensure adaptive and efficient decision-making in uncertain environments.

Hardware platforms: Quanser QCar, Quanser QDrone, and Quanser QBot 3.

Technical Skills

  • MATLAB: Developed and implemented adaptive control and estimation algorithms, with results published at the American Control Conference and in peer-reviewed journals.
  • Python: Designed and implemented deep learning and reinforcement learning algorithms for autonomous vehicles using image data, with research published at international conferences.
  • C++: Proficient in hardware-software integration, enabling efficient real-time robotic control and embedded system development.
  • Robot Operating System (ROS2): Engineered scalable and modular robotic applications for diverse autonomous robotic platforms.
  • MoveIt: Used for robotic manipulation and motion planning in ROS2-based robotic systems.
  • SLAM (Simultaneous Localization and Mapping): Implemented SLAM techniques for navigation and mapping of differential drive and wheeled robots in ROS2 environments.
  • Gazebo: Designed and simulated complex robotic environments to support navigation, perception, and autonomous behavior in ROS2.
  • Computer Vision: Applied OpenCV, YOLO, and deep learning-based object detection for autonomous navigation, perception, and environment understanding in robotics and autonomous vehicles.

Software Proficiency & Skills

  • Guidance, Control, and Navigation
  • MATLAB
  • ROS2 and Robotics
  • Python
  • C++
  • Computer Vision

Future Research Directions

I am eager to continue advancing research in intelligent control systems; ground, aerial, and marine autonomy; and robotics, and I look forward to new partnerships and innovations in this dynamic field.