Toward Collaborative Teams of Robots and Humans using Multi-objective Partially Observable Markov Decision Processes

  • Hend AlTair

Student thesis: Doctoral Thesis


Deploying robots in hazardous situations reduces human exposure to dangerous environments and increases the efficiency of responding to these incidents. Example incidents range from small fires, oil spills, and earthquakes to nuclear accidents. In emergencies, particularly large-scale ones, systems that allow seamless collaboration between teams of robots and humans are highly desirable. In this thesis, I argue and show that decision-making is a key component for inducing collaboration among heterogeneous agents that differ both physically and in their knowledge, by modeling a decision problem and testing the resulting decisions against various test cases. Search and rescue decision problems involve multiple objectives that can easily conflict with each other. A simple example is choosing between a longer path to rescue a victim and the optimal path that passes through danger: the agent wants to stay away from danger while also rescuing the victim as soon as possible. The problem model is therefore expressed with multiple objectives and hence multiple rewards. In this thesis, I propose a method to solve multi-objective multi-agent Partially Observable Markov Decision Processes (POMDPs). The proposed method is an alternative to existing solutions that handle multi-objective problems. It ensures that a higher accumulated reward is assigned to high-priority objectives, so that they exert a greater influence on the decision process than low-priority objectives. First, I applied the method to a multi-agent problem using different solvers from the MADP toolbox. The proposed method was evaluated in a search and rescue scenario involving a heterogeneous team composed of a robot and a human with different complementary skills. The search and rescue operations incorporate a number of parameters, such as risk, energy, and time, into the decision-making process through a redefinition of the reward function.
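The priority idea above can be sketched as a scalarization of per-objective rewards. This is a minimal illustration, not the thesis's exact reward redefinition; the exponential weighting scheme and the `combine_rewards` helper are assumptions chosen so that a higher-ranked objective always outweighs all lower-ranked ones combined.

```python
# Illustrative priority-weighted scalarization for a multi-objective reward.
# Assumption: objectives carry integer priority ranks (larger = more
# important); the exponential base is picked so that a rank-k objective
# dominates the total contribution of all lower-ranked objectives.

def combine_rewards(rewards, priorities):
    """Scalarize per-objective rewards so high-priority objectives
    dominate the accumulated reward.

    rewards    : per-objective rewards, e.g. [r_risk, r_energy, r_time]
    priorities : matching integer ranks (larger = more important)
    """
    base = len(rewards) + 1  # illustrative choice, > number of objectives
    return sum(r * base ** p for r, p in zip(rewards, priorities))

# Example: risk penalty (rank 2) outweighs energy (rank 1) and time (rank 0).
total = combine_rewards([-1.0, 0.5, 0.2], [2, 1, 0])
```

With three objectives the base is 4, so the rank-2 risk term is weighted by 16 and dominates the energy and time terms.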
Furthermore, for multi-objective multi-agent problems I selected Symbolic Perseus, which uses Algebraic Decision Diagrams (ADDs) to represent value functions and conditional probabilities. I propose modifying the Stochastic Planning Using Decision Diagrams (SPUDD) problem format that is passed to Symbolic Perseus to explicitly include and describe multi-objective problems. Finally, I propose a POMDP formulation for heterogeneous teams of robots and humans that operate asynchronously, employing distributed consensus to solve multi-objective problems under uncertainty.
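The distributed-consensus component can be illustrated with a generic averaging-consensus round, in which each agent repeatedly replaces its local estimate (e.g. of an objective's value) with the mean over itself and its neighbors. This is a textbook sketch under assumed conditions, not the thesis's protocol; the `consensus_step` function, the line-graph topology, and the iteration count are all illustrative.

```python
# Minimal averaging-consensus sketch. Assumption: synchronous rounds over a
# fixed communication graph; each agent averages its own estimate with its
# neighbors' estimates, so all estimates converge to a common value.

def consensus_step(estimates, neighbors):
    """One synchronous round: each agent i moves to the mean of its own
    estimate and those of its neighbors."""
    return [
        sum([estimates[i]] + [estimates[j] for j in neighbors[i]])
        / (1 + len(neighbors[i]))
        for i in range(len(estimates))
    ]

# Three agents on a line graph 0-1-2 with divergent initial estimates.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
x = [3.0, 0.0, 6.0]
for _ in range(50):
    x = consensus_step(x, neighbors)
# After enough rounds the three estimates agree to within numerical tolerance.
```

Note that with this local-averaging rule the agreed value lies between the initial extremes but is not in general the plain arithmetic mean, since agents with more neighbors weigh in more heavily.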
Date of Award: Apr 2019
Original language: American English


  • Multi-objective
  • Multi-agent
  • Decision-making
