How would you change the MDP representation of Section 13.3 into a POMDP? Take the simple robot problem and its Markov transition matrix created in Section 13.3.3 and change it into a POMDP. Hint: consider adding an observation probability matrix for the partially observable states.
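The hint can be sketched concretely. The following is a minimal illustration, not a solution: the states, observations, and probabilities below are hypothetical stand-ins, since the actual robot problem and transition matrix of Section 13.3.3 are not reproduced here. The key step is keeping the Markov transition matrix `T` and adding an observation matrix `O`, so the agent tracks a belief distribution over states rather than a known state.

```python
# Hypothetical 3-state robot POMDP sketch; numbers are illustrative only.
states = ["ok", "warning", "failed"]
observations = ["good", "bad"]

# T[s][s']: Markov transition probabilities (here under a single action).
T = [[0.8, 0.15, 0.05],
     [0.0, 0.7,  0.3 ],
     [0.0, 0.0,  1.0 ]]

# O[s'][o]: probability of sensing o when the true state is s' --
# the POMDP addition that makes the state only partially observable.
O = [[0.9, 0.1],
     [0.5, 0.5],
     [0.1, 0.9]]

def update_belief(b, o):
    """Bayes filter: b'(s') is proportional to O[s'][o] * sum_s T[s][s'] * b(s)."""
    oi = observations.index(o)
    raw = [O[sp][oi] * sum(b[s] * T[s][sp] for s in range(len(states)))
           for sp in range(len(states))]
    z = sum(raw)
    return [x / z for x in raw]

b0 = [1.0, 0.0, 0.0]          # start certain the robot is "ok"
b1 = update_belief(b0, "bad")  # a noisy "bad" reading shifts the belief
print(b1)
```

The essential change from the MDP is that decisions are now made from the belief vector `b`, updated by `update_belief` after every observation, rather than from a directly observed state.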
Work out the complexity cost of finding an optimal policy for the POMDP problem exhaustively.
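As a starting point for the counting argument (my own illustration, not part of the exercise text): a finite-horizon POMDP policy can be written as a tree whose nodes choose actions and whose branches follow observations, so an exhaustive search must consider |A| raised to the number of tree nodes, where an h-step tree with |O| observations has (|O|^h - 1)/(|O| - 1) nodes.

```python
# Count the distinct h-step policy trees an exhaustive search enumerates.
# Assumes |O| >= 2 branching observations per node.
def num_policy_trees(num_actions, num_obs, horizon):
    nodes = (num_obs ** horizon - 1) // (num_obs - 1)
    return num_actions ** nodes

# Even a tiny problem (2 actions, 2 observations) grows doubly
# exponentially in the horizon:
for h in range(1, 5):
    print(h, num_policy_trees(2, 2, h))
```

The doubly exponential growth in the horizon is why exhaustive policy search is intractable and why exact POMDP solving is so much harder than solving the underlying MDP.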