A Julia interface for defining, solving and simulating partially observable Markov decision processes and their fully observable counterparts.
- General interface that can handle problems with discrete and continuous state/action/observation spaces
- A number of popular state-of-the-art solvers available to use out of the box
- Tools that make it easy to define problems and simulate solutions
- Simple integration of custom solvers into the existing interface
The POMDPs.jl package contains the interface used for expressing and solving Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) in the Julia programming language. The JuliaPOMDP community maintains this and the related packages; the list of solver and support packages is maintained in the POMDPs.jl README.
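As a taste of how the interface is used, here is a minimal sketch of a toy MDP defined by implementing POMDPs.jl interface functions and solved with value iteration. The `GridMDP` type and its dynamics are invented for illustration; the sketch assumes the `POMDPTools` support package (for `Deterministic`) and the `DiscreteValueIteration` solver package are installed.

```julia
using POMDPs, POMDPTools

# A toy 1-D chain MDP (hypothetical example): states 1..5, the agent moves
# left or right, and receives a reward of 1 for stepping into state 5.
struct GridMDP <: MDP{Int, Symbol} end

POMDPs.states(::GridMDP) = 1:5
POMDPs.actions(::GridMDP) = (:left, :right)
POMDPs.discount(::GridMDP) = 0.95
POMDPs.stateindex(::GridMDP, s::Int) = s
POMDPs.actionindex(::GridMDP, a::Symbol) = a == :left ? 1 : 2
POMDPs.initialstate(::GridMDP) = Deterministic(1)
POMDPs.isterminal(::GridMDP, s::Int) = s == 5

# Transitions are deterministic here, but any distribution object
# supporting the interface could be returned instead.
function POMDPs.transition(::GridMDP, s::Int, a::Symbol)
    sp = clamp(a == :left ? s - 1 : s + 1, 1, 5)
    return Deterministic(sp)
end

POMDPs.reward(::GridMDP, s::Int, a::Symbol) = (s == 4 && a == :right) ? 1.0 : 0.0
```

With the problem defined, any compatible solver can be applied through the same `solve`/`action` interface, e.g.:

```julia
using DiscreteValueIteration

policy = solve(ValueIterationSolver(), GridMDP())
a = action(policy, 1)  # query the policy for an action in state 1
```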
Documentation comes in three forms:
- How-to examples are available in the POMDPExamples package and in pages in this document with "Example" in the title.
- An explanatory guide is available in the sections outlined below.
- Reference docstrings for the entire interface are available in the API Documentation section.
- Getting Started
- Concepts and Architecture
- Defining POMDPs and MDPs
- Defining Static (PO)MDP Properties
- Spaces and Distributions
- Defining (PO)MDP Dynamics
- Example: Defining an offline solver
- Example: Defining an online solver
- Defining a Belief Updater
- Frequently Asked Questions (FAQ)
- How do I save my policies?
- Why isn't the solver working?
- Why do I need to put type assertions `pomdp::POMDP` into the function signature?
- Why are all the solvers in separate modules?
- How can I implement terminal actions?
- Why are there two versions of `reward`?
- How do I implement `reward(m, s, a)` if the reward depends on the next state?
- API Documentation