# POMDPs.jl

*A Julia interface for defining, solving and simulating partially observable Markov decision processes and their fully observable counterparts.*

## Package Features

- General interface that can handle problems with discrete and continuous state/action/observation spaces
- A number of popular state-of-the-art solvers available to use out of the box
- Tools that make it easy to define problems and simulate solutions
- Simple integration of custom solvers into the existing interface

## Available Packages

The POMDPs.jl package contains the interface used for expressing and solving Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) in the Julia programming language. These packages are maintained by the JuliaPOMDP community. The list of solver and support packages is maintained in the POMDPs.jl README.
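As a hedged sketch of what defining a problem with this interface looks like (the `LineMDP` model, its states, and its dynamics are invented for illustration; `Deterministic` is assumed to come from the POMDPTools support package):

```julia
using POMDPs
using POMDPTools  # assumed to provide the Deterministic distribution

# Hypothetical one-dimensional MDP: states 1..5, actions step left (-1) or right (+1)
struct LineMDP <: MDP{Int, Int} end

POMDPs.states(m::LineMDP) = 1:5
POMDPs.actions(m::LineMDP) = [-1, 1]
POMDPs.discount(m::LineMDP) = 0.95
POMDPs.stateindex(m::LineMDP, s::Int) = s
POMDPs.actionindex(m::LineMDP, a::Int) = a == -1 ? 1 : 2

# Deterministic dynamics: step along the line, clamped to the state bounds
POMDPs.transition(m::LineMDP, s, a) = Deterministic(clamp(s + a, 1, 5))

# Reward of 1.0 for stepping into the rightmost state
POMDPs.reward(m::LineMDP, s, a) = clamp(s + a, 1, 5) == 5 ? 1.0 : 0.0

POMDPs.initialstate(m::LineMDP) = Deterministic(1)
POMDPs.isterminal(m::LineMDP, s) = s == 5
```

A solver package from the ecosystem could then be applied to this model with `solve(solver, LineMDP())`, and the resulting policy simulated with the simulation tools described below.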

## Documentation Outline

Documentation comes in three forms:

- How-to examples are available in the POMDPExamples package and in pages in this document with "Example" in the title.
- An explanatory guide is available in the sections outlined below.
- Reference docstrings for the entire interface are available in the API Documentation section.

<!-- When updating these documents, make sure this is synced with docs/make.jl!! -->

### Basics

### Defining POMDP Models

- Defining POMDPs and MDPs
- Defining Static (PO)MDP Properties
- Spaces and Distributions
- Defining (PO)MDP Dynamics

### Writing Solvers and Updaters

- Solvers
- Example: Defining an offline solver
- Example: Defining an online solver
- Defining a Belief Updater

### Analyzing Results

### Reference

- Frequently Asked Questions (FAQ)
  - How do I save my policies?
  - Why isn't the solver working?
  - Why do I need to put type assertions `pomdp::POMDP` into the function signature?
  - Why are all the solvers in separate modules?
  - How can I implement terminal actions?
  - Why are there two versions of `reward`?
  - How do I implement `reward(m, s, a)` if the reward depends on the next state?
- API Documentation
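On the last FAQ point, a hedged sketch of how the two `reward` forms can relate (the `LineMDP` model name is invented for illustration; `pdf` and `support` are assumed to be part of the interface's distribution API):

```julia
using POMDPs

# When the reward naturally depends on the next state sp,
# implement the four-argument form directly...
POMDPs.reward(m::LineMDP, s, a, sp) = sp == 5 ? 1.0 : 0.0

# ...and, if a solver requires the state-action form, recover it as an
# expectation of the four-argument form over the transition distribution.
function POMDPs.reward(m::LineMDP, s, a)
    d = transition(m, s, a)
    return sum(pdf(d, sp) * reward(m, s, a, sp) for sp in support(d))
end
```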