API Documentation

Documentation for the POMDPs.jl user interface. You can get help for any type or function in the module by typing ? in the Julia REPL followed by the name of the type or function. For example:

julia> using POMDPs
julia> ?
help?> reward
search: reward

  reward(m::POMDP, s, a)
  reward(m::MDP, s, a)

  Return the immediate reward for the s-a pair.

  reward(m::POMDP, s, a, sp)
  reward(m::MDP, s, a, sp)

  Return the immediate reward for the s-a-s' triple.


Types

POMDPs.POMDP - Type
POMDP{S,A,O}

Abstract base type for a partially observable Markov decision process.

S: state type
A: action type
O: observation type

POMDPs.MDP - Type
MDP{S,A}

Abstract base type for a fully observable Markov decision process.

S: state type
A: action type

POMDPs.Policy - Type

Base type for a policy (a map from every possible belief, or more abstract policy state, to an optimal or suboptimal action)

POMDPs.Updater - Type

Abstract type for an object that defines how the belief should be updated.

A belief is a general construct that represents the knowledge an agent has about the state of the system. This can be a probability distribution, an action-observation history, or a more general representation.


Model Functions

Explicit

These functions return distributions.

POMDPs.transition - Function
transition(problem::POMDP, state, action)
transition(problem::MDP, state, action)

Return the transition distribution from the current state-action pair.
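
Example

A minimal sketch of an explicit transition model for a hypothetical LineMDP type (the type and dynamics are illustrative only), using SparseCat from POMDPModelTools:

using POMDPs
using POMDPModelTools # for SparseCat

struct LineMDP <: MDP{Int, Int} end

# moving by a succeeds with probability 0.8; otherwise the state is unchanged
POMDPs.transition(m::LineMDP, s::Int, a::Int) = SparseCat([s+a, s], [0.8, 0.2])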

POMDPs.observation - Function
observation(problem::POMDP, statep)
observation(problem::POMDP, action, statep)
observation(problem::POMDP, state, action, statep)

Return the observation distribution. You need only define the method with the fewest arguments needed to determine the observation distribution.

Example

using POMDPModelTools # for SparseCat

struct MyPOMDP <: POMDP{Int, Int, Int} end

observation(p::MyPOMDP, sp::Int) = SparseCat([sp-1, sp, sp+1], [0.1, 0.8, 0.1])

POMDPs.reward - Function
reward(m::POMDP, s, a)
reward(m::MDP, s, a)

Return the immediate reward for the s-a pair.

reward(m::POMDP, s, a, sp)
reward(m::MDP, s, a, sp)

Return the immediate reward for the s-a-s' triple.

reward(m::POMDP, s, a, sp, o)

Return the immediate reward for the s-a-s'-o quad.

For some problems it is easier to express reward(m, s, a, sp) or reward(m, s, a, sp, o) than reward(m, s, a), but some solvers, e.g. SARSOP, can only use reward(m, s, a). Both can be implemented for a problem, but when reward(m, s, a) is implemented, it should be consistent with reward(m, s, a, sp[, o]); that is, it should be the expected value over all destination states and observations.
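
Example

A minimal sketch for a hypothetical LineMDP type (illustrative only, not part of the interface), penalizing distance from the origin:

using POMDPs

struct LineMDP <: MDP{Int, Int} end

POMDPs.reward(m::LineMDP, s::Int, a::Int) = -abs(s)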


Generative

These functions should return states, observations, and/or rewards.

Note

gen in POMDPs.jl v0.8 corresponds to the generate_ functions in previous versions

POMDPs.@gen - Macro
@gen(X)(m, s, a, rng)

Call the generative model for a (PO)MDP m to sample values from several nodes in the dynamic decision network. X is one or more symbols indicating which nodes to output.

Solvers and simulators should usually call this rather than the gen function. Problem writers should implement methods of the gen function.

Arguments

  • m: an MDP or POMDP model
  • s: the current state
  • a: the action
  • rng: a random number generator (typically a MersenneTwister)

Return

If X is a single symbol, return a value sampled from the corresponding node. If X is several symbols, return a Tuple of values sampled from the specified nodes.

Examples

Let m be an MDP or POMDP, s be a state of m, a be an action of m, and rng be an AbstractRNG.

  • @gen(:sp, :r)(m, s, a, rng) returns a Tuple containing the next state and reward.
  • @gen(:sp, :o, :r)(m, s, a, rng) returns a Tuple containing the next state, observation, and reward.
  • @gen(:sp)(m, s, a, rng) returns the next state.
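
For instance, a rough sketch of a rollout loop built on @gen (rollout and policy are illustrative names; policy is assumed to be a function mapping states to actions):

using POMDPs
using Random

function rollout(m::MDP, policy, s, rng::AbstractRNG; steps=10)
    r_total = 0.0
    disc = 1.0
    for _ in 1:steps
        isterminal(m, s) && break
        a = policy(s)
        s, r = @gen(:sp, :r)(m, s, a, rng)  # sample the next state and reward
        r_total += disc*r
        disc *= discount(m)
    end
    return r_total
end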

POMDPs.gen - Function
gen(...)

Sample from generative model of a POMDP or MDP.

In most cases solver and simulator writers should use the @gen macro. Problem writers may wish to implement one or more new methods of the function for their problem.

There are three versions of the function:

  • The most convenient version to implement is gen(m::Union{MDP,POMDP}, s, a, rng::AbstractRNG), which returns a NamedTuple.
  • Defining behavior for and sampling from individual nodes of the dynamic decision network can be accomplished using the version with a DDNNode argument.
  • A version with a DDNOut argument is provided by the compiler to sample multiple nodes at once.

See below for detailed documentation for each type.


gen(m::Union{MDP,POMDP}, s, a, rng::AbstractRNG)

Convenience function for implementing the entire MDP/POMDP generative model in one function by returning a NamedTuple.

The NamedTuple version of gen is the most convenient for problem writers to implement. However, it should never be used directly by solvers or simulators. Instead solvers and simulators should use the version with a DDNOut first argument.

Arguments

  • m: an MDP or POMDP model
  • s: the current state
  • a: the action
  • rng: a random number generator (typically a MersenneTwister)

Return

The function should return a NamedTuple. Typically, this NamedTuple will be (sp=<next state>, r=<reward>) for an MDP or (sp=<next state>, o=<observation>, r=<reward>) for a POMDP.
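
Example

A minimal sketch of an entire generative definition for a hypothetical POMDP with noisy integer observations (NoisyPOMDP and its dynamics are illustrative only):

using POMDPs
using Random

struct NoisyPOMDP <: POMDP{Int, Int, Int} end

function POMDPs.gen(m::NoisyPOMDP, s, a, rng::AbstractRNG)
    sp = s + a + rand(rng, -1:1)  # stochastic transition
    o = sp + rand(rng, -1:1)      # noisy observation of the new state
    r = -abs(sp)                  # reward based on the new state
    return (sp=sp, o=o, r=r)
end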


gen(v::DDNNode{name}, m::Union{MDP,POMDP}, depargs..., rng::AbstractRNG)

Sample a value from a node in the dynamic decision network.

These functions will be used within gen(::DDNOut, ...) to sample values for all outputs and their dependencies. They may be implemented directly by a problem-writer if they wish to implement a generative model for a particular node in the dynamic decision network, and may be called in solvers to sample a value for a particular node.

Arguments

  • v::DDNNode{name}: which DDN node the function should sample from.
  • depargs: values for all the dependent nodes. Dependencies are determined by deps(DDNStructure(m), name).
  • rng: a random number generator (typically a MersenneTwister)

Return

A sampled value from the specified node.

Examples

Let m be a POMDP, s and sp be states of m, a be an action of m, and rng be an AbstractRNG.

  • gen(DDNNode(:sp), m, s, a, rng) returns the next state.
  • gen(DDNNode(:o), m, s, a, sp, rng) returns the observation given the previous state, action, and new state.

gen(t::DDNOut{X}, m::Union{MDP,POMDP}, s, a, rng::AbstractRNG) where X

Sample values from several nodes in the dynamic decision network. X is a symbol or tuple of symbols indicating which nodes to output.

An implementation of this method is automatically provided by POMDPs.jl. Solvers and simulators should use this version. Problem writers may implement it directly in special cases (see the POMDPs.jl documentation for more information).

Arguments

  • t::DDNOut: which DDN nodes the function should sample from.
  • m: an MDP or POMDP model
  • s: the current state
  • a: the action
  • rng: a random number generator (typically a MersenneTwister)

Return

If the DDNOut parameter, X, is a symbol, return a value sampled from the corresponding node. If X is a tuple of symbols, return a Tuple of values sampled from the specified nodes.

Examples

Let m be an MDP or POMDP, s be a state of m, a be an action of m, and rng be an AbstractRNG.

  • gen(DDNOut(:sp, :r), m, s, a, rng) returns a Tuple containing the next state and reward.
  • gen(DDNOut(:sp, :o, :r), m, s, a, rng) returns a Tuple containing the next state, observation, and reward.
  • gen(DDNOut(:sp), m, s, a, rng) returns the next state.

POMDPs.initialstate - Function
initialstate(m::Union{POMDP,MDP}, rng::AbstractRNG)

Return a sampled initial state for the problem m.

Usually the initial state is sampled from an initial state distribution. The random number generator rng should be used to draw this sample (e.g. use rand(rng) instead of rand()).
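
Example

A sketch for a hypothetical problem with integer states, drawing the initial state uniformly from -5:5 (LineMDP is an illustrative type):

using POMDPs
using Random

struct LineMDP <: MDP{Int, Int} end

POMDPs.initialstate(m::LineMDP, rng::AbstractRNG) = rand(rng, -5:5)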

POMDPs.initialobs - Function
initialobs(m::POMDP, s, rng::AbstractRNG)

Return a sampled initial observation for the problem m and state s.

This function is only used in cases where the policy expects an initial observation rather than an initial belief, e.g. in a reinforcement learning setting. It is not used in a standard POMDP simulation.

By default, it will fall back to observation(m, s). The random number generator rng should be used to draw this sample (e.g. use rand(rng) instead of rand()).


Common

POMDPs.states - Function
states(problem::POMDP)
states(problem::MDP)

Returns the complete state space of a POMDP or MDP.
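
Example

A sketch for a small discrete problem (LineMDP is a hypothetical example type):

using POMDPs

struct LineMDP <: MDP{Int, Int} end

POMDPs.states(m::LineMDP) = -10:10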

POMDPs.actions - Function
actions(m::Union{MDP,POMDP})

Returns the entire action space of a (PO)MDP.


actions(m::Union{MDP,POMDP}, s)

Return the actions that can be taken from state s.


actions(m::POMDP, b)

Return the actions that can be taken from belief b.

To implement an observation-dependent action space, use currentobs(b) to get the observation associated with belief b within the implementation of actions(m, b).
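
Example

Continuing the hypothetical LineMDP sketch from the states example above, with movement restricted at the boundary states:

POMDPs.actions(m::LineMDP) = (-1, 1)
POMDPs.actions(m::LineMDP, s::Int) = s == 10 ? (-1,) : (s == -10 ? (1,) : (-1, 1))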

POMDPs.isterminal - Function
isterminal(m::Union{MDP,POMDP}, s)

Check if state s is terminal.

If a state is terminal, no actions will be taken in it and no additional rewards will be accumulated. Thus, the value at such a state is, by definition, zero.
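
Example

Continuing the hypothetical LineMDP sketch, treating the origin as a terminal goal state:

POMDPs.isterminal(m::LineMDP, s::Int) = s == 0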

POMDPs.discount - Function
discount(problem::POMDP)
discount(problem::MDP)

Return the discount factor for the problem.

POMDPs.stateindex - Function
stateindex(problem::POMDP, s)
stateindex(problem::MDP, s)

Return the integer index of state s. Used for discrete models only.
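
Example

Continuing the hypothetical LineMDP sketch with states -10:10, mapping each state to an index in 1:21:

POMDPs.stateindex(m::LineMDP, s::Int) = s + 11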

POMDPs.actionindex - Function
actionindex(problem::POMDP, a)
actionindex(problem::MDP, a)

Return the integer index of action a. Used for discrete models only.

POMDPs.obsindex - Function
obsindex(problem::POMDP, o)

Return the integer index of observation o. Used for discrete models only.

POMDPs.convert_s - Function
convert_s(::Type{V}, s, problem::Union{MDP,POMDP}) where V<:AbstractArray
convert_s(::Type{S}, vec::V, problem::Union{MDP,POMDP}) where {S,V<:AbstractArray}

Convert a state to vectorized form or vice versa.
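
Example

A sketch of both directions for a hypothetical problem with integer states (LineMDP is an illustrative type):

using POMDPs

struct LineMDP <: MDP{Int, Int} end

POMDPs.convert_s(::Type{Vector{Float64}}, s::Int, m::LineMDP) = Float64[s]
POMDPs.convert_s(::Type{Int}, v::AbstractVector{Float64}, m::LineMDP) = round(Int, first(v))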

POMDPs.convert_a - Function
convert_a(::Type{V}, a, problem::Union{MDP,POMDP}) where V<:AbstractArray
convert_a(::Type{A}, vec::V, problem::Union{MDP,POMDP}) where {A,V<:AbstractArray}

Convert an action to vectorized form or vice versa.

POMDPs.convert_o - Function
convert_o(::Type{V}, o, problem::Union{MDP,POMDP}) where V<:AbstractArray
convert_o(::Type{O}, vec::V, problem::Union{MDP,POMDP}) where {O,V<:AbstractArray}

Convert an observation to vectorized form or vice versa.


Distribution/Space Functions

Base.rand - Function
rand(rng::AbstractRNG, d::Any)

Return a random element from distribution or space d.

If d is a state or transition distribution, the sample will be a state; if d is an action distribution, the sample will be an action; and if d is an observation distribution, the sample will be an observation.

Distributions.pdf - Function
pdf(d::Any, x::Any)

Evaluate the probability density of distribution d at sample x.

Distributions.support - Function
support(d::Any)

Return an iterable object containing the possible values that can be sampled from distribution d. Values with zero probability may be skipped.
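
Example

A sketch of a small custom distribution type implementing rand, pdf, and support (BiasedCoin is an illustrative type; pdf and support here extend the Distributions functions documented above, so the Distributions package is assumed to be available):

using Random
import Distributions: pdf, support

struct BiasedCoin
    p_heads::Float64
end

Base.rand(rng::AbstractRNG, d::BiasedCoin) = rand(rng) < d.p_heads ? :heads : :tails
pdf(d::BiasedCoin, x) = x == :heads ? d.p_heads : (x == :tails ? 1.0 - d.p_heads : 0.0)
support(d::BiasedCoin) = (:heads, :tails)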


Dynamic decision networks

POMDPs.DDNStructure - Type
DDNStructure(::Type{M}) where M <: Union{MDP, POMDP}

Trait of an MDP/POMDP type for describing the structure of the dynamic Bayesian network.

Example

struct MyMDP <: MDP{Int, Int} end
POMDPs.gen(::MyMDP, s, a, rng) = (sp=s+a+rand(rng, [1,2,3]), r=s^2)

# make a new node, delta_s, that is deterministically equal to sp - s
function POMDPs.DDNStructure(::Type{MyMDP})
    ddn = mdp_ddn()
    return add_node(ddn, :delta_s, FunctionDDNNode((m,s,sp)->sp-s), (:s, :sp))
end

gen(DDNOut(:delta_s), MyMDP(), 1, 1, Random.GLOBAL_RNG)

POMDPs.DDNNode - Type
DDNNode(x::Symbol)
DDNNode{x::Symbol}()

Reference to a named node in the POMDP or MDP dynamic decision network (DDN).

Note that gen(::DDNNode, m, depargs..., rng) always takes an argument for each dependency whereas gen(::DDNOut, m, s, a, rng) only takes s and a arguments (the inputs to the entire DDN).

DDNNode is a "value type". See the documentation of Val for more conceptual details about value types.

POMDPs.DDNOut - Type
DDNOut(x::Symbol)
DDNOut{x::Symbol}()
DDNOut(::Symbol, ::Symbol,...)
DDNOut{x::NTuple{N, Symbol}}()

Reference to one or more named nodes in the POMDP or MDP dynamic decision network (DDN).

Note that gen(::DDNOut, m, s, a, rng) always takes s and a arguments (the inputs to the entire DDN) while gen(::DDNNode, m, depargs..., rng) takes a variable number of arguments (one for each dependency).

DDNOut is a "value type". See the documentation of Val for more conceptual details about value types.

POMDPs.DistributionDDNNode - Type

DDN node defined by a function that maps the model and values from the parent nodes to a distribution

Example

DistributionDDNNode((m, s, a)->POMDPModelTools.Deterministic(s+a))

POMDPs.FunctionDDNNode - Type

DDN node defined by a function that deterministically maps the model and values from the parent nodes to a new value.

Example

FunctionDDNNode((m, s, a)->s+a)

POMDPs.GenericDDNNode - Type

DDN node that can only have a generative model; gen(::DDNNode{:x}, ...) must be implemented for a node of this type.


Belief Functions

POMDPs.update - Function
update(updater::Updater, belief_old, action, observation)

Return a new instance of an updated belief given belief_old and the latest action and observation.

POMDPs.initialize_belief - Function
initialize_belief(updater::Updater,
                     state_distribution::Any)
initialize_belief(updater::Updater, belief::Any)

Returns a belief that can be updated using updater and that has a distribution similar to state_distribution or belief.

The conversion may be lossy. This function is also idempotent, i.e. there is a default implementation that passes the belief through when it is already the correct type: initialize_belief(updater::Updater, belief) = belief
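
Example

A sketch of a trivial updater whose "belief" is simply the most recent observation (PreviousObservationUpdater is an illustrative name here, not part of POMDPs.jl itself):

using POMDPs

struct PreviousObservationUpdater <: Updater end

POMDPs.update(::PreviousObservationUpdater, b, a, o) = o
POMDPs.initialize_belief(::PreviousObservationUpdater, o) = o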

POMDPs.history - Function
history(b)

Return the action-observation history associated with belief b.

The history should be an AbstractVector, Tuple, or similar object that supports indexing with end, full of NamedTuples with keys :a and :o; i.e. history(b)[end][:a] should be the last action taken leading up to b, and history(b)[end][:o] should be the last observation received.

It is acceptable to return only part of the history if that is all that is available, but it should always end with the current observation. For example, it would be acceptable to return a structure containing only the last three observations in a length 3 Vector{NamedTuple{(:o,),Tuple{O}}}.

POMDPs.currentobs - Function
currentobs(b)

Return the latest observation associated with belief b.

If a solver or updater implements history(b) for a belief type, currentobs has a default implementation.


Policy and Solver Functions

POMDPs.solve - Function
solve(solver::Solver, problem::POMDP)

Solves the POMDP using the method associated with solver and returns a policy.
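
Example

A sketch of a trivial solver and policy that always select the first available action (FirstActionSolver and FirstActionPolicy are illustrative names, not part of POMDPs.jl):

using POMDPs

struct FirstActionSolver <: Solver end

struct FirstActionPolicy{M} <: Policy
    m::M
end

POMDPs.solve(::FirstActionSolver, m::Union{MDP,POMDP}) = FirstActionPolicy(m)
POMDPs.action(p::FirstActionPolicy, x) = first(actions(p.m))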

POMDPs.updater - Function
updater(policy::Policy)

Returns a default Updater appropriate for a belief type that the policy can use.

POMDPs.action - Function
action(policy::Policy, x)

Returns the action that the policy deems best for the current state or belief, x.

x is a generalized information state; it can be a state in an MDP, a distribution in a POMDP, or another specialized policy-dependent representation of the information needed to choose an action.

POMDPs.value - Function
value(p::Policy, s)
value(p::Policy, s, a)

Returns the utility value from policy p given the state (or belief), or state-action (or belief-action) pair.

The state-action version is commonly referred to as the Q-value.


Simulator

POMDPs.simulate - Function
simulate(sim::Simulator, m::POMDP, p::Policy, u::Updater=updater(p), b0=initialstate_distribution(m), s0=initialstate(m, rng))
simulate(sim::Simulator, m::MDP, p::Policy, s0=initialstate(m, rng))

Run a simulation using the specified policy.

The return type is flexible and depends on the simulator. Simulations should adhere to the Simulation Standard.
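
Example

A sketch of typical usage, assuming the ecosystem packages POMDPModels (for TigerPOMDP), POMDPPolicies (for RandomPolicy), BeliefUpdaters (for DiscreteUpdater), and POMDPSimulators (for RolloutSimulator) are available:

using POMDPs
using POMDPModels, POMDPPolicies, BeliefUpdaters, POMDPSimulators

m = TigerPOMDP()
policy = RandomPolicy(m)
up = DiscreteUpdater(m)
sim = RolloutSimulator(max_steps=50)
r = simulate(sim, m, policy, up)  # discounted reward sum from one simulated episode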


Other

The following functions are not part of the API for specifying and solving POMDPs, but are included in the package.

Type Inference

POMDPs.statetype - Function
statetype(t::Type)
statetype(p::Union{POMDP,MDP})

Return the state type for a problem type (the S in POMDP{S,A,O}).

struct A <: POMDP{Int, Bool, Bool} end

statetype(A) # returns Int

POMDPs.actiontype - Function
actiontype(t::Type)
actiontype(p::Union{POMDP,MDP})

Return the action type for a problem type (the A in POMDP{S,A,O}).

struct A <: POMDP{Bool, Int, Bool} end

actiontype(A) # returns Int

POMDPs.obstype - Function
obstype(t::Type)

Return the observation type for a problem type (the O in POMDP{S,A,O}).

struct A <: POMDP{Bool, Bool, Int} end

obstype(A) # returns Int

Requirements Specification

POMDPs.check_requirements - Function
check_requirements(r::AbstractRequirementSet)

Check whether the methods in r have implementations with implemented(). Return true if all methods have implementations.

POMDPs.show_requirements - Function
show_requirements(r::AbstractRequirementSet)

Check whether the methods in r have implementations with implemented() and print out a formatted list showing which are missing. Return true if all methods have implementations.

POMDPs.get_requirements - Function
get_requirements(f::Function, args::Tuple)

Return a RequirementSet for the function f and arguments args.

POMDPs.requirements_info - Function
requirements_info(s::Solver, p::Union{POMDP,MDP}, ...)

Print information about the requirements for solver s.

POMDPs.@POMDP_require - Macro
@POMDP_require solve(s::CoolSolver, p::POMDP) begin
    PType = typeof(p)
    @req states(::PType)
    @req actions(::PType)
    @req transition(::PType, ::S, ::A)
    s = first(states(p))
    a = first(actions(p))
    t_dist = transition(p, s, a)
    @req rand(::AbstractRNG, ::typeof(t_dist))
end

Create a get_requirements implementation for the function signature and the requirements block.

POMDPs.@POMDP_requirements - Macro
reqs = @POMDP_requirements CoolSolver begin
    PType = typeof(p)
    @req states(::PType)
    @req actions(::PType)
    @req transition(::PType, ::S, ::A)
    s = first(states(p))
    a = first(actions(p))
    t_dist = transition(p, s, a)
    @req rand(::AbstractRNG, ::typeof(t_dist))
end

Create a RequirementSet object.

POMDPs.@req - Macro
@req f( ::T1, ::T2)

Convert a f( ::T1, ::T2) expression to a (f, Tuple{T1,T2})::Req for pushing to a RequirementSet.

If in a @POMDP_requirements or @POMDP_require block, marks the requirement for including in the set of requirements.

POMDPs.@subreq - Macro
@subreq f(arg1, arg2)

In a @POMDP_requirements or @POMDP_require block, include the requirements for f(arg1, arg2) as a child argument set.

POMDPs.implemented - Function
implemented(function, Tuple{Arg1Type, Arg2Type})

Check whether there is an implementation available that will return a suitable value.
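
Example

A sketch of checking whether transition is implemented for a hypothetical problem type (LineMDP is illustrative only):

using POMDPs

struct LineMDP <: MDP{Int, Int} end

implemented(transition, Tuple{LineMDP, Int, Int}) # false until transition(::LineMDP, ::Int, ::Int) is defined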


Utility Tools