API Documentation

Documentation for the POMDPs.jl user interface. You can get help for any type or function in the module by typing ? in the Julia REPL followed by the name of the type or function. For example:

julia> using POMDPs
julia> ?
help?> reward
search: reward

  reward{S,A,O}(pomdp::POMDP{S,A,O}, state::S, action::A, statep::S)

  Returns the immediate reward for the s-a-s' triple

  reward{S,A,O}(pomdp::POMDP{S,A,O}, state::S, action::A)

  Returns the immediate reward for the s-a pair

Types

# POMDPs.POMDP - Type.

Abstract base type for a partially observable Markov decision process.

S: state type
A: action type
O: observation type
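
For example, a concrete problem type is declared by subtyping POMDP with its state, action, and observation types. The TigerPOMDP below is a hypothetical illustration (not part of POMDPs.jl) used in the sketches throughout this page; the state is true when the tiger is behind the left door:

type TigerPOMDP <: POMDP{Bool, Symbol, Bool}  # state, action, observation types
    r_listen::Float64   # reward for listening (small, negative)
    r_tiger::Float64    # reward for opening the tiger door (large, negative)
    r_escape::Float64   # reward for opening the safe door (positive)
end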

# POMDPs.MDP - Type.

Abstract base type for a fully observable Markov decision process.

S: state type
A: action type

# POMDPs.AbstractSpace - Type.

Base type for state, action and observation spaces.

T: type that parametrizes the space (state, action, or observation)

# POMDPs.AbstractDistribution - Type.

Abstract type for a probability distribution.

T: the type over which the distribution is defined (state, action, or observation)

# POMDPs.Solver - Type.

Base type for an MDP/POMDP solver.

# POMDPs.Policy - Type.

Base type for a policy (a map from every possible belief, or more abstract policy state, to an optimal or suboptimal action).

B: a belief (or policy state) that represents the knowledge an agent has about the state of the system

# POMDPs.Updater - Type.

Abstract type for an object that defines how the belief should be updated.

B: belief type that parametrizes the updater

A belief is a general construct that represents the knowledge an agent has about the state of the system. This can be a probability distribution, an action-observation history, or a more general representation.
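
As a minimal sketch (hypothetical, not part of POMDPs.jl), an updater whose belief is simply the most recent observation could be written as:

immutable PreviousObservationUpdater{O} <: Updater{O}
end

# the new belief is just the most recent observation; the preallocated
# belief_new argument is ignored because the belief is immutable here
update{O}(u::PreviousObservationUpdater{O}, b_old, a, o::O, b_new=o) = o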

Model Functions

# POMDPs.states - Function.

states{S,A,O}(problem::POMDP{S,A,O}, state::S)
states{S,A}(problem::MDP{S,A}, state::S)

Returns a subset of the state space reachable from state.

states(problem::POMDP)
states(problem::MDP)

Returns the complete state space of a POMDP or MDP.

# POMDPs.actions - Function.

actions{S,A,O}(problem::POMDP{S,A,O}, state::S, aspace::AbstractSpace{A})
actions{S,A}(problem::MDP{S,A}, state::S, aspace::AbstractSpace{A})

Modifies aspace to the action space accessible from the given state and returns it.

actions(problem::POMDP)
actions(problem::MDP)

Returns the entire action space of a POMDP or MDP.

actions{S,A,O,B}(problem::POMDP{S,A,O}, belief::B, aspace::AbstractSpace{A})

Modifies aspace to the action space accessible from the states with nonzero belief and returns it.
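
For the hypothetical TigerPOMDP above, every action is available from every state and belief; a sketch (using a plain vector in place of a custom AbstractSpace subtype):

actions(p::TigerPOMDP) = [:listen, :open_left, :open_right]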

# POMDPs.observations - Function.

observations{S,A,O}(problem::POMDP{S,A,O}, state::S, obs::AbstractSpace{O}=observations(problem))

Modifies obs to the observation space accessible from the given state and returns it.

observations(problem::POMDP)

Returns the entire observation space.

# POMDPs.reward - Function.

reward{S,A,O}(problem::POMDP{S,A,O}, state::S, action::A, statep::S)
reward{S,A}(problem::MDP{S,A}, state::S, action::A, statep::S)

Returns the immediate reward for the s-a-s' triple.

reward{S,A,O}(problem::POMDP{S,A,O}, state::S, action::A)
reward{S,A}(problem::MDP{S,A}, state::S, action::A)

Returns the immediate reward for the s-a pair.
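
A sketch for the hypothetical TigerPOMDP above:

function reward(p::TigerPOMDP, s::Bool, a::Symbol)
    if a == :listen
        return p.r_listen               # small cost for gathering information
    elseif (a == :open_left) == s
        return p.r_tiger                # opened the door hiding the tiger
    else
        return p.r_escape               # opened the safe door
    end
end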

# POMDPs.transition - Function.

transition{S,A,O}(problem::POMDP{S,A,O}, state::S, action::A, distribution::AbstractDistribution{S}=create_transition_distribution(problem))
transition{S,A}(problem::MDP{S,A}, state::S, action::A, distribution::AbstractDistribution{S}=create_transition_distribution(problem))

Returns the transition distribution from the current state-action pair.
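
A sketch for the hypothetical tiger problem; the BoolDistribution type here is an illustration, not part of the API (the rand and pdf methods it needs are sketched in the Distribution/Space Functions section below):

immutable BoolDistribution <: AbstractDistribution{Bool}
    p_true::Float64     # probability that a sample is true
end

function transition(p::TigerPOMDP, s::Bool, a::Symbol)
    if a == :listen
        return BoolDistribution(s ? 1.0 : 0.0)  # the tiger stays put
    else
        return BoolDistribution(0.5)            # the problem resets after a door is opened
    end
end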

# POMDPs.observation - Function.

observation{S,A,O}(problem::POMDP{S,A,O}, state::S, action::A, statep::S, distribution::AbstractDistribution{O}=create_observation_distribution(problem))

Returns the observation distribution for the s-a-s' tuple (state, action, and next state).

observation{S,A,O}(problem::POMDP{S,A,O}, action::A, statep::S, distribution::AbstractDistribution{O}=create_observation_distribution(problem))

Modifies distribution to the observation distribution for the a-s' tuple (action and next state) and returns it.
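
Continuing the hypothetical tiger sketch, listening reveals the tiger's side imperfectly (the 0.85 accuracy is an assumption for illustration):

function observation(p::TigerPOMDP, a::Symbol, sp::Bool)
    if a == :listen
        return BoolDistribution(sp ? 0.85 : 0.15)  # hear the correct side 85% of the time
    else
        return BoolDistribution(0.5)               # opening a door is uninformative
    end
end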

# POMDPs.isterminal - Function.

isterminal{S,A,O}(problem::POMDP{S,A,O}, state::S)
isterminal{S,A}(problem::MDP{S,A}, state::S)

Checks if state s is terminal.

# POMDPs.isterminal_obs - Function.

isterminal_obs{S,A,O}(problem::POMDP{S,A,O}, observation::O)

Checks if an observation is terminal.

# POMDPs.discount - Function.

discount(problem::POMDP)
discount(problem::MDP)

Returns the discount factor for the problem.
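
For a concrete problem this is usually a one-liner, e.g. for the hypothetical TigerPOMDP (0.95 is an assumed value):

discount(p::TigerPOMDP) = 0.95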

# POMDPs.n_states - Function.

n_states(problem::POMDP)
n_states(problem::MDP)

Returns the number of states in problem. Used for discrete models only.

# POMDPs.n_actions - Function.

n_actions(problem::POMDP)
n_actions(problem::MDP)

Returns the number of actions in problem. Used for discrete models only.

# POMDPs.n_observations - Function.

n_observations(problem::POMDP)

Returns the number of observations in problem. Used for discrete models only.

# POMDPs.state_index - Function.

state_index{S,A,O}(problem::POMDP{S,A,O}, s::S)
state_index{S,A}(problem::MDP{S,A}, s::S)

Returns the integer index of state s. Used for discrete models only.

# POMDPs.action_index - Function.

action_index{S,A,O}(problem::POMDP{S,A,O}, a::A)
action_index{S,A}(problem::MDP{S,A}, a::A)

Returns the integer index of action a. Used for discrete models only.

# POMDPs.obs_index - Function.

obs_index{S,A,O}(problem::POMDP{S,A,O}, o::O)

Returns the integer index of observation o. Used for discrete models only.
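
For a discrete problem the counting and indexing functions are typically trivial. A sketch for the hypothetical TigerPOMDP:

n_states(p::TigerPOMDP) = 2
n_actions(p::TigerPOMDP) = 3
n_observations(p::TigerPOMDP) = 2

# map false => 1 and true => 2 so that indices run from 1 to n_states
state_index(p::TigerPOMDP, s::Bool) = s ? 2 : 1
obs_index(p::TigerPOMDP, o::Bool) = o ? 2 : 1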

# POMDPs.create_state - Function.

create_state(problem::POMDP)
create_state(problem::MDP)

Create a state object (for preallocation purposes).

# POMDPs.create_action - Function.

create_action(problem::POMDP)
create_action(problem::MDP)

Creates an action object (for preallocation purposes).

# POMDPs.create_observation - Function.

create_observation(problem::POMDP)

Create an observation object (for preallocation purposes).

Distribution/Space Functions

# Base.Random.rand - Function.

rand{T}(rng::AbstractRNG, d::AbstractSpace{T}, sample::T)

Returns a random sample from space d.

rand{T}(rng::AbstractRNG, d::AbstractDistribution{T}, sample::T)

Fills sample with a random element from distribution d and returns it. The sample can be a state, action, or observation.
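
A sketch for the illustrative BoolDistribution from the transition example above. Since Bool is immutable, the preallocated sample is ignored and a fresh value is returned (note that rand must be imported from Base to be extended):

import Base.rand

function rand(rng::AbstractRNG, d::BoolDistribution, sample::Bool=false)
    return rand(rng) < d.p_true   # true with probability d.p_true
end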

# POMDPs.pdf - Function.

pdf{T}(d::AbstractDistribution{T}, x::T)

Returns the value of the probability mass (or density) function of distribution d at sample x.
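
For the illustrative BoolDistribution this is a one-liner:

pdf(d::BoolDistribution, x::Bool) = x ? d.p_true : 1.0 - d.p_true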

# POMDPs.dimensions - Function.

dimensions{T}(s::AbstractSpace{T})

Returns the number of dimensions in space s.

# POMDPs.iterator - Function.

iterator{T}(s::AbstractSpace{T})

Returns an iterable type (array or custom iterator) corresponding to space s.

iterator{T}(d::AbstractDistribution{T})

Returns an iterable type (array or custom iterator) corresponding to distribution d.

# POMDPs.initial_state_distribution - Function.

initial_state_distribution(pomdp::POMDP)

Returns an initial belief for the pomdp.

# POMDPs.create_transition_distribution - Function.

create_transition_distribution(problem::POMDP)
create_transition_distribution(problem::MDP)

Returns a transition distribution (for memory preallocation).

# POMDPs.create_observation_distribution - Function.

create_observation_distribution(problem::POMDP)
create_observation_distribution(problem::MDP)

Returns an observation distribution (for memory preallocation).

Belief Functions

# POMDPs.update - Function.

update{B,A,O}(updater::Updater, belief_old::B, action::A, obs::O,
              belief_new::B=create_belief(updater))

Returns a new instance of an updated belief given belief_old and the latest action and observation.

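In a typical control loop the belief is carried forward one step at a time. A sketch, where up, policy, and pomdp are assumed to be an Updater, a Policy, and a POMDP:

b = initialize_belief(up, initial_state_distribution(pomdp))
a = action(policy, b)
# ... the environment transitions and emits an observation o ...
b = update(up, b, a, o)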

# POMDPs.create_belief - Function.

create_belief(updater::Updater)

Creates a belief object of the type used by updater (preallocates memory).

create_belief(pomdp::POMDP)

Creates a belief object for the pomdp (preallocates memory).

# POMDPs.initialize_belief - Function.

initialize_belief{B}(updater::Updater{B}, 
                     state_distribution::AbstractDistribution,
                     new_belief::B=create_belief(updater))
initialize_belief{B}(updater::Updater{B},
                     belief::Any,
                     new_belief::B=create_belief(updater))

Returns a belief that can be updated using updater and that represents the same (or a similar) distribution as state_distribution or belief.

The conversion may be lossy. This function is also idempotent, i.e. there is a default implementation that passes the belief through when it is already the correct type: initialize_belief{B}(updater::Updater{B}, belief::B) = belief

Policy and Solver Functions

# POMDPs.create_policy - Function.

create_policy(solver::Solver, problem::POMDP)
create_policy(solver::Solver, problem::MDP)

Creates a policy object (for preallocation purposes).

# POMDPs.solve - Function.

solve(solver::Solver, problem::POMDP, policy=create_policy(solver, problem))

Solves the POMDP using the method associated with solver and returns a policy.
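
Typical usage, assuming the QMDP solver from the JuliaPOMDP organization is installed and pomdp is a concrete problem instance:

julia> using POMDPs, QMDP
julia> solver = QMDPSolver()
julia> policy = solve(solver, pomdp)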

# POMDPs.updater - Function.

updater(policy::Policy)

Returns a default Updater appropriate for the belief type that policy can use.

# POMDPs.action - Function.

action{B}(p::Policy, x::B, action)

Fills action based on the current state or belief x, given the policy, and returns it. B is a generalized information state: it can be a state in an MDP, a belief distribution in a POMDP, or any other representation needed to make a decision using the given policy.

action{B}(policy::Policy, x::B)

Returns an action for the current state or belief, given the policy.

If an MDP is being simulated, x will be a state; if a POMDP is being simulated, x will be a belief.

# POMDPs.value - Function.

value{B}(p::Policy, x::B)

Returns the utility value of the state or belief x under policy p.

Simulator

# POMDPs.Simulator - Type.

Base type for an object defining how a simulation should be carried out.

# POMDPs.simulate - Function.

simulate{S,A}(simulator::Simulator, problem::MDP{S,A}, policy::Policy, initial_state::S)

Runs a simulation using the specified policy and returns the accumulated reward.

simulate{S,A,O,B}(simulator::Simulator, problem::POMDP{S,A,O}, policy::Policy{B}, updater::Updater{B}, initial_belief::Union{B,AbstractDistribution{S}})

Runs a simulation using the specified policy and belief updater, and returns the accumulated reward.
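
A Simulator implementation ties the model, policy, and belief-update functions together. A rough sketch of the loop it might run (MySimulator, with rng and max_steps fields, is hypothetical):

function simulate{S,A,O,B}(sim::MySimulator, pomdp::POMDP{S,A,O}, policy::Policy{B},
                           up::Updater{B}, initial_belief)
    b = initialize_belief(up, initial_belief)
    s = rand(sim.rng, initial_state_distribution(pomdp), create_state(pomdp))
    r_total = 0.0
    disc = 1.0
    for t in 1:sim.max_steps
        isterminal(pomdp, s) && break
        a = action(policy, b)
        sp = rand(sim.rng, transition(pomdp, s, a), create_state(pomdp))
        o = rand(sim.rng, observation(pomdp, s, a, sp), create_observation(pomdp))
        r_total += disc*reward(pomdp, s, a, sp)   # accumulate discounted reward
        disc *= discount(pomdp)
        b = update(up, b, a, o)
        s = sp
    end
    return r_total
end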

Utility Tools

# POMDPs.add - Function.

add(solver_name::AbstractString, v::Bool=true)

Downloads and installs a registered solver with name solver_name. v is a verbose flag; when set to true, the function notifies the user if the solver is already installed. This function is not exported and must be called with the module prefix:

julia> using POMDPs
julia> POMDPs.add("MCTS")

# POMDPs.add_all - Function.

add_all()

Downloads and installs all the packages supported by JuliaPOMDP.

# POMDPs.test_all - Function.

test_all()

Tests all the JuliaPOMDP packages installed on your current machine.

# POMDPs.available - Function.

available()

Prints all the available packages in the JuliaPOMDP organization.

# POMDPs.@pomdp_func - Macro.

Provides a default function implementation that throws an error when called.

# POMDPs.strip_arg - Function.

Strips anything extra (type annotations, default values, etc.) from an argument.

For now this cannot handle keyword arguments (it will throw an error).

Constants

# POMDPs.REMOTE_URL - Constant.

URL of the remote JuliaPOMDP organization repository.

# POMDPs.SUPPORTED_PACKAGES - Constant.

Set containing string names of officially supported solvers and utility packages (e.g. MCTS, SARSOP, POMDPToolbox, etc). If you have a validated solver that supports the POMDPs.jl API, contact the developers to add your solver to this list.