Package: markovDP 0.99.0

markovDP: Infrastructure for Discrete-Time Markov Decision Processes (MDP)

The package provides the infrastructure to work with MDPs in R. The focus is on convenient formulation of MDPs, support for sparse representations (using sparse matrices, lists, and data.frames), and visualization of results. Some key components are implemented in C++ to speed up computation. The package also implements several popular solvers.
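
The following is a minimal usage sketch. The argument names for MDP(), R_(), and solve_MDP() are assumptions based on the exported constructors, not the definitive interface; see the manual for details.

library(markovDP)

# Define a tiny two-state, two-action MDP. Transition matrices are
# row-stochastic (rows = current state, columns = next state).
m <- MDP(
  states = c("s1", "s2"),
  actions = c("stay", "go"),
  transition_prob = list(
    stay = rbind(c(1, 0), c(0, 1)),  # "stay" keeps the current state
    go   = rbind(c(0, 1), c(1, 0))   # "go" swaps the two states
  ),
  # Reward rows built with R_(): +1 for taking "go" in state "s1",
  # 0 otherwise (argument names are assumptions).
  reward = rbind(
    R_(action = "go", start.state = "s1", value = 1),
    R_(value = 0)
  ),
  discount = 0.9,
  name = "toy MDP"
)

sol <- solve_MDP(m)  # solve with the default method
policy(sol)          # optimal action and state value for each state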

Authors: Michael Hahsler [aut, cph, cre]

markovDP_0.99.0.tar.gz
markovDP_0.99.0.zip (r-4.5) | markovDP_0.99.0.zip (r-4.4) | markovDP_0.99.0.zip (r-4.3)
markovDP_0.99.0.tgz (r-4.4-arm64) | markovDP_0.99.0.tgz (r-4.4-x86_64) | markovDP_0.99.0.tgz (r-4.3-arm64) | markovDP_0.99.0.tgz (r-4.3-x86_64)
markovDP_0.99.0.tar.gz (r-4.5-noble) | markovDP_0.99.0.tar.gz (r-4.4-noble)
markovDP_0.99.0.tgz (r-4.4-emscripten) | markovDP_0.99.0.tgz (r-4.3-emscripten)
markovDP.pdf | markovDP.html
markovDP/json (API)
NEWS

# Install 'markovDP' in R:
install.packages('markovDP', repos = c('https://mhahsler.r-universe.dev', 'https://cloud.r-project.org'))

Bug tracker: https://github.com/mhahsler/markovdp/issues

Uses libs:
  • C++: GNU Standard C++ Library v3
Datasets: Cliff_walking, DynaMaze, Maze, Windy_gridworld

control-theory | markov-decision-process | optimization

47 exports | 5 stars | 0.82 score | 16 dependencies

Last updated 2 months ago from: 519b951586

Exports: absorbing_states, action, actions, add_policy, bellman_operator, colors_continuous, colors_discrete, curve_multiple_directed, epoch_to_episode, greedy_action, greedy_policy, gridworld_animate, gridworld_init, gridworld_matrix, gridworld_maze_MDP, gridworld_plot, gridworld_plot_transition_graph, gridworld_rc2s, gridworld_read_maze, gridworld_s2rc, is_solved_MDP, manual_policy, MDP, normalize_MDP, plot_transition_graph, plot_value_function, policy, policy_evaluation, q_values, R_, random_policy, reachable_states, regret, remove_unreachable_states, reward, reward_matrix, round_stochastic, simulate_MDP, solve_MDP, solve_MDP_DP, solve_MDP_LP, solve_MDP_TD, start_vector, T_, transition_graph, transition_matrix, value_function

Dependencies: cli, codetools, cpp11, foreach, glue, igraph, iterators, lattice, lifecycle, lpSolve, magrittr, Matrix, pkgconfig, Rcpp, rlang, vctrs

Gridworlds in Package markovDP

Rendered from gridworlds.Rmd using knitr::rmarkdown on Jul 05 2024.

Last update: 2024-06-05
Started: 2024-05-30
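
A sketch of what this vignette covers; the helper argument names are assumptions based on the exported gridworld functions.

library(markovDP)

# Create a small maze gridworld and solve it (argument names are
# assumptions; see the gridworld help page for the actual interface).
maze <- gridworld_maze_MDP(dim = c(5, 5), start = "s(1,1)", goal = "s(5,5)")

gridworld_matrix(maze)   # the state labels laid out as a matrix
sol <- solve_MDP(maze)
gridworld_plot(sol)      # plot the maze with the solved policy overlaid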

markovDP: Discrete-Time Markov Decision Processes (MDPs)

Rendered from markovDP.Rmd using knitr::rmarkdown on Jul 05 2024.

Last update: 2024-06-05
Started: 2024-05-31
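
A sketch of the basic workflow this vignette describes, using the shipped Maze dataset (Stuart Russell's 4x3 gridworld); the simulate_MDP() arguments are assumptions.

library(markovDP)

data(Maze)
sol <- solve_MDP(Maze)   # solve with the default method
policy(sol)              # optimal action and value for each state

# Estimate the policy's performance by sampling trajectories
# (n = 100 trajectories is an assumed argument).
simulate_MDP(sol, n = 100)
reward(sol)              # expected reward from the start distribution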

Readme and manuals

Help Manual

Help page: Topics
Access to Parts of the Model Description: accessors, normalize_MDP, reward_matrix, start_vector, transition_matrix
Action Given a Policy: action, action.MDP
Available Actions in a State: actions
Add a Policy to an MDP Problem Description: add_policy
Cliff Walking Gridworld MDP: Cliff_walking, cliff_walking
Default Colors for Visualization: colors, colors_continuous, colors_discrete
The Dyna Maze: DynaMaze, dynamaze
Helper Functions for Gridworld MDPs: gridworld, gridworld_animate, gridworld_init, gridworld_matrix, gridworld_maze_MDP, gridworld_plot, gridworld_plot_transition_graph, gridworld_rc2s, gridworld_read_maze, gridworld_s2rc
Stuart Russell's 4x3 Maze Gridworld MDP: Maze, maze
Define an MDP Problem: epoch_to_episode, is_solved_MDP, MDP, R_, T_
Extract or Create a Policy: manual_policy, policy, random_policy
Policy Evaluation: bellman_operator, policy_evaluation
Q-Value Functions: greedy_action, greedy_policy, q_values
Reachable and Absorbing States: absorbing_states, reachable_and_absorbing, reachable_states, remove_unreachable_states
Calculate the Regret of a Policy: regret
Calculate the Expected Reward of a Policy: reward, reward.MDP
Round a stochastic vector or a row-stochastic matrix: round_stochastic
Simulate Trajectories in an MDP: simulate_MDP
Solve an MDP Problem: solve_MDP, solve_MDP_DP, solve_MDP_LP, solve_MDP_TD
Transition Graph: curve_multiple_directed, plot_transition_graph, transition_graph
Value Function: plot_value_function, value_function
Windy Gridworld MDP: Windy_gridworld, windy_gridworld
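
For example, the Q-value helpers listed above can be combined to derive a greedy policy from a solved model. This is a sketch; the exact return structure of q_values() is an assumption.

library(markovDP)

data(Maze)
sol <- solve_MDP(Maze)

Q <- q_values(sol)   # assumed: a matrix of Q(s, a) values
greedy_policy(Q)     # the policy that acts greedily with respect to Q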