Package: markovDP 0.99.0

markovDP: Infrastructure for Discrete-Time Markov Decision Processes (MDP)

Provides the infrastructure to work with Markov Decision Processes (MDPs) in R. The focus is on convenience in formulating MDPs, support for sparse representations (using sparse matrices, lists, and data.frames), and visualization of results. Some key components are implemented in C++ to speed up computation. Several popular solvers are implemented.

Authors: Michael Hahsler [aut, cph, cre]

markovDP.pdf | markovDP.html
markovDP/json (API)
NEWS

# Install 'markovDP' in R:
install.packages('markovDP', repos = c('https://mhahsler.r-universe.dev', 'https://cloud.r-project.org'))
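
# After installation, a quick sketch of the intended workflow: formulate a small
# MDP and solve it. This is a minimal, illustrative example; the argument
# conventions of MDP(), R_(), and solve_MDP() shown here are assumptions based
# on the exported names, so see ?MDP and ?solve_MDP for the authoritative interface.
library(markovDP)

# Toy two-state, two-action MDP (sketch; argument names are assumed).
m <- MDP(
  states = c("s1", "s2"),
  actions = c("stay", "go"),
  transition_prob = list(
    stay = "identity",           # keyword: action leaves the state unchanged
    go   = rbind(c(0, 1),        # row-stochastic matrix: "go" moves
                 c(1, 0))        #   deterministically to the other state
  ),
  reward = rbind(
    R_(value = 0),                                     # default reward of 0
    R_(action = "go", start.state = "s1", value = 1)   # reward for leaving s1
  ),
  discount = 0.9,
  name = "Toy MDP"
)

sol <- solve_MDP(m)   # default solver
policy(sol)           # optimal action for each state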

Bug tracker: https://github.com/mhahsler/markovdp/issues

Uses libs:
  • C++: GNU Standard C++ Library v3
Datasets: Cliff_walking, DynaMaze, Maze, Windy_gridworld

Keywords: control-theory, markov-decision-process, optimization

4.74 score, 5 stars, 1 script, 64 exports, 22 dependencies

Last updated 7 hours ago from: 84d9c0c813. Checks: OK: 6, ERROR: 3. Indexed: yes.

Target               Result   Date
Doc / Vignettes      OK       Oct 17 2024
R-4.5-win-x86_64     OK       Oct 17 2024
R-4.5-linux-x86_64   OK       Oct 17 2024
R-4.4-win-x86_64     OK       Oct 17 2024
R-4.4-mac-x86_64     OK       Oct 17 2024
R-4.4-mac-aarch64    OK       Oct 17 2024
R-4.3-win-x86_64     ERROR    Oct 17 2024
R-4.3-mac-x86_64     ERROR    Oct 17 2024
R-4.3-mac-aarch64    ERROR    Oct 17 2024

Exports: A, absorbing_states, act, action, action_discrepancy, add_policy, available_actions, bellman_operator, bellman_update, colors_continuous, colors_discrete, curve_multiple_directed, epoch_to_episode, find_reachable_states, greedy_action, greedy_policy, gw_animate, gw_init, gw_matrix, gw_maze_MDP, gw_path, gw_plot, gw_plot_transition_graph, gw_random_maze, gw_rc2s, gw_read_maze, gw_s2rc, gw_transition_prob, gw_transition_prob_end_state, induced_transition_matrix, is_converged_MDP, is_solved_MDP, manual_policy, MDP, normalize_MDP, plot_transition_graph, plot_value_function, policy, policy_evaluation, q_values, R_, random_policy, regret, remove_unreachable_states, reward, reward_matrix, round_stochastic, S, sample_MDP, solve_MDP, solve_MDP_DP, solve_MDP_LP, solve_MDP_MC, solve_MDP_sampling, solve_MDP_TD, start_vector, T_, transition_graph, transition_matrix, unreachable_states, V_random, V_zero, value_error, value_function

Dependencies: cli, codetools, cpp11, crayon, fastmap, foreach, glue, hms, igraph, iterators, lattice, lifecycle, lpSolve, magrittr, Matrix, pkgconfig, prettyunits, progress, R6, Rcpp, rlang, vctrs

Gridworlds in Package markovDP

Rendered from gridworlds.Rmd using knitr::rmarkdown on Oct 17 2024.

Last update: 2024-10-10
Started: 2024-05-30
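
The gw_* helpers can be tried directly on the gridworld models shipped with the package. A hedged sketch (these functions are exported, but the exact arguments and plotting behavior are assumptions; the vignette is authoritative):

# Explore a shipped gridworld (sketch; see the gridworlds vignette for details).
data(Maze)             # the 4x3 maze gridworld included in the package
gw_matrix(Maze)        # matrix view of the grid layout
gw_plot(Maze)          # draw the gridworld

sol <- solve_MDP(Maze)
gw_plot(sol)           # drawn again; a solved model should display its policy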

markovDP: Discrete-Time Markov Decision Processes (MDPs)

Rendered from markovDP.Rmd using knitr::rmarkdown on Oct 17 2024.

Last update: 2024-10-17
Started: 2024-05-31
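
A matching end-to-end sketch for this vignette's topic, again with assumed conventions (solve_MDP_DP(), policy(), and reward() are exported; check ?solve_MDP for the supported methods and arguments):

data(Maze)
sol <- solve_MDP_DP(Maze)   # dynamic-programming solver
policy(sol)                 # tabular policy: one action per state
reward(sol)                 # expected reward of the computed policy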

Readme and manuals

Help Manual

Help page: Topics
Access to Parts of the Model Description: accessors, normalize_MDP, reward_matrix, start_vector, transition_matrix
Perform an Action: act
Choose an Action Given a Policy: action, action.MDP
Available Actions in a State: available_actions
Bellman Update and Bellman Operator: bellman_operator, bellman_update
Cliff Walking Gridworld MDP: Cliff_walking, cliff_walking
Default Colors for Visualization: colors, colors_continuous, colors_discrete
The Dyna Maze: DynaMaze, dynamaze
Find Reachable State Space from a Transition Model Function: find_reachable_states
Helper Functions for Gridworld MDPs: gridworld, gw, gw_animate, gw_init, gw_matrix, gw_maze_MDP, gw_path, gw_plot, gw_plot_transition_graph, gw_random_maze, gw_rc2s, gw_read_maze, gw_s2rc, gw_transition_prob, gw_transition_prob_end_state
Stuart Russell's 4x3 Maze Gridworld MDP: Maze, maze
Define an MDP Problem: A, epoch_to_episode, is_converged_MDP, is_solved_MDP, MDP, R_, S, T_
Extract, Create, or Add a Policy to a Model: add_policy, induced_transition_matrix, manual_policy, policy, random_policy
Policy Evaluation: policy_evaluation
Q-Values and Greedy Policies: greedy_action, greedy_policy, q_values
Regret of a Policy and Related Measures: action_discrepancy, regret, value_error
Calculate the Expected Reward of a Policy: reward, reward.MDP
Round a Stochastic Vector or a Row-Stochastic Matrix: round_stochastic
Sample Trajectories from an MDP: sample_MDP
Solve an MDP Problem: solve_MDP, solve_MDP_DP, solve_MDP_LP, solve_MDP_MC, solve_MDP_sampling, solve_MDP_TD
Transition Graph: curve_multiple_directed, plot_transition_graph, transition_graph
Unreachable and Absorbing States: absorbing_states, remove_unreachable_states, unreachable_and_absorbing, unreachable_states
Value Function: plot_value_function, value_function, V_random, V_zero
Windy Gridworld MDP: Windy_gridworld, windy_gridworld
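
To show how several of these help topics connect, here is a sketch under assumed signatures (q_values(), greedy_policy(), random_policy(), add_policy(), and regret() are all exported, but the argument conventions below are assumptions; consult the manual):

data(Maze)
sol <- solve_MDP(Maze)

Q   <- q_values(sol)      # state-action values implied by the solution
pol <- greedy_policy(Q)   # greedy policy with respect to these Q-values

# How much does a uniformly random policy give up relative to the solution?
rnd <- add_policy(Maze, random_policy(Maze))
regret(rnd, benchmark = sol)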