@pierrelux
Created October 10, 2019 19:53
forest.py
import numpy as onp


def forest_management(forest_stages=3, r1=4, r2=2, p=0.1):
    """Forest management example from the MDPtoolbox package.

    Chadès, I., Chapron, G., Cros, M.‐J., Garcia, F. and Sabbadin, R. 2014.
    MDPtoolbox: a multi‐platform toolbox to solve stochastic dynamic
    programming problems. Ecography 37: 916–920 (ver. 0).

    Args:
        forest_stages (int, optional): Number of possible states of the
            forest, from young to old. Defaults to 3.
        r1 (int, optional): Payoff for preserving an old forest. Defaults to 4.
        r2 (int, optional): Payoff for deforesting an old forest. Defaults to 2.
        p (float, optional): Probability of wildfire. Defaults to 0.1.

    Returns:
        tuple: (transition, reward, discount)
    """
    nactions = 2  # action 0: wait, action 1: cut

    # Transition tensor of shape (actions, states, next states).
    transition = onp.zeros((nactions, forest_stages, forest_stages))

    # Wait: the forest grows one stage with probability 1-p and burns back
    # to the youngest stage with probability p; an old forest stays old
    # unless it burns.
    onp.fill_diagonal(transition[0, :, 1:], 1 - p)
    transition[0, :, 0] = p
    transition[0, -1, -1] = 1 - p

    # Cut: the forest is deforested and returns to the youngest stage.
    transition[1, :, 0] = 1

    # Reward matrix of shape (states, actions).
    reward = onp.zeros((forest_stages, nactions))
    reward[-1, 0] = r1  # waiting pays only once the forest is old
    reward[:, 1] = 1    # cutting yields the value of the wood
    reward[0, 1] = 0    # ...except in the youngest stage, where there is none
    reward[-1, 1] = r2  # cutting an old forest pays r2

    return transition, reward, 0.9
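

The returned triple can be fed to any standard dynamic programming routine. As a quick sanity check, here is a minimal value-iteration sketch; it is not part of the original gist, and the helper value_iteration and its tol parameter are illustrative. It assumes the (actions, states, states) transition and (states, actions) reward layout built above.

def value_iteration(transition, reward, discount, tol=1e-8):
    """Illustrative value iteration for the (A, S, S) / (S, A) layout above."""
    nstates = transition.shape[-1]
    v = onp.zeros(nstates)
    while True:
        # q[s, a] = r(s, a) + gamma * sum_s' P(s' | s, a) v(s')
        q = reward + discount * onp.einsum('ast,t->sa', transition, v)
        v_new = q.max(axis=1)
        if onp.max(onp.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)
        v = v_new


transition, reward, discount = forest_management()
values, policy = value_iteration(transition, reward, discount)
print(policy)  # with the defaults, waiting is optimal everywhere: [0 0 0]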