import numpy as onp


def forest_management(forest_stages=3, r1=4, r2=2, p=0.1):
    """Forest management example from the MDPToolbox package.

    Chadès, I., Chapron, G., Cros, M.-J., Garcia, F. and Sabbadin, R. 2014.
    MDPtoolbox: a multi-platform toolbox to solve stochastic dynamic
    programming problems. Ecography 37: 916–920 (ver. 0).

    Args:
        forest_stages (int, optional): Number of possible states of the
            forest, from young to old. Defaults to 3.
        r1 (int, optional): Payoff for preserving an old forest. Defaults to 4.
        r2 (int, optional): Payoff for deforesting an old forest. Defaults to 2.
        p (float, optional): Probability of wildfire. Defaults to 0.1.

    Returns:
        tuple: (transition, reward, discount)
    """
    nactions = 2

    # Action 0 (wait): the forest grows one stage with probability 1 - p and
    # burns down to the youngest stage with probability p; the oldest stage
    # stays old if it does not burn.
    transition = onp.zeros((nactions, forest_stages, forest_stages))
    onp.fill_diagonal(transition[0, :, 1:], 1 - p)
    transition[0, :, 0] = p
    transition[0, -1, -1] = 1 - p

    # Action 1 (cut): the forest is reset to the youngest stage.
    transition[1, :, 0] = 1

    # Waiting pays r1 only in the oldest stage; cutting pays 1 everywhere
    # except the youngest stage, and r2 in the oldest stage.
    reward = onp.zeros((forest_stages, nactions))
    reward[-1, 0] = r1
    reward[:, 1] = 1
    reward[0, 1] = 0
    reward[-1, 1] = r2

    return transition, reward, 0.9
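
A minimal usage sketch (not part of the original gist): value iteration on the returned model with plain numpy, assuming the conventions above, i.e. transition[action, state, next_state], reward[state, action], and a discount of 0.9. The iteration cap and tolerance are arbitrary choices for illustration.

import numpy as onp

transition, reward, discount = forest_management()
nactions, nstates, _ = transition.shape

# Value iteration: V(s) <- max_a [ r(s, a) + gamma * sum_s' P(s' | s, a) V(s') ].
values = onp.zeros(nstates)
for _ in range(1000):
    q_values = reward + discount * onp.einsum('ast,t->sa', transition, values)
    new_values = q_values.max(axis=1)
    if onp.max(onp.abs(new_values - values)) < 1e-8:
        values = new_values
        break
    values = new_values

policy = q_values.argmax(axis=1)  # action 0 = wait/preserve, action 1 = cut
print('optimal values:', values)
print('greedy policy:', policy)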