# This will use osd.5 as an example
# ceph commands are expected to be run in the rook-toolbox pod
1) disk fails
2) remove the failed disk from the node
3) mark the osd out: `ceph osd out osd.5`
4) remove the osd from the crush map: `ceph osd crush remove osd.5`
5) delete its auth caps: `ceph auth del osd.5`
6) remove the osd: `ceph osd rm osd.5`
7) delete the osd deployment: `kubectl delete deployment -n rook-ceph rook-ceph-osd-id-5`
8) delete the osd data dir on the node: `rm -rf /var/lib/rook/osd5`
9) edit the osd configmap (nodename is the node's hostname): `kubectl edit configmap -n rook-ceph rook-ceph-osd-nodename-config`
9a) edit out the config section pertaining to your osd id and underlying device (see the configmap note after this list)
10) add the new disk and verify the node sees it (see the checks after this list)
11) restart the rook operator by deleting the rook-operator pod (see the command after this list)
12) the osd prepare pods run
13) a new rook-ceph-osd-id-5 deployment will be created
14) check the health of your cluster: `ceph -s; ceph osd tree`
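
For step 9a, the data layout of the per-node osd configmap varies across Rook versions, so rather than trust any example, dump yours first and remove whatever entry references osd id 5 and its device:
  # inspect the configmap before editing it
  kubectl get configmap -n rook-ceph rook-ceph-osd-nodename-config -o yaml
  # then, inside `kubectl edit`, delete the entry that mentions osd 5 and its device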
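
For step 10, one way to verify the node sees the replacement disk (the device name sdb below is hypothetical; run these on the node itself):
  # the new device should appear with no partitions or filesystem on it
  lsblk
  # the kernel log should show the newly attached disk, e.g. sdb
  dmesg | tail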
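
For step 11, the namespace and label below match a default Rook 0.8 install and may differ in yours:
  # the operator's deployment recreates the pod automatically after deletion
  kubectl -n rook-ceph-system delete pod -l app=rook-ceph-operator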
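
For steps 12 and 13, progress can be watched from kubectl:
  # watch the osd prepare pod(s) run to completion
  kubectl -n rook-ceph get pods -w
  # the rook-ceph-osd-id-5 deployment should reappear once prepare finishes
  kubectl -n rook-ceph get deployment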