NOTE: This guide is ONLY for devs who don't want to edit their
yarn.lock file by hand. If you don't care about that, please carry on.
So you've pulled the latest master:

```sh
git checkout master
git pull
```
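From here, one common way to finish without touching the lockfile by hand is to merge master into your branch and let Yarn resolve the lockfile conflict itself. This is a sketch of an assumed continuation (the branch name is a placeholder, and it relies on Yarn >= 1.0 auto-merging conflicted lockfiles):

```sh
# Back on your feature branch, merge in the updated master.
git checkout my-feature-branch  # placeholder branch name
git merge master                # yarn.lock will likely conflict

# Yarn >= 1.0 resolves yarn.lock conflicts automatically on install.
yarn install
git add yarn.lock
git commit
```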
```yaml
Resources:
  PasswordGeneratorLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /aws/lambda/${AWS::StackName}-PasswordGenerator
      RetentionInDays: 1
  PasswordGeneratorRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
```
Run this to back up all of your k8s cluster data. It will be saved in a folder called bkp. To restore the cluster, you can run kubectl apply -f bkp.
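A minimal sketch of such a backup script. The exact set of resource types dumped here is an assumption; extend the list to match your cluster:

```sh
#!/usr/bin/env bash
# Dump cluster state to ./bkp, one YAML file per resource type.
mkdir -p bkp

# Namespaced resources (assumed list; add more types as needed)
for r in deployments services configmaps secrets persistentvolumeclaims; do
  kubectl get "$r" --all-namespaces -o yaml > "bkp/$r.yaml"
done

# Cluster-scoped resources, e.g. dynamically provisioned PVs
kubectl get persistentvolumes -o yaml > bkp/persistentvolumes.yaml
```

Each file holds a kubectl List object, which is why kubectl apply -f bkp can replay the whole folder in one go.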
Please note: this recovers all resources correctly, including dynamically provisioned PVs. However, it will not recover ELB endpoints; you will need to update any DNS entries manually and remove the old ELBs by hand.
Please note: this has not been tested with all resource types. Supported resource types include:
It's easy enough to set up your machine as a swarm manager for local development on a single-node swarm. But what about setting up multiple local nodes with Docker Machine, in case you want to simulate a multi-node environment (say, to test HA features)?
The following script demonstrates a simple way to specify the number of manager and worker nodes you want and then bootstrap a swarm.
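A sketch of such a script; the VirtualBox driver, the node naming, and the default counts are assumptions:

```sh
#!/usr/bin/env bash
# Usage: ./bootstrap-swarm.sh [managers] [workers]
MANAGERS=${1:-1}
WORKERS=${2:-2}

# Create the VMs (VirtualBox driver assumed)
for i in $(seq 1 "$MANAGERS"); do
  docker-machine create --driver virtualbox "manager$i"
done
for i in $(seq 1 "$WORKERS"); do
  docker-machine create --driver virtualbox "worker$i"
done

# Initialise the swarm on the first manager
MANAGER_IP=$(docker-machine ip manager1)
docker-machine ssh manager1 docker swarm init --advertise-addr "$MANAGER_IP"

# Fetch the join tokens from the first manager
MANAGER_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q manager)
WORKER_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q worker)

# Join the remaining managers and all workers
for i in $(seq 2 "$MANAGERS"); do
  docker-machine ssh "manager$i" docker swarm join --token "$MANAGER_TOKEN" "$MANAGER_IP:2377"
done
for i in $(seq 1 "$WORKERS"); do
  docker-machine ssh "worker$i" docker swarm join --token "$WORKER_TOKEN" "$MANAGER_IP:2377"
done
```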
You can also check out the sample as a GitHub project here.
http://jsfiddle.net/GuQaV/show/
```js
// Generate a Mongo-style ObjectId string: the current Unix timestamp in
// hex (8 chars) followed by 16 random hex characters.
var mongoObjectId = function () {
    var timestamp = (new Date().getTime() / 1000 | 0).toString(16);
    return timestamp + 'xxxxxxxxxxxxxxxx'.replace(/[x]/g, function() {
        return (Math.random() * 16 | 0).toString(16);
    }).toLowerCase();
};
```