diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://XXXXX`

where XXXXX is the size of the RAM disk in memory blocks.
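A quick sketch of computing the block count, assuming hdiutil's ram:// unit is 512-byte blocks (so blocks = megabytes * 2048):

```shell
# Assumption: ram:// sizes are in 512-byte blocks, so blocks = MB * 2048.
SIZE_MB=512
BLOCKS=$((SIZE_MB * 2048))
echo "$BLOCKS"   # prints 1048576 (the block count for a 512 MB disk)
# Then substitute the result for XXXXX:
# diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://$BLOCKS`
```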
Notes:
My Elasticsearch cheat sheet, with example usage via the REST API (still a work in progress).
# Full configuration of the mongod and mongos files, based on MongoDB 3.6.
# Includes links to the official docs for each section of the config.
#
# Not covered in this document:
#   - Options deprecated as of 3.6;
#   - Options only available to mmapv1;
#   - Windows Service Options;
#
# This document is divided as follows:
#   - Options for both mongod and mongos;
##################### ElasticSearch Configuration Example #####################
# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#
# ElasticSearch comes with reasonable defaults for most settings,
In your command line, run the following commands:
brew doctor
brew update

public static void postNewComment(Context context, final UserAccount userAccount, final String comment, final int blogId, final int postId) {
    mPostCommentResponse.requestStarted();
    RequestQueue queue = Volley.newRequestQueue(context);
    StringRequest sr = new StringRequest(Request.Method.POST, "http://api.someservice.com/post/comment", new Response.Listener<String>() {
        @Override
        public void onResponse(String response) {
            mPostCommentResponse.requestCompleted();
        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            // Completion of the truncated snippet; the listener method name is assumed.
            mPostCommentResponse.requestEndedWithError(error);
        }
    });
    queue.add(sr);
}
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg._
import org.apache.spark.{SparkConf, SparkContext}

// To use the latest sparse SVD implementation, please build your spark-assembly after this
// change: https://github.com/apache/spark/pull/1378
// Input: a tsv with 3 fields: rowIndex (Long), columnIndex (Long), weight (Double); indices start at 0.
// Assumes the number of rows is larger than the number of columns, and the number of columns is
// smaller than Int.MaxValue.
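The 3-field input format described above can be sketched with a toy file (the file name and values here are illustrative, not from the original job):

```shell
# Toy input matching the described format: rowIndex<TAB>columnIndex<TAB>weight,
# 0-based indices, more rows (4) than columns (2) as the comments assume.
printf '0\t0\t1.5\n1\t0\t2.0\n2\t1\t0.5\n3\t1\t4.0\n' > /tmp/matrix.tsv
awk -F'\t' '{ print NF }' /tmp/matrix.tsv | sort -u   # prints 3 (every line has 3 fields)
```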
#!/bin/bash
# From http://tech.serbinn.net/2010/shell-script-to-create-ramdisk-on-mac-os-x/
#
ARGS=2
E_BADARGS=99
if [ $# -ne $ARGS ]  # correct number of arguments to the script;
then
  exit $E_BADARGS
fi
/*
  This example uses Scala. Please see the MLlib documentation for a Java example.
  Try running this code in the Spark shell. It may produce different topics each time (since LDA
  includes some randomization), but it should give topics similar to those listed above.

  This example is paired with a blog post on LDA in Spark: http://databricks.com/blog
  Spark: http://spark.apache.org/
*/

import scala.collection.mutable