### Keybase proof
I hereby claim:
- I am dat-vikash on github.
- I am datvikash (https://keybase.io/datvikash) on keybase.
- I have a public key ASA0rfvSJCHjPeXlJSJlZ-cTwiIiO_joi5SnMZW-O1fU7go
To claim this, I am signing this object:
```python
"""
Helps make dad life easier by reserving times before pre-k.
Features:
- allows filtering by time, day of week, number of players, and course
- handles reservations and ensures no overbooking
- keeps track of defaults and state
Requires python3 and selenium:
+ https://sites.google.com/chromium.org/driver/
"""
```
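The filtering features listed above can be sketched in plain Python. Everything here is hypothetical illustration — the gist's actual data model and function names are not shown in this excerpt:

```python
from datetime import time

def filter_slots(slots, earliest, latest, days, players, courses):
    """Keep only reservation slots inside the desired window.

    `slots` is a list of dicts with hypothetical keys:
    time (datetime.time), day ("Mon".."Sun"), players (int), course (str).
    """
    return [
        s for s in slots
        if earliest <= s["time"] <= latest
        and s["day"] in days
        and s["players"] >= players
        and s["course"] in courses
    ]

# Example: only early Saturday slots for four players on one course survive.
slots = [
    {"time": time(6, 30), "day": "Sat", "players": 4, "course": "North"},
    {"time": time(9, 0), "day": "Sat", "players": 2, "course": "North"},
    {"time": time(7, 0), "day": "Sun", "players": 4, "course": "South"},
]
picks = filter_slots(slots, time(6, 0), time(8, 0), {"Sat"}, 4, {"North"})
```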
```sql
-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;

-- show running queries (9.2)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
```
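The two variants differ only in the pid/query column names (`procpid`/`current_query` before 9.2, `pid`/`query` from 9.2 on). A small helper — my own sketch, not part of the original gist — can pick the right SQL text by server version:

```python
def running_queries_sql(version):
    """Return the 'show running queries' SQL for a (major, minor) pg version."""
    pre_92 = version < (9, 2)
    pid = "procpid" if pre_92 else "pid"
    col = "current_query" if pre_92 else "query"
    return (
        f"SELECT {pid}, age(clock_timestamp(), query_start), usename, {col}\n"
        f"FROM pg_stat_activity\n"
        f"WHERE {col} != '<IDLE>' AND {col} NOT ILIKE '%pg_stat_activity%'\n"
        f"ORDER BY query_start desc;"
    )
```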
```
Puts on glasses:

(•_•)
( •_•)>⌐■-■
(⌐■_■)

Takes off glasses ("mother of god..."):

(⌐■_■)
( •_•)>⌐■-■
(•_•)
```
```python
import pandas as pd

def _map_to_pandas(rdds):
    """ Needs to be here due to pickling issues """
    return [pd.DataFrame(list(rdds))]

def toPandas(df, n_partitions=None):
    """
    Returns the contents of `df` as a local `pandas.DataFrame` in a speedy fashion. The DataFrame is
    repartitioned if `n_partitions` is passed.
    """
    if n_partitions is not None:
        df = df.repartition(n_partitions)
    df_pand = df.rdd.mapPartitions(_map_to_pandas).collect()
    df_pand = pd.concat(df_pand)
    df_pand.columns = df.columns
    return df_pand
```
```
Latency Comparison Numbers
--------------------------
L1 cache reference                         0.5 ns
Branch mispredict                          5   ns
L2 cache reference                         7   ns                 14x L1 cache
Mutex lock/unlock                         25   ns
Main memory reference                    100   ns                 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy           3,000   ns       3 us
Send 1K bytes over 1 Gbps network     10,000   ns      10 us
Read 4K randomly from SSD*           150,000   ns     150 us      ~1GB/sec SSD
```
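The multiplier annotations follow directly from the raw numbers; a quick arithmetic check (variable names are mine, not from the gist):

```python
# Latency figures from the table above, in nanoseconds.
l1_cache = 0.5
l2_cache = 7
main_memory = 100
compress_1k = 3_000
send_1k_net = 10_000

# The annotations are simple ratios of these numbers:
ratio_l2_l1 = l2_cache / l1_cache        # 14.0  -> "14x L1 cache"
ratio_mem_l1 = main_memory / l1_cache    # 200.0 -> "200x L1 cache"
# Note: main_memory / l2_cache is ~14.3, so the table's "20x L2 cache"
# is a loose rounding carried over from the classic version of this list.
compress_us = compress_1k / 1_000        # 3 us
send_us = send_1k_net / 1_000            # 10 us
```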
```scala
trait EventStoreServiceWithLogstashSink extends EventStoreService
{
  this: EventStore =>

  // set config
  val config = EventStoreCollectorConfig(logstashEndpoint =
    Some(Application.CONFIGS.get("logstash-endpoint").get.convertTo[String]))

  case class MyCustomEvent(fact: Any,
                           timestamp: Option[Long] = None,
                           apiVersion: Option[String] = None,
                           originHost: Option[String] = None,
```
```scala
class WSClientVisitorSpec extends TestKit(_system = Akka.system(FakeApplication()))
  with WordSpecLike with Matchers with ImplicitSender
{
  // instantiate test constants
  val actorRef = TestActorRef(new WSClientVisitor("TEST") with MockWebSocketChannel, name = "test")

  // get a test reference to our actor
  val actor = actorRef.underlyingActor

  "Web Socket Client For Visitor" should {
    "register a new socket" in new WithApplication(app = FakeApplication(
      additionalConfiguration = Map("akka.event-handlers" -> List("akka.testkit.TestEventListener")),
      withGlobal = Some(new GlobalSettings() {
```
```scala
trait MockWebSocketChannel extends WebSocketChannel
{
  var mockWebSocketChannelQueue: List[JsValue] = List.empty

  override def push(data: JsValue): Unit =
    mockWebSocketChannelQueue = mockWebSocketChannelQueue :+ data
}
```
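The mock above is a record-instead-of-send test double: `push` appends to an in-memory queue rather than writing to a socket, so tests can assert on what was sent. The same pattern in Python, purely for illustration (class names are mine, not from the gist):

```python
class WebSocketChannel:
    """Minimal stand-in for the real channel interface."""
    def push(self, data):
        raise NotImplementedError

class MockWebSocketChannel(WebSocketChannel):
    """Records pushed payloads instead of writing to a socket."""
    def __init__(self):
        self.queue = []

    def push(self, data):
        self.queue.append(data)

# A test can now inspect exactly what the client tried to send.
channel = MockWebSocketChannel()
channel.push({"event": "register"})
```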
```scala
object Application extends Controller {

  /* Each WebSocket connection's state is managed by an Agent actor.
     A new actor is created for each WebSocket and is killed when the socket is closed.
     For each Play actor agent, a unique WebSocket Client Worker actor is created to
     process WS events via the WSManager actor. */
  def websocketManager(deviceId: String) = WebSocket.async[JsValue]
  {
    request =>
      // instantiate an actor to hold the web socket
```
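The lifecycle described in the comment — one actor per WebSocket, created on connect and killed on close — can be sketched outside Akka. Everything below is a hypothetical illustration of that lifecycle, not the gist's actual API:

```python
class SocketActor:
    """Stand-in for the per-connection agent actor."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.alive = True

    def stop(self):
        self.alive = False

class WSManager:
    """Creates one actor per WebSocket and kills it when the socket closes."""
    def __init__(self):
        self._actors = {}

    def on_connect(self, device_id):
        actor = SocketActor(device_id)
        self._actors[device_id] = actor
        return actor

    def on_close(self, device_id):
        actor = self._actors.pop(device_id, None)
        if actor is not None:
            actor.stop()

# One actor exists per open socket; closing the socket removes and stops it.
mgr = WSManager()
a = mgr.on_connect("device-1")
mgr.on_close("device-1")
```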