Here's the canonical TOML example from the TOML README, and a YAML version of the same. Which looks nicer?
title = "TOML Example"
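Only the first line of the example survives in this excerpt. As a short illustration of the comparison (a hand-written fragment in the spirit of the README example, not the full canonical text), here is the same small piece of configuration in TOML:

title = "TOML Example"

[owner]
name = "Tom Preston-Werner"

[database]
ports = [ 8001, 8001, 8002 ]
enabled = true

and the equivalent in YAML:

title: TOML Example
owner:
  name: Tom Preston-Werner
database:
  ports: [8001, 8001, 8002]
  enabled: true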
PSI = function(X, Y, saveplot, showplot, ratings, psinames) {
# ---------------------------------------------------------------
# PURPOSE:
#   Population Stability Index (PSI)
#   Basel Committee on Banking Supervision
#   Working Paper No. 14
#   Studies on the Validation of Internal Rating Systems
# ---------------------------------------------------------------
# INPUTS:
#   X  Array of frequencies (%)
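The R function is cut off after its header comment. As a minimal sketch of the underlying calculation, here is the standard PSI formula in Python, on the assumption that the full function computes essentially this plus the plotting and labelling implied by its arguments:

import numpy as np

def psi(expected, actual, eps=1e-6):
    # Standard PSI formula: sum over bins of (a - e) * ln(a / e),
    # where e and a are the expected and observed bin proportions.
    expected = np.asarray(expected, dtype=float) + eps  # eps guards against log(0)
    actual = np.asarray(actual, dtype=float) + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Example: a small shift between two rating distributions.
print(psi([0.30, 0.40, 0.30], [0.25, 0.45, 0.30]))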
# =============
# Introduction
# =============
# I've been doing some data mining lately, especially looking into `Gradient
# Boosting Trees <http://en.wikipedia.org/wiki/Gradient_boosting>`_ since it is
# claimed to be one of the techniques with the best out-of-the-box performance.
# To get a better understanding of the technique, I've reproduced the example of
# section *10.14.1 California Housing* in the book `The Elements of Statistical
# Learning <http://www-stat.stanford.edu/~tibs/ElemStatLearn/>`_.
# Each point of this dataset represents the house value of a property with some
# attributes of that house. You can get the data and the description of those
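The excerpt stops before the code. A minimal sketch of the kind of fit being described, using scikit-learn's California Housing loader and GradientBoostingRegressor; the parameters here are my assumption, not the author's original settings:

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Hypothetical settings in the spirit of the ESL example, not tuned.
model = GradientBoostingRegressor(n_estimators=500, max_depth=4,
                                  learning_rate=0.1, loss="huber")
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))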
import numpy as np

def imhist(im):
    # calculates the normalized histogram of an 8-bit grayscale image
    m, n = im.shape
    h = [0.0] * 256
    for i in range(m):
        for j in range(n):
            h[im[i, j]] += 1
    return np.array(h) / (m * n)
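A quick usage check (my example, not part of the original snippet); for uint8 images, np.bincount gives the same result without the explicit loops:

im = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
h = imhist(im)
assert np.allclose(h, np.bincount(im.ravel(), minlength=256) / im.size)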
import os
import struct
import numpy as np

"""
Loosely inspired by http://abel.ee.ucla.edu/cvxopt/_downloads/mnist.py
which is GPL licensed.
"""

def read(dataset="training", path="."):
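The body of read() is not included in the excerpt. As a rough sketch of what an MNIST IDX reader typically does (a hypothetical helper, not the author's code, reusing the struct and numpy imports above), based on the big-endian headers of the IDX format:

def read_idx_sketch(images_path, labels_path):
    # Each IDX file starts with a big-endian header: a magic number, the item
    # count, and (for images) the row and column sizes, followed by raw bytes.
    with open(labels_path, "rb") as f:
        magic, num = struct.unpack(">II", f.read(8))
        labels = np.fromfile(f, dtype=np.uint8)
    with open(images_path, "rb") as f:
        magic, num, rows, cols = struct.unpack(">IIII", f.read(16))
        images = np.fromfile(f, dtype=np.uint8).reshape(num, rows, cols)
    return images, labels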
"""
Partial Correlation in Python
Based on Fabian Pedregosa-Izquierdo's implementation at:
https://gist.github.com/fabianp/9396204419c7b638d38f
This version of the algorithm calculates the partial correlation coefficient
controlling for Z. I use row vectors here, for whatever reason.
"""
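The implementation itself is cut off here. A sketch of the usual residual-based approach (regress x and y on Z, then correlate the residuals), which I assume is what this version computes; the function name and signature are mine:

import numpy as np

def partial_corr_sketch(x, y, Z):
    # Regress x and y on Z (with an intercept) and return the Pearson
    # correlation of the two residual vectors.
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    Z = np.column_stack([np.ones_like(x), np.asarray(Z, dtype=float)])
    beta_x, *_ = np.linalg.lstsq(Z, x, rcond=None)
    beta_y, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(np.corrcoef(x - Z @ beta_x, y - Z @ beta_y)[0, 1])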
(use '[clojure.core.match :only [match]])

(defn evaluate [env [sym x y]]
  (match [sym]
    ['Number]   x
    ['Add]      (+ (evaluate env x) (evaluate env y))
    ['Multiply] (* (evaluate env x) (evaluate env y))
    ['Variable] (env x)))

(def environment {"a" 3, "b" 4, "c" 5})
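A possible call, assuming expressions are the nested vectors the evaluator destructures (my example, not part of the original snippet):

;; (+ a (* 2 b)) with a = 3 and b = 4
(evaluate environment '[Add [Variable "a"] [Multiply [Number 2] [Variable "b"]]])
;; => 11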
;; Why is Lisp so great? or Why so many parentheses?
;; The funny thing about Lisp is that everybody asks why it has so many parentheses. Quite a few friends of mine who have studied Lisp in college don’t like it that much. I couldn’t really understand why, until I realized they usually take a class that uses the book Concepts of Programming Languages by Robert W. Sebesta as a textbook. I’m in no position to review this book because I haven’t read it. But from what I’ve skimmed, Lisp is not very well represented in it, to put it very nicely. He describes Lisp only as a functional programming language, tells a little bit about cons cells, and that’s pretty much it! No object orientation in Lisp, no syntactic abstraction, no meta-programming, and so on. My feeling is that if I didn’t know Lisp and read this book, I wouldn’t be very impressed by Lisp.
;; So why is Lisp so great, and why so many parentheses? These two different questions have the same answer: because Lisp has syntactic abstraction through t
# 0 is too far from ` ;)
set -g base-index 1

# Automatically set window title
set-window-option -g automatic-rename on
set-option -g set-titles on

#set -g default-terminal screen-256color
set -g status-keys vi
set -g history-limit 10000