@jhclark
Forked from jboner/latency.txt
Created May 31, 2012 19:58
Show Gist options
  • Select an option

  • Save jhclark/2845836 to your computer and use it in GitHub Desktop.

Select an option

Save jhclark/2845836 to your computer and use it in GitHub Desktop.

Revisions

  1. jhclark revised this gist May 31, 2012. 1 changed file with 0 additions and 1 deletion.
    1 change: 0 additions & 1 deletion in latency.txt
    @@ -5,7 +5,6 @@ Mutex lock/unlock 25 ns
     Main memory reference 100 ns 20x L2 cache, 200x L1 cache
     Compress 1K bytes with Zippy 3,000 ns
     Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
    -Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
     Read 1 MB sequentially from memory 250,000 ns 0.25 ms
     Round trip within same datacenter 500,000 ns 0.5 ms
     Read 1 MB sequentially from SSD 1,000,000 ns 1 ms 4X memory
  2. jhclark revised this gist May 31, 2012. 1 changed file with 3 additions and 1 deletion.
    4 changes: 3 additions & 1 deletion in latency.txt
    @@ -14,8 +14,10 @@ Read 1 MB sequentially from disk 20,000,000 ns 20 ms 80x memory, 20X SSD
     Send packet CA->Netherlands->CA 150,000,000 ns 150 ms

     By Jeff Dean (http://research.google.com/people/jeff/)
    -With some updates from Brendan
    +With some updates from Brendan: http://brenocon.com/dean_perf.html
     Comparisons from https://gist.github.com/2844130

    +Assuming ~1GB/sec SSD
    +
     1 ns = 10^-9 seconds
     1 ms = 10^-3 seconds
  3. jhclark revised this gist May 31, 2012. 1 changed file with 0 additions and 1 deletion.
    1 change: 0 additions & 1 deletion in latency.txt
    @@ -16,7 +16,6 @@ Send packet CA->Netherlands->CA 150,000,000 ns 150 ms
     By Jeff Dean (http://research.google.com/people/jeff/)
     With some updates from Brendan
     Comparisons from https://gist.github.com/2844130
    -And SSD numbers from 100,000 ns

     1 ns = 10^-9 seconds
     1 ms = 10^-3 seconds
  4. jhclark revised this gist May 31, 2012. 1 changed file with 4 additions and 2 deletions.
    6 changes: 4 additions & 2 deletions in latency.txt
    @@ -5,16 +5,18 @@ Mutex lock/unlock 25 ns
     Main memory reference 100 ns 20x L2 cache, 200x L1 cache
     Compress 1K bytes with Zippy 3,000 ns
     Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
    +Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
     Read 1 MB sequentially from memory 250,000 ns 0.25 ms
     Round trip within same datacenter 500,000 ns 0.5 ms
    +Read 1 MB sequentially from SSD 1,000,000 ns 1 ms 4X memory
     Disk seek 10,000,000 ns 10 ms 20x datacenter roundtrip
    -Read 1 MB sequentially from disk 20,000,000 ns 20 ms 80x reading in seq from memory
    +Read 1 MB sequentially from disk 20,000,000 ns 20 ms 80x memory, 20X SSD
     Send packet CA->Netherlands->CA 150,000,000 ns 150 ms

     By Jeff Dean (http://research.google.com/people/jeff/)
     With some updates from Brendan
     Comparisons from https://gist.github.com/2844130
    -And SSD numbers from
    +And SSD numbers from 100,000 ns

     1 ns = 10^-9 seconds
     1 ms = 10^-3 seconds
  5. jhclark revised this gist May 31, 2012. 1 changed file with 5 additions and 4 deletions.
    9 changes: 5 additions & 4 deletions in latency.txt
    @@ -1,18 +1,19 @@
     L1 cache reference 0.5 ns
     Branch mispredict 5 ns
    -L2 cache reference 7 ns
    +L2 cache reference 7 ns 14x L1 cache
     Mutex lock/unlock 25 ns
    -Main memory reference 100 ns
    +Main memory reference 100 ns 20x L2 cache, 200x L1 cache
     Compress 1K bytes with Zippy 3,000 ns
     Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
     Read 1 MB sequentially from memory 250,000 ns 0.25 ms
     Round trip within same datacenter 500,000 ns 0.5 ms
    -Disk seek 10,000,000 ns 10 ms
    -Read 1 MB sequentially from disk 20,000,000 ns 20 ms
    +Disk seek 10,000,000 ns 10 ms 20x datacenter roundtrip
    +Read 1 MB sequentially from disk 20,000,000 ns 20 ms 80x reading in seq from memory
     Send packet CA->Netherlands->CA 150,000,000 ns 150 ms

     By Jeff Dean (http://research.google.com/people/jeff/)
     With some updates from Brendan
    +Comparisons from https://gist.github.com/2844130
     And SSD numbers from

     1 ns = 10^-9 seconds
  6. jhclark revised this gist May 31, 2012. 1 changed file with 12 additions and 7 deletions.
    19 changes: 12 additions & 7 deletions in latency.txt
    @@ -4,11 +4,16 @@ L2 cache reference 7 ns
     Mutex lock/unlock 25 ns
     Main memory reference 100 ns
     Compress 1K bytes with Zippy 3,000 ns
    -Send 2K bytes over 1 Gbps network 20,000 ns
    -Read 1 MB sequentially from memory 250,000 ns
    -Round trip within same datacenter 500,000 ns
    -Disk seek 10,000,000 ns
    -Read 1 MB sequentially from disk 20,000,000 ns
    -Send packet CA->Netherlands->CA 150,000,000 ns
    +Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms
    +Read 1 MB sequentially from memory 250,000 ns 0.25 ms
    +Round trip within same datacenter 500,000 ns 0.5 ms
    +Disk seek 10,000,000 ns 10 ms
    +Read 1 MB sequentially from disk 20,000,000 ns 20 ms
    +Send packet CA->Netherlands->CA 150,000,000 ns 150 ms

    -By Jeff Dean (http://research.google.com/people/jeff/):
    +By Jeff Dean (http://research.google.com/people/jeff/)
    +With some updates from Brendan
    +And SSD numbers from

     1 ns = 10^-9 seconds
    +1 ms = 10^-3 seconds
  7. @jboner revised this gist May 31, 2012. 1 changed file with 11 additions and 11 deletions.
    22 changes: 11 additions & 11 deletions in latency.txt
    @@ -1,14 +1,14 @@
    -L1 cache reference 0.5 ns
    -Branch mispredict 5 ns
    -L2 cache reference 7 ns
    -Mutex lock/unlock 25 ns
    -Main memory reference 100 ns
    -Compress 1K bytes with Zippy 3,000 ns
    -Send 2K bytes over 1 Gbps network 20,000 ns
    -Read 1 MB sequentially from memory 250,000 ns
    -Round trip within same datacenter 500,000 ns
    -Disk seek 10,000,000 ns
    -Read 1 MB sequentially from disk 20,000,000 ns
    +L1 cache reference 0.5 ns
    +Branch mispredict 5 ns
    +L2 cache reference 7 ns
    +Mutex lock/unlock 25 ns
    +Main memory reference 100 ns
    +Compress 1K bytes with Zippy 3,000 ns
    +Send 2K bytes over 1 Gbps network 20,000 ns
    +Read 1 MB sequentially from memory 250,000 ns
    +Round trip within same datacenter 500,000 ns
    +Disk seek 10,000,000 ns
    +Read 1 MB sequentially from disk 20,000,000 ns
     Send packet CA->Netherlands->CA 150,000,000 ns

     By Jeff Dean (http://research.google.com/people/jeff/):
  8. @jboner revised this gist May 31, 2012. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions in latency.txt
    @@ -1,5 +1,3 @@
    -By Jeff Dean (http://research.google.com/people/jeff/):
    -
     L1 cache reference 0.5 ns
     Branch mispredict 5 ns
     L2 cache reference 7 ns
    @@ -11,4 +9,6 @@ Read 1 MB sequentially from memory 250,000 ns
     Round trip within same datacenter 500,000 ns
     Disk seek 10,000,000 ns
     Read 1 MB sequentially from disk 20,000,000 ns
    -Send packet CA->Netherlands->CA 150,000,000 ns
    +Send packet CA->Netherlands->CA 150,000,000 ns
    +
    +By Jeff Dean (http://research.google.com/people/jeff/):
  9. @jboner revised this gist May 31, 2012. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion in latency.txt
    @@ -1,4 +1,4 @@
    -By Jeff Dean:
    +By Jeff Dean (http://research.google.com/people/jeff/):

     L1 cache reference 0.5 ns
     Branch mispredict 5 ns
  10. @jboner revised this gist May 31, 2012. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions in latency.txt
    @@ -1,3 +1,5 @@
    +By Jeff Dean:
    +
     L1 cache reference 0.5 ns
     Branch mispredict 5 ns
     L2 cache reference 7 ns
  11. @jboner revised this gist May 31, 2012. No changes.
  12. @jboner created this gist May 31, 2012.
    12 changes: 12 additions & 0 deletions in latency.txt
    @@ -0,0 +1,12 @@
    +L1 cache reference 0.5 ns
    +Branch mispredict 5 ns
    +L2 cache reference 7 ns
    +Mutex lock/unlock 25 ns
    +Main memory reference 100 ns
    +Compress 1K bytes with Zippy 3,000 ns
    +Send 2K bytes over 1 Gbps network 20,000 ns
    +Read 1 MB sequentially from memory 250,000 ns
    +Round trip within same datacenter 500,000 ns
    +Disk seek 10,000,000 ns
    +Read 1 MB sequentially from disk 20,000,000 ns
    +Send packet CA->Netherlands->CA 150,000,000 ns
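
A quick sanity check of the numbers above: the comparison columns added in the later revisions ("14x L1 cache", "4X memory", "80x memory, 20X SSD", ...) and the "Assuming ~1GB/sec SSD" note are all simple arithmetic on the base latencies. The sketch below (Python; the constant names are mine, the values are copied from the table) re-derives a few of them:

    # Latencies from the table above, in nanoseconds.
    L1_CACHE      = 0.5
    L2_CACHE      = 7
    MAIN_MEMORY   = 100
    READ_1MB_MEM  = 250_000
    READ_1MB_SSD  = 1_000_000
    READ_1MB_DISK = 20_000_000
    DISK_SEEK     = 10_000_000
    DC_ROUND_TRIP = 500_000

    # Comparison columns: each annotation is just a ratio of two rows.
    print(f"L2 cache vs L1 cache:       {L2_CACHE / L1_CACHE:.0f}x")          # 14x L1 cache
    print(f"Main memory vs L1 cache:    {MAIN_MEMORY / L1_CACHE:.0f}x")       # 200x L1 cache
    print(f"SSD read vs memory read:    {READ_1MB_SSD / READ_1MB_MEM:.0f}x")  # 4X memory
    print(f"Disk read vs memory read:   {READ_1MB_DISK / READ_1MB_MEM:.0f}x") # 80x memory
    print(f"Disk read vs SSD read:      {READ_1MB_DISK / READ_1MB_SSD:.0f}x") # 20X SSD
    print(f"Disk seek vs DC round trip: {DISK_SEEK / DC_ROUND_TRIP:.0f}x")    # 20x datacenter roundtrip

    # "Assuming ~1GB/sec SSD": reading 1 MB at 1 GB/s takes 1 ms = 1,000,000 ns,
    # which is where the SSD row's 1,000,000 ns figure comes from.
    ssd_bytes_per_sec = 1e9
    read_1mb_ns = (1e6 / ssd_bytes_per_sec) * 1e9
    print(f"1 MB sequential from SSD:   {read_1mb_ns:,.0f} ns")               # 1,000,000 ns

This adds no new measurements; it only recomputes the multipliers already present in the table so the annotations can be checked at a glance.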