
@ziwon
Forked from jboner/latency.txt
Created July 25, 2022 05:18

Revisions

  1. @jboner jboner revised this gist Apr 22, 2018. 1 changed file with 4 additions and 6 deletions.
    10 changes: 4 additions & 6 deletions latency.txt
@@ -1,5 +1,5 @@
-Latency Comparison Numbers
---------------------------
+Latency Comparison Numbers (~2012)
+----------------------------------
 L1 cache reference                           0.5 ns
 Branch mispredict                              5   ns
 L2 cache reference                             7   ns                      14x L1 cache
@@ -28,7 +28,5 @@ Originally by Peter Norvig: http://norvig.com/21-days.html#answers

 Contributions
 -------------
-Some updates from: https://gist.github.com/2843375
-'Humanized' comparison: https://gist.github.com/2843375
-Visual comparison chart: http://i.imgur.com/k0t1e.png
-Animated presentation: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/latency.txt
+'Humanized' comparison: https://gist.github.com/hellerbarde/2843375
+Visual comparison chart: http://i.imgur.com/k0t1e.png
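A quick sanity check: the multiplier annotations that recur throughout these revisions ("14x L1 cache", "4X memory", "20x datacenter roundtrip", "20X SSD") follow directly from the raw nanosecond figures in the table. A minimal Python sketch — the dict and helper names are illustrative, not part of the gist:

```python
# Latency figures in nanoseconds, copied from the table above.
latency_ns = {
    "L1 cache reference": 0.5,
    "L2 cache reference": 7,
    "Round trip within same datacenter": 500_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Read 1 MB sequentially from SSD": 1_000_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
}

def ratio(slow, fast):
    """Return how many times slower `slow` is than `fast`."""
    return latency_ns[slow] / latency_ns[fast]

print(ratio("L2 cache reference", "L1 cache reference"))        # 14.0 -> "14x L1 cache"
print(ratio("Read 1 MB sequentially from SSD",
            "Read 1 MB sequentially from memory"))              # 4.0  -> "4X memory"
print(ratio("Disk seek", "Round trip within same datacenter"))  # 20.0 -> "20x datacenter roundtrip"
print(ratio("Read 1 MB sequentially from disk",
            "Read 1 MB sequentially from SSD"))                 # 20.0 -> "20X SSD"
```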
  2. @jboner jboner revised this gist Jan 15, 2016. 1 changed file with 20 additions and 20 deletions.
    40 changes: 20 additions & 20 deletions latency.txt
@@ -1,25 +1,25 @@
 Latency Comparison Numbers
 --------------------------
-L1 cache reference                         0.5 ns
-Branch mispredict                            5 ns
-L2 cache reference                           7 ns   14x L1 cache
-Mutex lock/unlock                           25 ns
-Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
-Compress 1K bytes with Zippy             3,000 ns
-Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
-Read 4K randomly from SSD*             150,000 ns    0.15 ms
-Read 1 MB sequentially from memory     250,000 ns    0.25 ms
-Round trip within same datacenter      500,000 ns    0.5 ms
-Read 1 MB sequentially from SSD*     1,000,000 ns    1 ms  4X memory
-Disk seek                           10,000,000 ns   10 ms  20x datacenter roundtrip
-Read 1 MB sequentially from disk    20,000,000 ns   20 ms  80x memory, 20X SSD
-Send packet CA->Netherlands->CA     150,000,000 ns  150 ms
+L1 cache reference                           0.5 ns
+Branch mispredict                              5   ns
+L2 cache reference                             7   ns                      14x L1 cache
+Mutex lock/unlock                             25   ns
+Main memory reference                        100   ns                      20x L2 cache, 200x L1 cache
+Compress 1K bytes with Zippy               3,000   ns        3 us
+Send 1K bytes over 1 Gbps network         10,000   ns       10 us
+Read 4K randomly from SSD*               150,000   ns      150 us          ~1GB/sec SSD
+Read 1 MB sequentially from memory       250,000   ns      250 us
+Round trip within same datacenter        500,000   ns      500 us
+Read 1 MB sequentially from SSD*       1,000,000   ns    1,000 us    1 ms  ~1GB/sec SSD, 4X memory
+Disk seek                             10,000,000   ns   10,000 us   10 ms  20x datacenter roundtrip
+Read 1 MB sequentially from disk      20,000,000   ns   20,000 us   20 ms  80x memory, 20X SSD
+Send packet CA->Netherlands->CA      150,000,000   ns  150,000 us  150 ms

 Notes
 -----
 1 ns = 10^-9 seconds
-1 ms = 10^-3 seconds
-* Assuming ~1GB/sec SSD
+1 us = 10^-6 seconds = 1,000 ns
+1 ms = 10^-3 seconds = 1,000 us = 1,000,000 ns

 Credit
 ------
@@ -28,7 +28,7 @@ Originally by Peter Norvig: http://norvig.com/21-days.html#answers

 Contributions
 -------------
-Some updates from: https://gist.github.com/2843375
-Great 'humanized' comparison version: https://gist.github.com/2843375
-Visual comparison chart: http://i.imgur.com/k0t1e.png
-Nice animated presentation of the data: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/
+Some updates from: https://gist.github.com/2843375
+'Humanized' comparison: https://gist.github.com/2843375
+Visual comparison chart: http://i.imgur.com/k0t1e.png
+Animated presentation: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/latency.txt
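The Jan 15, 2016 revision above added a microseconds column alongside the nanosecond figures. The conversions stated in its Notes section can be spot-checked with a small sketch — the helper names are mine, not part of the gist:

```python
# Conversions from the Notes section: 1 us = 1,000 ns; 1 ms = 1,000,000 ns.
def ns_to_us(ns):
    return ns / 1_000

def ns_to_ms(ns):
    return ns / 1_000_000

# Spot-check two rows of the table:
print(ns_to_us(150_000))      # 150.0  ("Read 4K randomly from SSD*": 150,000 ns = 150 us)
print(ns_to_ms(150_000_000))  # 150.0  ("Send packet CA->Netherlands->CA": 150 ms)
```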
  3. @jboner jboner revised this gist Dec 13, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions latency.txt
@@ -17,8 +17,8 @@ Send packet CA->Netherlands->CA 150,000,000 ns 150 ms

 Notes
 -----
-1 ns = 10-9 seconds
-1 ms = 10-3 seconds
+1 ns = 10^-9 seconds
+1 ms = 10^-3 seconds
 * Assuming ~1GB/sec SSD

 Credit
  4. @jboner jboner revised this gist Jun 7, 2012. 1 changed file with 18 additions and 9 deletions.
    27 changes: 18 additions & 9 deletions latency.txt
@@ -1,25 +1,34 @@
+Latency Comparison Numbers
+--------------------------
 L1 cache reference                         0.5 ns
 Branch mispredict                            5 ns
 L2 cache reference                           7 ns   14x L1 cache
 Mutex lock/unlock                           25 ns
 Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
 Compress 1K bytes with Zippy             3,000 ns
 Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
-Read 4K randomly from SSD              150,000 ns    0.15 ms
+Read 4K randomly from SSD*             150,000 ns    0.15 ms
 Read 1 MB sequentially from memory     250,000 ns    0.25 ms
 Round trip within same datacenter      500,000 ns    0.5 ms
-Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
+Read 1 MB sequentially from SSD*     1,000,000 ns    1 ms  4X memory
 Disk seek                           10,000,000 ns   10 ms  20x datacenter roundtrip
 Read 1 MB sequentially from disk    20,000,000 ns   20 ms  80x memory, 20X SSD
 Send packet CA->Netherlands->CA     150,000,000 ns  150 ms

+Notes
+-----
 1 ns = 10-9 seconds
 1 ms = 10-3 seconds
-Assuming ~1GB/sec SSD
+* Assuming ~1GB/sec SSD

-By Jeff Dean (http://research.google.com/people/jeff/)
-Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
-Some updates from: https://gist.github.com/2843375
-Great 'humanized' comparison version: https://gist.github.com/2843375
-Visual comparison chart: http://i.imgur.com/k0t1e.png
-Nice animated presentation of the data: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/
+Credit
+------
+By Jeff Dean: http://research.google.com/people/jeff/
+Originally by Peter Norvig: http://norvig.com/21-days.html#answers
+
+Contributions
+-------------
+Some updates from: https://gist.github.com/2843375
+Great 'humanized' comparison version: https://gist.github.com/2843375
+Visual comparison chart: http://i.imgur.com/k0t1e.png
+Nice animated presentation of the data: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/
  5. @jboner jboner revised this gist Jun 7, 2012. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion latency.txt
@@ -21,4 +21,5 @@ By Jeff Dean (http://research.google.com/people/jeff/)
 Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
 Some updates from: https://gist.github.com/2843375
 Great 'humanized' comparison version: https://gist.github.com/2843375
-Visual comparison chart: http://i.imgur.com/k0t1e.png
+Visual comparison chart: http://i.imgur.com/k0t1e.png
+Nice animated presentation of the data: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/
  6. @jboner jboner revised this gist Jun 2, 2012. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion latency.txt
@@ -5,7 +5,7 @@ Mutex lock/unlock 25 ns
 Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
 Compress 1K bytes with Zippy             3,000 ns
 Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
-SSD 4K random read                     150,000 ns    0.15 ms
+Read 4K randomly from SSD              150,000 ns    0.15 ms
 Read 1 MB sequentially from memory     250,000 ns    0.25 ms
 Round trip within same datacenter      500,000 ns    0.5 ms
 Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
  7. @jboner jboner revised this gist Jun 2, 2012. 1 changed file with 3 additions and 2 deletions.
    5 changes: 3 additions & 2 deletions latency.txt
@@ -5,7 +5,7 @@ Mutex lock/unlock 25 ns
 Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
 Compress 1K bytes with Zippy             3,000 ns
 Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
-SSD random read                        150,000 ns
+SSD 4K random read                     150,000 ns    0.15 ms
 Read 1 MB sequentially from memory     250,000 ns    0.25 ms
 Round trip within same datacenter      500,000 ns    0.5 ms
 Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
@@ -20,4 +20,5 @@ Assuming ~1GB/sec SSD
 By Jeff Dean (http://research.google.com/people/jeff/)
 Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
 Some updates from: https://gist.github.com/2843375
-Great 'humanized' comparison version: https://gist.github.com/2843375
+Great 'humanized' comparison version: https://gist.github.com/2843375
+Visual comparison chart: http://i.imgur.com/k0t1e.png
  8. @jboner jboner revised this gist Jun 1, 2012. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion latency.txt
@@ -5,6 +5,7 @@ Mutex lock/unlock 25 ns
 Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
 Compress 1K bytes with Zippy             3,000 ns
 Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
+SSD random read                        150,000 ns
 Read 1 MB sequentially from memory     250,000 ns    0.25 ms
 Round trip within same datacenter      500,000 ns    0.5 ms
 Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
@@ -19,4 +20,4 @@ Assuming ~1GB/sec SSD
 By Jeff Dean (http://research.google.com/people/jeff/)
 Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
 Some updates from: https://gist.github.com/2843375
-Great 'humanized' comparison version: https://gist.github.com/2843375
+Great 'humanized' comparison version: https://gist.github.com/2843375
  9. @jboner jboner revised this gist Jun 1, 2012. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion latency.txt
@@ -10,7 +10,7 @@ Round trip within same datacenter 500,000 ns 0.5 ms
 Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
 Disk seek                           10,000,000 ns   10 ms  20x datacenter roundtrip
 Read 1 MB sequentially from disk    20,000,000 ns   20 ms  80x memory, 20X SSD
-Send packet CA->Netherlands->CA    150,000,000 ns  150 ms
+Send packet CA->Netherlands->CA     150,000,000 ns  150 ms

 1 ns = 10-9 seconds
 1 ms = 10-3 seconds
  10. @jboner jboner revised this gist Jun 1, 2012. 1 changed file with 6 additions and 6 deletions.
    12 changes: 6 additions & 6 deletions latency.txt
@@ -12,11 +12,11 @@ Disk seek 10,000,000 ns 10 ms 20x datacenter
 Read 1 MB sequentially from disk    20,000,000 ns   20 ms  80x memory, 20X SSD
 Send packet CA->Netherlands->CA    150,000,000 ns  150 ms

-By Jeff Dean (http://research.google.com/people/jeff/)
-Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
-With some updates from Brendan (http://brenocon.com/dean_perf.html)
-
-Assuming ~1GB/sec SSD
-
 1 ns = 10-9 seconds
 1 ms = 10-3 seconds
+Assuming ~1GB/sec SSD
+
+By Jeff Dean (http://research.google.com/people/jeff/)
+Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
+Some updates from: https://gist.github.com/2843375
+Great 'humanized' comparison version: https://gist.github.com/2843375
  11. @jboner jboner revised this gist Jun 1, 2012. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions latency.txt
@@ -13,8 +13,8 @@ Read 1 MB sequentially from disk 20,000,000 ns 20 ms 80x memory, 20X
 Send packet CA->Netherlands->CA    150,000,000 ns  150 ms

 By Jeff Dean (http://research.google.com/people/jeff/)
-With some updates from Brendan: http://brenocon.com/dean_perf.html
-Comparisons from https://gist.github.com/2844130
+Originally by Peter Norvig (http://norvig.com/21-days.html#answers)
+With some updates from Brendan (http://brenocon.com/dean_perf.html)

 Assuming ~1GB/sec SSD

  12. @jboner jboner revised this gist Jun 1, 2012. 1 changed file with 21 additions and 13 deletions.
    34 changes: 21 additions & 13 deletions latency.txt
@@ -1,14 +1,22 @@
-L1 cache reference                         0.5 ns
-Branch mispredict                            5 ns
-L2 cache reference                           7 ns
-Mutex lock/unlock                           25 ns
-Main memory reference                      100 ns
-Compress 1K bytes with Zippy             3,000 ns
-Send 2K bytes over 1 Gbps network       20,000 ns
-Read 1 MB sequentially from memory     250,000 ns
-Round trip within same datacenter      500,000 ns
-Disk seek                           10,000,000 ns
-Read 1 MB sequentially from disk    20,000,000 ns
-Send packet CA->Netherlands->CA    150,000,000 ns
+L1 cache reference                         0.5 ns
+Branch mispredict                            5 ns
+L2 cache reference                           7 ns   14x L1 cache
+Mutex lock/unlock                           25 ns
+Main memory reference                      100 ns   20x L2 cache, 200x L1 cache
+Compress 1K bytes with Zippy             3,000 ns
+Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
+Read 1 MB sequentially from memory     250,000 ns    0.25 ms
+Round trip within same datacenter      500,000 ns    0.5 ms
+Read 1 MB sequentially from SSD      1,000,000 ns    1 ms  4X memory
+Disk seek                           10,000,000 ns   10 ms  20x datacenter roundtrip
+Read 1 MB sequentially from disk    20,000,000 ns   20 ms  80x memory, 20X SSD
+Send packet CA->Netherlands->CA    150,000,000 ns  150 ms

-By Jeff Dean (http://research.google.com/people/jeff/):
+By Jeff Dean (http://research.google.com/people/jeff/)
+With some updates from Brendan: http://brenocon.com/dean_perf.html
+Comparisons from https://gist.github.com/2844130
+
+Assuming ~1GB/sec SSD
+
+1 ns = 10-9 seconds
+1 ms = 10-3 seconds
  13. @jboner jboner revised this gist May 31, 2012. 1 changed file with 11 additions and 11 deletions.
    22 changes: 11 additions & 11 deletions latency.txt
@@ -1,14 +1,14 @@
-L1 cache reference                     0.5 ns
-Branch mispredict                        5 ns
-L2 cache reference                       7 ns
-Mutex lock/unlock                       25 ns
-Main memory reference                  100 ns
-Compress 1K bytes with Zippy         3,000 ns
-Send 2K bytes over 1 Gbps network   20,000 ns
-Read 1 MB sequentially from memory 250,000 ns
-Round trip within same datacenter  500,000 ns
-Disk seek                       10,000,000 ns
-Read 1 MB sequentially from disk 20,000,000 ns
+L1 cache reference                         0.5 ns
+Branch mispredict                            5 ns
+L2 cache reference                           7 ns
+Mutex lock/unlock                           25 ns
+Main memory reference                      100 ns
+Compress 1K bytes with Zippy             3,000 ns
+Send 2K bytes over 1 Gbps network       20,000 ns
+Read 1 MB sequentially from memory     250,000 ns
+Round trip within same datacenter      500,000 ns
+Disk seek                           10,000,000 ns
+Read 1 MB sequentially from disk    20,000,000 ns
 Send packet CA->Netherlands->CA    150,000,000 ns

 By Jeff Dean (http://research.google.com/people/jeff/):
  14. @jboner jboner revised this gist May 31, 2012. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions latency.txt
@@ -1,5 +1,3 @@
-By Jeff Dean (http://research.google.com/people/jeff/):
-
 L1 cache reference                     0.5 ns
 Branch mispredict                        5 ns
 L2 cache reference                       7 ns
@@ -11,4 +9,6 @@ Read 1 MB sequentially from memory 250,000 ns
 Round trip within same datacenter  500,000 ns
 Disk seek                       10,000,000 ns
 Read 1 MB sequentially from disk 20,000,000 ns
-Send packet CA->Netherlands->CA 150,000,000 ns
+Send packet CA->Netherlands->CA    150,000,000 ns
+
+By Jeff Dean (http://research.google.com/people/jeff/):
  15. @jboner jboner revised this gist May 31, 2012. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion latency.txt
@@ -1,4 +1,4 @@
-By Jeff Dean:
+By Jeff Dean (http://research.google.com/people/jeff/):

 L1 cache reference                     0.5 ns
 Branch mispredict                        5 ns
  16. @jboner jboner revised this gist May 31, 2012. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions latency.txt
@@ -1,3 +1,5 @@
+By Jeff Dean:
+
 L1 cache reference                     0.5 ns
 Branch mispredict                        5 ns
 L2 cache reference                       7 ns
  17. @jboner jboner revised this gist May 31, 2012. No changes.
  18. @jboner jboner created this gist May 31, 2012.
    12 changes: 12 additions & 0 deletions latency.txt
@@ -0,0 +1,12 @@
+L1 cache reference                     0.5 ns
+Branch mispredict                        5 ns
+L2 cache reference                       7 ns
+Mutex lock/unlock                       25 ns
+Main memory reference                  100 ns
+Compress 1K bytes with Zippy         3,000 ns
+Send 2K bytes over 1 Gbps network   20,000 ns
+Read 1 MB sequentially from memory 250,000 ns
+Round trip within same datacenter  500,000 ns
+Disk seek                       10,000,000 ns
+Read 1 MB sequentially from disk 20,000,000 ns
+Send packet CA->Netherlands->CA 150,000,000 ns