//
//  SpinlockTestTests.swift
//  SpinlockTestTests
//
//  Created by Peter Steinberger on 04/10/2016.
//  Copyright © 2016 PSPDFKit GmbH. All rights reserved.
//

import XCTest

final class LockingTests: XCTestCase {

    func testSpinLock() {
        var spinLock = OS_SPINLOCK_INIT
        executeLockTest { (block) in
            OSSpinLockLock(&spinLock)
            block()
            OSSpinLockUnlock(&spinLock)
        }
    }

    func testUnfairLock() {
        var unfairLock = os_unfair_lock_s()
        executeLockTest { (block) in
            os_unfair_lock_lock(&unfairLock)
            block()
            os_unfair_lock_unlock(&unfairLock)
        }
    }

    func testDispatchSemaphore() {
        let sem = DispatchSemaphore(value: 1)
        executeLockTest { (block) in
            _ = sem.wait(timeout: .distantFuture)
            block()
            sem.signal()
        }
    }

    func testNSLock() {
        let lock = NSLock()
        executeLockTest { (block) in
            lock.lock()
            block()
            lock.unlock()
        }
    }

    func testPthreadMutex() {
        var mutex = pthread_mutex_t()
        pthread_mutex_init(&mutex, nil)
        executeLockTest { (block) in
            pthread_mutex_lock(&mutex)
            block()
            pthread_mutex_unlock(&mutex)
        }
        pthread_mutex_destroy(&mutex)
    }

    func testSynchronized() {
        let obj = NSObject()
        executeLockTest { (block) in
            objc_sync_enter(obj)
            block()
            objc_sync_exit(obj)
        }
    }

    func testQueue() {
        let lockQueue = DispatchQueue(label: "com.test.LockQueue")
        executeLockTest { (block) in
            lockQueue.sync {
                block()
            }
        }
    }

    func disabled_testNoLock() {
        executeLockTest { (block) in
            block()
        }
    }

    // Performance harness: dispatches 16 blocks onto global queues with different QoS
    // classes; each block mutates a shared counter 100,000 times inside the lock under test.
    private func executeLockTest(performBlock: @escaping (_ block: () -> Void) -> Void) {
        let dispatchBlockCount = 16
        let iterationCountPerBlock = 100_000
        let queues = [
            DispatchQueue.global(qos: .userInteractive),
            DispatchQueue.global(qos: .default),
            DispatchQueue.global(qos: .utility),
        ]
        var value = 0
        measure {
            let group = DispatchGroup()
            for block in 0..<dispatchBlockCount {
                group.enter()
                let queue = queues[block % queues.count]
                queue.async {
                    for _ in 0..<iterationCountPerBlock {
                        performBlock {
                            value = value + 2
                            value = value - 1
                        }
                    }
                    group.leave()
                }
            }
            _ = group.wait(timeout: .distantFuture)
        }
    }
}
For most CRUD apps it will not matter. We're building a PDF renderer and SDK, and when you have hot code paths that are called 10,000 times during a render operation, these small things start to matter. Then again, the old spin locks can livelock your app, but they are already deprecated and you'll get a compiler warning anyway. Dispatch queues are very nice for most simple things.
Also updated the source test file.
Good read on the subject: https://www.mikeash.com/pyblog/friday-qa-2017-10-27-locks-thread-safety-and-swift-2017-edition.html
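For the simple cases mentioned above, a private serial queue is often all you need. Here is a minimal sketch of that pattern (the class name and queue label are my own, not from the gist):

import Dispatch

// Minimal sketch: all access to `count` is funneled through one serial queue.
// `QueueProtectedCounter` and the queue label are hypothetical names.
final class QueueProtectedCounter {
    private let queue = DispatchQueue(label: "com.example.counter")
    private var count = 0

    func increment() {
        queue.sync { count += 1 }
    }

    var value: Int {
        queue.sync { count }
    }
}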
Could you please re-measure the tests on the latest software?
I'm seeing a huge drop in performance (around 8x) for queues and semaphores on Xcode 10, iOS 12.
Wow, DispatchSemaphore is terrible now. I suppose someone ought to file a radar...
@drewster99 I ran a similar test, and it seems NSLock, pthread_mutex and os_unfair_lock are all pretty close and much faster than the test results above suggest. The difference is probably the effect of calling a block in the inner loop of Peter's test, which I don't do in my test case (in order to measure only the effect of the locks).
For reference, I currently have a top-notch MacBook, and my results are roughly 0.16 secs for 16 queues and a loop counter of 100_000, incrementing a shared counter. I didn't test DispatchSemaphore because, well ... I was lazy, and it shouldn't be used in those scenarios. ;)
What's interesting, though, is that in the case of os_unfair_lock, Swift adds overhead (the usual refcounting management) amounting to roughly 70%, with only 30% being time actually spent in os_unfair_lock. So my guess is that C/C++/Obj-C should be about 3 times faster using os_unfair_lock. It might be even faster using raw std::atomics, which however can't be compared directly, because the exact usage of std::atomics depends on what you are synchronizing.
Another observation is that the uncontended case is roughly 4 times faster (i.e., 16 x 100_000 ops on a single queue).
If the call to os_unfair_lock could be inlined, Swift could also omit a +ref and -ref operation per lock/unlock pair, which would make the whole operation much faster. Unfortunately, the call to os_unfair_lock is not inlineable :/
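To illustrate the "no block call in the inner loop" point, a simplified sketch of such a measurement might look like the following. This is my assumption of the approach, not the commenter's actual code; it is single-threaded (uncontended), and it follows the same &-based pattern as the original tests:

import XCTest

final class DirectUnfairLockTests: XCTestCase {
    // Sketch only: the lock/unlock and the increment are written out directly instead of
    // being passed through a closure, so the closure-call overhead disappears.
    // 1_600_000 mirrors the 16 x 100_000 iterations from the gist.
    func testUnfairLockDirect() {
        var unfairLock = os_unfair_lock_s()
        var value = 0
        measure {
            for _ in 0..<1_600_000 {
                os_unfair_lock_lock(&unfairLock)
                value += 1
                os_unfair_lock_unlock(&unfairLock)
            }
        }
        XCTAssertGreaterThan(value, 0)
    }
}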
I don't really understand any of this. Any way to break it down in simple terms? I've been trying to figure it out for months now, but I'm still lost. Thanks for your help.
I wouldn't recommend using os_unfair_lock_s as in this code. PLEASE read this first: http://www.russbishop.net/the-law.
In short, in Swift you must use

var unfairLock: UnsafeMutablePointer<os_unfair_lock>
...
// in init:
unfairLock = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
unfairLock.initialize(to: os_unfair_lock())
// in deinit:
unfairLock.deallocate()

instead of

var unfairLock = os_unfair_lock_s()
...
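Putting that advice together, a minimal wrapper might look like the sketch below. `UnfairLock` and `withLock` are hypothetical names I'm introducing for illustration, not part of the comment above:

import os

// Sketch: the lock lives at a stable heap address for the object's whole lifetime.
final class UnfairLock {
    private let pointer: UnsafeMutablePointer<os_unfair_lock>

    init() {
        pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
    }

    deinit {
        pointer.deinitialize(count: 1)
        pointer.deallocate()
    }

    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(pointer)
        defer { os_unfair_lock_unlock(pointer) }
        return try body()
    }
}

// Usage: protect a shared counter.
let lock = UnfairLock()
var counter = 0
lock.withLock { counter += 1 }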
I am pretty ignorant about the tradeoffs between the different kinds of locks and systems here; is there a good resource that describes this in more detail?