An examination of lock-free synchronization in shared memory via simulation

Ehrenstein, Martin
1994
https://ir.wgtn.ac.nz/handle/123456789/23448

Abstract

Non-blocking algorithms guarantee that, in a group of processors contending for a shared object, at least one processor can perform its operation on that object regardless of the other processors' actions. Thus a non-blocking algorithm can tolerate certain types of processor failure: in a fail-stop scenario, processors can stop and possibly restart later. A shared-memory multiprocessor experiences various types of delays, ranging from primary cache misses to page faults. If locks are used, care must be taken that the system is not delayed as a whole when a processor holding a lock is delayed. For non-blocking algorithms, however, no such care is necessary.

In this thesis, I investigate the performance of non-blocking algorithms. The investigation consists of a series of simulation experiments on non-blocking algorithms. To evaluate the performance of these algorithms properly, the simulator is accurate down to the bus-cycle level. The design and implementation of the simulator is based on a study of current shared-memory multiprocessors and their cache-coherence protocols. For these experiments, I use non-blocking algorithms developed from a survey of recent techniques for deriving non-blocking algorithms based on primitive instructions such as Compare&Swap and Load-Linked/Store-Conditional. A consequence of this survey was that I developed an efficient non-blocking Queue implementation. The main conclusion of my investigation is that, even in the absence of long delays, non-blocking algorithms typically outperform traditional lock-based algorithms, and, in particular, that a priority-based software protocol I developed matches the performance of hardware-based implementations.

Keywords: Lock-free synchronization; Shared memory; Non-blocking algorithms
Format: pdf
Language: en-NZ
Type: Text
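As a point of reference for the Compare&Swap-based techniques the abstract mentions, the following is a minimal sketch of the CAS retry pattern that underlies such non-blocking algorithms, written with C11 atomics. It is an illustrative example only, not the Queue implementation or the priority-based protocol developed in the thesis; the shared counter and function names are hypothetical.

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical shared counter, used only to illustrate the
 * Compare&Swap retry pattern; this is not the thesis's algorithm. */
static _Atomic long counter = 0;

/* Non-blocking increment: read the current value, compute the new
 * value, and attempt to install it with a single compare-and-swap.
 * If another processor changed the counter in the meantime, the CAS
 * fails and the operation retries; at least one contending processor
 * always succeeds, so the system as a whole makes progress even if
 * some processors are delayed between the read and the CAS. */
long nonblocking_increment(void) {
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
        /* On failure, 'old' now holds the value the CAS observed,
         * so the next attempt uses a refreshed snapshot. */
    }
    return old + 1;
}

int main(void) {
    for (int i = 0; i < 5; i++)
        printf("counter is now %ld\n", nonblocking_increment());
    return 0;
}

No processor ever holds a lock here, so a processor that stalls between the load and the CAS cannot delay the others; their CAS simply succeeds against the value the stalled processor has not yet changed.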