Concurrency
Welcome to my lexicon of concurrent programming lingo. It covers all you need to know for the course on the subject at Lund University, Sweden.
General
busy-wait – when a thread over and over again checks whether some boolean has changed so that it can continue. This is extremely inefficient compared to a system where the thread is signaled when something has changed.
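A minimal Java sketch of a busy-wait (the class and names are my own illustration, not course code; the flag must be volatile for the spinning thread to actually see the update):

```java
// Busy-waiting on a flag: the spinner burns CPU checking the flag over
// and over, where a wait()/notify() monitor would let it sleep instead.
class BusyWaitDemo {
    static volatile boolean done = false;

    // Starts a spinning thread, signals it, and waits for it to exit.
    static boolean demo() {
        Thread spinner = new Thread(() -> {
            while (!done) { /* busy-wait: checks over and over again */ }
        });
        spinner.start();
        done = true;               // signal by plain assignment
        try {
            spinner.join(5000);    // give the spinner time to notice
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !spinner.isAlive(); // true if the spinner terminated
    }
}
```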
race condition – occurs when a critical section is not protected by a lock. Several threads can then access the shared data at the same time, and the result is undefined since it depends on the order in which the threads happen to be interleaved.
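A sketch of the classic lost-update race (illustrative class, not from the course): two threads increment a shared counter without locking; count++ is really a read-modify-write, so updates can be lost and the final value is nondeterministic.

```java
// Two threads racing on an unsynchronized counter.
class RaceDemo {
    static int count = 0;
    static final int N = 100_000;

    static int race() {
        Runnable inc = () -> { for (int i = 0; i < N; i++) count++; };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;  // at most 2 * N, often less due to lost updates
    }
}
```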
deadlock – two or more threads, each holding a locked resource, wait for locks that are held by the others, so none of them can ever proceed. For a deadlock to occur, four conditions must all be met:
- mutual exclusion – only one thread is allowed to hold a resource at a time.
- hold and wait – a thread holds one resource while waiting for another, e.g. when entering a monitor from within another monitor.
- no resource preemption – a resource cannot be forcibly taken from a thread; it is only released voluntarily, e.g. by wait().
- circular wait – two or more threads form a cycle in which each waits for a resource held by the next.
starvation – occurs if a thread is continuously denied the resource it tries to access.
live lock – when two or more threads keep reacting to each other's state changes without making progress; the threads stay busy, but, as in a deadlock, nothing useful gets done.
priorities – the correctness of a program should never depend on priorities, since these may not be obeyed on all platforms.
semaphore – a way for a thread to lock the door behind itself after entering a critical section, to guarantee exclusive access.
- counting semaphore – has a counter that is incremented when give is called and decremented when take is called; take blocks while the counter is zero.
- multistep semaphore – permits incrementing and decrementing the counter by more than one at a time.
- binary / mutex semaphore – guarantees mutual exclusion.
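The counting semaphore above can be sketched as a Java monitor using the give/take terminology (my own illustrative implementation, not course code):

```java
// A counting semaphore as a monitor: take() blocks while the counter
// is zero, give() increments it and wakes waiting threads.
class CountingSemaphore {
    private int count;

    CountingSemaphore(int initial) { count = initial; }

    synchronized void take() throws InterruptedException {
        while (count == 0) wait();  // guard re-checked after each wake-up
        count--;
    }

    synchronized void give() {
        count++;
        notifyAll();                // let any waiting take() retry
    }

    synchronized int available() { return count; }
}
```

A binary/mutex semaphore is the special case initialized to 1, where give() is only called by the thread currently holding the semaphore.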
context switch – when the system changes the running thread/process, the preempted thread needs to stash its data away for later. Steps:
- SAVE
- turn off interrupts,
- push PC and registers on the stack,
- SWITCH
- save the stack pointer in the process record,
- get the new process record and restore the stack pointer from it,
- RESTORE
- pop registers and PC from the stack,
- turn on interrupts.
Scheduling
The main goal of scheduling is to get a concurrent program that is also correct in terms of timing.
In order to be able to analyze a program, several simplifying assumptions must be made, e.g.:
- threads must not depend on each other, i.e. there can be no wait() or notify(),
- there must be a fixed number of threads,
- all threads are periodic with known periods.
cyclic executive scheduling consists of:
- Major cycle length – the time it takes until all threads have been in business at least once. The least common multiple of the periods of all the threads gives the shortest possible, and hence most efficient, major cycle length.
- Minor cycle length – should be the greatest common divisor of the periods of all the threads, as this gives the longest minor cycle possible.
- Schedulable? – simply try to fit all execution times within the corresponding threads' periods. Splitting an execution time and distributing the parts is perfectly fine if left with no other option.
- Doesn't need a scheduler, as it just follows a fixed schedule calculated beforehand.
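The cycle lengths can be sketched as follows (illustrative helper class; the major cycle is the lcm of the periods and the minor cycle the gcd):

```java
// Minor and major cycle lengths of a cyclic executive from the periods.
class CyclicExecutive {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    // Major cycle: lcm of all periods (shortest schedule that repeats).
    static long majorCycle(long[] periods) {
        long m = 1;
        for (long p : periods) m = lcm(m, p);
        return m;
    }

    // Minor cycle: gcd of all periods (longest possible time slot).
    static long minorCycle(long[] periods) {
        long m = periods[0];
        for (long p : periods) m = gcd(m, p);
        return m;
    }
}
```

E.g. periods 10, 20 and 40 give a minor cycle of 10 and a major cycle of 40.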
- WCET – worst-case execution time
- C – execution time (WCET)
- R – worst-case response time, i.e. the time until control and execution are completed
- T – period
- U = C/T – CPU utilization
Rate Monotonic Scheduling–RMS
- short period <-> high priority
- can guarantee schedulability if the sum of C/T for all n threads is below n(2^(1/n) - 1). N.B. it might still be schedulable if above this bound.
- as soon as there are threads ready to carry out their tasks, the one with the highest priority starts. Even if a lower-priority thread is running, a higher-priority one will always force it to a halt by preemption as soon as it becomes ready.
- fixed-priority scheduling
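The sufficient RMS utilization test can be sketched like this (illustrative class; passing the test guarantees schedulability, failing it is inconclusive):

```java
// RMS sufficient schedulability test: sum(C_i / T_i) <= n(2^(1/n) - 1).
class RmsTest {
    // The Liu & Layland utilization bound for n threads.
    static double bound(int n) {
        return n * (Math.pow(2.0, 1.0 / n) - 1.0);
    }

    // True if total utilization is within the bound (guaranteed
    // schedulable); false is inconclusive, not "unschedulable".
    static boolean sufficientTest(double[] c, double[] t) {
        double u = 0;
        for (int i = 0; i < c.length; i++) u += c[i] / t[i];
        return u <= bound(c.length);
    }
}
```

For n = 2 the bound is about 0.828, and it approaches ln 2 ≈ 0.693 as n grows.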
Deadline Monotonic Scheduling–DMS
- same as RMS but with deadlines instead of periods.
- short deadline <-> high priority.
- schedulable if R ≤ D for every thread.
Earliest Deadline First–EDF – highest priority is given to the thread that is closest to its deadline; this is re-evaluated every time a new thread is released.
Priority inversion
Scenario: low priority thread: L, mid prio thread: M, high prio thread: H, shared resource R.
- L enters R,
- L is preempted by M,
- M is preempted by H,
- H tries to enter R but is locked out,
- Since H has nothing to do M continues its execution,
- H (and L) has to wait until M finishes. N.B. H is blocked by a lower-priority thread (since M is preempting L, which needs to release R in order for H to continue).
To counter priority inversion the concept of priority inheritance is used.
Basic priority inheritance
- L holding a resource will temporarily be raised to the priority of the higher-priority thread requesting the resource.
- WCET of H = worst-case time to execute code in thread + for each used resource: maximum blocking time.
- Transitive priority inheritance can occur only in the presence of nested critical sections. If A is blocked by C, which in turn is blocked by D, it is necessary for D to inherit A's priority; if not, D would be preempted by some intermediate-priority thread B, forcing A to wait for B to finish as well.
- The worst-case blocking time for thread i is bounded by B_i = sum over all semaphores k of usage(k, i) * C(k), where C(k) is the longest critical section guarded by semaphore k, and usage(k, i) returns 1 if semaphore k is used by at least one thread with a priority less than that of thread i and at least one thread with a priority higher than or equal to the priority of thread i. Otherwise it returns 0.
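A sketch of this blocking-time bound (illustrative class; the usage matrix and critical-section times are made-up inputs, not from the course):

```java
// B_i = sum over semaphores k of usage(k, i) * C(k).
class BlockingTime {
    // usage[k][i] is 1 if semaphore k is used both by a thread with
    // lower priority than thread i and by a thread with priority >= i's.
    // c[k] is the longest critical section guarded by semaphore k.
    static double blockingTime(int i, int[][] usage, double[] c) {
        double b = 0;
        for (int k = 0; k < c.length; k++) b += usage[k][i] * c[k];
        return b;
    }
}
```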
Priority-ceiling protocol
Direct blocking occurs when an H thread is blocked from accessing a resource because it is held by an L thread. Direct blocking is necessary for mutual exclusion and can occur under any preemptive-scheduling protocol.
- Under the priority-ceiling protocol a thread can be blocked by at most one lower-priority thread, and for at most the duration of one critical region protected by one semaphore.
Push-through blocking is a consequence of the priority inheritance protocol: the time that a thread could have been active but instead was preempted by a lower-priority thread with a temporarily raised priority.
- Occurs when M's thread cannot run because L's thread has inherited a higher priority.
- Can affect threads that have absolutely nothing to do with any shared resources.
- A semaphore can induce push-through blocking for thread M if it is accessed both by a thread with lower priority and a thread with higher priority; M doesn't have to have anything to do with the semaphore in question.
Resource allocation graphs
The four components of a graph:
- thread vertex - one possible state for a specific thread. drawn using a circle with the thread name inside.
- resource vertex - monitor or semaphore of some sort. Drawn using a square with the resource name inside.
- assignment edge - directed edge from a resource to a thread means that the thread has locked that resource.
- request edge - directed edge from a thread to a resource means that the thread wants to lock that resource.
If a thread locks several semaphores nested within each other, several vertices for this thread must be instantiated in the graph. Deadlock can occur if the graph contains a cycle of edges that are all directed in the same direction, i.e.
- DEADLOCK: R1 -> A -> R2 -> B -> R1
- NOT DEADLOCK: R1 -> A <- R2 -> B -> R1
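Cycle detection in such a graph can be sketched with a depth-first search (illustrative class; vertex names as in the examples above):

```java
import java.util.*;

// Detects a directed cycle in a resource allocation graph, where
// vertices are thread/resource names and edges follow the
// assignment/request directions from the text.
class RagCycle {
    static boolean hasCycle(Map<String, List<String>> g) {
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String v : g.keySet())
            if (dfs(v, g, done, onPath)) return true;
        return false;
    }

    private static boolean dfs(String v, Map<String, List<String>> g,
                               Set<String> done, Set<String> onPath) {
        if (onPath.contains(v)) return true;   // back edge -> cycle
        if (done.contains(v)) return false;    // already fully explored
        onPath.add(v);
        for (String w : g.getOrDefault(v, List.of()))
            if (dfs(w, g, done, onPath)) return true;
        onPath.remove(v);
        done.add(v);
        return false;
    }
}
```

Run on the two examples above, the first graph (R1 -> A -> R2 -> B -> R1) contains a cycle, the second does not.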
Concurrency in Java
rule of thumb: synchronized
- do not mix threads and monitors,
- all public methods in a monitor should be synchronized,
- write a wrapper monitor if a class needs to be thread-safe,
- do not use synchronized blocks.
Keywords
volatile guarantees that a variable declared volatile is always read from main memory, instead of being put in some CPU cache. This way every thread reads the same value, i.e. the one in main memory. Reads and writes of a volatile variable are atomic, but compound operations such as i++ are not. N.B. without volatile, only reads and writes of reference variables and primitives (except long and double) are atomic.
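A sketch of the atomicity caveat (illustrative class, not course code): even with volatile, two threads doing counter++ can lose updates, because ++ is a separate read and write.

```java
// volatile gives visibility, not atomicity: counter++ still races.
class VolatileNotAtomic {
    static volatile int counter = 0;
    static final int N = 100_000;

    static int demo() {
        Runnable inc = () -> { for (int i = 0; i < N; i++) counter++; };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;  // at most 2 * N, typically less
    }
}
```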
synchronized only allows one thread at a time to gain access to the method which has been declared synchronized. In fact, any other thread trying to access any of the synchronized methods on the same object will be blocked. Can also be placed as a block of code inside a method. A synchronized monitor has three compartments for the threads:
- exclusive area, where only one thread can be at a time.
- monitor queue, where threads wait for the one in the exclusive area to finish or call wait().
- condition queue, where threads that have called wait() are put, usually waiting for a condition to be met.
wait() makes the thread executing within the monitor step out of the exclusive area and into the condition queue, allowing another thread to try its luck. wait() should always be placed inside a while (someBool) loop.
wait(long timeout) and wait(long timeout, int nanos) wait for the given number of milliseconds and nanoseconds respectively. These should be used with care, however, as a notify() from some obscure subclass can interfere and end the wait early. To guard against such a notify(), the while-loop can be implemented as follows.
synchronized void monitorSleep(long timeout) throws InterruptedException {
    long tf = System.currentTimeMillis() + timeout;
    while ((timeout = tf - System.currentTimeMillis()) > 0) wait(timeout);
}
notify() will cause a thread in the condition queue to retry its condition. If the condition allows, the thread is moved to the monitor queue. Which thread is notified is not specified, since Java doesn't do that scheduling; hence the following method should be used instead.
notifyAll() does the same as notify() but for every thread in the condition queue. USE THIS INSTEAD OF notify().
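The pieces fit together in the classic bounded-buffer monitor (an illustrative sketch, not course code): wait() sits inside a while-loop guard, and notifyAll() follows every state change.

```java
// A bounded-buffer monitor: put() blocks while full, get() while empty.
class Buffer {
    private final int[] data = new int[8];
    private int count = 0, head = 0, tail = 0;

    synchronized void put(int x) throws InterruptedException {
        while (count == data.length) wait();  // full: join condition queue
        data[tail] = x;
        tail = (tail + 1) % data.length;
        count++;
        notifyAll();                          // wake any waiting get()
    }

    synchronized int get() throws InterruptedException {
        while (count == 0) wait();            // empty: wait for a put()
        int x = data[head];
        head = (head + 1) % data.length;
        count--;
        notifyAll();                          // wake any waiting put()
        return x;
    }
}
```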
java.lang.Thread
Using threads in Java requires the class java.lang.Thread. A thread is started using the method start(). A thread must have a runnable object (an object that implements the interface java.lang.Runnable); however, extending Thread takes care of this, since Thread already implements Runnable.
interrupt() – sets a flag in the thread that can be checked and cleared with interrupted(); it also wakes a thread blocked in wait() or sleep() by making the call throw InterruptedException. All threads that need to be able to be killed must actively check interrupted(). E.g.
public void run() {
while(!interrupted()) {
...
...
}
}
sleep(long millis) – makes the calling thread wait for the given number of milliseconds, telling the OS that it is OK to switch threads. sleep() is static and always puts the currently executing thread to sleep, no matter which thread object it is called through.
isAlive() – false if the thread has not yet been started or has terminated; true if it is active at the time of the check. Since a thread cannot be restarted, isAlive() returning false right after a call to start() means the thread must already have terminated.
join() – waits until the thread has terminated, i.e. until isAlive() would return false, before continuing.
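A sketch of start()/join() (illustrative class): after a successful join(), isAlive() is guaranteed to be false, and everything the joined thread wrote is visible to the joiner.

```java
// start() runs the thread; join() blocks until it has terminated.
class JoinDemo {
    static int result = 0;

    static boolean demo() {
        Thread t = new Thread(() -> result = 42);
        t.start();
        try {
            t.join();  // block until t has terminated
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // join() also establishes visibility of t's writes.
        return !t.isAlive() && result == 42;
    }
}
```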