|Version 14 (modified by simonmar@…, 11 years ago)|
on the wiki:
- The Control.Concurrent module
Papers and other docs:
- Concurrent Haskell (the original paper, including a semantics)
- Extending the Haskell FFI with Concurrency (a specification of the interaction between concurrency and the FFI, with a semantics)
- A Draft report addendum (a shorter version of the above paper).
- A Poor Man's Concurrency Monad (Hugs' implementation of Concurrency)
- Software Transactional Memory
- Vital for some modern applications; large applications commonly require it.
- The existing MVar implementation is stable, well understood, and tested.
- Imposes non-trivial implementation constraints.
- Providing 'select' and non-blocking I/O, both of which most systems offer as primitives, would be enough to let people implement something like it themselves in Haskell.
- Things like the 'poor man's concurrency monad' can achieve some of the benefits
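The "poor man's concurrency monad" can be sketched in a few lines, following the general idea of Claessen's paper. The names below (`C`, `Atom`, `Fork`, `schedule`) are illustrative, not the paper's exact API: threads are continuation-passing computations over a small `Action` datatype, and a round-robin scheduler interleaves their atomic steps.

```haskell
-- A thread is a sequence of atomic IO steps, possibly forking
-- other threads, and ending in Stop.
data Action
  = Atom (IO Action)      -- one atomic IO step, then a continuation
  | Fork Action Action    -- spawn a second thread
  | Stop                  -- thread finished

-- The concurrency monad: continuation-passing style over Action.
newtype C a = C { runC :: (a -> Action) -> Action }

instance Functor C where
  fmap f (C g) = C (\k -> g (k . f))
instance Applicative C where
  pure x = C (\k -> k x)
  C f <*> C x = C (\k -> f (\g -> x (k . g)))
instance Monad C where
  C g >>= f = C (\k -> g (\a -> runC (f a) k))

-- Lift an IO action into one atomic step.
atom :: IO a -> C a
atom io = C (\k -> Atom (fmap k io))

-- Spawn a thread; the child ends in Stop, the parent continues.
fork :: C () -> C ()
fork p = C (\k -> Fork (runC p (\_ -> Stop)) (k ()))

-- Round-robin scheduler: run one step of the front thread, then
-- move its continuation to the back of the queue.
schedule :: [Action] -> IO ()
schedule []     = return ()
schedule (a:as) = case a of
  Atom io    -> do a' <- io; schedule (as ++ [a'])
  Fork a1 a2 -> schedule (as ++ [a1, a2])
  Stop       -> schedule as

run :: C () -> IO ()
run p = schedule [runC p (\_ -> Stop)]

-- Demo: the two threads' output interleaves step by step.
main :: IO ()
main = run $ do
  fork (mapM_ (atom . putStrLn) ["a1", "a2", "a3"])
  mapM_ (atom . putStrLn) ["b1", "b2", "b3"]
```

Because every `atom` is a context-switch point, this achieves cooperative interleaving in a completely standard, single-threaded Haskell implementation.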
- Standardise on Concurrent Haskell without STM. It is our view that even in the presence of STM, MVars offer functionality that is distinct from STM and separately useful, so this leaves room for growth.
- Use the semantics from Extending the Haskell FFI with Concurrency
- Standardise a way to write thread-safe libraries that work with implementations that don't provide full concurrency support.
- Decide whether the Haskell' report includes Concurrency (with a separate NoConcurrency addendum to specify how implementations without concurrency behave), or whether Concurrency is specified in a separate addendum.
- Decide how much pre-emption is acceptable, and figure out how to specify this.
- Require bound thread support, or make it optional? (YHC has concurrency with non-blocking foreign calls, but doesn't have bound threads as yet.)
In order to write library code that is thread-safe when run in a multi-threaded environment, two things are needed:
- (1) a way to protect mutable state against race conditions
- (2) a way to declare that foreign calls are blocking
For (1), we have two choices:
- Provide MVars. A non-concurrent implementation might implement them in terms of IORef, for example.
- Provide STM. Not entirely trivial to implement, even in a single-threaded implementation, because exceptions have to abort a transaction. Ross Paterson provided an implementation.
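The IORef encoding could look like this sketch (hypothetical names; since a single-threaded implementation has no other thread that could ever fill an empty MVar, operations that would block are reported as deadlock errors rather than waiting):

```haskell
import Data.IORef

-- MVars for a single-threaded implementation, built on IORef.
newtype MVar' a = MVar' (IORef (Maybe a))

newMVar' :: a -> IO (MVar' a)
newMVar' x = fmap MVar' (newIORef (Just x))

newEmptyMVar' :: IO (MVar' a)
newEmptyMVar' = fmap MVar' (newIORef Nothing)

takeMVar' :: MVar' a -> IO a
takeMVar' (MVar' r) = do
  m <- readIORef r
  case m of
    Just x  -> writeIORef r Nothing >> return x
    -- No other thread exists to fill it: blocking = deadlock.
    Nothing -> ioError (userError "takeMVar': deadlock on empty MVar")

putMVar' :: MVar' a -> a -> IO ()
putMVar' (MVar' r) x = do
  m <- readIORef r
  case m of
    Nothing -> writeIORef r (Just x)
    Just _  -> ioError (userError "putMVar': deadlock on full MVar")

main :: IO ()
main = do
  v <- newMVar' (0 :: Int)
  x <- takeMVar' v
  putMVar' v (x + 1)
  takeMVar' v >>= print
```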
For (2), one option is ForeignBlocking.
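For point (1), a minimal sketch using the standard Control.Concurrent API: an MVar serialises access to shared state, so concurrent increments cannot lose updates. The worker/`done` signalling pattern here is just one way to wait for the threads, not the only one.

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM_)

-- modifyMVar_ takes the MVar, applies the update, and puts it back,
-- so each increment is atomic with respect to the other threads.
increment :: MVar Int -> IO ()
increment c = modifyMVar_ c (\n -> return (n + 1))

main :: IO ()
main = do
  c    <- newMVar 0
  done <- newEmptyMVar
  -- 10 threads, each incrementing 1000 times.
  forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
    replicateM_ 1000 (increment c)
    putMVar done ()
  replicateM_ 10 (takeMVar done)  -- wait for all workers
  readMVar c >>= print
```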
Pre-emption means that (1) threads have priority levels, (2) a higher-priority thread can steal the processor from a currently running lower-priority thread, and (3) it can do so as soon as it needs to, without waiting for some "safe" synchronisation point. By these criteria, none of the current Haskell implementations is pre-emptive, because no API assigns priorities to threads. So let's try to avoid using the term.
Fairness can be defined by two main criteria:
- No runnable process will be indefinitely delayed.
- No thread can be blocked indefinitely on an MVar unless another thread holds the MVar indefinitely.
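The second criterion in action, as a sketch with the standard Control.Concurrent API: a thread blocked in takeMVar on an empty MVar must be woken once another thread performs a put, rather than being overlooked forever.

```haskell
import Control.Concurrent

main :: IO ()
main = do
  box    <- newEmptyMVar
  result <- newEmptyMVar
  -- The child blocks in takeMVar until main fills the MVar;
  -- fairness says this wake-up must eventually happen.
  _ <- forkIO (takeMVar box >>= putMVar result)
  putMVar box "woken"
  takeMVar result >>= putStrLn
```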
Cooperative scheduling describes an implementation in which it is the programmer's responsibility to insert context switch points in the code. An implementation that only provides cooperative scheduling cannot satisfy the fairness properties given above. A programmer who has access to all the code may be able to insert enough context switch points to satisfy fairness, but this isn't always possible.
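Control.Concurrent's yield is exactly such a programmer-inserted context-switch point. Under purely cooperative scheduling, a tight loop without a yield (or any other switch point) could starve every other thread; this sketch only illustrates the API, with the loop yielding once per iteration.

```haskell
import Control.Concurrent
import Data.IORef

main :: IO ()
main = do
  ref  <- newIORef (0 :: Int)
  done <- newEmptyMVar
  _ <- forkIO $ do
    -- yield after each step: an explicit context-switch point,
    -- which a cooperative implementation relies on for progress
    -- of the other threads.
    mapM_ (\_ -> modifyIORef ref (+ 1) >> yield) [1 .. 100 :: Int]
    putMVar done ()
  takeMVar done          -- wait for the worker to finish
  readIORef ref >>= print
```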
Concurrent foreign call means a foreign call that should run concurrently with other Haskell threads. It is a requirement of fairness (above) that a foreign call not prevent other threads from making progress indefinitely. Note that we used to use the term non-blocking foreign call here, but that led to confusion with "foreign calls that block", i.e. foreign calls that may wait indefinitely for some resource, such as reading data from a socket. "Foreign calls that block" are the main reason for wanting support for concurrent foreign calls.
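As a GHC-specific illustration (assuming the threaded runtime and a POSIX system providing unistd.h), the `safe` and `unsafe` annotations on foreign imports correspond roughly to concurrent and non-concurrent calls:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types

-- A "safe" import is a concurrent foreign call: with GHC's threaded
-- runtime, other Haskell threads keep running while the C function
-- executes, so it is suitable for calls that may block (like sleep).
foreign import ccall safe "unistd.h sleep"
  c_sleep :: CUInt -> IO CUInt

-- An "unsafe" import has less overhead but is non-concurrent:
-- while it runs, no other Haskell thread on that capability can be
-- scheduled, so it must not block.
foreign import ccall unsafe "math.h floor"
  c_floor :: CDouble -> IO CDouble

main :: IO ()
main = do
  _ <- c_sleep 0
  c_floor 3.7 >>= print
```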
Scheduling. All current (and likely future) implementations of concurrency in Haskell use non-preemptive scheduling. That is, there are explicit time points at which any one thread can yield, allowing other threads to run. The only differences between implementations are in the granularity and positioning of the yield.
- For Hugs, yield is inserted at certain I/O actions.
- For GHC, yield is inserted after some count of allocations.
- For YHC, yield is inserted after some count of bytecode instructions.
Arguably, Hugs has made the wrong choice from a fairness point of view. It would be possible to make Hugs yield more often, such as in the IO monad's bind operator, but even this wouldn't be quite enough for fairness, because a thread might hang indefinitely in a non-IO computation. Yielding outside the IO monad in Hugs doesn't seem possible without completely overhauling the concurrency implementation.
There are several levels of concurrency support, each requiring successively more implementation support and implying more implementation overhead:
- The report says nothing about concurrency at all.
- Enough is specified to allow people to write completely portable programs and libraries that, while they may not depend on concurrency, will not break in its presence. See "Thread-safety" above.
- Programs that need concurrency can be written, but perhaps not as transparently as they currently can be with GHC. This level would include everything needed to write concurrent programs, without anything that implies a run-time or implementation overhead in the non-concurrent case.