Module rebar_parallel

This module contains a small parallel dispatch queue that takes a list of jobs and runs as many of them in parallel as there are schedulers online.

Description

This module contains a small parallel dispatch queue that takes a list of jobs and runs as many of them in parallel as there are schedulers online.

Original design by Max Fedorov in the rebar compiler, then generalised and extracted here to be reused in other circumstances.

It also contains an asynchronous version of the queue built with a naive pool, which allows similar semantics in worker definitions, but without requiring all tasks to be known ahead of time.

Function Index

pool/4: Create a pool using as many workers as there are schedulers, and for which tasks can be added by calling pool_task_async/2.
pool/5: Create a pool using PoolSize workers and for which tasks can be added by calling pool_task_async/2.
pool_task_async/2: Add a task to a pool.
pool_terminate/1: Mark the pool as terminated.
queue/5: Create a queue using as many workers as there are schedulers, that will spread all Task entries among them based on whichever is available first.

Function Details

pool/4

pool(WorkF, WArgs, Handler, HArgs) -> pid()

Create a pool using as many workers as there are schedulers, and for which tasks can be added by calling pool_task_async/2.

The value returned by the worker function WorkF for each task is then passed to a Handler, which either discards the result after performing a side-effectful operation (by returning ok), as in a lists:foreach/2 call, or returns a value that gets added to an accumulator (by returning {ok, Val}). The handler can return both types as required.

The accumulated list of values is in no specific order, depends on how tasks were scheduled and completed, and can only be obtained by calling pool_terminate/1.

The pool process is linked to its initial caller and will error out via the link if any task crashes or another invalid state is found.
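As a sketch of the intended flow, assuming WorkF is called as WorkF(Task, WArgs) and Handler as Handler(Result, HArgs) (these calling conventions are an assumption, not stated above; the task values are illustrative):

```erlang
WorkF = fun(N, _WArgs) -> N * N end,
Handler = fun(Square, _HArgs) -> {ok, Square} end,  % accumulate every result
Pool = rebar_parallel:pool(WorkF, [], Handler, []),
ok = rebar_parallel:pool_task_async(Pool, 1),
ok = rebar_parallel:pool_task_async(Pool, 2),
ok = rebar_parallel:pool_task_async(Pool, 3),
Squares = rebar_parallel:pool_terminate(Pool).
%% Squares should contain 1, 4 and 9, in no specific order.
```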

pool/5

pool(WorkF, WArgs, Handler, HArgs, PoolSize) -> pid()

Create a pool using PoolSize workers and for which tasks can be added by calling pool_task_async/2.

The value returned by the worker function WorkF for each task is then passed to a Handler, which either discards the result after performing a side-effectful operation (by returning ok), as in a lists:foreach/2 call, or returns a value that gets added to an accumulator (by returning {ok, Val}). The handler can return both types as required.

The accumulated list of values is in no specific order, depends on how tasks were scheduled and completed, and can only be obtained by calling pool_terminate/1.

The pool process is linked to its initial caller and will error out via the link if any task crashes or another invalid state is found.
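For example, to cap concurrency at two workers regardless of the scheduler count (a sketch under the same assumed calling conventions as pool/4; the file names are illustrative):

```erlang
WorkF = fun(Path, _WArgs) -> {Path, filelib:is_file(Path)} end,
Handler = fun(Res, _HArgs) -> {ok, Res} end,
Pool = rebar_parallel:pool(WorkF, [], Handler, [], 2),  % PoolSize = 2
ok = rebar_parallel:pool_task_async(Pool, "rebar.config"),
ok = rebar_parallel:pool_task_async(Pool, "rebar.lock"),
Results = rebar_parallel:pool_terminate(Pool).
```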

pool_task_async/2

pool_task_async(Pid::pid(), Task::term()) -> ok

Add a task to a pool. This call is asynchronous and performs no validation of whether the pool process exists. If the pool has already been terminated or is in the process of being terminated, the task may trigger the pool to abort and error out to point out invalid usage.

pool_terminate/1

pool_terminate(Pid::pid()) -> [term()]

Mark the pool as terminated. At this point it will stop accepting new tasks but will keep processing those that have been scheduled.

Once all tasks are complete and workers have shut down, the accumulated value will be returned.

Any process may terminate the pool, and the pool may only be terminated once.
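A handler may mix both return shapes; only the {ok, Val} results end up in the list returned by pool_terminate/1. A sketch, again assuming WorkF(Task, WArgs) and Handler(Result, HArgs) calling conventions:

```erlang
WorkF = fun(N, _WArgs) -> N end,
%% Discard odd results (ok) and accumulate even ones ({ok, Val}).
Handler = fun(N, _HArgs) when N rem 2 =:= 0 -> {ok, N};
             (_, _HArgs) -> ok
          end,
Pool = rebar_parallel:pool(WorkF, [], Handler, []),
[rebar_parallel:pool_task_async(Pool, N) || N <- lists:seq(1, 6)],
Evens = rebar_parallel:pool_terminate(Pool).
%% Evens should contain 2, 4 and 6, in no specific order.
```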

queue/5

queue(Tasks::[Task], WorkF, WArgs, Handler, HArgs) -> [Ret]

Create a queue using as many workers as there are schedulers, that will spread all Task entries among them based on whichever is available first.

The value returned by the worker function WorkF for each task is then passed to a Handler, which either discards the result after performing a side-effectful operation (by returning ok), as in a lists:foreach/2 call, or returns a value that gets added to an accumulator (by returning {ok, Val}). The handler can return both types as required.

The accumulated list of values is in no specific order and depends on how tasks were scheduled and completed.
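Unlike the pool, the queue takes all its tasks up front and returns once they are done. A sketch, assuming WorkF is called as WorkF(Task, WArgs) and Handler as Handler(Result, HArgs) (an assumption; the task values are illustrative):

```erlang
Results = rebar_parallel:queue(
    [1, 2, 3, 4],
    fun(N, _WArgs) -> N * 10 end, [],
    fun(Res, _HArgs) -> {ok, Res} end, []
).
%% Results should contain 10, 20, 30 and 40, in no specific order.
```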


Generated by EDoc