dispatch_apply(3) | Library Functions Manual | dispatch_apply(3)
NAME
dispatch_apply — schedule blocks for iterative execution
SYNOPSIS
#include <dispatch/dispatch.h>

void
dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t));

void
dispatch_apply_f(size_t iterations, dispatch_queue_t queue, void *context, void (*function)(void *, size_t));
DESCRIPTION
The dispatch_apply() function provides data-level concurrency through a "for (;;)" loop-like primitive:

size_t iterations = 10;

// 'idx' is zero indexed, just like:
// for (idx = 0; idx < iterations; idx++)
dispatch_apply(iterations, DISPATCH_APPLY_AUTO, ^(size_t idx) {
        printf("%zu\n", idx);
});
Although any queue can be used, it is strongly recommended to use DISPATCH_APPLY_AUTO as the queue argument to both dispatch_apply() and dispatch_apply_f(), as shown in the example above, since this allows the system to automatically use worker threads that match the configuration of the current thread as closely as possible. No assumptions should be made about which global concurrent queue will be used.
Like a "for (;;)" loop, the dispatch_apply() function is synchronous. If asynchronous behavior is desired, wrap the call to dispatch_apply() with a call to dispatch_async() against another queue.
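For example, a minimal sketch of this pattern (the choice of global queue and the helper name process_async are illustrative, not part of the dispatch API):

#include <dispatch/dispatch.h>
#include <stdio.h>

// Illustrative: start the iteration without blocking the caller.
void
process_async(size_t count)
{
        dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

        dispatch_async(q, ^{
                // dispatch_apply() blocks this worker thread, not the caller.
                dispatch_apply(count, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                        printf("%zu\n", idx);
                });
                // Completion work may be submitted from here, since all
                // iterations have finished by this point.
        });
}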
Sometimes, when the block passed to dispatch_apply() is simple, the use of striding can tune performance. Calculating the optimal stride is best left to experimentation. Start with a stride of one and work upwards until the desired performance is achieved (perhaps using a power of two search):

#define STRIDE 3

dispatch_apply(count / STRIDE, DISPATCH_APPLY_AUTO, ^(size_t idx) {
        size_t j = idx * STRIDE;
        size_t j_stop = j + STRIDE;
        do {
                printf("%zu\n", j++);
        } while (j < j_stop);
});

size_t i;
for (i = count - (count % STRIDE); i < count; i++) {
        printf("%zu\n", i);
}
IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied reference on the target queue. In other words, the synchronous function borrows the reference of the calling function (this is valid because the calling function is blocked waiting for the result of the synchronous function, and therefore cannot modify the reference count of the target queue until after the synchronous function has returned).
This is in contrast to asynchronous functions which must retain both the block and target queue for the duration of the asynchronous operation (as the calling function may immediately release its interest in these objects).
FUNDAMENTALS
dispatch_apply() and dispatch_apply_f() attempt to quickly create enough worker threads to efficiently iterate work in parallel. By contrast, a loop that passes work items individually to dispatch_async() or dispatch_async_f() incurs more overhead and does not express the desired parallel execution semantics to the system, so it may not create an optimal number of worker threads for a parallel workload. For this reason, prefer dispatch_apply() or dispatch_apply_f() when parallel execution is important.
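As an illustration of this point, a single dispatch_apply() call expresses the whole parallel workload at once (the array and the square_all helper below are hypothetical, used only for the sketch):

#include <dispatch/dispatch.h>
#include <stddef.h>

// Hypothetical helper: squares every element of 'values' in place.
// Each invocation touches only its own index, so the iterations are
// independent and safe to run in parallel.
static void
square_all(double *values, size_t count)
{
        dispatch_apply(count, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                values[idx] *= values[idx];
        });
        // All iterations have completed when dispatch_apply() returns.
}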
The dispatch_apply() function is a wrapper around dispatch_apply_f().
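For plain C code that does not use blocks, dispatch_apply_f() can be called directly with a context pointer; the context structure and function names below are hypothetical:

#include <dispatch/dispatch.h>
#include <stddef.h>

// Hypothetical context passed to each invocation.
struct scale_ctx {
        const double *input;
        double *output;
};

// Worker invoked once per index with the shared context.
static void
scale_one(void *context, size_t idx)
{
        struct scale_ctx *ctx = context;
        ctx->output[idx] = ctx->input[idx] * 2.0;
}

static void
scale_all(const double *input, double *output, size_t count)
{
        struct scale_ctx ctx = { input, output };
        dispatch_apply_f(count, DISPATCH_APPLY_AUTO, &ctx, scale_one);
}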
CAVEATS
Unlike dispatch_async(), a block submitted to dispatch_apply() is expected to be either independent or dependent only on work already performed in lower-indexed invocations of the block. If the block's index dependency is non-linear, it is recommended to use a for-loop around invocations of dispatch_async().
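A minimal sketch of that alternative, assuming a hypothetical per-index worker process_item() and using a dispatch group to wait for the asynchronous work to finish:

#include <dispatch/dispatch.h>

void process_item(size_t idx);    // hypothetical per-index worker

static void
process_all(size_t count)
{
        dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();

        for (size_t idx = 0; idx < count; idx++) {
                dispatch_group_async(group, q, ^{
                        // Each block captures its own copy of 'idx'.
                        process_item(idx);
                });
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);    // balances dispatch_group_create() in plain C
}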
SEE ALSO
dispatch(3), dispatch_async(3), dispatch_queue_create(3)
Darwin | May 1, 2009 | Darwin