Design

Library Architecture

The Boost.Compute library consists of several different components. The core layer provides a "thin" C++ wrapper over the OpenCL API. This includes classes to manage OpenCL objects such as device, kernel, and command_queue.
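
For example, the core classes alone are enough to select a device and set up a context and command queue. The following is a minimal sketch (it assumes the default compute device returned by system::default_device(); error handling is omitted):

    #include <iostream>
    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // pick the default compute device (e.g. a GPU or multi-core CPU)
        compute::device device = compute::system::default_device();

        // create an OpenCL context and a command queue for that device
        compute::context context(device);
        compute::command_queue queue(context, device);

        // print which device was selected
        std::cout << "device: " << device.name() << std::endl;

        return 0;
    }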

On top of the core layer is a partial implementation of the C++ standard library providing common containers (e.g. vector<T>, array<T, N>) along with common algorithms (e.g. transform() and sort()).
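
As a sketch of how the container and algorithm layers fit together, the following copies data to the device, sorts it there, and copies the results back (again assuming the default device; error handling omitted):

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    #include <boost/compute/algorithm/copy.hpp>
    #include <boost/compute/algorithm/sort.hpp>
    #include <boost/compute/container/vector.hpp>
    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    int main()
    {
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // generate random data on the host
        std::vector<float> host_vector(1000);
        std::generate(host_vector.begin(), host_vector.end(), std::rand);

        // create a vector on the device and transfer the data to it
        compute::vector<float> device_vector(host_vector.size(), context);
        compute::copy(host_vector.begin(), host_vector.end(),
                      device_vector.begin(), queue);

        // sort the data on the device
        compute::sort(device_vector.begin(), device_vector.end(), queue);

        // copy the sorted values back to the host
        compute::copy(device_vector.begin(), device_vector.end(),
                      host_vector.begin(), queue);

        return 0;
    }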

The library also provides a number of "fancy" iterators (e.g. transform_iterator and permutation_iterator) which enhance the functionality of the standard algorithms.
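
For instance, a transform_iterator applies a function lazily as an algorithm reads its input, which avoids materializing an intermediate container. The sketch below sums the absolute values of a device vector; it assumes the fabs<float> function object aggregated in <boost/compute/functional.hpp>:

    #include <iostream>

    #include <boost/compute/algorithm/accumulate.hpp>
    #include <boost/compute/container/vector.hpp>
    #include <boost/compute/core.hpp>
    #include <boost/compute/functional.hpp>
    #include <boost/compute/iterator/transform_iterator.hpp>

    namespace compute = boost::compute;

    int main()
    {
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // transfer some signed values to the device
        float host_data[] = { -1.5f, 2.0f, -3.5f, 4.0f, -5.0f };
        compute::vector<float> vec(host_data, host_data + 5, queue);

        // sum fabs(x) over the vector; the transform_iterator applies
        // fabs() on the fly, so no temporary vector is needed
        float sum = compute::accumulate(
            compute::make_transform_iterator(vec.begin(), compute::fabs<float>()),
            compute::make_transform_iterator(vec.end(), compute::fabs<float>()),
            0.0f,
            queue
        );

        std::cout << "sum: " << sum << std::endl; // prints 16

        return 0;
    }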

Boost.Compute also supplies a number of facilities for interoperation with other C and C++ libraries. See the section on interoperability for more information.
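
At the lowest level, the core wrapper classes expose their underlying OpenCL handles, so Boost.Compute objects can be handed to existing code written against the plain OpenCL C API. A minimal sketch, using command_queue::get() to obtain the raw cl_command_queue (error handling omitted):

    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    int main()
    {
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // get() returns the raw OpenCL handle, which can be passed to
        // any function from the OpenCL C API
        cl_command_queue raw_queue = queue.get();
        clFinish(raw_queue);

        return 0;
    }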

See the API Overview section for a full list of functions, classes, and macros provided by Boost.Compute.

Why OpenCL

Boost.Compute uses OpenCL as its interface for executing code on parallel devices such as GPUs and multi-core CPUs.

OpenCL was chosen for a number of reasons:

  • It is vendor-neutral and works with standard C/C++: it does not require a special compiler, non-standard pragmas, or compiler extensions.
  • It is not just another parallel-library abstraction layer; it provides direct access to the underlying hardware.
  • Its runtime compilation model allows kernels to be optimized and tuned dynamically for the device present when the application is run, rather than the device that was present when the code was compiled (which is often a separate machine).
  • Using OpenCL allows Boost.Compute to directly interoperate with other OpenCL libraries (such as VexCL and OpenCV), as well as existing code written with OpenCL.
  • The "thin" C++ wrapper provided by Boost.Compute allows the user to break-out and write their own custom kernels when the provided APIs are not suitable.
