

Chapter 25. Boost.MPI

Douglas Gregor

Matthias Troyer

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

Table of Contents

Introduction
Getting started
MPI Implementation
Configure and Build
Using Boost.MPI
Tutorial
Point-to-Point communication
Collective operations
Managing communicators
Cartesian communicator
Reference
Python Bindings

Introduction

Boost.MPI is a library for message passing in high-performance parallel applications. A Boost.MPI program is one or more processes that can communicate either via sending and receiving individual messages (point-to-point communication) or by coordinating as a group (collective communication). Unlike communication in threaded environments or using a shared-memory library, Boost.MPI processes can be spread across many different machines, possibly with different operating systems and underlying architectures.
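For orientation, a minimal Boost.MPI program looks roughly like the sketch below: each process initializes the MPI environment, obtains the communicator containing all processes, and reports its rank. How the program is built and launched (for example, via mpirun) depends on the local MPI implementation.

    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);  // initializes the underlying MPI library
      mpi::communicator world;           // the communicator containing all processes

      std::cout << "I am process " << world.rank()
                << " of " << world.size() << "." << std::endl;
      return 0;
    }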

Boost.MPI is not a completely new parallel programming library. Rather, it is a C++-friendly interface to the standard Message Passing Interface (MPI), the most popular library interface for high-performance, distributed computing. MPI defines a library interface, available from C, Fortran, and C++, for which there are many MPI implementations. Although there exist C++ bindings for MPI, they offer little functionality over the C bindings. The Boost.MPI library provides an alternative C++ interface to MPI that better supports modern C++ development styles, including complete support for user-defined data types and C++ Standard Library types, arbitrary function objects for collective algorithms, and the use of modern C++ library techniques to maintain maximal efficiency.
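As a sketch of that C++-friendly style, the following example exchanges std::string values between two processes using the communicator's send and recv members; the message tags 0 and 1 are arbitrary values chosen for this illustration, and the program assumes it is launched with at least two processes.

    #include <boost/mpi.hpp>
    #include <boost/serialization/string.hpp>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      if (world.rank() == 0) {
        world.send(1, 0, std::string("Hello"));  // destination 1, tag 0
        std::string msg;
        world.recv(1, 1, msg);                   // source 1, tag 1
        std::cout << msg << "!" << std::endl;
      } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);
        std::cout << msg << ", " << std::flush;
        world.send(0, 1, std::string("world"));
      }
      return 0;
    }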

At present, Boost.MPI supports the majority of functionality in MPI 1.1. The thin abstractions in Boost.MPI allow one to easily combine it with calls to the underlying C MPI library. Boost.MPI currently supports:

  • Communicators: Boost.MPI supports the creation, destruction, cloning, and splitting of MPI communicators, along with manipulation of process groups.
  • Point-to-point communication: Boost.MPI supports point-to-point communication of primitive and user-defined data types through send and receive operations, with both blocking and non-blocking interfaces.
  • Collective communication: Boost.MPI supports collective operations such as reduce and gather with both built-in and user-defined data types and function objects (see the first sketch after this list).
  • MPI Datatypes: Boost.MPI can build MPI data types for user-defined types using the Boost.Serialization library.
  • Separating structure from content: Boost.MPI can transfer the shape (or "skeleton") of complex data structures (lists, maps, etc.) and then separately transfer their content. This facility optimizes for cases where the data within a large, static data structure needs to be transmitted many times (see the second sketch after this list).

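The collective interface can be illustrated with a short sketch modelled on the reduction pattern from the tutorial: every process contributes one integer, and the root combines them with a standard function object. The contributed values and the choice of std::plus are arbitrary for the illustration.

    #include <boost/mpi.hpp>
    #include <functional>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      int local = world.rank() + 1;  // each process contributes one value

      if (world.rank() == 0) {
        int total = 0;
        // Combine every process's value at the root with a standard function object.
        mpi::reduce(world, local, total, std::plus<int>(), 0);
        std::cout << "Sum across " << world.size()
                  << " processes: " << total << std::endl;
      } else {
        mpi::reduce(world, local, std::plus<int>(), 0);
      }
      return 0;
    }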
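The skeleton/content facility can be sketched as follows, assuming a simple std::list<int> whose size and update loop are chosen purely for illustration (and, again, at least two processes): the structure of the list is transmitted once, after which only its element values are resent.

    #include <boost/mpi.hpp>
    #include <boost/mpi/skeleton_and_content.hpp>
    #include <boost/serialization/list.hpp>
    #include <list>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::list<int> values;  // shape stays fixed while the values change

      if (world.rank() == 0) {
        values.assign(100, 0);                    // build the structure once
        world.send(1, 0, mpi::skeleton(values));  // transmit only its shape
        mpi::content c = mpi::get_content(values);
        for (int step = 0; step < 3; ++step) {
          values.front() = step;                  // only the data changes
          world.send(1, 1, c);                    // resend the content alone
        }
      } else if (world.rank() == 1) {
        world.recv(0, 0, mpi::skeleton(values));  // rebuild the same structure
        mpi::content c = mpi::get_content(values);
        for (int step = 0; step < 3; ++step)
          world.recv(0, 1, c);                    // receive the updated data
      }
      return 0;
    }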
Boost.MPI can be accessed either through its native C++ bindings or through its alternative Python interface.

Last revised: April 11, 2018 at 14:08:07 GMT

