#include <boost/log/sinks/text_ostream_backend.hpp>
The text output stream sink backend is the most generic backend provided by the library out of the box. The backend is implemented in the basic_text_ostream_backend class template (the text_ostream_backend and wtext_ostream_backend convenience typedefs are provided for narrow and wide character support). It supports formatting log records into strings and writing them to one or several attached streams. Each attached stream receives the same formatted output, so if you need to format log records differently for different streams, you will need to create several sinks, each with its own formatter, as sketched below.
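For example, a minimal sketch of this arrangement might look as follows. It reuses the namespace aliases (logging, sinks, expr, keywords) and headers assumed by the other examples in this section, and it presumes that the "TimeStamp" attribute has been registered, e.g. with logging::add_common_attributes(); the function and file names are illustrative.

void init_two_differently_formatted_sinks()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;

    // First sink: console output with a terse format
    boost::shared_ptr< sinks::text_ostream_backend > console_backend =
        boost::make_shared< sinks::text_ostream_backend >();
    console_backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));

    boost::shared_ptr< sink_t > console_sink(new sink_t(console_backend));
    console_sink->set_formatter(expr::stream << expr::smessage);

    // Second sink: file output with a more verbose format
    boost::shared_ptr< sinks::text_ostream_backend > file_backend =
        boost::make_shared< sinks::text_ostream_backend >();
    file_backend->add_stream(
        boost::shared_ptr< std::ostream >(new std::ofstream("verbose.log")));

    boost::shared_ptr< sink_t > file_sink(new sink_t(file_backend));
    file_sink->set_formatter(
        expr::stream
            << expr::attr< boost::posix_time::ptime >("TimeStamp") << " "
            << expr::smessage);

    // Both sinks receive the same records but format them differently
    core->add_sink(console_sink);
    core->add_sink(file_sink);
}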
The backend also provides a feature that may come in useful when debugging your application. With the auto_flush method one can tell the sink to automatically flush the buffers of all attached streams after each log record is written. This will, of course, degrade logging performance, but in the event of an application crash there is a good chance that the last log records will not be lost.
void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend and attach a couple of streams to it
    boost::shared_ptr< sinks::text_ostream_backend > backend =
        boost::make_shared< sinks::text_ostream_backend >();
    backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));
    backend->add_stream(
        boost::shared_ptr< std::ostream >(new std::ofstream("sample.log")));

    // Enable auto-flushing after each log record written
    backend->auto_flush(true);

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));
    core->add_sink(sink);
}
#include <boost/log/sinks/text_file_backend.hpp>
Although it is possible to write logs into files with the text stream backend, the library also offers a special sink backend with an extended set of features suitable for file-based logging, such as file rotation, collection of rotated files and scanning for previously written files; these features are described below.
The backend is called text_file_backend.
Warning: This sink uses Boost.Filesystem internally, which may cause problems on process termination. See here for more details.
File rotation happens when the sink detects that one or more rotation conditions are met and a new file needs to be created; it is implemented by the sink backend.
It is important to note that there are three kinds of file names or paths involved in this process:

- The active file name pattern, which is specified with the file_name named parameter of the sink backend constructor or by calling the set_file_name_pattern method.
- The target file name pattern, which is specified with the target_file_name named parameter or by calling the set_target_file_name_pattern method.
- The actual file names, which are generated from these patterns (the active file name when the file is created, and the target file name when it is rotated, as described further below).
The file name patterns and rotation conditions can be specified when
the text_file_backend
backend is constructed.
void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            // active file name pattern
            keywords::file_name = "file.log",
            // target file name pattern
            keywords::target_file_name = "file_%5N.log",
            // rotate the file upon reaching 5 MiB size...
            keywords::rotation_size = 5 * 1024 * 1024,
            // ...or every day, at noon, whichever comes first
            keywords::time_based_rotation = sinks::file::rotation_at_time_point(12, 0, 0)
        );

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_file_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));
    core->add_sink(sink);
}
Note: The file size at rotation can be imprecise. The implementation counts the number of bytes written to the file, but the underlying API can introduce additional auxiliary data, which would increase the log file's actual size on disk. For instance, it is well known that Windows and DOS operating systems treat new-line characters specially: each new-line character is written as a two-byte sequence 0x0D 0x0A instead of a single 0x0A. Other platform-specific character translations are also known. The actual size on disk can also be less than the number of written characters on compressed filesystems.
Time-based rotation is not limited to time points only. The following options are available out of the box:
- Time point rotations, implemented by the rotation_at_time_point class. This kind of rotation takes place whenever the specified time point is reached. The following variants are available:
  - Rotation at a specified time of day, for example at noon: sinks::file::rotation_at_time_point(12, 0, 0)
  - Rotation on a specified day of the week, at a specified time: sinks::file::rotation_at_time_point(date_time::Tuesday, 0, 0, 0); in case of midnight, the time can be omitted: sinks::file::rotation_at_time_point(date_time::Tuesday)
  - Rotation on a specified day of the month, at a specified time: sinks::file::rotation_at_time_point(gregorian::greg_day(1), 0, 0, 0); like with weekdays, midnight is implied: sinks::file::rotation_at_time_point(gregorian::greg_day(1))
- Time interval rotations, implemented by the rotation_at_time_interval class. With this predicate the rotation is not bound to any time point and happens as soon as the specified time interval since the previous rotation elapses. This is how to make rotations every hour: sinks::file::rotation_at_time_interval(posix_time::hours(1))
If none of the above applies, one can specify a custom predicate for time-based rotation. The predicate should take no arguments and return bool (a true return value indicates that the rotation should take place). The predicate will be called for every log record being written to the file.
bool is_it_time_to_rotate();

void init_logging()
{
    // ...
    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file.log",
            keywords::target_file_name = "file_%5N.log",
            keywords::time_based_rotation = &is_it_time_to_rotate
        );
    // ...
}
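The library does not define is_it_time_to_rotate; it only calls the predicate you supply. As a minimal sketch of one possible implementation, the predicate below rotates the file whenever some other part of the application has requested it through a flag. The flag and the request_log_rotation helper are illustrative, not part of Boost.Log.

// requires <boost/atomic.hpp>

// Illustrative flag; set from elsewhere in the application to request rotation
static boost::atomic< bool > g_rotation_requested(false);

// Call this, for example, from an administrative command handler
void request_log_rotation()
{
    g_rotation_requested.store(true, boost::memory_order_relaxed);
}

// The predicate passed via keywords::time_based_rotation. Keep it cheap:
// it is invoked for every log record written to the file.
bool is_it_time_to_rotate()
{
    return g_rotation_requested.exchange(false, boost::memory_order_relaxed);
}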
Note: The log file rotation takes place on an attempt to write a new log record to the file. Thus the time-based rotation is not a strict threshold, either; the rotation will take place as soon as the library detects that it should have happened.
In addition to time- and size-based file rotation, the backend also performs rotation on its destruction by default. This is done so that all log files end up collected in the target directory after program termination, and temporary log files don't pile up in the directory the sink backend writes to. This behavior can be disabled with the enable_final_rotation parameter of the backend constructor or the similarly named method of the backend:
void init_logging()
{
    // ...
    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file.log",
            keywords::target_file_name = "file_%5N.log",
            keywords::enable_final_rotation = false
        );
    // ...
}
Both active and target file name patterns may contain a number of wildcards, like the one you can see in the example above. Supported placeholders are:

- Date and time components, in a format compatible with Boost.DateTime (for example, %Y, %m, %d, %H, %M, %S).
- A file counter (%N) with an optional width specification in the printf-like format. The file counter will always be decimal, zero-filled to the specified width.
- A literal percent sign (%%).
A few quick examples:
Template | Expands to
---|---
file_%N.log | file_1.log, file_2.log...
file_%3N.log | file_001.log, file_002.log...
file_%Y%m%d.log | file_20080705.log, file_20080706.log...
file_%Y-%m-%d_%H-%M-%S.%N.log | file_2008-07-05_13-44-23.1.log, file_2008-07-06_16-00-10.2.log...
Important: Although all Boost.DateTime format specifiers will work, there are restrictions on some of them if you intend to scan for old log files. This functionality is discussed below.
Note that, as described above, active and target file names are generated at different points in time. Specifically, the active file name is generated when the log file is originally created, and the target file name - when the file is closed. Timestamps used to construct these file names will reflect that difference.
Tip: When file appending is needed, it is recommended to avoid any placeholders in the active file name pattern; otherwise appending won't happen because of the differing active log file names. You can use the target file name pattern to add a timestamp or counter to the log file after rotation; a sketch of such a configuration follows.
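For instance, a backend configured along these lines (the file names are illustrative) keeps the active file name stable across restarts, while the rotated copies still get unique names; appending itself is enabled via the open mode, described later in this section:

boost::shared_ptr< sinks::text_file_backend > backend =
    boost::make_shared< sinks::text_file_backend >(
        // no placeholders here, so a restarted process reopens the same file
        keywords::file_name = "current.log",
        // the rotated copy still gets a unique, timestamped name
        keywords::target_file_name = "archive_%Y%m%d_%H%M%S_%N.log",
        // appending to the previously written "current.log"
        keywords::open_mode = std::ios_base::out | std::ios_base::app,
        keywords::rotation_size = 5 * 1024 * 1024
    );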
The sink backend allows hooking into the file rotation process in order
to perform pre- and post-rotation actions. This can be useful to maintain
log file validity by writing headers and footers. For example, this is
how we could modify the init_logging
function from our previous examples in order to write logs into XML files:
// Complete file sink type
typedef sinks::synchronous_sink< sinks::text_file_backend > file_sink;

void write_header(sinks::text_file_backend::stream_type& file)
{
    file << "<?xml version=\"1.0\"?>\n<log>\n";
}

void write_footer(sinks::text_file_backend::stream_type& file)
{
    file << "</log>\n";
}

void init_logging()
{
    // Create a text file sink
    boost::shared_ptr< file_sink > sink(new file_sink(
        // the resulting file name pattern
        keywords::file_name = "%Y%m%d_%H%M%S_%5N.xml",
        // rotation size, in characters
        keywords::rotation_size = 16384
    ));

    sink->set_formatter
    (
        expr::format("\t<record id=\"%1%\" timestamp=\"%2%\">%3%</record>")
            % expr::attr< unsigned int >("RecordID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            // the log message has to be decorated, if it contains special characters
            % expr::xml_decor[ expr::stream << expr::smessage ]
    );

    // Set header and footer writing functors
    sink->locked_backend()->set_open_handler(&write_header);
    sink->locked_backend()->set_close_handler(&write_footer);

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}
After being closed, the rotated files can be collected. In order to do so, one has to set up a file collector by specifying the target directory in which to collect the rotated files and, optionally, size thresholds. For example, we can modify the init_logging function to place rotated files into a distinct directory and limit the total size of the files. Let's assume the following function is called by init_logging with the constructed sink:
void init_file_collecting(boost::shared_ptr< file_sink > sink)
{
    sink->locked_backend()->set_file_collector(sinks::file::make_collector(
        // the target directory
        keywords::target = "logs",
        // maximum total size of the stored files, in bytes
        keywords::max_size = 16 * 1024 * 1024,
        // minimum free space on the drive, in bytes
        keywords::min_free_space = 100 * 1024 * 1024,
        // maximum number of stored files
        keywords::max_files = 512
    ));
}
The max_size, min_free_space and max_files parameters are optional; the corresponding threshold will not be taken into account if the parameter is not specified.
One can create multiple file sink backends that collect files into the same target directory. In this case the most strict thresholds are combined for this target directory. The files in this directory will be erased in strict chronological order, without regard for which sink backend wrote them.
Warning: The collector does not resolve log file name clashes between different sink backends, so if a clash occurs the behavior is, in general, undefined. Depending on the circumstances, the files may overwrite each other or the operation may fail entirely.
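For instance, here is a sketch of two independent file backends collecting into the same "logs" directory with non-clashing file name patterns; the patterns and thresholds are illustrative:

void init_two_backends_one_target()
{
    boost::shared_ptr< sinks::text_file_backend > app_backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "app.log",
            keywords::target_file_name = "app_%5N.log");

    boost::shared_ptr< sinks::text_file_backend > audit_backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "audit.log",
            keywords::target_file_name = "audit_%5N.log");

    // Both collectors point at the same directory; the strictest of the
    // specified thresholds will apply to it
    app_backend->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",
        keywords::max_size = 64 * 1024 * 1024));
    audit_backend->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",
        keywords::max_size = 16 * 1024 * 1024));

    typedef sinks::synchronous_sink< sinks::text_file_backend > sink_t;
    logging::core::get()->add_sink(boost::make_shared< sink_t >(app_backend));
    logging::core::get()->add_sink(boost::make_shared< sink_t >(audit_backend));
}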
The file collector provides another useful feature. Suppose you ran your
application 5 times and you have 5 log files in the "logs"
directory. The file sink backend and file collector provide a scan_for_files
method that searches
the target directory for these files and takes them into account. So,
if it comes to deleting files, these files are not forgotten. What's
more, if a file name pattern in the backend involves a file counter,
scanning for older files allows updating the counter to the most recent
value. Here is the final version of our init_logging
function:
void init_logging()
{
    // Create a text file sink
    boost::shared_ptr< file_sink > sink(new file_sink(
        keywords::file_name = "%Y%m%d_%H%M%S_%5N.xml",
        keywords::rotation_size = 16384
    ));

    // Set up where the rotated files will be stored
    init_file_collecting(sink);

    // Upon restart, scan the directory for files matching the file_name pattern
    sink->locked_backend()->scan_for_files();

    sink->set_formatter
    (
        expr::format("\t<record id=\"%1%\" timestamp=\"%2%\">%3%</record>")
            % expr::attr< unsigned int >("RecordID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::xml_decor[ expr::stream << expr::smessage ]
    );

    // Set header and footer writing functors
    namespace bll = boost::lambda;
    sink->locked_backend()->set_open_handler
    (
        bll::_1 << "<?xml version=\"1.0\"?>\n<log>\n"
    );
    sink->locked_backend()->set_close_handler
    (
        bll::_1 << "</log>\n"
    );

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}
There are two methods of file scanning: the scan that involves file name matching against the target file name pattern (the default), and the scan that assumes that all files in the target directory are log files. The former imposes certain restrictions on the placeholders that can be used within the file name pattern: only the file counter placeholder and the following Boost.DateTime placeholders are supported: %y, %Y, %m, %d, %H, %M, %S, %f.
The latter scanning method has its own drawback: it does not allow updating the file counter in the backend. It is also considered more dangerous, as it may result in unintended file deletion, so be cautious. The all-files scanning method can be enabled by passing it as an additional parameter to the scan_for_files call:
// Look for all files in the target directory
backend->scan_for_files(sinks::file::scan_all);
When scanning for matching file names, if the target file name is not set then the active file name pattern is used instead.
The sink backend supports appending to previously written files (e.g. those left over from a previous run of your application). In order to enable this mode, one has to add std::ios_base::app to the file open mode used by the backend. This can be done with the open_mode named parameter of the backend constructor or the set_open_mode method.
void init_logging()
{
    // ...
    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file.log",
            keywords::target_file_name = "file_%5N.log",
            keywords::open_mode = std::ios_base::out | std::ios_base::app,
            keywords::enable_final_rotation = false
        );
    // ...
}
When initializing from settings, the "Append" parameter of the "TextFile" sink enables appending.
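As a minimal sketch of what such settings-based initialization might look like, assuming the configuration is loaded with logging::init_from_stream (the section name "Sinks.MyFile" is arbitrary, and the settings would normally be read from a file rather than a string):

#include <sstream>
#include <boost/log/utility/setup/from_stream.hpp>

void init_logging_from_settings()
{
    std::istringstream settings(
        "[Sinks.MyFile]\n"
        "Destination=TextFile\n"
        "FileName=file.log\n"
        "Append=true\n");
    logging::init_from_stream(settings);
}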
In order for file appending to actually happen, it is important that the name of the newly opened log file matches the previously written file. Otherwise, the sink will simply create a new file under the new name. There are several recommendations to follow when file appending is desirable; in particular, avoid placeholders in the active file name pattern, as noted in the tip above.
#include <boost/log/sinks/text_multifile_backend.hpp>
While the text stream and file backends are aimed at storing all log records into a single file or stream, this backend serves a different purpose. Assume we have a banking request processing application and we want the logs related to every single request to be placed into a separate file. If we can associate some attribute with the request identity, then the text_multifile_backend backend is the way to go.
Note: During its operation, the multi-file backend frequently opens and closes log files, which means that the cost of these operations on a given system will be significant to the logging performance. Windows, especially with antivirus software running, is known to have extremely expensive file open and close operations; for example, compared to Linux, closing a file can be on the order of hundreds of times slower, according to some reports. Consider creating multiple ...
void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_multifile_backend > backend =
        boost::make_shared< sinks::text_multifile_backend >();

    // Set up the file naming pattern
    backend->set_file_name_composer
    (
        sinks::file::as_file_name_composer(expr::stream
            << "logs/" << expr::attr< std::string >("RequestID") << ".log")
    );

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_multifile_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));

    // Set the formatter
    sink->set_formatter
    (
        expr::stream
            << "[RequestID: " << expr::attr< std::string >("RequestID")
            << "] " << expr::smessage
    );

    core->add_sink(sink);
}
You can see that we used a regular formatter in order to specify the file naming pattern. Now, every log record with a distinct value of the "RequestID" attribute will be stored in a separate file, no matter how many different requests are being processed by the application concurrently. You can also find the multiple_files example in the library distribution, which shows a similar technique to separate logs generated by different threads of the application.
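As a rough sketch of that idea (this is not the multiple_files example itself), the composer can key the file name on the standard "ThreadID" attribute, assuming it has been registered with logging::add_common_attributes() and that the attrs namespace alias used in the other examples is in scope:

backend->set_file_name_composer
(
    sinks::file::as_file_name_composer(
        expr::stream
            << "logs/thread_"
            << expr::attr< attrs::current_thread_id::value_type >("ThreadID")
            << ".log")
);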
If using formatters is not appropriate for some reason, you can provide
your own file name composer. The composer is a mere function object that
accepts a log record as a single argument and returns a value of the text_multifile_backend::path_type
type.
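A minimal sketch of such a composer is shown below; the "RequestID" attribute name and the fallback file name are assumptions of this example, not requirements of the library:

struct request_file_composer
{
    typedef sinks::text_multifile_backend::path_type result_type;

    result_type operator() (logging::record_view const& rec) const
    {
        // Derive the file path from the "RequestID" attribute value,
        // falling back to a common file when the attribute is absent
        logging::value_ref< std::string > id =
            logging::extract< std::string >("RequestID", rec);
        if (id)
            return result_type("logs") / (id.get() + ".log");
        else
            return result_type("logs/unknown.log");
    }
};

// ...
backend->set_file_name_composer(request_file_composer());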
Note: The multi-file backend has no knowledge of whether a particular file is going to be used or not. That is, if a log record has been written into file A, the library cannot tell whether there will be more records that fit into file A or not. This makes it impossible to implement file rotation and removal of unused files to free space on the file system. The user will have to implement such functionality himself.
#include <boost/log/sinks/text_ipc_message_queue_backend.hpp>
Sometimes it is convenient to pass log records between different processes on a local machine. For example, one could collect logs from multiple processes into a common logger process, or create a log viewer that is able to monitor running processes. To implement this idea, a sink backend that sends logs across processes is needed. The text interprocess sink backend puts formatted log messages into an interprocess message queue, which can then be retrieved and processed by another process. In particular, one may choose to encode a log record with various attribute values into a JSON- or XML-formatted text message, and then decode the message at the receiving side for processing such as filtering and displaying.
The backend is implemented by the text_ipc_message_queue_backend
class template which should be instantiated with an interprocess message
queue such as reliable_message_queue
.
The following example program illustrates a logger process.
BOOST_LOG_ATTRIBUTE_KEYWORD(a_timestamp, "TimeStamp", attrs::local_clock::value_type)
BOOST_LOG_ATTRIBUTE_KEYWORD(a_process_id, "ProcessID", attrs::current_process_id::value_type)
BOOST_LOG_ATTRIBUTE_KEYWORD(a_thread_id, "ThreadID", attrs::current_thread_id::value_type)

int main()
{
    try
    {
        typedef logging::ipc::reliable_message_queue queue_t;
        typedef sinks::text_ipc_message_queue_backend< queue_t > backend_t;
        typedef sinks::synchronous_sink< backend_t > sink_t;

        // Create a sink that is associated with the interprocess message queue
        // named "ipc_message_queue".
        boost::shared_ptr< sink_t > sink = boost::make_shared< sink_t >
        (
            keywords::name = logging::ipc::object_name(logging::ipc::object_name::user, "ipc_message_queue"),
            keywords::open_mode = logging::open_mode::open_or_create,
            keywords::capacity = 256,
            keywords::block_size = 1024,
            keywords::overflow_policy = queue_t::block_on_overflow
        );

        // Set the formatter
        sink->set_formatter
        (
            expr::stream << "[" << a_timestamp << "] ["
                << a_process_id << ":" << a_thread_id << "] "
                << expr::smessage
        );

        logging::core::get()->add_sink(sink);

        // Add the commonly used attributes, including TimeStamp, ProcessID and ThreadID
        logging::add_common_attributes();

        // Do some logging
        src::logger logger;
        for (unsigned int i = 1; i <= 10; ++i)
        {
            BOOST_LOG(logger) << "Message #" << i;
        }
    }
    catch (std::exception& e)
    {
        std::cout << "Failure: " << e.what() << std::endl;
    }

    return 0;
}
The same interprocess queue can be used to implement the receiving side as well. The following code displays the received log messages on the console.
int main()
{
    try
    {
        typedef logging::ipc::reliable_message_queue queue_t;

        // Create a message_queue_type object that is associated with the interprocess
        // message queue named "ipc_message_queue".
        queue_t queue
        (
            keywords::name = logging::ipc::object_name(logging::ipc::object_name::user, "ipc_message_queue"),
            keywords::open_mode = logging::open_mode::open_or_create,
            keywords::capacity = 256,
            keywords::block_size = 1024,
            keywords::overflow_policy = queue_t::block_on_overflow
        );

        std::cout << "Viewer process running..." << std::endl;

        // Keep reading log messages from the associated message queue and print them on the console.
        // queue.receive() will block if the queue is empty.
        std::string message;
        while (queue.receive(message) == queue_t::succeeded)
        {
            std::cout << message << std::endl;

            // Clear the buffer for the next message
            message.clear();
        }
    }
    catch (std::exception& e)
    {
        std::cout << "Failure: " << e.what() << std::endl;
    }

    return 0;
}
#include <boost/log/sinks/syslog_backend.hpp>
The syslog backend, as its name suggests, provides support for the syslog API that is available on virtually any UNIX-like platform. On Windows there exists at least one public implementation of the syslog client API. However, in order to provide maximum flexibility and better portability, the library offers built-in support for the syslog protocol described in RFC 3164. Thus, on Windows only the built-in implementation is supported, while on UNIX-like systems both the built-in and the system API based implementations are supported.
The backend is implemented in the syslog_backend
class. The backend supports formatting log records, and therefore requires
thread synchronization in the frontend. The backend also supports severity
level translation from the application-specific values to the syslog-defined
values. This is achieved with an additional function object, level mapper,
that receives a set of attribute values of each log record and returns
the appropriate syslog level value. This value is used by the backend to
construct the final priority value of the syslog record. The other component
of the syslog priority value, the facility, is constant for each backend
object and can be specified in the backend constructor arguments.
Level mappers can be written by library users to translate the application log levels to the syslog levels in the most suitable way. However, the library provides two mappers that will fit this need in obvious cases. The direct_severity_mapping class template provides a way to map values of some integral attribute directly to syslog levels, without any value conversion. The custom_severity_mapping class template adds some flexibility and allows mapping arbitrary values of some attribute to syslog levels.
Anyway, one example is better than a thousand words.
// Complete sink type
typedef sinks::synchronous_sink< sinks::syslog_backend > sink_t;

void init_native_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        // the logging facility
        keywords::facility = sinks::syslog::user,
        // the native syslog API should be used
        keywords::use_impl = sinks::syslog::native
    ));

    // Set the straightforward level translator for the "Severity" attribute of type int
    backend->set_severity_mapper(sinks::syslog::direct_severity_mapping< int >("Severity"));

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    core->add_sink(boost::make_shared< sink_t >(backend));
}

void init_builtin_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a new backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        // the logging facility
        keywords::facility = sinks::syslog::local0,
        // the built-in socket-based implementation should be used
        keywords::use_impl = sinks::syslog::udp_socket_based
    ));

    // Setup the target address and port to send syslog messages to
    backend->set_target_address("192.164.1.10", 514);

    // Create and fill in another level translator for the "MyLevel" attribute of type string
    sinks::syslog::custom_severity_mapping< std::string > mapping("MyLevel");
    mapping["debug"] = sinks::syslog::debug;
    mapping["normal"] = sinks::syslog::info;
    mapping["warning"] = sinks::syslog::warning;
    mapping["failure"] = sinks::syslog::critical;
    backend->set_severity_mapper(mapping);

    // Wrap it into the frontend and register in the core.
    core->add_sink(boost::make_shared< sink_t >(backend));
}
Please note that all syslog constants, as well as level extractors, are declared within the nested namespace syslog. The library will not accept (and does not declare in the backend interface) native syslog constants, which are in fact macros. Also note that the backend will default to the built-in implementation and the user logging facility if the corresponding constructor parameters are not specified.
#include <boost/log/sinks/debug_output_backend.hpp>
The Windows API has an interesting feature: a process, being run under a debugger, is able to emit messages that will be intercepted and displayed in the debugger window. For example, if an application is run under the Visual Studio IDE, it is able to write debug messages to the IDE window. The basic_debug_output_backend backend provides a simple way of emitting such messages. Additionally, in order to optimize application performance, a special filter is available that checks whether the application is being run under a debugger. Like many other sink backends, this backend also supports setting a formatter in order to compose the message text.
The usage is quite simple and straightforward:
// Complete sink type
typedef sinks::synchronous_sink< sinks::debug_output_backend > sink_t;

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create the sink. The backend requires synchronization in the frontend.
    boost::shared_ptr< sink_t > sink(new sink_t());

    // Set the special filter to the frontend
    // in order to skip the sink when no debugger is available
    sink->set_filter(expr::is_debugger_present());

    core->add_sink(sink);
}
Note that the sink backend is templated on the character type. This type
defines the Windows API version that is used to emit messages. Also, debug_output_backend
and wdebug_output_backend
convenience typedefs
are provided.
#include <boost/log/sinks/event_log_backend.hpp>
Windows operating system provides a special API for publishing events related to application execution. A wide range of applications, including Windows components, use this facility to provide the user with all essential information about computer health in a single place - an event log. There can be more than one event log. However, typically all user-space applications use the common Application log. Records from different applications or their parts can be selected from the log by a record source name. Event logs can be read with a standard utility, an Event Viewer, that comes with Windows.
Although it looks very tempting, the API is quite complicated and intrusive, which makes it difficult to support. The application is required to provide a dynamic library with special resources that describe all events the application supports. This library must be registered in the Windows registry, which pins its location in the file system. The Event Viewer uses this registration to find the resources and compose and display messages. The positive feature of this approach is that since event resources can describe events differently for different languages, it allows the application to support event internationalization in a quite transparent manner: the application simply provides event identifiers and non-localizable event parameters to the API, and it does the rest of the work.
In order to support both the simplistic approach "it just works" and the more elaborate event composition, including internationalization support, the library provides two sink backends that work with event log API.
The basic_simple_event_log_backend backend is intended to encapsulate as much of the event log API as possible, leaving the interface and usage model very similar to those of other sink backends. It contains all resources that are needed for the Event Viewer to function properly, and registers the Boost.Log library in the Windows registry as the container of these resources.
Important: The library must be built as a dynamic library in order to use this backend flawlessly; otherwise event description resources are not linked into the executable, and the Event Viewer is not able to display events properly.
The only thing the user has to do to add Windows event log support to his application is to provide the event source and log names (which are optional and can be automatically suggested by the library) and set up an appropriate filter, formatter and event severity mapping.
// Complete sink type
typedef sinks::synchronous_sink< sinks::simple_event_log_backend > sink_t;

// Define application-specific severity levels
enum severity_level
{
    normal,
    warning,
    error
};

void init_logging()
{
    // Create an event log sink
    boost::shared_ptr< sink_t > sink(new sink_t());

    sink->set_formatter
    (
        expr::format("%1%: [%2%] - %3%")
            % expr::attr< unsigned int >("LineID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::smessage
    );

    // We'll have to map our custom levels to the event log event types
    sinks::event_log::custom_event_type_mapping< severity_level > mapping("Severity");
    mapping[normal] = sinks::event_log::info;
    mapping[warning] = sinks::event_log::warning;
    mapping[error] = sinks::event_log::error;

    sink->locked_backend()->set_event_type_mapper(mapping);

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}
Having done that, all logging records that pass to the sink will be formatted the same way they are in the other sinks. The formatted message will be displayed in the Event Viewer as the event description.
The basic_event_log_backend
allows more detailed control over the logging API, but requires considerably
more scaffolding during initialization and usage.
First, the user has to build his own library with the event resources (the process is described in MSDN). As a part of this process one has to create a message file that describes all events. For the sake of example, let's assume the following contents were used as the message file:
; /* --------------------------------------------------------
; HEADER SECTION
; */
SeverityNames=(Debug=0x0:MY_SEVERITY_DEBUG
    Info=0x1:MY_SEVERITY_INFO
    Warning=0x2:MY_SEVERITY_WARNING
    Error=0x3:MY_SEVERITY_ERROR
)

; /* --------------------------------------------------------
; MESSAGE DEFINITION SECTION
; */

MessageIdTypedef=WORD

MessageId=0x1
SymbolicName=MY_CATEGORY_1
Language=English
Category 1
.

MessageId=0x2
SymbolicName=MY_CATEGORY_2
Language=English
Category 2
.

MessageId=0x3
SymbolicName=MY_CATEGORY_3
Language=English
Category 3
.

MessageIdTypedef=DWORD

MessageId=0x100
Severity=Warning
Facility=Application
SymbolicName=LOW_DISK_SPACE_MSG
Language=English
The drive %1 has low free disk space. At least %2 Mb of free space is recommended.
.

MessageId=0x101
Severity=Error
Facility=Application
SymbolicName=DEVICE_INACCESSIBLE_MSG
Language=English
The drive %1 is not accessible.
.

MessageId=0x102
Severity=Info
Facility=Application
SymbolicName=SUCCEEDED_MSG
Language=English
Operation finished successfully in %1 seconds.
.
After compiling the resource library, the path to this library must be provided to the sink backend constructor, among other parameters used with the simple backend. The path may contain placeholders that will be expanded with the appropriate environment variables.
// Create an event log sink
boost::shared_ptr< sinks::event_log_backend > backend(
    new sinks::event_log_backend((
        keywords::message_file = "%SystemDir%\\event_log_messages.dll",
        keywords::log_name = "My Application",
        keywords::log_source = "My Source"
    ))
);
Like the simple backend, basic_event_log_backend
will register itself in the Windows registry, which will enable the Event
Viewer to display the emitted events.
Next, the user will have to provide the mapping between the application
logging attributes and event identifiers. These identifiers were provided
in the message compiler output as a result of compiling the message file.
One can use basic_event_composer
and one of the event ID mappings, like in the following example:
// Create an event composer. It is initialized with the event identifier mapping.
sinks::event_log::event_composer composer(
    sinks::event_log::direct_event_id_mapping< int >("EventID"));

// For each event described in the message file, set up the insertion string formatters
composer[LOW_DISK_SPACE_MSG]
    // the first placeholder in the message
    // will be replaced with contents of the "Drive" attribute
    % expr::attr< std::string >("Drive")
    // the second placeholder in the message
    // will be replaced with contents of the "Size" attribute
    % expr::attr< boost::uintmax_t >("Size");

composer[DEVICE_INACCESSIBLE_MSG]
    % expr::attr< std::string >("Drive");

composer[SUCCEEDED_MSG]
    % expr::attr< unsigned int >("Duration");

// Then put the composer to the backend
backend->set_event_composer(composer);
As you can see, one can use regular formatters to specify which attributes will be inserted instead of placeholders in the final event message. Aside from that, one can specify mappings of attribute values to event types and categories. Suppose our application has the following severity levels:
// Define application-specific severity levels
enum severity_level
{
    normal,
    warning,
    error
};
Then these levels can be mapped onto the values in the message description file:
// We'll have to map our custom levels to the event log event types
sinks::event_log::custom_event_type_mapping< severity_level > type_mapping("Severity");
type_mapping[normal] = sinks::event_log::make_event_type(MY_SEVERITY_INFO);
type_mapping[warning] = sinks::event_log::make_event_type(MY_SEVERITY_WARNING);
type_mapping[error] = sinks::event_log::make_event_type(MY_SEVERITY_ERROR);

backend->set_event_type_mapper(type_mapping);

// Same for event categories.
// Usually event categories can be restored by the event identifier.
sinks::event_log::custom_event_category_mapping< int > cat_mapping("EventID");
cat_mapping[LOW_DISK_SPACE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_1);
cat_mapping[DEVICE_INACCESSIBLE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_2);
cat_mapping[SUCCEEDED_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_3);

backend->set_event_category_mapper(cat_mapping);
Tip: As of Windows NT 6 (Vista, Server 2008), it is not needed to specify event type mappings. This information is available in the message definition resources and need not be duplicated in the API call.
Now that initialization is done, the sink can be registered into the core.
// Create the frontend for the sink
boost::shared_ptr< sinks::synchronous_sink< sinks::event_log_backend > > sink(
    new sinks::synchronous_sink< sinks::event_log_backend >(backend));

// Set up filter to pass only records that have the necessary attribute
sink->set_filter(expr::has_attr< int >("EventID"));

logging::core::get()->add_sink(sink);
In order to emit events it is convenient to create a set of functions that will accept all needed parameters for the corresponding events and announce that the event has occurred.
BOOST_LOG_INLINE_GLOBAL_LOGGER_DEFAULT(event_logger, src::severity_logger_mt< severity_level >)

// The function raises an event of the disk space depletion
void announce_low_disk_space(std::string const& drive, boost::uintmax_t size)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)LOW_DISK_SPACE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SCOPED_THREAD_TAG("Size", size);

    // Since this record may get accepted by other sinks,
    // this message is not completely useless
    BOOST_LOG_SEV(event_logger::get(), warning) << "Low disk " << drive
        << " space, " << size << " Mb is recommended";
}

// The function raises an event of inaccessible disk drive
void announce_device_inaccessible(std::string const& drive)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)DEVICE_INACCESSIBLE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SEV(event_logger::get(), error) << "Cannot access drive " << drive;
}

// The structure is an activity guard that will emit an event upon the activity completion
struct activity_guard
{
    activity_guard()
    {
        // Add a stop watch attribute to measure the activity duration
        m_it = event_logger::get().add_attribute("Duration", attrs::timer()).first;
    }

    ~activity_guard()
    {
        BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)SUCCEEDED_MSG);
        BOOST_LOG_SEV(event_logger::get(), normal) << "Activity ended";
        event_logger::get().remove_attribute(m_it);
    }

private:
    logging::attribute_set::iterator m_it;
};
Now you are able to call these helper functions to emit events. The complete
code from this section is available in the event_log
example in the library
distribution.