The ElasticsearchPipelineStage stopped retrying if sending requests to the Elasticsearch server failed while the send queue was full. As no additional messages could be queued, the processing thread was not woken up by new incoming messages. Furthermore, the retry check interval was not evaluated correctly, so no new requests were kicked off after a connection error.
If the send queue was sufficiently filled to send multiple bulk requests to the Elasticsearch server, the ElasticsearchPipelineStage kicked off these requests without knowing whether the connection to the server was fine. This led to multiple (by default up to 5) error messages in the system log every 30 seconds.
The following package dependencies have been updated to work with the highest version that is available for a certain target framework:
- GriffinPlus.Lib.Common: 4.3.0 (all frameworks)
- System.Text.Json: 6.0.11 (.NET Framework 4.6.1 only), 9.0.1 (.NET Framework 4.8 only)
The following package dependencies have been updated to work with the highest version that is available for a certain target framework:
- GriffinPlus.Lib.Common: 4.1.4 (all frameworks)
- System.Text.Json: 8.0.4 (.NET Framework 4.8 only)
Added support for unwrapping System.AggregateException objects when writing log messages.
Improved formatting of inner exceptions in general by indenting exceptions by their level in the hierarchy.
The fix comes with the GriffinPlus.Lib.Logging.Interface package (version 1.1.2) that is referenced now.
The log file is an SQLite database, and in analysis mode message texts were stored with column type STRING. This allowed the database engine to store values as INTEGER or REAL if they looked like numbers, which led to cast exceptions when reading message texts from the database as strings. The fix introduces a migration step that alters the type of the text column from STRING to TEXT.
The migration is applied automatically when opening a log file read/write. When opening a log file read-only, a MigrationNeededException is thrown to notify the user of the necessary migration.
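A minimal sketch of how a consumer could react to this exception; only MigrationNeededException is taken from these notes, the LogFile open calls and their parameters are assumptions used for illustration:

```csharp
// Illustrative sketch only: the open methods shown here are assumptions,
// not necessarily the actual LogFile API; MigrationNeededException is from the notes.
LogFile file;
try
{
    file = LogFile.OpenReadOnly("application.gplog");
}
catch (MigrationNeededException)
{
    // The text column still has the old STRING type.
    // Opening the file read/write applies the migration automatically.
    using (LogFile.Open("application.gplog")) { }

    // Now the read-only open succeeds.
    file = LogFile.OpenReadOnly("application.gplog");
}
```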
The following package dependencies have been updated to work with the highest version that is available for a certain target framework:
- GriffinPlus.Lib.Common: 4.0.0 (all frameworks)
- System.Text.Json: 8.0.3 (.NET Framework 4.8)
All NuGet packages contain specific builds for .NET 6/7/8 now.
The following package dependencies have been updated to work with the highest version that is available for a certain target framework:
- GriffinPlus.Lib.Common: 3.3.1 (all frameworks)
- System.Data.SQLite.Core: 1.0.118 (all frameworks)
- System.Diagnostics.EventLog: 5.0.1 (.NET Standard 2.0)
- System.Diagnostics.EventLog: 6.0.0 (.NET Framework 4.6.1)
- System.Diagnostics.EventLog: 8.0.0 (.NET Framework 4.8, .NET 5.0/6.0/7.0)
- System.Text.Json: 5.0.2 (.NET Standard 2.0)
- System.Text.Json: 6.0.9 (.NET Framework 4.6.1)
- System.Text.Json: 8.0.1 (.NET Framework 4.8)
The following package dependencies have been updated to work with the highest version that is available for a certain target framework:
- GriffinPlus.Lib.Common: 3.1.5 (all frameworks)
- System.Data.SQLite.Core: 1.0.117 (all frameworks)
- System.Diagnostics.EventLog: 5.0.1 (.NET Standard 2.0)
- System.Diagnostics.EventLog: 6.0.0 (.NET Framework 4.6.1)
- System.Diagnostics.EventLog: 7.0.0 (.NET Framework 4.8, .NET 5.0)
- System.Text.Json: 5.0.2 (.NET Standard 2.0)
- System.Text.Json: 6.0.7 (.NET Framework 4.6.1)
- System.Text.Json: 7.0.2 (.NET Framework 4.8)
GriffinPlus.Lib.Common has been updated to version 3.1.3.
The following packages have been downgraded for support of .NET Core 2.2:
- System.Diagnostics.EventLog: 4.7.0
- System.Text.Json: 4.7.2
Tests now run on the following frameworks:
- Tests on .NET Framework 4.6.1 run with the library build for .NET Framework 4.6.1
- Tests on .NET Core 2.2 run with the library build for .NET Standard 2.0
- Tests on .NET Core 3.1 run with the library build for .NET Standard 2.0
- Tests on .NET 5.0 run with the library build for .NET Standard 2.0
- Tests on .NET 6.0 run with the library build for .NET Standard 2.0
- Tests on .NET 7.0 run with the library build for .NET Standard 2.0
The Elasticsearch Pipeline Stage caused a delay of 30 seconds when shutting down if messages were buffered but could not be sent to the configured Elasticsearch endpoint.
The delay could occur if the local log service was not running and auto-reconnecting was enabled. As long as the stage tried to establish a connection to the service, the stage lock was held, blocking threads trying to write a log message.
As the ElasticsearchPipelineStage was adjusted to work with data streams, the index action was replaced with the create action. The changed action was not properly considered when processing the response. This could result in messages being sent multiple times if at least one operation in a bulk request failed.
The ElasticsearchPipelineStage used the operation action index when sending messages to Elasticsearch. This works for regular Elasticsearch indices, but fails for data streams, which need the operation action create.
The logging interface (mainly the LogWriter and LogLevel classes) has been pulled out into a separate project available via the NuGet package GriffinPlus.Lib.Logging.Interface. The interface is very stable. This allows libraries to reference the interface package instead of the full-featured GriffinPlus.Lib.Logging package to write to the log, without having to update library packages with every release of the full-featured GriffinPlus.Lib.Logging package. The interface should stay stable even across major version releases of the logging package. Some functionality has moved, but for compatibility reasons forwards have been established. These forwards are marked as obsolete, so users can easily migrate their code to the new version without breaking anything. The forwards will be removed with the next major release.
The application name was set to the name of the current application domain. It should be the name of the process by default.
This release contains breaking changes!
All configurations now provide a Changed event that is raised when something in the configuration changes. Event handlers are invoked using the synchronization context of the thread that registered the handler, so the event is suitable for use in conjunction with GUIs. Raising the Changed event can be suspended temporarily.
The FileBackedLogConfiguration class now effectively supports reloading its *.gplogconf file on changes.
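A short sketch of hooking the Changed event; the constructor argument and the handler signature are assumptions, only the event itself and its GUI-friendly dispatch are described above:

```csharp
// Assumption: FileBackedLogConfiguration takes the path to its *.gplogconf file.
var configuration = new FileBackedLogConfiguration("MyApp.gplogconf");

// The handler is invoked using the synchronization context of the thread
// that registered it, so subscribing from a GUI thread is safe.
configuration.Changed += (sender, args) =>
{
    // React to the reloaded configuration, e.g. refresh bound view models.
};
```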
Fix synchronization issue when shutting down pipeline stages deriving from the AsyncProcessingPipelineStage class
The boolean member variable indicating that the stage is shutting down was not volatile, so it was not evaluated properly in the processing loop of the stage. This behavior deferred the shutdown of the stage.
The name generation now supports generic types and generic type definitions. Information about the assembly (name, version, hash) containing the type is properly pruned to create a clean name.
The ProcessIntegration class implements IDisposable now. Disposing a ProcessIntegration object waits for the process to exit and cleans up the associated Process object. This should avoid generating zombie processes on Linux.
The file position was not moved to the end of the file when opening in 'append' mode, so new messages overwrote messages in an existing file.
The stage re-opened the log file whenever a setting changed. Re-opening truncated the log file, so messages were lost.
Azure Pipelines did not patch the correct assembly information files, so all assemblies had version 0.0.0.0.
Although direct support for .NET Core 2.1 has been removed, the .NET Standard 2.0 version is still usable on .NET Core 2.1.
The IProcessingPipelineStage interface has been removed in favor of a common base class (ProcessingPipelineStage). This was necessary to better integrate pipeline stages into the logging subsystem. There are now derived classes for synchronous pipeline stages (SyncProcessingPipelineStage) and asynchronous pipeline stages (AsyncProcessingPipelineStage).
The configuration and the processing pipeline were set up independently from each other, so pipeline stages created a temporary set of default settings to get into an operable state. As soon as pipeline stages were dropped into the logging subsystem, the configuration of the logging subsystem became active, replacing the temporary stage settings. This led to discarding settings that had been changed via a stage's properties.
The Log class now provides an Initialize<TConfiguration>(...) method that creates a configuration and the pipeline stages in a single step. Stages are directly bound to this configuration. Furthermore, stages now need to provide a parameterless constructor to work with the logging subsystem. The name of a pipeline stage and its settings are passed to the parameterless constructor on a side channel using the ProcessingPipelineStage.Create<TStage>() method. A pipeline builder mechanism assists with setting up pipeline stages during initialization and hides this magic from the user.
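A rough sketch of the described initialization flow; only Log.Initialize<TConfiguration>(...), the parameterless-constructor requirement and the builder mechanism come from the notes above, the configuration type name and the builder callback are assumptions for illustration:

```csharp
// Illustrative only: VolatileLogConfiguration and the builder's Add method
// are assumed names, the exact builder API may differ.
Log.Initialize<VolatileLogConfiguration>(
    builder =>
    {
        // Stages are no longer constructed directly; the builder creates them
        // via their parameterless constructor and passes name and settings
        // on a side channel (ProcessingPipelineStage.Create<TStage>()).
        builder.Add<ConsoleWriterPipelineStage>("Console");
    });
```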
When pruning old log messages, the FileBackedLogMessageCollection class raised the CollectionChanged event, but notified about a collection reset instead of a limited number of items being removed from the collection. This behavior was rather unexpected and differed from the behavior of the LogMessageCollection class working purely in memory. The FileBackedLogMessageCollection now behaves like the LogMessageCollection, but the change requires reading log messages from the log file before actually removing them. If the user of the FileBackedLogMessageCollection class knows that event recipients do not need the removed messages, the ReturnDummyMessagesWhenPruning property can be set to true to pass a special collection with dummy messages to event recipients. The number of dummy messages is the same as the number of messages that have actually been removed. This way reading unneeded log messages can be avoided to improve performance.
Derived filter classes can now influence which log levels are considered predefined log levels. This is especially important when interoperating with logging systems that use other predefined log levels than Griffin+ Logging.
The DisableFilterOnReset property determines whether the filter is disabled when it is reset (default is false). The UnselectItemsOnReset property determines whether filter items are unselected when the filter is reset (default is false).
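A small sketch combining both properties; the filter base type is written non-generically here (it may actually take type parameters), only the two property names come from the notes above:

```csharp
// Illustrative sketch: only DisableFilterOnReset and UnselectItemsOnReset
// are taken from the notes; the parameter type is an assumption.
static void ConfigureResetBehavior(SelectableLogMessageFilterBase filter)
{
    filter.DisableFilterOnReset = true;  // disable the filter itself when it is reset
    filter.UnselectItemsOnReset = true;  // also clear all item selections on reset
}
```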
The Elastic Common Schema (ECS) requires writing the version field, so readers can determine the ECS version the writer complies with.
The base classes for processing pipeline stages now provide the following overridable methods that are invoked when a registered pipeline stage setting changes:
- AsyncProcessingPipelineStage.OnSettingsChangedAsync()
- SyncProcessingPipelineStage.OnSettingsChanged()
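A sketch of a derived stage reacting to setting changes; the notes only list the method names, so the override signature (no parameters, Task return) and the base class shape are assumptions:

```csharp
using System.Threading.Tasks;

// Illustrative sketch: the base class may be generic and the override
// signature may differ; only the method name comes from the notes.
class MyPipelineStage : AsyncProcessingPipelineStage
{
    protected override async Task OnSettingsChangedAsync()
    {
        // Re-read the registered settings and reconfigure the stage.
        await ApplySettingsAsync();
    }

    Task ApplySettingsAsync() => Task.CompletedTask; // hypothetical helper
}
```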
If no Elasticsearch endpoint was available, the processing thread went to sleep before trying again. In this case the pipeline stage did not shut down until the shutdown timeout elapsed, causing an unnecessary delay of 30 seconds.
An application usually terminates when unhandled exceptions occur. Log messages buffered in stages might get lost in this case. These exceptions are now logged using the system logger and the configured pipeline stages to allow further investigation. After logging the incident, the logging subsystem is shut down gracefully and the process is terminated. Terminating the process can be disabled by setting the TerminateProcessOnUnhandledException property of the Log class to false.
Some pipeline stages need some time to process buffered messages, so exiting the process without shutting down gracefully can result in message loss.
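If terminating is not desired, the property mentioned above can simply be cleared; a one-liner, assuming the Log class exposes it statically:

```csharp
// Keep the process alive after an unhandled exception has been logged.
Log.TerminateProcessOnUnhandledException = false;
```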
Formerly a foreground thread was used for processing. The foreground thread could keep the process from exiting, if the pipeline stage was not shut down gracefully at the end.
If the ElasticsearchPipelineStage does not complete shutting down within 30 seconds, a cancellation token is signaled to cancel pending send operations. This could lead to an OperationAbortedException being thrown before the send tasks had actually completed. Disposing these incomplete tasks as part of the cleanup procedure could throw an exception as well.
The system logger now writes the correct registry snippet to register the appropriate log source to the Windows event log.
The pipeline stage created a dedicated thread at startup, but lost it after awaiting the first task, so execution always continued on a worker thread. The pipeline stage now creates a dedicated thread with its own synchronization context, so execution can continue on that thread - provided that ConfigureAwait(false) is not used, which would allow the continuation to run on a worker thread.
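The behavior described here is standard await semantics rather than anything library-specific; a small self-contained sketch of the difference:

```csharp
using System;
using System.Threading.Tasks;

static async Task ProcessMessagesAsync()
{
    // Without ConfigureAwait(false) the continuation is posted back to the
    // captured synchronization context - in the stage's case, its dedicated thread.
    await Task.Delay(100);
    Console.WriteLine("continues on the dedicated processing thread");

    // With ConfigureAwait(false) the continuation may run on a thread pool
    // worker thread instead, leaving the dedicated thread.
    await Task.Delay(100).ConfigureAwait(false);
    Console.WriteLine("may continue on a thread pool worker thread");
}
```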
Replaced MemoryStream with MemoryBlockStream in the ElasticsearchPipelineStage. The MemoryBlockStream rents buffers from the application's array pool. When there is nothing to do, all buffers are returned to the pool. This reduces memory consumption in times of low log traffic. Buffers are 80 KiB in size, so they are allocated on the regular heap, not on the large object heap. Avoiding the large object heap is always a good idea, because objects on this heap are collected rarely and the heap is not compacted, which can cause heap fragmentation issues.
The pipeline stage used thread pool threads for all processing. This usually yields the best throughput as the thread pool tries to limit the number of threads to the number of cores to minimize context switches. Queuing work for a thread pool thread is rather cheap, but the overhead of handling the queued work was not. As a mitigation, the processing thread was kept alive for some time to process additional work. Putting a thread pool thread to sleep lets the CPU core associated with it sleep as well, which could lead to significant performance loss. Using a dedicated thread introduces additional context switches, but this seems to have less impact than putting a CPU core to sleep.
- Fix unexpected disposal of the content stream along with the send task of the ElasticsearchPipelineStage
- Remove explicit reference to System.Net.Http in the ElasticsearchPipelineStage project
- Fix ElasticsearchPipelineStage shutting down too early
This release contains some breaking changes!
Griffin+ Logging ships with a new pipeline stage that forwards log messages to an Elasticsearch cluster. For more information, please see the project page.
The Log class now provides an operating system dependent system logger that writes directly into the Windows event log (on Windows) or to syslog (on Linux). The system logger allows the logging subsystem to communicate issues that occur within the logging subsystem itself. The ProcessingPipelineBaseStage class now supports writing pipeline-specific informational messages, warnings and errors using the system logger. When writing errors, an exception object can be passed. The exception is unwrapped, formatted and logged as well.
- Fix synchronization issue in VolatileProcessingPipelineStageSetting class
- Fix deadlock caused by default configuration using the pipeline stage lock
Pipeline stages used the GetSetting() method of the ProcessingPipelineStageConfigurationBase class to obtain a setting object for named settings. This method had undesirable side effects as it actually registered a setting to allow pipeline stages to use it. The registration added a new setting with a default value. The registration failed if GetSetting() was called with different default setting values. This was intentional behavior, but confused some people, so we decided to rename GetSetting() to RegisterSetting() as that expresses what it actually does. At the same time we added methods to access settings without registering one with a default value. Stages can now use the following methods to access their settings:
- RegisterSetting() registers a setting with a specific name and creates a new setting with a default value if the setting does not exist yet. The default value does not change an existing setting value.
- GetSetting() gets a setting with a specific name and returns null if it does not exist yet.
- SetSetting() sets a setting with a specific name and creates a new setting with the specified value (but without setting a default value to circumvent clashes with RegisterSetting()).
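A sketch of how a stage might use the three accessors; only the method names and their semantics come from the list above, the parameter lists and the context in which they are called are assumptions:

```csharp
// Inside a pipeline stage (illustrative only: exact signatures are assumptions).
void ConfigureSettings()
{
    // Register a setting with a default value; an existing value is not overwritten.
    var server = RegisterSetting("Server", "http://localhost:9200/");

    // Read a setting without registering it; returns null if it does not exist yet.
    var optional = GetSetting("BulkRequestSize");

    // Set a setting explicitly, creating it if necessary (no default value involved).
    SetSetting("Server", "http://elastic.example.local:9200/");
}
```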
Griffin+ log levels are now compliant with syslog log levels and log level ids correspond to syslog severity codes. The former trace levels (Trace0 to Trace19) have been eliminated as tags are now available to differentiate tracing, if necessary.
Old log levels:
- Failure (0)
- Error (1)
- Warning (2)
- Note (3)
- Developer (4)
- Trace0 (5)
- ...
- Trace19 (24)
- Aspects (25+)
New log levels:
- Emergency (0)
- Alert (1)
- Critical (2)
- Error (3)
- Warning (4)
- Notice (5)
- Informational (6)
- Debug (7)
- Trace (8, no syslog equivalent)
- Aspects (9+, no syslog equivalent)
The LocalLogServicePipelineStage maps the new log levels to their old counterparts to avoid breaking old and new components logging to the service. The log levels are mapped as follows:
- Emergency => Failure
- Alert => Failure
- Critical => Failure
- Error => Error
- Warning => Warning
- Notice => Note
- Informational => Note
- Debug => Developer
- Trace => Trace0
- Aspects are kept as they are
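The mapping can be restated compactly in code; this is just an illustration of the table above, not the LocalLogServicePipelineStage implementation:

```csharp
// Maps a new log level name to the old one expected by the local log service.
static string MapToOldLogLevel(string newLevel) => newLevel switch
{
    "Emergency"     => "Failure",
    "Alert"         => "Failure",
    "Critical"      => "Failure",
    "Error"         => "Error",
    "Warning"       => "Warning",
    "Notice"        => "Note",
    "Informational" => "Note",
    "Debug"         => "Developer",
    "Trace"         => "Trace0",
    _               => newLevel // aspect levels are kept as they are
};
```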
Introduced a setting proxy that forwards all setting accesses to the currently bound pipeline stage configuration. If the configuration is exchanged, the proxy rebinds to the new configuration automatically. This avoids breaking the link between a pipeline stage and its configuration. The solution up to now was to invoke the virtual BindSettings() method of the ProcessingPipelineBaseStage class just after the configuration had been exchanged. The first call to this method was done in the constructor of the ProcessingPipelineBaseStage class. This could cause severe issues as the constructor of a derived class has not run at the point the virtual method is called. The override of BindSettings() was working on an incompletely initialized object in this case.
Pipeline stages deriving from the ProcessingPipelineBaseStage class can now call RegisterSetting() to get a setting proxy. BindSettings() has been removed.
Pipeline stage settings supported only primitive types, enums and strings. Pipeline stage settings can now be registered with custom converters that handle the conversion from the setting's value to string and vice versa. This way even complex types can be used in pipeline settings. These types are always stored as strings in configurations.
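A sketch of what registering a setting with custom converters could look like; the overload taking converter delegates is an assumption, the notes only state that custom converters for complex types are supported:

```csharp
using System;
using System.Linq;

// Inside a pipeline stage (illustrative only: the registration signature is assumed).
var servers = RegisterSetting(
    "Servers",
    new[] { new Uri("http://localhost:9200/") },
    valueToString: value => string.Join(";", value.Select(uri => uri.ToString())),
    stringToValue: text => text.Split(';').Select(part => new Uri(part)).ToArray());
```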
Under Linux, unit tests targeting the FileBackedLogConfiguration class failed sporadically because the user was running out of the inotify instances needed by the FileSystemWatcher class. Disposing the watchers solved the issue. In real-world applications there is usually only one configuration with a single watcher, so not disposing the configuration should not be an issue.
The ProcessingPipelineBaseStage class now provides the IsInitializing property that derived classes can use to determine whether the stage is being initialized but initialization has not completed yet. The EnsureAttachedToLoggingSubsystem() method ensures that the stage is already initialized or still initializing and throws an exception otherwise.
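A sketch of how a derived stage might use these members; the surrounding method is hypothetical, only the property and method names come from the notes:

```csharp
using System;

// Inside a stage deriving from ProcessingPipelineBaseStage (illustrative only).
public void UpdateEndpoint(Uri endpoint) // hypothetical stage-specific method
{
    // Throws if the stage is neither initialized nor currently initializing.
    EnsureAttachedToLoggingSubsystem();

    if (IsInitializing)
    {
        // Initialization has not completed yet; defer expensive reconfiguration.
        return;
    }

    // ... apply the new endpoint ...
}
```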
- Make exceptions thrown by SelectableFileBackedLogMessageFilter class more elaborate.
- Fix ArgumentOutOfRangeException in FileBackedLogMessageCollectionFilteringAccessor class
- Forbid using line break characters in log writer/level names (these characters break the configuration)
- Fix issue with read-only databases in SelectableFileBackedLogMessageFilter class
- Fix updating overview collections when creating LogMessageCollection with an initial message set
- Fix effect of global filter switch in SelectableFileBackedLogMessageFilter class
- Make Reset() method of SelectableLogMessageFilterBase class protected (can break consistency when not used properly in user code)
- Fix default timestamp of SelectableLogMessageFilter class (defaults to 01-01-0001T00:00:00 now)
- Fix issue with sorting of filter items in SelectableLogMessageFilterBase class
- Map log levels None and All to Failure when writing messages (these levels should be used for filtering only, not for writing!)
- Add support for underscores in log writer tags.
- Add support for populating log files on creation to speed up creating files with an initial log message set
- Item filters of the SelectableLogMessageFilter implementations (in-memory, file-backed) do not remove items on Reset() any more, if AccumulateItems is set.
This release contains some breaking changes!
It may be necessary to adjust the setup of pipeline stages and the implementation of own pipeline stages.
These changes do not affect writing log messages, so the impact should be rather low.
- Added a log file based on an SQLite database (LogFile class)
  - The file can be set up for recording or for analysis to fit scenarios where write speed is more important than the ability to query data, and vice versa.
  - The file can operate in two modes to weigh robustness against speed
    - Robust Mode: The database uses a WAL (Write-Ahead Log) when writing to ensure data consistency
    - Fast Mode: The database works without journaling and does not sync to disk to speed up operation
- Added log message collections with filtering accessors and data-binding capabilities
  - LogMessageCollection class (in-memory)
  - FileBackedLogMessageCollection class (backed by the LogFile class)
- Added Selectable Log Message Filter with data-binding support for both collection types to filter log messages
  - by time span
  - by selecting process ids, process names, application names, log writer names and log level names a log message must match
  - by full-text search in the message text
- The LogMessage class now supports data-binding, asynchronous initialization and write protection
- Changed the default file extension for log configuration files from .logconf to .gplogconf to circumvent a name clash with another logging subsystem
- The ProcessIntegration class now supports waiting for the process to exit (synchronously and asynchronously).
- Fixed task setup in AsyncProcessingPipelineStage class.
- Log Writers can be configured to attach tags to written log messages (tags can be used when filtering log messages).
- Support for reading JSON formatted log messages (see JsonMessageReader class)
- Support for integrating external processes into the logging subsystem (see ProcessIntegration class).
- The JsonMessageFormatter class supports setting the newline character sequence now.
- Option to replace used streams in the ConsoleWriterPipelineStage class.
- Fixed crash in the ProcessingPipelineBaseStage.RemoveNextStage() method (affected all derived pipeline stages).
- Let the TextWriterPipelineStage emit all log message fields by default.