First release of KurrentDB under the new name

This is the first release of KurrentDB. It is a short-term support (STS) release, which will be supported until the next release of KurrentDB.

This release contains:

  • The new archiving feature.
  • The rebranding of the database from EventStoreDB to KurrentDB.

Rebranding to KurrentDB

EventStoreDB has been rebranded to KurrentDB. This affects the following:

  • The license has changed from Event Store License v2 (ESLv2) to Kurrent License v1 (KLv1).
  • The -ee and -ce suffixes have been removed; the packages are now simply kurrentdb.
  • The Cloudsmith repository has changed from eventstore to kurrent.
  • Configuration sections and prefixes have been renamed from EventStore and EVENTSTORE to KurrentDB and KURRENTDB.
  • Metrics have been renamed from eventstore to kurrentdb.
  • New KurrentDB grafana dashboards are available (summary and panels), and the Kurrent-Grafana repository has been made public.
  • On Windows: The executable is now KurrentDB.exe instead of EventStore.ClusterNode.exe.
  • On Linux: The service is now kurrentdb instead of eventstore, and the executable is kurrentd instead of eventstored.
  • On Linux: Default directories have changed from eventstore to kurrentdb (e.g. /etc/eventstore is now /etc/kurrentdb).
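The configuration-prefix rename above can be scripted when migrating an environment-variable based deployment. The sketch below is illustrative only; EVENTSTORE_CLUSTER_SIZE is an example variable name, and you should check your own deployment's settings before migrating.

```python
def rename_env_prefix(env, old="EVENTSTORE_", new="KURRENTDB_"):
    """Return a copy of env with keys under the old prefix renamed to the new one."""
    renamed = {}
    for key, value in env.items():
        if key.startswith(old):
            renamed[new + key[len(old):]] = value
        else:
            renamed[key] = value
    return renamed

# EVENTSTORE_CLUSTER_SIZE is used for illustration only.
legacy = {"EVENTSTORE_CLUSTER_SIZE": "3", "PATH": "/usr/bin"}
print(rename_env_prefix(legacy))
# {'KURRENTDB_CLUSTER_SIZE': '3', 'PATH': '/usr/bin'}
```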

Follow the Upgrade Guide for 25.0 to upgrade from EventStoreDB to KurrentDB.

New features

Archiving

KurrentDB 25.0 introduces the initial release of Archiving: a major new feature that reduces the costs and increases the scalability of a KurrentDB cluster.

Quick summary

  • Data written to the database is replicated to all nodes in the deployment as normal.
  • A designated Archiver Node is responsible for uploading chunk files into the archive.
  • Nodes can then remove chunks from their local volumes to save space according to a retention policy.
  • Read requests read transparently through to the archive as necessary.
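The read-through behaviour in the last bullet can be sketched as follows. This is a minimal illustration under assumed names (local_chunks, archive_chunks, read_chunk are hypothetical); the real mechanism is internal to the server.

```python
# Hypothetical chunk stores: the node has removed chunks 0 and 1 locally,
# but they remain available in the archive.
local_chunks = {2: b"chunk-000002", 3: b"chunk-000003"}
archive_chunks = {0: b"chunk-000000", 1: b"chunk-000001", 2: b"chunk-000002"}

def read_chunk(chunk_number):
    """Prefer the local copy; fall back to the archive for removed chunks."""
    if chunk_number in local_chunks:
        return local_chunks[chunk_number]
    return archive_chunks.get(chunk_number)  # None if the chunk does not exist

print(read_chunk(0))  # served transparently from the archive
```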

Populating the archive

  • A designated Archiver Node uploads chunk files to the archive as they are completed. The contents of each chunk file are the same as they were locally, except that merged chunks are unmerged for upload to make them trivial to locate.
  • Only chunk files are uploaded. PTables, Scavenge.db etc remain local to each node.
  • The Archiver Node stores an archive checkpoint in the archive indicating how much of the log has been uploaded to the archive.
  • The Archiver Node is also a Read-only Replica: it does not participate in cluster elections or count towards replication criteria. At the moment, Read-only Replicas can only be used in clusters of three or more nodes, not in single-node deployments.
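The upload-and-checkpoint behaviour described above can be sketched as a simple loop. All names here are hypothetical, and the checkpoint is modelled as a chunk number for illustration; KurrentDB's actual archiver and checkpoint format are internal to the server.

```python
def archive_completed_chunks(completed, archive, checkpoint):
    """Upload completed chunks in log order and advance the checkpoint past them."""
    for number in sorted(completed):
        if number < checkpoint:
            continue  # already in the archive
        archive[number] = completed[number]
        checkpoint = number + 1  # everything below the checkpoint is archived
    return checkpoint

archive = {}
checkpoint = archive_completed_chunks({0: b"c0", 1: b"c1"}, archive, checkpoint=0)
print(checkpoint)  # 2
```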

Removal of data from node volumes

  • All nodes can delete chunks from their local volumes.
  • The removal is performed during the Scavenge operation.
  • Chunks are removed only after they have been uploaded to the archive.
  • Chunks are removed only after they no longer meet the simple, user-defined retention policy.
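The two conditions above combine into a single removal rule, sketched below under assumed names (can_remove_chunk is hypothetical, and an age-in-days policy is just one example of a retention policy):

```python
def can_remove_chunk(chunk_number, chunk_age_days, archive_checkpoint, retention_days):
    """A chunk may be deleted locally only if it is archived AND outside retention."""
    archived = chunk_number < archive_checkpoint          # already uploaded to the archive
    outside_retention = chunk_age_days > retention_days   # no longer covered by the policy
    return archived and outside_retention

print(can_remove_chunk(3, chunk_age_days=40, archive_checkpoint=10, retention_days=30))
# True: archived and older than the retention window
```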

Refer to the documentation for more information, including the metrics associated with the archiving feature, and the current limitations.

Fixes in this release

The following fixes, previously published in 24.10, are also included in 25.0:

Handle replayed messages when retrying events in a persistent subscription (PR #4777)

This fixes an issue with persistent subscriptions where retried messages may be missed if they are retried after a parked message is already in the buffer. This can happen if a user triggers a replay of parked messages while there are non-parked messages timing out and being retried.

When this occurs, an error is logged for each message that is missed:

Error while processing message EventStore.Core.Messages.SubscriptionMessage+PersistentSubscriptionTimerTick in queued handler 'PersistentSubscriptions'.
System.InvalidOperationException: Operation is not valid due to the current state of the object.

If a retried message is missed in this way, the consumer will never receive the message. In order to recover and receive these messages again, the persistent subscription will need to be reset.

Validate against attempts to set metadata for the "" stream (PR #4799)

Empty string ("") has never been a valid stream name. Attempting to set the metadata for it results in an attempt to write to the stream "$$" which, until now, has been a valid stream name.

However, writing to "$$" involves checking whether the "" stream is soft deleted. This results in the storage writer exiting, which shuts down the server to avoid a 'sick but not dead' scenario.

"$$" is now an invalid stream name, so any attempt to write to it is rejected at an early stage.

Fix EventStoreDB being unable to start on Windows 2016 (PR #4765)

EventStoreDB would crash when running on Windows 2016 with the following error:

[ 1488, 1,16:45:11.368,FTL] EventStore Host terminated unexpectedly.
System.TypeInitializationException: The type initializer for 'EventStore.Core.Time.Instant' threw an exception.
---> System.Exception: Expected TicksPerSecond (1853322) to be an exact multiple of TimeSpan.TicksPerSecond (10000000)
at EventStore.Core.Time.Instant..cctor() in D:\a\TrainStation\TrainStation\EventStore\src\EventStore.Core\Time\Instant.cs:line 21
--- End of inner exception stack trace ---

This was caused by a bug in the conversion between Stopwatch ticks and TimeSpan ticks: the conversion assumed an exact integer ratio between Stopwatch.Frequency and TimeSpan.TicksPerSecond, which does not hold on all systems.
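The class of bug can be reconstructed in Python using the two frequencies from the log above (the actual fix is in C#, and the original code rejected such frequencies at startup rather than silently mis-converting):

```python
# Values taken from the error message in the release notes.
STOPWATCH_FREQUENCY = 1_853_322          # stopwatch ticks per second on the affected host
TIMESPAN_TICKS_PER_SECOND = 10_000_000   # .NET TimeSpan ticks (100 ns) per second

def to_timespan_ticks_buggy(sw_ticks):
    # Assumes the frequencies divide evenly; the truncated integer ratio
    # loses precision whenever they do not.
    ratio = TIMESPAN_TICKS_PER_SECOND // STOPWATCH_FREQUENCY
    return sw_ticks * ratio

def to_timespan_ticks_fixed(sw_ticks):
    # Multiply before dividing, so no exact-multiple assumption is needed.
    return sw_ticks * TIMESPAN_TICKS_PER_SECOND // STOPWATCH_FREQUENCY

one_second = STOPWATCH_FREQUENCY
print(to_timespan_ticks_fixed(one_second))  # 10000000
print(to_timespan_ticks_buggy(one_second))  # 9266610: one second mis-measured
```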

Fixes to stats and metrics

Fixed the following issues:

  • On Linux, the disk usage/capacity metrics showed values for the disk mounted at / even when the database was on a different disk (PR #4759)
  • Aligned the histogram buckets with the dashboard (PR #4811)
  • Fixed some disk IO stats reporting 0 on Linux: the code assumed the entries in the /proc/<pid>/io file appeared in a particular order, and returned early without reading all the values (PR #4818)
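The order-dependence in the last fix can be avoided by parsing every "key: value" line into a map, regardless of line order. The sketch below is illustrative (parse_proc_io and the sample content are assumptions), not KurrentDB's actual parser:

```python
def parse_proc_io(text):
    """Parse /proc/<pid>/io-style content into a dict, regardless of line order."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a "key: value" shape
            stats[key.strip()] = int(value.strip())
    return stats

sample = "rchar: 3123\nwchar: 45\nread_bytes: 1024\nwrite_bytes: 512\n"
print(parse_proc_io(sample)["read_bytes"])  # 1024
```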

Support paging in persistent subscriptions (PR #4785)

The persistent subscription UI now pages when listing all persistent subscriptions rather than loading all of them at once. By default, the UI shows the persistent subscription groups for the first 100 streams, and refreshes every second. These options can be changed in the UI.

A count and offset can now be specified when getting all persistent subscription stats through the HTTP API: /subscriptions?count={count}&offset={offset}.

The response of /subscriptions (without the query string) is unchanged.
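The paged stats endpoint can be called by building the query string shown above. The base address below is an assumption for illustration; substitute your node's HTTP address.

```python
from urllib.parse import urlencode

def subscriptions_stats_url(base, count, offset):
    """Build the paged persistent-subscription stats URL described above."""
    query = urlencode({"count": count, "offset": offset})
    return f"{base}/subscriptions?{query}"

# "http://localhost:2113" is an illustrative node address.
print(subscriptions_stats_url("http://localhost:2113", 100, 0))
# http://localhost:2113/subscriptions?count=100&offset=0
```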