
ReductStore Settings

ReductStore can be configured using environment variables. The following sections describe the available settings and how to use them.

Remote Backend Settings

ReductStore supports using a remote object storage as a backend to store data. You can configure the remote backend using the following environment variables:

| Name | Default | Description |
|------|---------|-------------|
| RS_REMOTE_BACKEND_TYPE | File system | If it is set to S3, the storage uses an S3-compatible object storage as a backend |
| RS_REMOTE_ENDPOINT | | URL of the S3-compatible object storage endpoint. It's ignored for the AWS S3 service. |
| RS_REMOTE_REGION | | Region of the S3-compatible object storage. It's ignored for on-premises storage like MinIO. |
| RS_REMOTE_ACCESS_KEY | | Access key for the S3-compatible object storage. Set it together with RS_REMOTE_SECRET_KEY, or leave both unset to use the default AWS credential provider chain. |
| RS_REMOTE_SECRET_KEY | | Secret key for the S3-compatible object storage. Set it together with RS_REMOTE_ACCESS_KEY, or leave both unset to use the default AWS credential provider chain. |
| RS_REMOTE_SESSION_TOKEN | | Optional session token for temporary S3 credentials. Use it together with RS_REMOTE_ACCESS_KEY and RS_REMOTE_SECRET_KEY. |
| RS_REMOTE_BUCKET | | Name of the bucket to store data. The bucket must be created before starting ReductStore. |
| RS_REMOTE_CACHE_PATH | | Path to a folder where the storage caches data before uploading it to the remote backend. |
| RS_REMOTE_CACHE_SIZE | 1GB | Maximum size of the cache folder. When the limit is reached, the oldest data is removed. |
| RS_REMOTE_SYNC_INTERVAL | 0.1, 60 for S3 | Synchronization interval with the remote backend in seconds. |
| RS_REMOTE_DEFAULT_STORAGE_CLASS | STANDARD | Default storage class for data blocks. See the list of supported S3 storage classes. |

By default, ReductStore uses the local file system to store data. To use a remote backend, set the RS_REMOTE_BACKEND_TYPE variable to S3 and configure the other variables accordingly. When the remote backend is enabled, ReductStore caches data in the folder specified by the RS_REMOTE_CACHE_PATH variable, thereby reducing the number of requests made to the remote backend. The RS_DATA_PATH variable is ignored when the remote backend is enabled.
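As a sketch, a deployment backed by MinIO could be configured as follows. The endpoint, credentials, bucket name, and the reduct/store image tag are placeholders for your own values; RS_REMOTE_REGION is omitted because it is ignored for on-premises storage.

```shell
# Sketch: ReductStore with a MinIO (S3-compatible) backend.
# Endpoint, credentials, and bucket name are placeholders.
docker run -d \
  -e RS_REMOTE_BACKEND_TYPE=S3 \
  -e RS_REMOTE_ENDPOINT=http://minio:9000 \
  -e RS_REMOTE_ACCESS_KEY=minioadmin \
  -e RS_REMOTE_SECRET_KEY=minioadmin \
  -e RS_REMOTE_BUCKET=reduct-data \
  -e RS_REMOTE_CACHE_PATH=/cache \
  -e RS_REMOTE_CACHE_SIZE=5GB \
  -v "$PWD/cache:/cache" \
  -p 8383:8383 \
  reduct/store:latest
```

Remember that the bucket (reduct-data above) must exist in MinIO before the container starts.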

Default AWS Credentials

For Amazon S3, you can omit RS_REMOTE_ACCESS_KEY and RS_REMOTE_SECRET_KEY entirely and let the AWS SDK resolve credentials from its default provider chain. In this case, you can use any supported authentication method (e.g., environment variables starting with AWS_, EC2 instance roles, or ECS task roles) without needing to configure ReductStore specifically for AWS.
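For instance, an instance running on EC2 with an attached IAM role could omit the key variables entirely (a sketch; the region, bucket name, and image tag are placeholders):

```shell
# Sketch: AWS S3 backend with credentials resolved by the default
# AWS provider chain (instance role, AWS_* variables, etc.).
# No RS_REMOTE_ACCESS_KEY / RS_REMOTE_SECRET_KEY are set.
docker run -d \
  -e RS_REMOTE_BACKEND_TYPE=S3 \
  -e RS_REMOTE_REGION=us-east-1 \
  -e RS_REMOTE_BUCKET=my-reduct-bucket \
  -e RS_REMOTE_CACHE_PATH=/cache \
  -p 8383:8383 \
  reduct/store:latest
```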

For end-to-end deployment examples (standalone, active-passive, and read-only replicas) using S3, see the S3 integration guide.

Supported S3 Storage Classes

You can specify the default storage class for data blocks stored in the S3-compatible object storage using the RS_REMOTE_DEFAULT_STORAGE_CLASS environment variable. The following storage classes are supported:

| Name | Description |
|------|-------------|
| STANDARD | Standard storage class. Suitable for frequently accessed data. |
| STANDARD_IA | Standard-Infrequent Access storage class. Suitable for data that is infrequently accessed but requires rapid access when needed. |
| INTELLIGENT_TIERING | Intelligent-Tiering storage class. Automatically moves data to the most cost-effective access tier based on usage patterns. |
| ONEZONE_IA | One Zone-Infrequent Access storage class. Suitable for data that is infrequently accessed and stored in a single availability zone. |
| EXPRESS_ONEZONE | Express One Zone storage class. Suitable for data that requires low latency access and is stored in a single availability zone. |
| GLACIER_IR | Glacier Instant Retrieval storage class. Suitable for data that is rarely accessed and requires immediate access when needed. |
| GLACIER | Glacier storage class. Suitable for data that is rarely accessed and can be retrieved within a few hours. |
| DEEP_ARCHIVE | Deep Archive storage class. Suitable for data that is rarely accessed and can be retrieved within 12 hours. |
| OUTPOSTS | Outposts storage class. Suitable for data that needs to be stored on AWS Outposts. |
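For example, to keep data blocks in the Standard-Infrequent Access class (a sketch; choose the class that matches your access pattern):

```shell
# Sketch: store data blocks in STANDARD_IA by default.
export RS_REMOTE_DEFAULT_STORAGE_CLASS=STANDARD_IA
```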

For more information about S3 storage classes, see the Amazon S3 Storage Classes documentation.

Zenoh API Settings

ReductStore includes an optional Zenoh API that lets you ingest and retrieve data using the Zenoh pub/sub and query protocol alongside the existing HTTP API. The following environment variables configure it:

| Name | Default | Description |
|------|---------|-------------|
| RS_ZENOH_ENABLED | false | Set to true, 1, yes, or on to enable the Zenoh API |
| RS_ZENOH_CONFIG | | Inline Zenoh session config string, e.g. mode=client;connect/endpoints=[tcp/127.0.0.1:7447]. Takes precedence over RS_ZENOH_CONFIG_PATH |
| RS_ZENOH_CONFIG_PATH | | Path to a Zenoh JSON5 config file |
| RS_ZENOH_BUCKET | zenoh | Target ReductStore bucket. Created automatically if it does not exist. |
| RS_ZENOH_SUB_KEYEXPRS | | Key expression for the subscriber (write path). Disabled if unset. |
| RS_ZENOH_QUERY_KEYEXPRS | | Key expression for the queryable (read path). Disabled if unset. |
| RS_ZENOH_QUERY_LOCALITY | Any | Allowed origin for query replies. One of SessionLocal, Remote, or Any. |
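A minimal sketch of enabling the Zenoh API as a client of an existing router (the router address, bucket name, key expressions, and image tag are placeholders):

```shell
# Sketch: enable the Zenoh API and serve both the write and read paths.
# Data published under sensors/** is written to the "telemetry" bucket.
docker run -d \
  -e RS_ZENOH_ENABLED=true \
  -e RS_ZENOH_CONFIG="mode=client;connect/endpoints=[tcp/zenoh-router:7447]" \
  -e RS_ZENOH_BUCKET=telemetry \
  -e RS_ZENOH_SUB_KEYEXPRS="sensors/**" \
  -e RS_ZENOH_QUERY_KEYEXPRS="sensors/**" \
  -p 8383:8383 \
  reduct/store:latest
```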

Inline Zenoh Credentials

When using TLS or user-password authentication with RS_ZENOH_CONFIG, Zenoh normally requires file paths for certificates and dictionaries. These variables let you supply the file contents directly as environment variables instead. This is useful in Docker or Kubernetes environments where mounting files is inconvenient. Each value is written to a temporary file at startup and its path is injected into the Zenoh session config automatically.

| Name | Default | Description |
|------|---------|-------------|
| RS_ZENOH_TLS_ROOT_CA | | PEM content of the root CA certificate used to verify the Zenoh router or peer (transport/link/tls/root_ca_certificate) |
| RS_ZENOH_TLS_CONNECT_CERT | | PEM content of the client certificate for mutual TLS (transport/link/tls/connect_certificate) |
| RS_ZENOH_TLS_CONNECT_KEY | | PEM content of the client private key for mutual TLS (transport/link/tls/connect_private_key) |
| RS_ZENOH_AUTH_DICTIONARY | | User-password dictionary content, one user:password entry per line (transport/auth/usrpwd/dictionary_file) |
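For example, TLS material and the user-password dictionary can be read from local files when launching the server (a sketch; the file names and the user:password entry are placeholders):

```shell
# Sketch: supply certificate contents inline instead of mounting files.
export RS_ZENOH_TLS_ROOT_CA="$(cat ca.pem)"
export RS_ZENOH_TLS_CONNECT_CERT="$(cat client.pem)"
export RS_ZENOH_TLS_CONNECT_KEY="$(cat client.key)"
export RS_ZENOH_AUTH_DICTIONARY="reduct:changeme"
```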

For usage examples, deployment patterns, and a full explanation of key expressions, selectors, and encoding, see the Zenoh API integration guide.

I/O Settings

In addition to general settings, you can configure I/O settings to optimize communication over HTTP with the storage engine. The following table describes the available environment variables:

| Name | Default | Description |
|------|---------|-------------|
| RS_IO_BATCH_MAX_SIZE | 8MB | Maximum size of a batch of records sent to the client. |
| RS_IO_BATCH_MAX_RECORDS | 85 | Maximum number of records in a batch sent to and received from the client. |
| RS_IO_BATCH_MAX_METADATA_SIZE | 8KB | Maximum size of metadata in a batch of records sent to and received from the client. |
| RS_IO_BATCH_TIMEOUT | 5 | Maximum time in seconds for a batch of records to be prepared and sent to the client. If the batch is not full, it is sent after the timeout. |
| RS_IO_BATCH_RECORD_TIMEOUT | 1 | Maximum time in seconds to wait for a record to be added to a batch. If no record arrives in time, the unfinished batch is sent to the client. |
| RS_IO_OPERATION_TIMEOUT | 60 | Maximum time in seconds for an I/O operation (e.g., read or write). If the operation takes longer, it is aborted. |

Most of the I/O settings are related to batching and specify the size of the batch, the number of records in the batch, and the timeout for preparing and sending the batch on the server side.

However, if a client uses the HTTP/1.1 protocol, the RS_IO_BATCH_MAX_METADATA_SIZE and RS_IO_BATCH_MAX_RECORDS settings also limit the HTTP request headers sent by the client: a single batch can't contain more records than RS_IO_BATCH_MAX_RECORDS allows, and the headers in the request can't exceed RS_IO_BATCH_MAX_METADATA_SIZE.
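As a sketch, a workload that ingests many small records might raise the batch limits while shortening the batch timeout (the values below are illustrative, not recommendations):

```shell
# Sketch: larger batches for high-throughput ingestion of small records.
export RS_IO_BATCH_MAX_SIZE=16MB
export RS_IO_BATCH_MAX_RECORDS=200
export RS_IO_BATCH_TIMEOUT=2
```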

Read more about batching in the HTTP API Reference.

Lock File Settings

ReductStore uses a lock file to prevent multiple instances from accessing the same data folder. The lock file is created in the data folder specified by the RS_DATA_PATH variable, using the same storage backend as the data. You can configure the lock file settings using the following environment variables:

| Name | Default | Description |
|------|---------|-------------|
| RS_LOCK_FILE_POLLING_INTERVAL | 10 | Interval in seconds between attempts to acquire the lock file or rewrite the acquired lock file to update its timestamp. |
| RS_LOCK_FILE_TTL | 30 | Time-to-live in seconds for the lock file. If the lock file is not updated within the TTL, it is considered stale and can be overwritten by another instance. |
| RS_LOCK_FILE_TIMEOUT | 60 | Timeout in seconds for acquiring the lock file. If the lock file is not acquired within the timeout, the storage will exit. When set to 0, the storage will wait indefinitely. |
| RS_LOCK_FILE_FAILURE_ACTION | abort | Action to take if the lock file can't be acquired. It can be abort or proceed. If set to abort, the storage will exit. If set to proceed, the storage will overwrite the lock file and continue. |

The lock file is enabled when the RS_INSTANCE_ROLE variable is set to PRIMARY or SECONDARY. If the variable is not set, the lock file mechanism is disabled.
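A sketch of a primary instance that waits for the lock instead of exiting (illustrative values):

```shell
# Sketch: enable the lock file by assigning an instance role,
# and wait indefinitely for the lock rather than timing out.
export RS_INSTANCE_ROLE=PRIMARY
export RS_LOCK_FILE_TIMEOUT=0
export RS_LOCK_FILE_FAILURE_ACTION=abort
```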

Storage Engine Settings

You can configure storage engine settings to change the behavior of data storage and retrieval. The following table describes the available environment variables:

| Name | Default | Description |
|------|---------|-------------|
| RS_ENGINE_COMPACTION_INTERVAL | 60 | Interval in seconds between compacting WALs into blocks and synchronizing the storage state to the backend. |
| RS_ENGINE_REPLICA_UPDATE_INTERVAL | 60 | Interval in seconds between updates of bucket, entry lists, and indexes from the backend in read-only mode. |
| RS_ENGINE_ENABLE_INTEGRITY_CHECKS | true | If set to false, the storage engine skips integrity checks at startup to improve performance. |
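For example, to trade startup safety for faster boots on a large store (a sketch; keep integrity checks enabled unless startup time is a real problem):

```shell
# Sketch: less frequent compaction and no startup integrity checks.
export RS_ENGINE_COMPACTION_INTERVAL=300
export RS_ENGINE_ENABLE_INTEGRITY_CHECKS=false
```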

Replication Settings

ReductStore supports data replication between different instances. You can configure replication settings using the following environment variables:

| Name | Default | Description |
|------|---------|-------------|
| RS_REPLICATION_TIMEOUT | 5 | Timeout in seconds for attempts to reconnect to the target server. |
| RS_REPLICATION_LOG_SIZE | 1000000 | Maximum number of pending records in the replication log. The oldest records are overwritten when the limit is reached. |
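As a sketch, a replication source on an unreliable link might retry less aggressively and buffer more pending records (illustrative values):

```shell
# Sketch: longer reconnect timeout and a larger replication log.
export RS_REPLICATION_TIMEOUT=15
export RS_REPLICATION_LOG_SIZE=5000000
```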