
Configuration

QuicD is highly configurable, allowing you to tune every aspect from worker thread counts to QUIC flow control parameters. This guide explains all configuration options and provides examples for common scenarios.

QuicD loads configuration from multiple sources with the following priority (highest to lowest):

  1. CLI arguments (e.g., --port 9000)
  2. Environment variables (e.g., QUICD_PORT=9000)
  3. Configuration file (config.toml)
  4. Built-in defaults
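This precedence can be modeled as a simple layered lookup. The sketch below is illustrative (the `resolve` helper is hypothetical, not QuicD's actual loader):

```python
# Sketch of layered configuration lookup (hypothetical helper, not
# QuicD's actual loader): earlier sources take priority.
def resolve(key, cli, env, file, defaults):
    for source in (cli, env, file, defaults):
        if key in source:
            return source[key]
    raise KeyError(key)

# QUICD_PORT=9000 in the environment beats port = 8443 in config.toml,
# because no --port flag was passed on the CLI.
port = resolve(
    "port",
    cli={},
    env={"port": 9000},
    file={"port": 8443},
    defaults={"port": 8080},
)
print(port)  # 9000
```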
For example:
quicd --host 0.0.0.0 --port 8443 --log-level debug --config-file custom.toml

All settings can be overridden via environment variables prefixed with QUICD_:

export QUICD_HOST="0.0.0.0"
export QUICD_PORT="8443"
export QUICD_LOG_LEVEL="debug"
sudo -E quicd

The primary way to configure QuicD is via a TOML file (default: config.toml):

host = "0.0.0.0"
port = 8443
log_level = "info"
[runtime]
worker_threads = 8
[netio]
workers = 4
# ... more sections

Global settings (top level of config.toml):

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| host | string | "127.0.0.1" | Bind address (use "0.0.0.0" for all interfaces) |
| port | u16 | 8080 | UDP port to bind |
| log_level | string | "info" | Logging level: "trace", "debug", "info", "warn", "error" |
| config_file | string | "config.toml" | Path to configuration file (CLI only) |

Example:

host = "0.0.0.0" # Listen on all interfaces
port = 8443 # Standard HTTPS alternate port
log_level = "info" # Production logging level

The [runtime] section controls the Tokio async runtime used for application tasks.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| worker_threads | usize | CPU count | Number of Tokio runtime threads |
| max_blocking_threads | usize | 512 | Maximum blocking thread pool size |
| thread_name | string | "quicd-worker" | Thread name prefix for debugging |
| thread_stack_size | usize | 2097152 | Stack size per thread in bytes (2 MB default) |

Tuning guidance:

  • worker_threads: Set to number of CPU cores for balanced workloads
  • Increase max_blocking_threads if you have many blocking operations (file I/O, DNS)
  • Decrease thread_stack_size to reduce per-thread memory (stacks belong to OS threads, not individual async tasks)

Example:

[runtime]
worker_threads = 16 # 16-core CPU
max_blocking_threads = 512
thread_name = "quicd-app"
thread_stack_size = 2097152 # 2MB
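One consequence of these settings: thread stacks put an upper bound on the memory the runtime can reserve. A rough worst-case estimate (assuming every blocking thread is actually spawned, which is rare in practice):

```python
# Worst-case thread-stack reservation for the runtime settings above.
# Resident memory is usually far lower; stacks are reserved, not fully used.
worker_threads = 16
max_blocking_threads = 512
thread_stack_size = 2 * 1024 * 1024  # 2097152 bytes, the default

max_stack_bytes = (worker_threads + max_blocking_threads) * thread_stack_size
print(max_stack_bytes // (1024 * 1024), "MiB")  # 1056 MiB
```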

The [netio] section controls the native worker threads that handle network I/O and the QUIC protocol.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| workers | usize | 4 | Number of native worker threads |
| buffer_pool_size | usize | 8192 | Buffers per worker |
| buffer_size | usize | 2048 | Buffer size in bytes |
| io_uring_entries | usize | 4096 | io_uring queue depth |
| enable_numa | bool | false | NUMA-aware memory allocation |

Tuning guidance:

  • workers: 1 per physical CPU core (not hyperthreads) is optimal
  • buffer_pool_size: Should be 2-4x expected concurrent connections per worker
  • buffer_size: 2048 bytes matches typical MTU with overhead; rarely needs changing
  • io_uring_entries: Higher = more batching (throughput), lower = less latency
  • enable_numa: Enable on multi-socket systems for 10-20% performance gain

Memory calculation:

Total buffer memory = workers × buffer_pool_size × buffer_size
Example: 8 workers × 8192 buffers × 2048 bytes = 128MB
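The same calculation as a runnable check:

```python
# Total buffer memory = workers × buffer_pool_size × buffer_size
workers = 8
buffer_pool_size = 8192
buffer_size = 2048

total_bytes = workers * buffer_pool_size * buffer_size
print(total_bytes // (1024 * 1024), "MB")  # 128 MB
```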

Example (high-throughput server):

[netio]
workers = 8 # 8-core CPU
buffer_pool_size = 16384 # ~32K total connections (4x buffers per connection, 8 workers)
buffer_size = 2048
io_uring_entries = 8192 # High batching for throughput
enable_numa = true # Multi-socket system

Example (low-latency server):

[netio]
workers = 4
buffer_pool_size = 4096
buffer_size = 2048
io_uring_entries = 1024 # Lower latency, less batching
enable_numa = false

The [quic] section controls QUIC protocol parameters and connection limits.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| max_connections_per_worker | usize | 100000 | Maximum connections per worker thread |
| idle_timeout_ms | u64 | 30000 | Connection idle timeout (milliseconds) |
| initial_max_data | u64 | 10000000 | Connection-level flow control limit (bytes) |
| initial_max_stream_data_bidi_local | u64 | 1000000 | Flow control for bidirectional streams (locally initiated) |
| initial_max_stream_data_bidi_remote | u64 | 1000000 | Flow control for bidirectional streams (remotely initiated) |
| initial_max_stream_data_uni | u64 | 1000000 | Flow control for unidirectional streams |
| initial_max_streams_bidi | u64 | 100 | Concurrent bidirectional stream limit |
| initial_max_streams_uni | u64 | 100 | Concurrent unidirectional stream limit |

Tuning guidance:

  • max_connections_per_worker: Per-worker upper bound (total capacity = workers × this value); the practical limit is memory-dependent
  • idle_timeout_ms: Balance between resource cleanup and connection persistence
  • Flow control (initial_max_*): Larger = more data in-flight = higher throughput but more memory
  • Stream limits: Tune based on application (HTTP/3 uses many short-lived streams)
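Flow control limits also bound worst-case buffering. A back-of-the-envelope check (illustrative arithmetic, not a QuicD formula) shows why raising initial_max_data and connection counts together needs care:

```python
# Upper bound on buffered data per worker if every connection used its
# full connection-level flow control window at once (pathological case).
max_connections_per_worker = 100_000
initial_max_data = 10_000_000  # 10 MB, the default

worst_case_bytes = max_connections_per_worker * initial_max_data
print(worst_case_bytes // 10**9, "GB worst case")  # rarely approached in practice
```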

Example (web server - many small requests):

[quic]
max_connections_per_worker = 50000
idle_timeout_ms = 60000 # 60s for persistent connections
initial_max_data = 10000000 # 10MB connection limit
initial_max_stream_data_bidi_local = 1000000 # 1MB per stream
initial_max_stream_data_bidi_remote = 1000000
initial_max_streams_bidi = 200 # Many concurrent requests
initial_max_streams_uni = 100

Example (media streaming - large transfers):

[quic]
max_connections_per_worker = 10000
idle_timeout_ms = 120000 # 2 minutes for live streams
initial_max_data = 100000000 # 100MB for video
initial_max_stream_data_bidi_local = 50000000 # 50MB streams
initial_max_stream_data_bidi_remote = 50000000
initial_max_streams_bidi = 10 # Few concurrent streams per connection
initial_max_streams_uni = 50 # Datagrams for real-time

The [telemetry] section controls OpenTelemetry metrics and tracing.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| enable_metrics | bool | true | Enable OpenTelemetry metrics export |
| enable_tracing | bool | true | Enable distributed tracing |
| otlp_endpoint | string | "http://localhost:4317" | OTLP collector endpoint (gRPC) |

Tuning guidance:

  • Disable in development if not needed to reduce overhead
  • Use a local OTLP collector (e.g., OpenTelemetry Collector) in production
  • Metrics overhead: ~1-2% CPU when enabled

Example:

[telemetry]
enable_metrics = true
enable_tracing = true
otlp_endpoint = "http://otel-collector:4317" # Kubernetes service

Disable for development:

[telemetry]
enable_metrics = false
enable_tracing = false

The [channels] section controls bounded channel capacities for communication between workers and app tasks.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| egress_capacity | usize | 1024 | Commands from app tasks to workers |
| ingress_capacity | usize | 1024 | Events from workers to app tasks |
| stream_data_capacity | usize | 256 | Data chunks per stream channel |

Tuning guidance:

  • Increase if you see “channel full” errors
  • Larger = more buffering = higher latency but better throughput
  • Balance: Too small = backpressure, too large = memory overhead

Symptoms of undersized channels:

  • Log warnings: “egress channel at capacity”
  • Connection errors: “worker unavailable”
  • High latency spikes

Example (high-load server):

[channels]
egress_capacity = 4096 # More buffering for high command rate
ingress_capacity = 4096
stream_data_capacity = 512 # Larger stream buffers
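The “channel full” failure mode can be illustrated with a tiny bounded queue. Python's queue.Queue stands in here for QuicD's Rust-side channels; the point is only why capacity matters:

```python
import queue

# A bounded channel with deliberately tiny capacity, to force overflow.
egress = queue.Queue(maxsize=4)

for i in range(4):
    egress.put_nowait(i)  # fill the channel to capacity

try:
    egress.put_nowait(99)  # one more command than the channel can hold
except queue.Full:
    # This is the situation behind the "egress channel at capacity"
    # warning: the producer must drop, block, or apply backpressure.
    print("egress channel at capacity")
```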

Complete example: development configuration

host = "127.0.0.1"
port = 8443
log_level = "debug"
[runtime]
worker_threads = 4
[netio]
workers = 2
buffer_pool_size = 1024
io_uring_entries = 1024
[quic]
max_connections_per_worker = 1000
idle_timeout_ms = 30000
[telemetry]
enable_metrics = false
enable_tracing = false
[channels]
egress_capacity = 256
ingress_capacity = 256
Complete example: production configuration

host = "0.0.0.0"
port = 443
log_level = "info"
[runtime]
worker_threads = 16
[netio]
workers = 8
buffer_pool_size = 16384
buffer_size = 2048
io_uring_entries = 8192
enable_numa = true
[quic]
max_connections_per_worker = 100000
idle_timeout_ms = 60000
initial_max_data = 10000000
initial_max_stream_data_bidi_local = 1000000
initial_max_stream_data_bidi_remote = 1000000
initial_max_streams_bidi = 200
initial_max_streams_uni = 100
[telemetry]
enable_metrics = true
enable_tracing = true
otlp_endpoint = "http://otel-collector:4317"
[channels]
egress_capacity = 2048
ingress_capacity = 2048
stream_data_capacity = 512
Complete example: low-latency media streaming

host = "0.0.0.0"
port = 8443
log_level = "warn" # Reduce logging overhead
[runtime]
worker_threads = 8
[netio]
workers = 4
buffer_pool_size = 4096
io_uring_entries = 1024 # Lower for reduced latency
[quic]
max_connections_per_worker = 10000
idle_timeout_ms = 120000 # Keep live streams alive
initial_max_data = 100000000
initial_max_stream_data_bidi_local = 50000000
initial_max_stream_data_bidi_remote = 50000000
[telemetry]
enable_metrics = true
enable_tracing = false # Tracing adds latency
[channels]
egress_capacity = 1024
ingress_capacity = 1024
stream_data_capacity = 128 # Smaller for low latency

Rule of thumb: 1 worker per physical CPU core (not hyperthreads).

# Check physical cores (cores per socket × sockets)
lscpu | grep "Core(s) per socket"
lscpu | grep "Socket(s)"
# Multiply the two values and set workers to the result

Buffer pool sizing formula: buffer_pool_size = 2 × expected_concurrent_connections / workers

Example: 100K connections, 8 workers → buffer_pool_size = 2 × 100000 / 8 = 25000
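As a quick check of the formula:

```python
# buffer_pool_size = 2 × expected_concurrent_connections / workers
expected_connections = 100_000
workers = 8

buffer_pool_size = 2 * expected_connections // workers
print(buffer_pool_size)  # 25000
```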

Channel capacity: monitor logs for “channel full” warnings, and double the capacity if they appear frequently.

Flow control: larger limits allow more data in-flight:

  • Pros: Higher throughput, better utilization
  • Cons: More memory, higher latency spikes

Start with defaults, increase if throughput-limited.
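A common starting point when increasing limits (a general transport heuristic, not something the QuicD docs prescribe) is to size the connection window near the link's bandwidth-delay product:

```python
# Bandwidth-delay product: roughly the bytes that must be in flight
# to keep the link fully utilized on one connection.
bandwidth_bits_per_s = 1_000_000_000  # 1 Gbit/s link (assumed)
rtt_s = 0.05                          # 50 ms round-trip time (assumed)

bdp_bytes = int(bandwidth_bits_per_s / 8 * rtt_s)
print(bdp_bytes)  # 6250000 -> an initial_max_data of roughly 6-12 MB
```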


QuicD validates configuration on startup. Common errors:

  • “Invalid bind address”: Check host format
  • “Port out of range”: Must be 1-65535
  • “Buffer pool size too small”: Minimum 256
  • “Worker count must be > 0”: At least 1 worker required


Proper configuration is key to QuicD’s performance. Start with defaults, then tune based on monitoring and profiling.