# Configuration
QuicD is highly configurable, allowing you to tune every aspect from worker thread counts to QUIC flow control parameters. This guide explains all configuration options and provides examples for common scenarios.
## Configuration Sources

QuicD loads configuration from multiple sources with the following priority (highest to lowest):

1. CLI arguments (e.g., `--port 9000`)
2. Environment variables (e.g., `QUICD_PORT=9000`)
3. Configuration file (`config.toml`)
4. Built-in defaults
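The layering amounts to a chain of optional overrides where the first populated source wins. A minimal Rust sketch of resolving one setting this way (illustrative only, not QuicD's actual loader):

```rust
/// Resolve the effective port from layered sources, highest priority first.
/// Each source is an Option; the first `Some` wins, else the built-in default.
fn resolve_port(cli: Option<u16>, env: Option<u16>, file: Option<u16>) -> u16 {
    cli.or(env).or(file).unwrap_or(8080) // 8080 is the documented default
}

fn main() {
    // File says 8443, env says 9000, CLI says 9100 -> the CLI flag wins.
    assert_eq!(resolve_port(Some(9100), Some(9000), Some(8443)), 9100);
    // No CLI flag -> the environment variable wins over the file.
    assert_eq!(resolve_port(None, Some(9000), Some(8443)), 9000);
    // Nothing set anywhere -> built-in default.
    assert_eq!(resolve_port(None, None, None), 8080);
}
```

The same pattern applies to every setting listed below: a CLI flag beats its `QUICD_*` environment variable, which beats the `config.toml` entry.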
## CLI Arguments

```sh
quicd --host 0.0.0.0 --port 8443 --log-level debug --config-file custom.toml
```

## Environment Variables

All settings can be overridden via environment variables prefixed with `QUICD_`:

```sh
export QUICD_HOST="0.0.0.0"
export QUICD_PORT="8443"
export QUICD_LOG_LEVEL="debug"
sudo -E quicd
```

## Configuration File

The primary way to configure QuicD is via a TOML file (default: `config.toml`):
```toml
host = "0.0.0.0"
port = 8443
log_level = "info"

[runtime]
worker_threads = 8

[netio]
workers = 4

# ... more sections
```

## Complete Configuration Reference

### Main Settings

| Parameter | Type | Default | Description |
|---|---|---|---|
| `host` | string | `"127.0.0.1"` | Bind address (use `"0.0.0.0"` for all interfaces) |
| `port` | u16 | `8080` | UDP port to bind |
| `log_level` | string | `"info"` | Logging level: `"trace"`, `"debug"`, `"info"`, `"warn"`, `"error"` |
| `config_file` | string | `"config.toml"` | Path to configuration file (CLI only) |
Example:
```toml
host = "0.0.0.0"    # Listen on all interfaces
port = 8443         # Standard HTTPS alternate port
log_level = "info"  # Production logging level
```

### Runtime Configuration (`[runtime]`)

Controls the Tokio async runtime used for application tasks.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `worker_threads` | usize | CPU count | Number of Tokio runtime threads |
| `max_blocking_threads` | usize | `512` | Maximum blocking thread pool size |
| `thread_name` | string | `"quicd-worker"` | Thread name prefix for debugging |
| `thread_stack_size` | usize | `2097152` | Stack size per thread in bytes (2 MB default) |
Tuning guidance:
- `worker_threads`: Set to the number of CPU cores for balanced workloads
- Increase `max_blocking_threads` if you have many blocking operations (file I/O, DNS)
- Decrease `thread_stack_size` if running many connections (reduces memory per task)
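The "CPU count" default for `worker_threads` can be derived portably with the standard library; a small sketch of how such a default might be computed (the function name is illustrative, not a QuicD API):

```rust
use std::thread;

/// Default worker-thread count: one per logical CPU the process can use,
/// falling back to 1 if the count cannot be determined.
fn default_worker_threads() -> usize {
    thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
}

fn main() {
    let n = default_worker_threads();
    assert!(n >= 1);
    println!("worker_threads would default to {n} on this machine");
}
```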
Example:
```toml
[runtime]
worker_threads = 16          # 16-core CPU
max_blocking_threads = 512
thread_name = "quicd-app"
thread_stack_size = 2097152  # 2 MB
```

### Network I/O Configuration (`[netio]`)

Controls the native worker threads handling network I/O and the QUIC protocol.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `workers` | usize | `4` | Number of native worker threads |
| `buffer_pool_size` | usize | `8192` | Buffers per worker |
| `buffer_size` | usize | `2048` | Buffer size in bytes |
| `io_uring_entries` | usize | `4096` | io_uring queue depth |
| `enable_numa` | bool | `false` | NUMA-aware memory allocation |
Tuning guidance:
- `workers`: 1 per physical CPU core (not hyperthreads) is optimal
- `buffer_pool_size`: Should be 2-4x expected concurrent connections per worker
- `buffer_size`: 2048 bytes matches typical MTU with overhead; rarely needs changing
- `io_uring_entries`: Higher = more batching (throughput), lower = less latency
- `enable_numa`: Enable on multi-socket systems for a 10-20% performance gain
Memory calculation:
```
Total buffer memory = workers × buffer_pool_size × buffer_size
Example: 8 workers × 8192 buffers × 2048 bytes = 128 MB
```

Example (high-throughput server):
```toml
[netio]
workers = 8               # 8-core CPU
buffer_pool_size = 16384  # Support 32K connections (2x buffer pool)
buffer_size = 2048
io_uring_entries = 8192   # High batching for throughput
enable_numa = true        # Multi-socket system
```

Example (low-latency server):
```toml
[netio]
workers = 4
buffer_pool_size = 4096
buffer_size = 2048
io_uring_entries = 1024  # Lower latency, less batching
enable_numa = false
```

### QUIC Configuration (`[quic]`)

Controls QUIC protocol parameters and connection limits.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_connections_per_worker` | usize | `100000` | Maximum connections per worker thread |
| `idle_timeout_ms` | u64 | `30000` | Connection idle timeout (milliseconds) |
| `initial_max_data` | u64 | `10000000` | Connection-level flow control limit (bytes) |
| `initial_max_stream_data_bidi_local` | u64 | `1000000` | Flow control for bidirectional streams (local-initiated) |
| `initial_max_stream_data_bidi_remote` | u64 | `1000000` | Flow control for bidirectional streams (remote-initiated) |
| `initial_max_stream_data_uni` | u64 | `1000000` | Flow control for unidirectional streams |
| `initial_max_streams_bidi` | u64 | `100` | Concurrent bidirectional stream limit |
| `initial_max_streams_uni` | u64 | `100` | Concurrent unidirectional stream limit |
Tuning guidance:
- `max_connections_per_worker`: Total capacity; the actual limit is memory-dependent
- `idle_timeout_ms`: Balance between resource cleanup and connection persistence
- Flow control (`initial_max_*`): Larger = more data in-flight = higher throughput but more memory
- Stream limits: Tune based on application (HTTP/3 uses many short-lived streams)
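One interaction between these limits is worth keeping in mind: in QUIC, the connection-level `initial_max_data` caps the cumulative total across all streams, so generous per-stream limits cannot all be used at once. A sketch of the resulting per-connection bound (the helper name is illustrative, not a QuicD API):

```rust
/// Upper bound on in-flight data per connection implied by flow control:
/// the sum of per-stream allowances, capped by the connection-level limit.
fn max_buffered_per_connection(initial_max_data: u64, streams: u64, per_stream: u64) -> u64 {
    initial_max_data.min(streams * per_stream)
}

fn main() {
    // Defaults: 100 bidi streams × 1 MB each would allow 100 MB,
    // but the 10 MB connection-level limit caps it first.
    assert_eq!(max_buffered_per_connection(10_000_000, 100, 1_000_000), 10_000_000);
    // With only 5 streams, the per-stream limits bind instead.
    assert_eq!(max_buffered_per_connection(10_000_000, 5, 1_000_000), 5_000_000);
}
```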
Example (web server - many small requests):
```toml
[quic]
max_connections_per_worker = 50000
idle_timeout_ms = 60000                       # 60s for persistent connections
initial_max_data = 10000000                   # 10MB connection limit
initial_max_stream_data_bidi_local = 1000000  # 1MB per stream
initial_max_stream_data_bidi_remote = 1000000
initial_max_streams_bidi = 200                # Many concurrent requests
initial_max_streams_uni = 100
```

Example (media streaming - large transfers):

```toml
[quic]
max_connections_per_worker = 10000
idle_timeout_ms = 120000                       # 2 minutes for live streams
initial_max_data = 100000000                   # 100MB for video
initial_max_stream_data_bidi_local = 50000000  # 50MB streams
initial_max_stream_data_bidi_remote = 50000000
initial_max_streams_bidi = 10                  # Few concurrent streams per connection
initial_max_streams_uni = 50                   # Datagrams for real-time
```

### Telemetry Configuration (`[telemetry]`)

Controls OpenTelemetry metrics and tracing.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `enable_metrics` | bool | `true` | Enable OpenTelemetry metrics export |
| `enable_tracing` | bool | `true` | Enable distributed tracing |
| `otlp_endpoint` | string | `"http://localhost:4317"` | OTLP collector endpoint (gRPC) |
Tuning guidance:
- Disable in development if not needed to reduce overhead
- Use a local OTLP collector (e.g., OpenTelemetry Collector) in production
- Metrics overhead: ~1-2% CPU when enabled
Example:
```toml
[telemetry]
enable_metrics = true
enable_tracing = true
otlp_endpoint = "http://otel-collector:4317"  # Kubernetes service
```

Disable for development:

```toml
[telemetry]
enable_metrics = false
enable_tracing = false
```

### Channel Configuration (`[channels]`)

Controls bounded channel capacities for communication between workers and app tasks.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `egress_capacity` | usize | `1024` | Commands from app tasks to workers |
| `ingress_capacity` | usize | `1024` | Events from workers to app tasks |
| `stream_data_capacity` | usize | `256` | Data chunks per stream channel |
Tuning guidance:
- Increase if you see “channel full” errors
- Larger = more buffering = higher latency but better throughput
- Balance: Too small = backpressure, too large = memory overhead
Symptoms of undersized channels:
- Log warnings: “egress channel at capacity”
- Connection errors: “worker unavailable”
- High latency spikes
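The backpressure behind these symptoms can be demonstrated with a bounded channel from the Rust standard library, standing in for QuicD's internal channels (QuicD's own channels are async, but the capacity semantics are the same):

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    // A bounded channel with capacity 2, like a tiny egress_capacity.
    let (tx, _rx) = sync_channel::<u32>(2);

    // The first two sends fit in the channel's buffer.
    assert!(tx.try_send(1).is_ok());
    assert!(tx.try_send(2).is_ok());

    // The third exceeds capacity: this is the "channel full" condition
    // that surfaces as the warnings described above.
    assert!(tx.try_send(3).is_err());
}
```

Once a sender hits this condition it must either wait (adding latency) or drop the command (surfacing as errors), which is why both symptoms appear together under load.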
Example (high-load server):
```toml
[channels]
egress_capacity = 4096      # More buffering for high command rate
ingress_capacity = 4096
stream_data_capacity = 512  # Larger stream buffers
```

## Example Configurations

### Development (Laptop)

```toml
host = "127.0.0.1"
port = 8443
log_level = "debug"

[runtime]
worker_threads = 4

[netio]
workers = 2
buffer_pool_size = 1024
io_uring_entries = 1024

[quic]
max_connections_per_worker = 1000
idle_timeout_ms = 30000

[telemetry]
enable_metrics = false
enable_tracing = false

[channels]
egress_capacity = 256
ingress_capacity = 256
```

### Production (Web Server)
```toml
host = "0.0.0.0"
port = 443
log_level = "info"

[runtime]
worker_threads = 16

[netio]
workers = 8
buffer_pool_size = 16384
buffer_size = 2048
io_uring_entries = 8192
enable_numa = true

[quic]
max_connections_per_worker = 100000
idle_timeout_ms = 60000
initial_max_data = 10000000
initial_max_stream_data_bidi_local = 1000000
initial_max_stream_data_bidi_remote = 1000000
initial_max_streams_bidi = 200
initial_max_streams_uni = 100

[telemetry]
enable_metrics = true
enable_tracing = true
otlp_endpoint = "http://otel-collector:4317"

[channels]
egress_capacity = 2048
ingress_capacity = 2048
stream_data_capacity = 512
```

### Low-Latency (Real-Time Media)
```toml
host = "0.0.0.0"
port = 8443
log_level = "warn"  # Reduce logging overhead

[runtime]
worker_threads = 8

[netio]
workers = 4
buffer_pool_size = 4096
io_uring_entries = 1024  # Lower for reduced latency

[quic]
max_connections_per_worker = 10000
idle_timeout_ms = 120000  # Keep live streams alive
initial_max_data = 100000000
initial_max_stream_data_bidi_local = 50000000
initial_max_stream_data_bidi_remote = 50000000

[telemetry]
enable_metrics = true
enable_tracing = false  # Tracing adds latency

[channels]
egress_capacity = 1024
ingress_capacity = 1024
stream_data_capacity = 128  # Smaller for low latency
```

## Performance Tuning Tips

### Worker Count

Rule of thumb: 1 worker per physical CPU core (not hyperthreads).
```sh
# Check physical cores
lscpu | grep "Core(s) per socket"
# Set workers to this number
```

### Buffer Pool Sizing

Formula: `buffer_pool_size = 2 × expected_concurrent_connections / workers`
Example: 100K connections, 8 workers → buffer_pool_size = 2 × 100000 / 8 = 25000
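The pool-sizing formula above and the earlier buffer-memory formula can be combined into a quick capacity-planning sketch (function names are illustrative, not QuicD APIs):

```rust
/// Buffers per worker: 2x expected concurrent connections, split across workers.
fn buffer_pool_size(expected_connections: usize, workers: usize) -> usize {
    2 * expected_connections / workers
}

/// Total buffer memory in bytes: workers × pool size × buffer size.
fn total_buffer_memory(workers: usize, pool_size: usize, buffer_size: usize) -> usize {
    workers * pool_size * buffer_size
}

fn main() {
    let workers = 8;
    let pool = buffer_pool_size(100_000, workers);
    assert_eq!(pool, 25_000); // matches the worked example above

    // At the default 2048-byte buffers this costs ~400 MB of buffer memory,
    // which is worth checking against the host's RAM budget.
    assert_eq!(total_buffer_memory(workers, pool, 2048), 409_600_000);
}
```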
### Channel Capacities

Monitor logs for "channel full" warnings. Increase capacity by 2x if they occur frequently.
### Flow Control

Larger flow control limits = more data in-flight:
- Pros: Higher throughput, better utilization
- Cons: More memory, higher latency spikes
Start with defaults, increase if throughput-limited.
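A common way to judge whether the defaults are throughput-limiting is a bandwidth-delay-product (BDP) estimate: to keep a path busy, the flow-control window must cover bandwidth × round-trip time. A sketch:

```rust
/// Bandwidth-delay product in bytes: (bits/s ÷ 8) × (RTT in seconds).
fn bdp_bytes(bandwidth_bits_per_sec: u64, rtt_ms: u64) -> u64 {
    bandwidth_bits_per_sec / 8 * rtt_ms / 1000
}

fn main() {
    // A 1 Gbit/s path at 50 ms RTT needs ~6.25 MB in flight,
    // comfortably within the 10 MB initial_max_data default.
    assert_eq!(bdp_bytes(1_000_000_000, 50), 6_250_000);
}
```

If your clients sit on high-bandwidth, high-RTT paths whose BDP exceeds the defaults, raise `initial_max_data` (and the per-stream limits) toward that estimate.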
## Validation

QuicD validates configuration on startup. Common errors:
- "Invalid bind address": Check the `host` format
- "Port out of range": Must be 1-65535
- "Buffer pool size too small": Minimum 256
- "Worker count must be > 0": At least 1 worker required
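The errors above imply validation logic roughly like the following (an illustrative sketch, not QuicD's actual validator):

```rust
/// Check a few of the documented constraints, returning the first failure.
fn validate(port: u32, buffer_pool_size: usize, workers: usize) -> Result<(), &'static str> {
    if port == 0 || port > 65535 {
        return Err("Port out of range");
    }
    if buffer_pool_size < 256 {
        return Err("Buffer pool size too small");
    }
    if workers == 0 {
        return Err("Worker count must be > 0");
    }
    Ok(())
}

fn main() {
    assert!(validate(8443, 8192, 4).is_ok());
    assert_eq!(validate(0, 8192, 4), Err("Port out of range"));
    assert_eq!(validate(8443, 64, 4), Err("Buffer pool size too small"));
    assert_eq!(validate(8443, 8192, 0), Err("Worker count must be > 0"));
}
```

Because validation runs at startup, a bad value fails fast with one of these messages rather than surfacing later under load.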
## Next Steps

- Run HTTP/3 Server: Use your configuration to serve HTTP/3
- Architecture: Understand how configuration affects internals
- Performance Tuning: Advanced optimization techniques
Proper configuration is key to QuicD’s performance. Start with defaults, then tune based on monitoring and profiling.