CRITICAL DISCLAIMER: Production Data Center Only
These latency tests are designed for production data center environments only. DO NOT run these tests on local machines or consumer internet connections. Consumer connections cannot sustain heavy Solana subscriptions and will produce meaningless results that do not reflect real-world performance.
CO-LOCATION REQUIREMENT: Deploy Near Your LaserStream Endpoint
For meaningful latency measurements, you must co-locate your testing infrastructure in the same region as your chosen LaserStream endpoint. Network distance will dominate your measurements - testing from a different continent measures network latency, not LaserStream performance.
Understanding latency in distributed blockchain systems
When working with blockchain streaming services, latency measurement becomes complex because distributed systems have no universal clock. Unlike traditional systems where you can measure round-trip time to a single server, blockchain networks involve many validators, each receiving and processing the same transaction at different times.
The fundamental challenge: blockchains like Solana have no concept of absolute time. Every validator node worldwide receives the same transaction at a different moment, and confirmation depends on a percentage of the cluster reaching consensus. This makes deterministic latency measurement impossible in the traditional sense.
Commitment levels and latency priorities
Solana offers three commitment levels, each with different latency characteristics:
- Processed: Fastest, single validator confirmation (~400ms)
- Confirmed: Medium, supermajority confirmation (~2-3 seconds)
- Finalized: Slowest, complete network finalization (~15-30 seconds)
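As a concrete sketch, this is how a commitment level is typically selected when building a Yellowstone-style gRPC subscription. The types shown come from the yellowstone-grpc-proto crate; field names can differ between crate versions, so treat this as a template rather than an exact API reference:
```rust
use std::collections::HashMap;
use yellowstone_grpc_proto::geyser::{
    CommitmentLevel, SubscribeRequest, SubscribeRequestFilterBlocksMeta,
};

/// Build a BlockMeta-only subscription at the given commitment level.
/// The filter key ("latency-test") is arbitrary and just labels the filter.
fn block_meta_request(commitment: CommitmentLevel) -> SubscribeRequest {
    SubscribeRequest {
        blocks_meta: HashMap::from([(
            "latency-test".to_string(),
            SubscribeRequestFilterBlocksMeta::default(),
        )]),
        // Processed gives the lowest latency; switch to Confirmed or
        // Finalized to measure the slower commitment levels.
        commitment: Some(commitment as i32),
        ..Default::default()
    }
}
```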
Three approaches to measuring latency
1. Comparing parallel gRPC streams
Most reliable method - Compares two independent streams to the same data source, measuring which receives identical events first.
Advantages:
- Eliminates clock synchronization issues
- Provides relative performance comparison
- Most accurate for comparing services
2. Local timestamp vs created_at comparison
Moderate reliability - Measures the difference between when your system receives a message and the timestamp embedded in the message by the LaserStream service.
Limitations:
- Only represents when LaserStream created the message internally
- Delays upstream of LaserStream are not captured
- Less accurate than Method 1 for true end-to-end latency
3. Block timestamp analysis (not recommended)
Not recommended - Compares local receipt time against Solana's block timestamp.
Significant limitations:
- Block timestamps only have second-level granularity
- Solana produces blocks roughly every 400ms, so two to three consecutive blocks can share the same block timestamp
- Provides minimal useful information
Setup requirements
Regional co-location
For meaningful latency measurements, deploy your testing infrastructure in the same data center or region as your LaserStream endpoint.
Available LaserStream regions:
- ewr: New York, US (East Coast) - https://laserstream-mainnet-ewr.helius-rpc.com
- pitt: Pittsburgh, US (Central) - https://laserstream-mainnet-pitt.helius-rpc.com
- slc: Salt Lake City, US (West Coast) - https://laserstream-mainnet-slc.helius-rpc.com
- ams: Amsterdam, Europe - https://laserstream-mainnet-ams.helius-rpc.com
- fra: Frankfurt, Europe - https://laserstream-mainnet-fra.helius-rpc.com
- tyo: Tokyo, Asia - https://laserstream-mainnet-tyo.helius-rpc.com
- sgp: Singapore, Asia - https://laserstream-mainnet-sgp.helius-rpc.com
For devnet testing, a devnet endpoint is also available: https://laserstream-devnet-ewr.helius-rpc.com
See the LaserStream gRPC documentation for complete setup instructions and endpoint selection guidelines.
Rust environment setup
All measurement scripts use Rust with Cargo. Basic setup: create a Cargo project, add your dependencies, and create a .env file with your credentials:
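For example (the variable names below are placeholders - use whatever names your scripts read via dotenv, and substitute your real endpoint and API key):
```
# Placeholder variable names - adjust to match what your scripts load
LASERSTREAM_ENDPOINT=https://laserstream-mainnet-ewr.helius-rpc.com
LASERSTREAM_API_KEY=your-helius-api-key
```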
Method 1: Parallel stream comparison
This script establishes two independent connections to different gRPC endpoints and measures which receives the same BlockMeta messages first. This approach eliminates clock synchronization issues by using relative timing.
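The full script needs two live gRPC subscriptions, which are omitted here. The sketch below shows only the comparison harness, assuming each subscription task pushes an Event (source tag, slot, local arrival Instant) into a shared channel the moment a BlockMeta update arrives; Source, Event, and collect_deltas are illustrative names, not SDK types:
```rust
use std::collections::HashMap;
use std::time::Instant;
use tokio::sync::mpsc;

#[derive(Clone, Copy, PartialEq, Eq)]
enum Source {
    Ys, // first service under test
    Ls, // second service (LaserStream)
}

struct Event {
    source: Source,
    slot: u64,        // BlockMeta slot number, used to match identical events
    seen_at: Instant, // local receipt time, captured in the subscription task
}

/// Collect signed per-slot deltas in milliseconds (positive = YS slower).
async fn collect_deltas(mut rx: mpsc::Receiver<Event>) -> Vec<f64> {
    let mut pending: HashMap<u64, Event> = HashMap::new();
    let mut deltas_ms = Vec::new();
    while let Some(ev) = rx.recv().await {
        match pending.remove(&ev.slot) {
            // First sighting of this slot: remember it and wait for the peer.
            None => {
                pending.insert(ev.slot, ev);
            }
            // Duplicate from the same stream: keep the earlier sighting.
            Some(first) if first.source == ev.source => {
                pending.insert(ev.slot, first);
            }
            // The other stream delivered the same slot: record the delta.
            Some(first) => {
                let (ys_at, ls_at) = match first.source {
                    Source::Ys => (first.seen_at, ev.seen_at),
                    Source::Ls => (ev.seen_at, first.seen_at),
                };
                let delta_ms = if ys_at >= ls_at {
                    ys_at.duration_since(ls_at).as_secs_f64() * 1e3
                } else {
                    -ls_at.duration_since(ys_at).as_secs_f64() * 1e3
                };
                deltas_ms.push(delta_ms);
            }
        }
    }
    deltas_ms
}
```
In a long run you would also evict slots that only one stream ever reports, so pending does not grow without bound. Interpreting the signed deltas: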
- Positive delta: first service (YS) slower than second service (LS) - LaserStream is faster
- Negative delta: first service (YS) faster than second service (LS) - LaserStream is slower
- Mean/Median: central tendency of the per-slot performance difference
- P95: 95th percentile latency difference
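A small helper for these summary statistics (nearest-rank percentile; the function name is illustrative):
```rust
/// Print mean, median, and P95 over the signed per-slot deltas (ms).
fn summarize(mut deltas_ms: Vec<f64>) {
    let n = deltas_ms.len();
    if n == 0 {
        return;
    }
    // Durations never produce NaN, so a plain partial_cmp sort is safe.
    deltas_ms.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mean = deltas_ms.iter().sum::<f64>() / n as f64;
    let median = deltas_ms[n / 2];
    // Nearest-rank P95: the value at 1-based rank ceil(0.95 * n).
    let rank = ((n as f64) * 0.95).ceil() as usize;
    let p95 = deltas_ms[rank.saturating_sub(1)];
    println!("n={n} mean={mean:+.2}ms median={median:+.2}ms p95={p95:+.2}ms");
}
```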
Method 2: Created timestamp analysis
This approach compares the created_at timestamp embedded in messages against your local system time when receiving them.
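A minimal sketch of the per-message calculation, assuming created_at is exposed as a protobuf Timestamp (seconds plus nanos), as in recent Yellowstone gRPC protos:
```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Milliseconds between local receipt time and the message's `created_at`
/// timestamp (protobuf Timestamp fields: seconds since epoch + nanos).
fn created_at_lag_ms(seconds: i64, nanos: i32) -> f64 {
    let created = UNIX_EPOCH
        + Duration::from_secs(seconds as u64)
        + Duration::from_nanos(nanos as u64);
    match SystemTime::now().duration_since(created) {
        Ok(lag) => lag.as_secs_f64() * 1e3,
        // Negative lag means created_at is "in the future" - i.e. clock skew.
        Err(e) => -e.duration().as_secs_f64() * 1e3,
    }
}
```
Because this method compares your clock against the server's, keep the test host NTP-synchronized; even a few milliseconds of clock skew shows up directly in the measured lag.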
Best practices for latency testing
Key principles
- Co-location: Deploy tests in the same region as your LaserStream endpoint to minimize network latency
- Multiple methods: Use parallel stream comparison (Method 1) as your primary metric, supplemented by timestamp analysis
- Long-term monitoring: Run tests over extended periods to capture different network conditions and blockchain congestion
- Statistical analysis: Focus on percentiles (P95, P99) rather than just averages to understand tail latency
Interpreting results
- Establish baseline: Run tests for at least 1 hour to establish baseline performance under normal conditions
- Identify patterns: Look for patterns in latency spikes - do they correlate with high blockchain activity or network congestion?
- Compare percentiles: P95 latency is often more important than average latency for user experience
- Monitor consistency: Consistent performance is often more valuable than absolute minimum latency