Frequently Asked Questions
Common questions about apphash.io and memlogger integration.
General Questions
What is apphash.io?
apphash.io is a monitoring and analysis platform for blockchains. It detects and analyzes state non-determinism issues by tracking detailed state changes and consensus events.
What is state non-determinism?
State non-determinism occurs when validators compute different state roots (app hashes) for the same block. This breaks consensus and can halt the chain. apphash.io helps identify the root cause when this happens.
Is this only useful when problems occur?
No. While apphash.io excels at debugging non-determinism, it also provides:
- Continuous monitoring of state changes
- Historical state analysis
- Audit trails for governance
- Development insights during testing
Performance & Production
Will this impact my chain’s performance?
No significant impact. According to production benchmarks:
- Memory: +10-50MB (configurable)
- CPU: less than 1% overhead
- No impact on block time
- Negligible disk I/O (compressed, batched)
Can I run this in production?
Yes. The memlogger is designed for production use:
- Proven in production environments
- Asynchronous operation (non-blocking)
- Graceful failure handling
- Efficient compression and storage
What’s the disk space requirement?
Depends on chain activity. Example for moderate activity:
- ~400-500 MB/day compressed (with filtering)
- ~10-15 GB/month
- Scales with transaction volume
Enable filter = true to reduce volume by 70-80%.
Will debug logging slow down my node?
No. The memlogger uses:
- Asynchronous compression
- Non-blocking writes
- Object pooling (zero-allocation)
- Batched disk operations
Standard info/warn/error logs are unaffected.
Integration
How long does integration take?
Typically 30-60 minutes:
- 10 min: Update SDK dependency
- 10 min: Modify app.go
- 10 min: Update configuration
- 30 min: Testing and verification
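For the testing step, a quick sanity check is to confirm that WAL segments appear on disk and contain JSON events. The sketch below fabricates a sample layout (hypothetical node-id and date, following the path format described under Data & Storage) so it runs as-is; on a real node, set CHAIN_DIR to your chain home and skip the setup lines.

```shell
# Setup: stand-in for a real chain home with one sample segment
CHAIN_DIR="$(mktemp -d)"
mkdir -p "$CHAIN_DIR/data/log.wal/node1/2025-01-01"
printf '{"height":1,"msg":"begin_block"}\n' | gzip \
  > "$CHAIN_DIR/data/log.wal/node1/2025-01-01/seg-000001.wal.gz"

# The actual check: segments exist and decompress to JSON lines
find "$CHAIN_DIR/data/log.wal" -name 'seg-*.wal.gz'
zcat "$CHAIN_DIR/data/log.wal"/*/*/seg-000001.wal.gz | head -1
```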
Do I need to modify my modules?
No. The integration is at the app.go level only:
- No module changes required
- No custom logic needed
- Standard Cosmos SDK patterns
Can I cherry-pick to my SDK version?
Yes. The integration is a single commit that can be cherry-picked:
```bash
git cherry-pick 2a53c378ae5734c834fa7f7187a6c672d3d79521
```
Works with Cosmos SDK v0.50+.
What if I have a custom SDK fork?
You can still integrate:
- Cherry-pick the integration commit
- Resolve any conflicts
- Test thoroughly
- Use your fork with the memlogger
Will this break my existing setup?
No. The integration:
- Doesn’t modify existing functionality
- Works alongside current logging
- Doesn’t change consensus behavior
- Is backward compatible
Configuration
What’s the recommended configuration?
For production:
```toml
# config.toml
log_level = "debug"
log_format = "json"

# app.toml
[memlogger]
enabled = true
filter = true
interval = "2s"
memory-bytes = 0
```
Should I enable filtering?
Yes, for production:
- Reduces log volume by 70-80%
- Keeps all consensus-critical data
- Lowers disk usage
- Improves shipping efficiency
Disable for development if you need all debug messages.
What interval should I use?
“2s” is recommended for most cases:
- Good compression ratio
- Reasonable memory usage
- Frequent enough for analysis
Adjust based on your needs:
- “1s”: Lower memory, more segments
- “5s”: Higher memory, fewer segments
Should I set memory-bytes?
Usually not needed. Use time-based flushing (interval) unless:
- You have strict memory constraints
- You experience memory pressure
- Your log rate is highly variable
Example with limit:
```toml
memory-bytes = 50000000  # 50MB limit
```
Data & Storage
Where are logs stored?
```
$CHAIN_DIR/data/log.wal/<node-id>/<yyyy-mm-dd>/seg-NNNNNN.wal.gz
```
Each node has its own directory, organized by date.
Are logs readable?
Yes. They’re gzip-compressed JSON:
```bash
# View logs
zcat seg-000001.wal.gz | jq '.'

# Search for specific events
zcat seg-000001.wal.gz | jq 'select(.height == 12345)'
```
How long should I keep logs locally?
Recommended:
- 7-30 days locally for quick access
- 90+ days in object storage
- Forever on apphash.io platform (critical events)
Can I delete old logs?
Yes, after shipping to apphash.io:
```bash
# Delete logs older than 7 days
find $CHAIN_DIR/data/log.wal/ -type d -name "20*" -mtime +7 -exec rm -rf {} \;
```
Do logs contain sensitive data?
Potentially, yes:
- Transaction details
- Account balances
- Governance proposals
Recommendations:
- Restrict file permissions (chmod 600)
- Use TLS for shipping
- Consider encryption at rest
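The chmod 600 recommendation above can be applied across the whole WAL tree in one pass. A runnable sketch follows; the directory layout is fabricated here so the commands execute as-is, and on a real node you would point the commands at your actual chain home instead.

```shell
# Stand-in for a real chain home (hypothetical node-id and date)
CHAIN_DIR="$(mktemp -d)"
mkdir -p "$CHAIN_DIR/data/log.wal/node1/2025-01-01"
touch "$CHAIN_DIR/data/log.wal/node1/2025-01-01/seg-000001.wal.gz"

# Owner-only access to the WAL directory, then 600 on every segment
chmod 700 "$CHAIN_DIR/data/log.wal"
find "$CHAIN_DIR/data/log.wal" -type f -name '*.wal.gz' -exec chmod 600 {} +
```

Running the `find` periodically (e.g. from cron) also covers segments created after the initial hardening.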
Shipping & Platform
How do logs get to apphash.io?
Via walship:
- Monitors WAL directory
- Ships completed segments
- Handles retries and errors
- Maintains checkpoints
Do I need the shipper running?
Not immediately. You can:
- Integrate memlogger first
- Verify logs are generated
- Set up shipper later
Logs accumulate locally until shipped.
What happens if shipping fails?
- Logs remain on disk
- Shipper retries automatically
- No impact on chain operation
- Checkpoint prevents duplicate shipping
Can I ship logs manually?
Yes. The WAL files are standard gzip:
- Upload to S3/GCS
- Use rsync/scp
- Custom shipping solutions
The shipper (walship) simply automates this.
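Since the segments are plain gzip files, manual shipping can be as simple as an rsync filter that mirrors only completed segments. This is a sketch, not the shipper's actual behavior: sample data is fabricated so it runs as-is, and real usage would point SRC at the WAL directory and DEST at a remote host or mounted bucket.

```shell
# Setup: fake WAL tree with one segment and one unrelated file
SRC="$(mktemp -d)"; DEST="$(mktemp -d)"
mkdir -p "$SRC/node1/2025-01-01"
touch "$SRC/node1/2025-01-01/seg-000001.wal.gz" "$SRC/node1/2025-01-01/notes.txt"

# Mirror only gzip WAL segments, preserving the node-id/date layout
rsync -a --include='*/' --include='seg-*.wal.gz' --exclude='*' "$SRC/" "$DEST/"
find "$DEST" -type f
```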
Troubleshooting
Logs aren’t being generated
Check:
- enabled = true in app.toml
- log_level = "debug" in config.toml
- Code placed after baseApp initialization
- Rebuilt chain after changes
- Directory permissions
Build fails after integration
```bash
# Clean and retry
go clean -cache
go mod download
go mod tidy
go build ./...
```
Verify the SDK replace directive in go.mod.
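The replace directive should point go.mod at the BFT Labs fork listed in the source-code section of this FAQ. The version tag below is a placeholder, not a real release:

```
// go.mod
replace github.com/cosmos/cosmos-sdk => github.com/bft-labs/cosmos-sdk <your-fork-version>
```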
Memory usage is high
Adjust configuration:
```toml
interval = "1s"         # Flush more frequently
memory-bytes = 30000000 # Add 30MB limit
filter = true           # Ensure filtering enabled
```
Segments are very large
Possible causes:
- Filtering disabled (filter = false)
- High transaction volume
- Long flush interval
Solutions:
- Enable filtering
- Reduce interval
- Check for unexpected log spam
Can’t find node-id directory
The directory is created on first log:
- Wait for first block
- Check $CHAIN_DIR/data/log.wal/
- Verify node is producing blocks
Node ID comes from validator key.
Compatibility
Which Cosmos SDK versions are supported?
- v0.50.x: ✅ Supported
- v0.51.x: ✅ Supported
- v0.52.x: ✅ Supported
- v0.53.x: ✅ Supported (recommended)
- v0.47-v0.49: ⚠️ May require adjustments
- v0.46 and earlier: ❌ Not supported
Which Go versions are required?
- Go 1.21+: ✅ Recommended
- Go 1.20: ✅ Supported
- Go 1.19: ⚠️ May work
- Go 1.18 and earlier: ❌ Not supported
Does this work with CometBFT?
Yes. Works with:
- CometBFT v0.37+
- CometBFT v0.38+
- Tendermint v0.34+ (legacy)
Does this work with CosmWasm chains?
Yes. The integration is at the Cosmos SDK level, so it works with:
- CosmWasm chains
- EVM chains (evmos, etc.)
- Custom module chains
- Any Cosmos SDK-based chain
Can I use this with IBC?
Yes. IBC operations are tracked like any other state change:
- IBC packet sends/receives
- Channel/connection creation
- Client updates
- All logged automatically
Security
Is this audited?
The memlogger code is:
- Open source (reviewable)
- Single commit integration (auditable)
- Based on standard Go libraries
- Used in production
Formal audit status: Contact BFT Labs for details.
Does this expose chain data?
Logs contain the same data visible in:
- Block explorers
- RPC endpoints
- Transaction history
Sensitive data depends on your modules.
Can logging be disabled quickly?
Yes, two ways:
- In config (no restart needed in some setups):

  ```toml
  [memlogger]
  enabled = false
  ```

- Remove the code (requires rebuild): delete the streaming listener block from app.go.
Support & Community
Where can I get help?
- GitHub Issues: bft-labs/cosmos-sdk
- Documentation: This site
- Email: Contact BFT Labs
How do I report bugs?
Open an issue with:
- SDK version
- Go version
- Configuration
- Error messages
- Steps to reproduce
Can I contribute?
Yes! The project is open source:
- Submit issues
- Propose improvements
- Share integration experiences
- Help with documentation
Where is the source code?
- Cosmos SDK: github.com/bft-labs/cosmos-sdk
- Shipper: github.com/bft-labs/walship
- Examples: github.com/bft-labs/evm
Advanced Topics
Can I customize filtering?
Currently, filtering is predefined (consensus-critical events). Custom filtering may be added in future versions. Contact BFT Labs for specific requirements.
Can I integrate with other monitoring tools?
Yes. Logs are standard JSON format:
- Parse with any log processor
- Feed to ELK stack
- Integrate with Prometheus/Grafana
- Custom analysis tools
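Because each line is one JSON event, standard tools are enough to turn a segment into metrics for an external pipeline. The sketch below fabricates a tiny sample segment so it runs as-is (the height/msg fields are illustrative, not the exact event schema), then aggregates events per height:

```shell
# Setup: fabricate a small sample segment
seg="$(mktemp -d)/seg-000001.wal.gz"
printf '%s\n' '{"height":1,"msg":"a"}' '{"height":1,"msg":"b"}' '{"height":2,"msg":"c"}' \
  | gzip > "$seg"

# One JSON event per line; count events per height
zcat "$seg" | grep -o '"height":[0-9]*' | sort | uniq -c
```

The same pattern feeds log shippers for ELK or a textfile exporter for Prometheus.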
Can I run memlogger without apphash.io?
Yes. The memlogger works standalone:
- Generates local WAL files
- Can ship to custom backends
- Analyze logs yourself
- apphash.io platform is optional (but recommended)
What’s the recovery process for corrupted logs?
- Segments are independent
- Corruption affects single segment only
- Skip corrupted segment
- Continue with next segment
- Use index for validation
Can I replay logs?
Yes, using the WAL files:
- Decompress segments
- Parse JSON events
- Replay state changes
- Useful for debugging and analysis
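The steps above can be sketched in a few lines: segment names are zero-padded, so a shell glob already sorts them chronologically, and zcat decompresses multiple gzip files in sequence. Sample segments are fabricated here so the snippet runs as-is; on a real node the glob would target a node-id/date directory under the WAL path.

```shell
# Setup: two fake segments written in order
day="$(mktemp -d)"
printf '{"height":1}\n' | gzip > "$day/seg-000001.wal.gz"
printf '{"height":2}\n' | gzip > "$day/seg-000002.wal.gz"

# Replay: events stream out in write order
zcat "$day"/seg-*.wal.gz
```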
Cost & Licensing
Is apphash.io free?
Contact BFT Labs for:
- Platform pricing
- Enterprise licensing
- Support contracts
- Custom features
The memlogger SDK integration is open source.
What’s the license?
Check repository LICENSE files:
- Cosmos SDK fork: Same as Cosmos SDK
- Memlogger additions: Check BFT Labs license
- Shipper: Check repository
Next Steps
- Getting Started - Begin integration
- Setup Guide - Detailed instructions
- Examples - Real implementations
- Architecture - Deep dive into design