1 points | by danebl 10 hours ago
2 comments
Protip: if what's shown in README is your "Traditional workflow":

    # Traditional workflow: decompress, grep, wait, clean up
    zstd -d huge_logs.zst && grep "error" huge_logs && rm huge_logs

STOP NOW! There is never a need to decompress files to disk; every compressor supports streaming decompression just fine (and as a bonus, no third-party tools to install):

    # Streaming workflow: decompress + grep in one pipeline, nothing written to disk
    zstd -dc huge_logs.zst | grep "error"

Reply:

You are right, that was poor wording. If we compare apples to apples:

Traditional: must decompress the entire file (even when streaming):

    zstd -dc huge_logs.zst | grep "error"   # 709 MB decompressed through memory

Crystal: indexed search, skip straight to the matches:

    cuz search huge_logs.cuz "error"        # only decompresses matching blocks

Thanks for pointing this out. We will update the README.
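The indexed-search idea from the reply can be sketched roughly like this. This is a toy illustration, not the actual cuz/.cuz format: the block size, the per-block token index, and every name below are made up for the demo. The point it shows is the one claimed above: with independently compressed blocks plus a small index, a search only pays decompression cost for blocks that can possibly match.

```python
import zlib

BLOCK_SIZE = 4  # lines per block (tiny, for demonstration only)

def build(lines):
    """Compress lines in independent blocks; keep a token set per block."""
    blocks, index = [], []
    for i in range(0, len(lines), BLOCK_SIZE):
        chunk = lines[i:i + BLOCK_SIZE]
        # Index entry: the set of whitespace-separated tokens in this block.
        tokens = set(word for line in chunk for word in line.split())
        blocks.append(zlib.compress("\n".join(chunk).encode()))
        index.append(tokens)
    return blocks, index

def search(blocks, index, needle):
    """Decompress only the blocks whose index entry contains the needle."""
    hits = []
    for blk, tokens in zip(blocks, index):
        if needle not in tokens:  # skip: no decompression needed at all
            continue
        for line in zlib.decompress(blk).decode().splitlines():
            if needle in line:
                hits.append(line)
    return hits

log = ["ok 1", "error disk", "ok 2", "ok 3", "ok 4", "ok 5", "ok 6", "ok 7"]
blocks, index = build(log)
print(search(blocks, index, "error"))  # only the first block is decompressed
```

A full-file streaming grep must decompress every byte; here the second block is never touched. Real indexed formats use smarter per-block summaries than a raw token set, but the skip-on-index principle is the same.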