The best way to store information depends on how you intend to use (query) it.
The query itself represents information. If you can anticipate 100% of the ways in which you intend to query the information (no surprises), I'd argue there might be an ideal way to store it.
This line of thought works for storage in isolation, but does not hold up if write speed is a concern.
So long as (fast/optimal) real-time access to new data is not a concern, you can introduce compaction to solve both problems.
> (fast/optimal) real-time access to new data
https://en.wikipedia.org/wiki/Optimal_binary_search_tree#Dyn...
as a line of thought, it totally does. you just extend the workload description to include writes. where this gets problematic is that the ideal structure for transactional writes is nearly pessimal from a read standpoint. which is why we seem to end up doubling the write overhead - once to remember and once to optimize - or taking a highly write-centric approach like an LSM tree.
I'd love to be clued in on more interesting architectures that either attempt to optimize both or provide a more continuous tuning knob between them
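For concreteness, here's a toy sketch of the LSM idea in Python (everything below is hypothetical and heavily simplified; real LSM trees add write-ahead logs, bloom filters, tombstones, and background compaction). Writes land in a cheap in-memory buffer, reads may have to probe several sorted runs, and compaction is the knob that trades write-side work back for read speed:

```python
import bisect

class ToyLSM:
    """Toy LSM-style store: absorb writes cheaply, pay on reads."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}     # recent writes, cheap to absorb
        self.sstables = []     # immutable sorted runs, newest last
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            # Flush: pay the sorting cost once, in bulk.
            self.sstables.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Read amplification: may have to probe every run, newest first.
        for run in reversed(self.sstables):
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None

    def compact(self):
        # The tuning knob: spend write-side work merging runs
        # to buy back read speed (fewer runs to probe).
        merged = {}
        for run in self.sstables:   # oldest first, so newer values win
            merged.update(run)
        merged.update(self.memtable)
        self.sstables = [sorted(merged.items())]
        self.memtable = {}
```

how often you call compact() is exactly the continuous knob: never compacting is purely write-optimized, compacting on every write degenerates into a single sorted array.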
This is exactly right, and the article is clickbait junk.
Given the domain name, I was expecting something about the physics of information storage, and some interesting law of nature. Instead, the article is a bad introduction to data structures.
You both are affirming the title of the article.
"No single best way", meaning "it depends."
But don't let something like literacy get in the way of an opportunity to engage in meaningless outrage.
Millions of years of evolution have resulted in the human brain being the best way to store information.
I doubt we humans will be able to build better storage (faster, more capacity, more analytical, more intuitive, more logical) at an individual level in a few thousand years of civilization. (At mass scale that's kinda achieved already by behemoths like Google, etc.)
Quantum computing may be the game changer though.
I read somewhere that the entirety of humanity's information, all knowledge and data of every human past and present, if stored as quantum information, would be just the size of a football.
Pedantic, but the article is talking about the way we structure/organize information, not store it. When I think of the word "store", I think of the physical medium. The way we organize the information is only partially related.
It's not pedantic; you are using words correctly as we understand them, and they are not. The headline needs a sharp correction. Editing jobs are in very short supply these days.
Oh come on. Programmers discuss how to "store" data in memory as a data model all the time.
You're reducing definitions and meaning too far to make an ultimately empty point, just to contribute to the thread.
If social media's only contribution is language policing, then it really should die off. What a waste of resources so functionally illiterate nobodies can project ego.
No, I think I'll double down, because I do think I'm right here.
https://en.wikipedia.org/wiki/Data_storage is a different website from https://en.wikipedia.org/wiki/Data_store because they are different, slightly overlapping concepts.
I mean if we're talking about the physical storage medium, the single most dense way would be to write it on the surface of a black hole. I still haven't figured out how to read it back though.
There are plenty of good enough ways:
* For lossless compression of generic data, gzip or zstd.
* For text, documentation, and information without fancy formatting, markdown, which is effectively a plain-text superset.
* For small datasets, blobs, objects, and what not, JSON.
* For larger datasets and durable storage, SQLite3.
Whenever there's text involved, use UTF-8. Whenever there are dates, use ISO 8601 format (UTC timezone) or Unix timestamps.
Following these rules will keep you happy 80% of the time.
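A minimal sketch of those defaults in practice (filenames made up for illustration):

```python
import gzip
import json
from datetime import datetime, timezone

# A small record following the defaults above: JSON, UTF-8, ISO 8601 UTC.
record = {
    "title": "example",
    "created_at": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
}

# UTF-8 JSON on disk; ensure_ascii=False keeps non-ASCII text human-readable.
with open("record.json", "w", encoding="utf-8") as f:
    json.dump(record, f, ensure_ascii=False, indent=2)

# gzip for generic lossless compression of the same bytes.
with open("record.json", "rb") as f, gzip.open("record.json.gz", "wb") as g:
    g.write(f.read())
```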
One format I'm missing: storage for conversations and social media posts. Both are complex media (text + images/videos + metadata), and one is actually a collection of such posts.
How would you go about storing those in a somewhat human-readable format? My goal is to archive my chats and social media activity.
Use a SQLite3 database. Have a table for the posts (or any other appropriate schema, depending on what metadata you have). Using SQLite3 has the advantage of future flexibility (new/different tables and schema as needed, full-text search, etc.).
You can have another table for attachments (images, videos, etc.). If they're small, store them directly in a BLOB. If they're not, store them alongside the database, and only store the relative path in the attachments table.
You may opt to convert images and videos to a single format (e.g. PNG and H.264 MP4), but you can lose information depending on the target format. It may be preferable to leave them in the original (or highest quality) format.
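A minimal sketch of that layout (schema and names hypothetical; adapt to whatever metadata you actually have):

```python
import sqlite3

conn = sqlite3.connect("archive.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS posts (
    id         INTEGER PRIMARY KEY,
    source     TEXT,    -- e.g. which chat app or social network
    author     TEXT,
    body       TEXT,
    created_at TEXT     -- ISO 8601, UTC
);

CREATE TABLE IF NOT EXISTS attachments (
    id      INTEGER PRIMARY KEY,
    post_id INTEGER REFERENCES posts(id),
    mime    TEXT,
    data    BLOB,       -- small files inline...
    path    TEXT        -- ...large files beside the db, relative path here
);

-- Full-text search, one of the flexibility wins (requires an SQLite
-- build with the FTS5 extension, which most modern builds include).
CREATE VIRTUAL TABLE IF NOT EXISTS posts_fts USING fts5(body);
""")
conn.commit()
```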
Why not just use WARC and a program that can read them? Do archives need to be human-readable?
The thing about archives is you either parse them now or parse them later. With how much JS and other crap is served in modern social media frontends, I'm not sure WARC is the best format for archiving from them.
But that is the point of WARC: otherwise, your archival method needs some sort of general intelligence (AI or human behind the scenes) to store exactly what you need.
With WARC (and good WARC tooling like Browsertrix-crawler) you store every HTTP response the site sent.
There are, however, several objectively bad ways. In "Service Model" (a novel that I recommend) a certain collection of fools decides to sort bits by whether it's a 1 or a 0, ending up with a long list of 0's followed by a long list of 1's.
In a similar vein, someone decided that everyone should have subdirectories under home named "Pictures", "Videos", "Music", "Documents", …
There's a similar anecdote in Iain M. Banks' The Player of Games.
https://scifi.stackexchange.com/questions/270578/negotiator-...
It _does_ open up amazing opportunities for compression though.
That's fine so long as there's an index!
That depends on the aim. The purpose of something determines how fitting the means are.
Also, let us not confuse "relative" with "not objective". My father is objectively my father, but he is objectively not your father.
This clicked for me in a way I didn't expect.
I've been thinking about trade-offs as "pick two of three" in the abstract, but the bookshelf example made it concrete. The insight that matters is: if you know your query patterns, you can optimize differently.
As a PM, I keep trying to build systems that work for "every case." But this article reminded me that's the wrong goal. The hash table works because it accepts the space-time trade-off. The heap works because it embraces disorder for non-priority items.
Sometimes the best system isn't the most elegant one—it's the one that matches how you'll actually use it.
Good reminder to stop over-optimizing for flexibility I'll never need.
Thanks for sharing.
You're a PM and this basic-level, watered-down article barely discussing anything "clicked for you in a way" you didn't expect? Of course the best system is designed based on requirements; how can a PM not know this before being a PM?
Postgres is close.
I would say SQLite is closer; you find it on every phone, browser, and server. I bet SQLite files will still be readable in 2100. And I love Postgres.
Relevant: https://sqlite.org/mostdeployed.html
Or (real) SQLite for reasonably scaled work.
I also like (old) .ini / TOML for small (bootstrap) config files / data exchange blobs a human might touch.
Plus, re: the PostgreSQL 'unfit' conversations.
I'd like some clearer examples of the desired transactions which don't fit well. After thinking about them in the background a bit, I've started to suspect it might be an algorithmic/approach issue, obscured by storage patterns that happen to be enabled by other platforms that work 'at scale' because the hardware carries them (up to a point).
As an example of a pattern that might not perform well under PostgreSQL, consider lock-heavy multiple updates for flushing a transaction atomically, e.g. bank-transaction-clearance-like tasks. If every single double-entry booking requires its own atomic transaction, that clearly won't scale well in an ACID system. Rather, the smaller grains of sand should be combined into a sandstone block: a window of transactions which are processed at the same time and applied during the same overall update.

The most obvious approach would be to switch from a no-intermediate-values 'apply deduction and increment atomically' action to a versioned view of the global data state PLUS a 'pending transactions to apply' log/table (either or both can be sharded). At a given moment the pending transactions can be reconciled; for performance, a cache of 'dirty' accounts can store the non-contested value of the available balance. A rough sketch of the pending-log shape follows below.
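Here's that sketch (schema and names hypothetical; SQLite is used only to keep it self-contained - the pattern, not the engine, is the point):

```python
import sqlite3

conn = sqlite3.connect("ledger.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS accounts (
    id      INTEGER PRIMARY KEY,
    balance INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS pending (
    id     INTEGER PRIMARY KEY,
    src    INTEGER NOT NULL,
    dst    INTEGER NOT NULL,
    amount INTEGER NOT NULL
);
""")

def enqueue(src, dst, amount):
    # Cheap append to the log: no account rows touched, no hot-row locks.
    with conn:
        conn.execute(
            "INSERT INTO pending (src, dst, amount) VALUES (?, ?, ?)",
            (src, dst, amount),
        )

def reconcile():
    # Apply a whole window of bookings in ONE atomic transaction,
    # instead of one lock-heavy transaction per double-entry booking.
    with conn:
        rows = conn.execute(
            "SELECT id, src, dst, amount FROM pending ORDER BY id"
        ).fetchall()
        for _, src, dst, amount in rows:
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, dst),
            )
        if rows:
            conn.execute("DELETE FROM pending WHERE id <= ?", (rows[-1][0],))
```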
would it be more accurate to say "to store using information, using information"? Since everything ultimately boils down to information, humans trying to store information is a bit recursive?
See also, RUM Conjecture: https://www.codementor.io/@arpitbhayani/the-rum-conjecture-1...
Conceptually similar to CAP, but with storage trade-offs: read overhead, update overhead, and memory overhead. The idea is you can only pick two of the three to optimize.
Oh I know this one. False. Compress it first, then encrypt. :)
Or it's the opposite, where the slowest possible retrieval time is the intended effect, as is the basis of many cryptographic algorithms.
Or it's neither, and the intended effect is zero variation in the retrieval time, as when trying to avoid leaking secrets via timing attacks.
(Or I guess, more generally, the intended effect is zero correlation between the information and the time it takes to retrieve it. If retrieval time were completely random, it would achieve the goal, but it wouldn't have zero variation.)
It's always Markdown. Markdown is the best way to store information. ;)
Which implementation of Markdown is the correct Markdown? Why not org-mode syntax?
Claude Code vehemently agrees.
You're absolutely right!