Thank you for being one of the few projects replacing a POSIX tool which properly sets the expectation that it's for personal use. It causes me no end of consternation to see many tools introduced which provide only the barest minimum of functionality: they skip over extended attributes and ACLs, fail to keep compatibility with flags, or don't properly separate STDOUT and STDERR.
While these may be sufficient for a naive developer, this oversight then breaks many downstream tools.
Again though, thanks for sharing. Bringing your own spin and ideas into the world can be anxiety-inducing, and I'm pleased you went about this in a helpful and measured way!
Thanks for all the feedback! Let me clarify a few things about lla.

The most amazing part of this project wasn't just building another ls alternative - it was the incredible learning journey. Building a systems tool in Rust while implementing a plugin architecture taught me more in a few weeks than months of reading could have.

Yes, it does more than traditional ls, and that's intentional. The plugin system came from scratching my own itch of constantly switching between different terminal tools. Each feature added was a chance to dive deeper into systems programming and Unix internals.

The performance still needs work, and the documentation could be better. But that's the beauty of open source - you ship it, learn from the feedback, and keep improving. Building in public is an incredible way to level up your skills.

For anyone considering a similar project: pick a common tool you use daily and try reimagining it. You'll be surprised how much you learn along the way.
Did anyone here use Genera on an original lisp machine? It had a pseudo-graphical interface and a directory listing provided clickable results. It would be really neat if we could use escaping to confer more information to the terminal about what a particular piece of text means.
Feature-request: bring back clickable ls results!
Bonus points for defining a new term type and standard for this.

> Feature-request: bring back clickable ls results!

Doesn't your desktop (or distro) have a graphical file manager? On KDE it's Dolphin, which ex-Windows users absolutely love. I don't know what it would be on Gnome or other desktops.
There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
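For the curious, that's the OSC 8 sequence; a minimal sketch you can try in a supporting terminal (kitty, iTerm2, recent VTE), where the file:// URI is just an example:

    # OSC 8;;URI ST <link text> OSC 8;; ST  (ST = string terminator, ESC \)
    printf '\033]8;;file:///etc/hosts\033\\hosts\033]8;;\033\\\n'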
This is nice, but a poor substitute for what Genera was doing.
You see, Genera knows the actual type of everything that is clickable. When a program needs an input, objects of the wrong type _lose their interactivity_ for the duration. So if you list the files in some directory, the names of those files are indeed links that you can click on. Clicking on one would bring up a context menu of relevant actions (view, edit, print, delete, etc). If a program asks for a filename as input then clicking on a file instead supplies the file object to the program. Clicking on objects of other types does nothing.
> Genera knows the actual type of everything
I have this side-project fantasy of a very simple terminal pipe-types project. The basic idea is a set of very basic standardized types, demarcated using escape sequences. Dates, filenames, URLs, numbers, possibly one or two number units as well (time periods, file sizes only).
Tools that already produce columnar data (ls) get a flag that lets them output this format, and tools that work with piped data (cut, sort, uniq) get equivalents or modes that let them easily work with this.
Essentially, simple typed tables held in text, with enhancements for existing tooling to know how to deal with it. Would make my day-to-day on the command line much easier.
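A back-of-napkin sketch of the fantasy, with the caveat that the OSC number 7700 is completely made up here and not any standard:

    # hypothetical typed-cell markers, in the same spirit as OSC 8 hyperlinks
    typed() { printf '\033]7700;%s\033\\%s\033]7700;;\033\\' "$1" "$2"; }
    typed filename 'notes.txt'; printf '\t'
    typed filesize '2.1K';      printf '\t'
    typed date '2024-12-01';    printf '\n'
    # a type-aware sort(1) could strip the markers, parse 2.1K as ~2150 bytes,
    # and sort numerically instead of lexically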
Arcan is experimenting with something like this (among others): https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...
See also:
* NuShell (https://www.nushell.sh/)
Could be fun :)
But note that on the Lisp Machine/Genera, every type has a presentation and can be “printed” to the REPL. This includes any new classes that you create as part of your own programs. It’s not just a small list of standard types, but every type.
The standard tutorial for the system is to implement Conway’s Game of Life. It has you create a class to hold the game board and then guides you through the process of defining a presentation for it so that it can be displayed easily.
I always thought to do that by having a virtual file system that tags my files, so they show up at a specific location if they fit the bill.
https://kellyjonbrazil.github.io/jc/docs/parsers/ls.html
...glom on to this: "+JSONSchema" with some sort of UNIX-ish taxonomy. Everything from `man test`, add in `man du`, `date`, `... ago` (relative time) as you'd mentioned.
`jc ls | add_schema...` => `jq ...`
...or `jc ls --with-schema | jq ...`
(it appears as though `jc` already supports schemas, so perhaps it'd be `jc ls --with-types` or something, but there's your starting point!)
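The untyped version already composes nicely today; e.g., using the field names from jc's ls parser (linked elsethread):

    # names of files over 1 MiB, smallest first
    jc ls -l | jq -r 'sort_by(.size) | .[] | select(.size > 1048576) | .filename'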
That's neat and a similar idea. I think JSON probably ends up being too expressive (not just an array of identically-shaped shallow objects), too restrictive (too few useful primitives), and also too verbose a format, but the idea of a wrapping command like that as a starting point is neat.
I'll share this comment from 7 months ago with you:
https://news.ycombinator.com/item?id=40100069
"prefer shallow arrays of 'records', possibly with a deeply nested 'uri'-style identifier"
...the clutch result is: "it can be loaded into a database and treated as a table".
The origin of this technique for me was someone saying back in the 2000-ish timeframe (and effectively modernized here):

    sqlite-utils insert example.db ls_lart <( jc ls -lart )
    sqlite3 example.db --json \
        "SELECT COUNT(*) AS c, flags FROM ls_lart GROUP BY flags"
    [
      {
        "c": 9,
        "flags": "-rw-r--r--"
      },
      {
        "c": 2,
        "flags": "drwxr-xr-x"
      }
    ]

...this is a 'trivial' example, but it puts a really fine point on the capabilities it unlocks. You're not restricted to building a single pipeline, you can use full relational queries (eg: `... WHERE date > ...`, `... LEFT JOIN files ON git_status...`), and you can refer to things by column names rather than weird regexes or `awk` scripts.

This particular example is "dumb" (but ayyyy, I didn't get a UUOC cat award!) in that you can easily muddle through it in different (existing pipeline) ways, but SQL crushes the primitive POSIX relationship tooling (so old, ugly, and unused they're tough to find!), eg: `comm`, `paste`, `uniq`, `awk`
That's one aspect I prefer in playing with TempleOS over Linux. The rest of the command line is a bit of a pain, with no history, C-as-a-shell, etc.
Maybe some aspects of the Plan9 UI? (rio/9term, plumber; acme as well).
You should be able to get this to work on Unix with plan9port.
It's not really that, but have you tried ranger?
One slept-on filesystem CLI tool on Linux is `gio`. It comes with glib2, and today glib2 is a dependency of vte, polkit, pipewire, ffmpeg, the entire GTK ecosystem... you get the point. So you can basically depend on it being there on most Linux installs, especially desktop.
Check out the man page: https://www.mankier.com/1/gio
highlights:
- showing progress in `cp` equivalent
- Easy cli interface to freedesktop trash (!)
- tree command
- filesystem changes monitor (inotify wrapper)
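A few of those in action (sketched from the man page above; worth double-checking the flags against your glib2 version):

    gio copy -p big.iso /mnt/backup/    # cp equivalent with progress
    gio trash old-notes.txt             # send to freedesktop trash
    gio trash --empty                   # empty the trash
    gio tree ~/projects                 # tree view
    gio monitor ~/Downloads             # watch for filesystem changes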
Sounds like a fun project. However, from the readme:
> Efficient file listing: Optimized for speed, even in large directories
What exactly is it doing differently to optimize for speed? Isn't it just using the regular fs lib?
On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. Apparently it is both slower and scales worse.
Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?
In the case of the 500k tree, `lla` needs 2.5 seconds, so it's pretty substantial.
Will definitely prioritize optimization in the next releases. Planning to benchmark against ls on various systems and file counts to get this properly sorted.
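Probably something like hyperfine for the harness; a sketch (the lla flags here are assumptions, to be adjusted to its actual CLI):

    hyperfine --warmup 3 \
        'ls -lR /usr/share > /dev/null' \
        'lla -lR /usr/share > /dev/null'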
But it’s written in rust so it’s super fast. Did you take that into account when running your benchmarks? /s
I clicked on this (without noting "github") expecting an essay on the joys of building an alternative to ls.
This is basically a Show HN without a summary I think.
fwiw:
https://news.ycombinator.com/showhn.html
Other than colorization, what are people getting out of ls replacements like this? I've recently started using ranger, which might replace my ls usage for the most part, since it not only shows everything in the directory but has vim-like shortcuts for filtering, sorting, and searching the directory, as well as previewing files and entering other directories.
ls does colored output. I'm surprised it's not the default for you.
If you run `dircolors --print-database|less` you will see that GNU ls only highlights/colors the path/filenames according to a simplistic scheme where a file can only resolve to one type even though on many terminals today "foreground overlays background overlays bold/italic/etc". (https://github.com/c-blake/lc#vector-typemulti-dimensionalit... has a more advanced idea.)
This tool by triyanox -- just from the screen shot if you click through -- will also colorize permission masks and sizes, dates, user & group.
Excellent new idea re plugins; a lot of these tools are too inflexible!
`lc` mentioned elsethread [1] was always extensible with plugins for formatting and file-typing (but also always supported libmagic-based file-typology). There are other fairly distinctive ideas in `lc`, actually.. the README has a list.
While I like it and it's a good idea, I think the reality is that developers capable enough to write shared library/DLL plugins are more likely to just submit PRs and make such stuff built-in but maybe optional.
[1] https://news.ycombinator.com/item?id=42229841
"Always" is just 4 years? Lc is also one of these new tools
> more likely to just submit PRs and make such stuff built-in but maybe optional.
Which are more likely to just be rejected by the more conservative maintainers of the tool. That's the empowering beauty of plugins - no such barriers
Your tone is rather disputatious/critical, but we have literally no dispute here.
Categorization and hashes seem to be good ideas, yet you could do all of these with other tools already. You may already know the tool 'exa', a similar ls alternative. Just wanted to mention it.
I use git command line interface. Not because it is good (it isn't) or because I enjoy suffering (I think I don't), but because it is a standard on all the machines that have, you know, git.
What good is a ls alternative if I need to install it everywhere I need ls? I'd prefer using the standard ls even if it is not ideal. But maybe that's just me.
This is also one of the reasons I write C++ with vim without any auto-completion or fancy plugins (I do use syntax highlighting, though I think that comes by default with vim nowadays), as well as using GNU screen -- not every machine installs tmux by default, surprisingly. In case I need to log into some random Linux box, I'm sure I'll be almost as productive as I am on my own machine.
You mean, you're almost as unproductive on your development machine as on a random remote system that has no tools. And you somehow regard this as some sort of playing field leveling that generates an advantage.
Imagine a car mechanic that won't use a big hydraulic lift that hoists a car in seconds and lets him walk under it, claiming that by using a manually cranked portable jack, he can be almost as productive when fixing something by the roadside with emergency equipment as he is in his garage.
If you ever meet such a mechanic you can be sure that he programs computers as a hobby.
I did the same thing back in the day.
I developed on SCO (and, later, Unixware) on a PC, all of the clients were running the gamut of Unix OSes: HPUX, DGUX, AIX, SunOS, you name it.
Most of the time was spent on our box in the office, but I was constantly bouncing back and forth to client systems, either on site or over the modem, having to juggle termcaps and the whole thing. It was a polyglot machine/OS world back then.
Just had to learn to get the best out of a baseline set of Unix tools. vi instead of emacs, awk instead of perl. Master those and never be left wanting on a new environment, so I can hit the ground running. No need to "bootstrap" (if the client would even let you, not always). Couldn't even rely on a C compiler.
I assume this is tongue-in-cheek, but I don't think the comparison works at all.
I spend maybe 1% of my working hours (being generous) using `ls` and something like 50% (likely more) using my editor.
If there is some alternative to `ls` that makes my `ls` workflows 2x faster, my productivity increases by 0.5%. If I use a sub-optimal editor that makes my workflow 2x slower, I lose 25% of my productivity.
When I need to login to a remote box, I am also very likely to need to use `ls` since I am less familiar than on my own machine, whereas I am unlikely to do any sort of heavy development work (typically I just need to edit a couple configuration files, or do some git operations).
I’ve been on machines in the last few years that didn’t have screen either. Maybe it was a minimal install or something, but I specifically remember having to install it to get some long running stuff going.
(Thinking it was Ubuntu server, but guessing someone will correct me)
Tmux vs screen is an odd one; it kinda feels like screen was included in the era when people were actually trying to make the default install on servers kind of nice to use with a functional set of assumed programs. And now, it is fairly widespread just due to legacy.
Nowadays, and possibly for the better (every line of code is a potential bug and every bug is a potential vulnerability) it seems like systems don’t want to include this sort of stuff. So, I’m sure if the decision were made today, tmux or screen, tmux would win. Unfortunately, “none” seems like the real future option…
Even ls isn't standard on all machines. GNU ls is different from BSD ls.
What's the point of suffering everywhere if you don't enjoy it? It's not like using a better alternative prevents you from knowing how to use ls, but only in those cases where there is no better alternative
Coloring files of the same file-type is my favorite feature. Is the extension used to group them or a MIME-header parser? I guess the extension, since it is faster.
This is also part of GNU ls, at least.
This github page doesn't say anything about why it turned out to be amazing; seems like a fun side project, I think.
Yeah, talk about hiding the headline...
I see a screenshot that looks like the output of ls, ok it has colors, and some filenames have "!!" behind it. Great success?
Haha! Aren't all Rust rewrites about colors? Take `bat`, for example! Btw, the "!!" markers are from the git plugin - a quick way to see my workspace git status.
Yeah, why use this instead of ls? What makes it worthwhile as a daily driver?
There seem to be a lot of projects now competing to replace ls (per people's preferences).

For reference, these are the ones I am familiar with. They are all somewhat active, in contrast to things like exa, which is not maintained anymore.
eza: (https://github.com/eza-community/eza)
lsd: (https://github.com/Peltoche/lsd)
colorls: (https://github.com/athityakumar/colorls)
g: (https://github.com/Equationzhao/g)
ls++: (https://github.com/trapd00r/LS_COLORS)
logo-ls: (https://github.com/canta2899/logo-ls) - this is forked because main development stopped 4 years ago.
Any more?
Personally I prefer eza and wrote a zsh plugin that is basically aliases that match what I have in my muscle memory.
I’ve tried a few of these, but most of them seem to be following the trend of folding other shell functionality into one tool. Searching for contents (find + grep -H, or ripgrep), filtering (grep), sorting (ls does it natively, or you can use sort, sort -h for sorting human readable sizes), the list goes on and on.
I guess this is a mini lament that many of these tools are moving away from the Unix philosophy of do one thing well, and make it easy to chain.
And a last very small lament that BeOS didn’t succeed, and their filesystem-as-a-database approach didn’t become more standard.
You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
It does indeed also include other functionality that might traditionally be left to other tools (like filtering files). But this is nothing that GNU grep wasn't already doing itself anyway.
IMO, it's better to view the Unix philosophy as a means to an end and not an end to itself. And IMO, it's important to weigh the benefits of coupling to the user experience.
>view the Unix philosophy as a means to an end and not an end to itself
it won't be a means to an end any more if you don't preserve it, so not breaking that aspect of it has to be one of your ends. if you use it to take ls to a new place but that place is not within the ecosystem, it will be an evolutionary dead end, or worse, the first meteor in the meteor storm that ends all life.
current/traditional unix may not be the be-all/end-all, but replacing it/changing it requires viewing it comprehensively and changing all the tools at once or having a plan to. A good example of this is Plan9
I don't know what you're trying to say and I don't see how it's in conflict with anything I've said.
>not an end to itself
it is an end to itself. the reason it's a means to an end is because that was its end goal. in being a means to an end, it is an end (its end) unto itself, opposite to what you said, imho
I still can't parse what you're saying. The Unix philosophy is a means to an end, where the ultimate end is improved user experience. The means is de-coupling and composition. But there are other means to improving the user experience.
> in being a means to an end, it is an end (its end) unto itself
This either makes zero sense or is vacuously true and clearly not in conflict with what I'm saying.
I think ripgrep specifically is counted in the comment you reply to as a tool that _does_ do one thing well, and that one should use it (or grep) in combination with an ls, instead of giving ls filtering abilities.
I suppose. But I wanted to point out that ripgrep couples functionality, specifically in contradiction to the Unix philosophy. And actually, many commands, including "traditional" tooling, do as well.
The point is that many pay lip service to the Unix philosophy as if it were an end. But it isn't.
> You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
Headings being on when isatty and off when piping the output put me off when I first tried ripgrep. I don't expect the tools to change their output format on me.
Luckily, you made this behavior configurable, so I'm a happy convert now.
> I don't expect the tools to change their output format on me.
You probably do! If you've ever used `ls`, then it does exactly this.
If you mean the ANSI color stuff, yes - I do expect these to disappear :)
I meant the "shape" of the output. It just doesn't follow the principle of least surprise.
edit: you probably meant the columns. I forgot about that, I haven't parsed ls(1) output in ages ;)
Yes. The columns. The point is that commands have been changing their output format, not just their colors, based on tty for ages. So the criticism you lodge against ripgrep also applies to some of the most core commands you probably use daily.
I would be quite surprised if you didn't rely on this without even knowing it. Even a simple `ls | wc -l` relies on it.
I say this because it's tiring to see folks lament about this feature in ripgrep as if it's something new that ripgrep does. It's not. It's a well established idiom among Unix command line tools.
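Easy to see for yourself:

    ls          # stdout is a terminal: multi-column output
    ls | cat    # stdout is a pipe: one filename per line, no columns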
Isn’t “don’t parse ls” like the third commandment of Unix?
You've never done `ls | wc -l`?
They don't do one thing well since it's all text, not structured data, which makes chained analysis a challenge, which leads to the desire for integration
ls is tabular data, and you can format it (ls -1, ls -l, ls -w, plus sorting, field formatting, and more), and you can cut/parse/format in a standard way. Every field sans the filename is fixed length, can be handled with awk/cut/sed according your daily mood and requirements, etc. etc.
So, ls can be chained very nicely, which I do every day, even without thinking.
You don't need to have "structured data with fields" to parse it. You just need to think of it as tabular data with line/column numbers (ls -l, etc.) or just line numbers (ls -1).
So, as long as ls does one thing well, it's alright.
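For instance, the GROUP BY flags example from elsethread, done the plain tabular way:

    # count files per permission string, skipping the "total" line
    ls -l | awk 'NR > 1 { n[$1]++ } END { for (k in n) print n[k], k }'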
Ah, some of the "enhanced" ls tools can't distinguish between pipe and a terminal, and always print color/format escape codes to pipe too, doubling the fun of using them. So, thanks, I'll stick with my standard ls. That one works.
> You don't need to have "structured data with fields" to parse it.
You do if you want to have nice things like being able to format your output without having to worry about breaking the dumb tools down the pipe, which can't sort the numbers they don't see:
- 2.1K (this isn't the same as the second)
- 2.1K
- 2.1M
Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"?
> print color/format escape codes to pipe too
A problem that would disappear with... structured data!
> Ah, some of the "enhanced" ls tools
so use the other "some" that can?
> which can't sort the numbers they don't see
Then you sort at the point you can see the numbers and discard them later.
> Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"
awk can sort the columns for you. Plus, ls can already sort by size. Try "ls -lS" for biggest file first, or "ls -lSr" for smallest file first. Add "-h" to make sizes human-readable.
> A problem that would disappear with... structured data!
No. A problem that would disappear with "a small if block which asks which environment I'm in". If you're in a shell, the "-t" test in sh/bash will tell you that. If you're coding a tool, the standard way is isatty(3). Standard UNIX tools have been doing this for decades now.
IOW, structured data is not a cure for laziness...
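The shell side of that if block, for completeness:

    if [ -t 1 ]; then
        echo "stdout is a terminal: colors and columns are fine"
    else
        echo "stdout is a pipe or file: emit plain output"
    fi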
> so use the other "some" that can?
Yes, because their authors are not that lazy.
> Then you sort at the point you can see the numbers and discard them later
This sort of human overhead is only needed to compensate for the deficiencies of the data structures
> ls can already sort by size
That's the benefit of integration you're arguing against with your deficient piping suggestions
> IOW, structured data is not a cure for laziness...
It is precisely what good design is for - it reduces the need for various dumb workarounds that bad design requires, which means you can be more lazy and avoid said workaround
> Yes, because their authors are not that lazy.
This just ignores the argument, which was "some better new tools don't do that" isn't relevant when some better new tools also do that
vanilla ls has never been particularly chainable - https://mywiki.wooledge.org/ParsingLs
A lot of this post hinges on the fact that newlines in filenames were legal, and that people wrote shell without handling quoting correctly. While quoting (as well as ls altering filenames) is still an issue, find -print0, read -d '', and similar are no longer necessary. Newlines are now forbidden in filenames: https://blog.toast.cafe/posix2024-xcu
> Newlines are now forbidden in filenames
No. To quote that article
> A bunch of C functions are now encouraged to report EILSEQ if the last component of a pathname to a file they are to create contains a newline
This, yes, makes newlines in filenames effectively illegal on operating systems strictly conforming to the new POSIX standard. However, older systems will not be enforcing this, and any operating system which exposes a syscall interface that does not require libc (such as Linux) is also not required to emit any errors. The only time, even in the future, that you should NOT worry about handling the newline case is on filesystems where it is expressly forbidden, such as NTFS.
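Which is why the NUL-delimited idioms are still worth keeping in muscle memory (bash here; read -d '' is not POSIX sh):

    # newline-proof iteration over filenames
    find . -type f -print0 |
    while IFS= read -r -d '' f; do
        printf '%s\n' "$f"
    done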
Most utilities that create files are encouraged to error on newline filenames, which makes this effective illegality stronger. The post also discusses the future of this encouragement, which is turning it into a requirement.
> However, older systems will not be enforcing this
Eventually, newlines in filenames will go the way of /usr/xpg4/bin/sh.
I'd like to note that up until this point, there hasn't (and isn't) been a fully POSIX compliant way to do many shell operations on newline containing filenames. They are already effectively unsupported, and the standard that adds support also discourages them from being created and used. The best way to handle them up until this point has been to not use sh(1).
Linux isn't POSIX compliant, and as far as I know has no plans to ban newlines in filenames, or even add an option to disable newlines.
In the past, there have been Linux-based operating systems certified as Single Unix Specification compliant, and part of said specification is POSIX. I would imagine GNU and Busybox and Musl will be willing to implement the changes proposed by POSIX 2024, which inevitably leads down the road of newlines being banned.
How would that work? Checking strings passed to open and rejecting them? Would we then have undeletable files, as we can't refer to their filenames?
I know Linux allows newlines in filenames, but every time I hear it I want to drink.
If you like that philosophy, check out nushell. They go pretty hardcore on that, and they can because of structured output.
I agree with this.
If they want something that is easy to use in a non-scriptable way, maybe they should replicate Norton Commander instead.
Look into far2l
Take a look at lc (but not the terminal screenshots! ;)): https://github.com/c-blake/lc
lc is a highly configurable "multi-dimensional"[1] file lister written in Nim focused on flexibility and configurability.
Key features:
- Multi-level sorting by combinations of attributes like size, time, and file type, with user-defined precedence
- Configurable file kind sorting order
- Value-dependent coloring for file attributes such as timestamps, permissions, or sizes.
- Abbreviations: Automatically shorten filenames, user/group names or symlink targets.
- File type classification: Integrates libmagic for file type inspection.
- Hyperlink support
- Per-directory configs: custom behaviors for specific directories using local tweak files (.lc).
- Lightweight (~900 lines of code), with only the author's CLI library "cligen" and Nim's stdlib as dependencies.
and more.
[1]: https://github.com/c-blake/lc#vector-typemulti-dimensionalit...
Tbh, I don't understand why people want to rewrite ls of all things.

Like, don't get me wrong: if they had fun, that's great.

But all I use ls for is getting a list of files. I barely ever even use the -la options. There just doesn't seem to be a lot of room for improvement in something so simple.
I think the standard ls doesn't have much in terms of color/icons, so its simplicity probably makes it a great side project for improving on.
Not a big surface area, some easy improvements. A whole lot less stressful than rewriting grep (although I'm massively grateful Burnt Sushi did such a crazy thing)
Thanks @benrutter! You nailed it - ls is like the "Hello World" of system tools. Simple enough that you won't tear your hair out, but meaty enough to learn a ton.

Started with "ooh, pretty colors!" and before I knew it I was deep in filesystem APIs and terminal wizardry. Way less scary than tackling grep.

Sometimes the best projects are the ones where you can't mess up too badly... well, unless you accidentally delete everything while testing
Well, recursive display is nice, I guess, as well as searching on partial filenames
Has been roughly doing the job since the 70s (?):
> I barely ever even use the -la options.
Certainly I use these less than plain "ls," but digging through hidden files and folders and looking at timestamps is very important for me.
That's the first thing I noticed in the options: it has modified date but not create or access date (for listing or sorting), as far as I could tell. Of course it could be added, or I could just use `ls`.
I use ls -la via the ll alias exclusively. I find it far more readable to my eyes than plain ls.
Hidden files are almost always of interest to me since my job involves configuring servers.
https://github.com/c-blake/lc shows all files, including hidden files (starting with dot aka dot files) by default, suppressible in output with -xdot or a shell/internal alias to the same effect.
It helps to start with a more extensible/less built-in idea of "file type". "odd permissions" are another type that might interest someone, for example, such as "setgid but not group-executable" or "writable but not readable" or etc.
Yes, I know one can also use `find` or etc. for that, but there's no crime in there being >1 way to see things and, for some people, colors can make things really stand out - as can sort order which is another more color-blind possibility in `lc` as well as the simple filter-or-not of ls -a/-A.
It's a rite of passage. I had some colorful 'dir' alternatives on MS-DOS 5 and eventually made my own with Turbo Pascal. Easy & fun afternoon project
Thanks for the great list! Yep, eza and g are fantastic - I actually use eza daily and love how g handles git integration.

What made me excited to experiment with lla was playing with the plugin architecture. While these other tools have great built-in features, I wanted to see if I could make something where the community could easily add their own capabilities without touching the core code. Kind of like how vim and neovim handle plugins.

Got inspired by how people keep building these ls alternatives to scratch their own unique itches. Figured why not make it easier for everyone to scratch their own itch through plugins? Still very much an experiment, but it's been fun seeing what's possible!
Eza is great. I was pleasantly surprised at how nice the mime type icons meshed with the terminal.
Also “walk” is great for interactive navigation.
- https://github.com/antonmedv/walk
lc: https://github.com/c-blake/lc (in Nim).
I also used eza to replace the tree command with the --tree flag.
I have these aliases for various purposes:
    # Different options to search for files
    # da=36 cyan timestamps
    alias ls="EZA_COLORS='da=36' eza --time-style=relative --color-scale=age"
    alias lsa="ls --almost-all" # ignore . ..
    alias l="ls --long --classify=always" # show file indicators
    alias la="l --almost-all"
    # Tree view
    alias ltreea="ls --tree"
    alias ltree="ltreea --level=2"
    # Sort by time or size
    alias lt="ls --long --sort=time"
    alias lta="lt --almost-all"
    # lsd is faster than eza
    alias lss="lsd --long --total-size --sort=size --reverse"
    alias lssa="lss --almost-all"
lla seems to go beyond what ls should do for some reason. Why show git and code complexity info? Just use tools dedicated for these things, otherwise, it will be an unmaintainable mess. If you can solve a problem easily with external tools, then there's no reason to add a feature for it.
That's a great list. I have a similar list, and the aliases grow out of frequently used arguments. For example, I found myself often doing an ls -Altch, and so lsth was born. I find that aliases that are born of frequently used arguments are easily remembered. Over time that one grew to include a pipe to head, because most of the time I just want to see the top 20 or so most recently modified files in the directory.
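For the curious, that probably ends up as something like (my guess at the final form):

    alias lsth='ls -Altch | head -20'   # ~20 most recently changed entries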
Creating command-line utilities is nice, but I personally lament the lack of man pages when people write something new.
That's the amazing part I'm talking about: the learning experience you get from weeks of working on something like that is better than reading countless pieces of documentation.
Oh, of course the development is fun and exciting and a learning experience.
But before inviting others to use something, please think of how to make its use more clear. After all, I assume you post this so that people use it, not only admire your coding skills. There is a group of people who have learned to read and rely on man pages.
For example, the top-level README says:
> -s, --sort <CRITERIA>: Sort by "name", "size", or "date"
OK, does "date" refer to creation date, modification date, access date? I can understand "size", but does it produce smallest-first or largest-first? It might not matter if... ah, no, there is no -r/--reverse flag. Can I have more than one "criteria" (since the plural is used)?
Getting answers for such questions now means I have to go read the code in src/args.rs and follow to the implementation of the various functions. And in a few days, when I have the same questions again and I have forgotten the options, I will again have to dive into the code.
Please consider providing a short man page. It documents the "calling interface" to your program and makes it easier to use. I usually start writing one even before implementing the whole thing, to clearly articulate what I expect the program to do.
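A first draft can even be generated rather than hand-written, assuming the binary has conventional --help and --version output; e.g., with GNU help2man:

    # path assumes a cargo release build
    help2man --no-info ./target/release/lla > lla.1
    man ./lla.1    # preview the result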
Fair critique about the documentation - this needs proper attention.

Writing a man page first is a solid approach - it forces clear thinking about the interface before implementation. I'll prioritize adding complete documentation for all options and the plugin system.

The code works, but without good docs it's not truly useful.
While a man page or good documentation is maybe not too intriguing for you, I consider it essential for other users to adopt the tool.

Maybe there are new or modern ways to create man pages that can be stimulating for your learning experience?
I know it's only for personal use, but I've never had any problems with ls not being "high-performance" enough...
brew support?
Great idea! I will be working on it!
The things I take for granted. This is a breath of fresh air! Way to rethink the fundamentals!
I can't tell if you're being sarcastic or not.
For the record I was not being sarcastic but maybe I was feeling a bit too romantic or overly supportive of OP
I notice prior HN comments of yours mention the physical design of the NeXT cube. I cannot say it will make you not hate software, but you still might appreciate that another alternative ls, https://github.com/c-blake/lc, both re-thinks/breaks more radically with ls-tradition and adapts well to something very similar to a terminal variant of the https://en.wikipedia.org/wiki/Miller_columns used in the NeXT file tree graphical browser/navigator via simple shell process substitution composition. E.g., a 3-level scenario on an 80-column looks like:
Some shell script that uses $((COLUMNS)) arithmetic to do 2 or 4 or whatever terminal width is a pretty simple exercise for the reader, and one might want to pipe to less.

You can guess it is written in Rust before even checking the repo whenever you see that somebody made a clone of some popular systems tool like top, ls, cd, etc.