Given that some 80% of developers are now using AI in their regular work, blob-util is almost certainly the kind of thing that most developers would just happily have an LLM generate for them. Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Letting an LLM write utility code is a sword that cuts both ways. You often end up with throw-away code that is unproven and still requires maintenance. There's no guarantee that the blob-util or toString or whatever the AI creates won't fail on some edge cases. That's why, e.g., Java has Apache Commons, which is perceived as an industry standard nowadays.
I see this as an absolute win. The state of micro-dependencies in js was a nightmare that only happened because a lot of undereducated developers flooded the market to get that sweet faang money.
Now that both have dried up I hope we can close the vault door on js and have people learn how to code again.
The best outcome was things like jquery and then lodash, where a whole collection of small util functions got rolled into one package.
Oh god, without tree shaking, lodash is such a blight.
I've seen so many tiny packages pull in lodash for some little utility method so many times. 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've also seen plenty of projects which somehow include multiple copies of lodash in their dependency tree.
It's such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. Javascript includes almost everything you need these days anyway. And when it doesn't, the kind of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that manually rather than pull in some 70kb dependency.
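To make that concrete: lodash has long shipped per-method entry points, so the bundle damage is mostly a matter of import style (a sketch; exact savings depend on your bundler):

    import debounce from 'lodash/debounce'; // pulls in roughly one function
    import _ from 'lodash';                 // can drag the whole ~70kb library into the bundle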
Unless you're part of the demoscene or your webpage is being loaded by Voyager II, why is 70kb of source code a problem?
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
I agree the JS standard library includes most of the stuff you need these days, rendering jquery and half of lodash irrelevant now. But there are still a lot of useful utilities in lodash, and potentially a new project could curate a collection of new, still-relevant utilities.
Can you give some examples? I’ve written js / ts for 10-15 years and I’ve never reached for lodash in my life.
_.any, some, every, keyBy.
_.any doesn’t seem to exist.
Some and every are also in JS (with the same names even!).
keyBy is just Array.map -> Object.fromEntries
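For the keyBy case, a minimal stand-in using only built-ins, as the last comment suggests (a sketch; the callback-based signature is illustrative, lodash also accepts a property name):

    // keyBy via Array.map -> Object.fromEntries, no lodash needed
    const keyBy = <T>(items: T[], key: (item: T) => string): Record<string, T> =>
      Object.fromEntries(items.map(item => [key(item), item]));

    // keyBy(users, u => u.id) is roughly _.keyBy(users, 'id')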
The problem with helper functions is that they're often very easy to write, but very hard to figure out the types for.
Take a generic function that recursively converts snake_case object keys to pascalCase. That's about 10 lines of Javascript, you can write that in 2 mins if you're a competent dev. Figuring out the types for it can be done, but you really need a lot of ts expertise to pull it off.
Not really familiar with TS, but what would be so weird with the typing? Wouldn't it be generic over `T -> U`, with T the type with snake_case fields and U the type with pascalCase fields?
Turns out in TypeScript you can model the conversion of the keys themselves from snake_case to pascalCase within the type system[0]. I assume they meant that this was the difficult part.
[0]: https://stackoverflow.com/questions/60269936/typescript-conv...
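For the curious, the hard part looks roughly like this (a sketch using template-literal types; it targets camelCase, and a production version would also need to recurse through arrays):

    // Convert "user_name" to "userName" at the type level
    type SnakeToCamel<S extends string> =
      S extends `${infer Head}_${infer Tail}`
        ? `${Head}${Capitalize<SnakeToCamel<Tail>>}`
        : S;

    // Apply it recursively to all object keys
    type CamelKeys<T> = T extends object
      ? { [K in keyof T as SnakeToCamel<K & string>]: CamelKeys<T[K]> }
      : T;

    // { user_name: string; home_address: { zip_code: string } }
    // becomes { userName: string; homeAddress: { zipCode: string } }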
Your first sentence cheers that we're moving from NPM micropackages to LLM-generated code, and then you say this will result in people having to learn to code again.
I don't see how the conclusion follows from this.
There will be many LLM-generated functions purporting to do the same thing, and a bug in one of them that gets fixed means only one project gets fixed instead of every project using an NPM package as a dependency.
There will be many LLM-generated functions doing the same thing, in the same project, unless the human pays attention.
I've been playing a lot recently with various models, lately with the expensive Claude models (API, for the large context windows), and in every attempt things are really impressive at the beginning and start going south once the codebase reaches about 10k to 15k lines of code. Even with tools split out into a separate library with separate documentation, at that point it has a tendency to generate tool functions again in the module it's currently working on instead of using the one already defined in the helper library.
You mention that you don’t see how this could result in developers that write code. Then you immediately follow that with fixing bugs.
Yes, use of LLMs is still developers not writing original code, but it’s still an improvement (a minor one) over the copy/paste of micro dependencies.
Ultimately developers aren’t going to figure out writing original code until they are forced to do so by changed conditions within their employer.
Now you've got vibe coders.
I don't quite understand your argument. Wasn't the post about how users might replace transparent dependencies with transparent LLM drop-ins? I don't see how having an LLM to do the same job would enable someone to learn more. They're probably the kind of person who will ask the LLM to perform a refactor when problems arise, so they won't learn that much through osmosis.
Less incentive to write small libraries. Less incentive to write small tutorials on your own website. Unless you are a hacker or a spammer where your incentives have probably increased. We are entering the era of cheap spam of everything with little incentive for quality. All this for the best case outcome of most people being made unemployed and rolling the dice on society reorganising to that reality.
> or a spammer where your incentives have probably increased.
Slight pushback on this. The web has been spammed with subpar tutorials for ages now. The kind of Medium "articles" that are nothing more than "getting started" steps + slop that got popular circa 2017-2019 is imo worse than the listy-boldy-emojy-filled articles that the LLMs come up with. So nothing gained, nothing lost imo. You still have to learn how to skim and get signals quickly.
I'd actually argue that now it's easier to winnow the slop. I can point my cc running in a devcontainer to a "tutorial" or lib / git repo and say something like "implement this as an example covering x and y, success condition is this and that, I want it to work like this, etc.", and come back and see if it works. It's like a litmus test of a tutorial/approach/repo. Can my cc understand it? Then it'll be worth my time looking into it. If it can't, well, find a different one.
I think we're seeing the "low hanging fruit" of slop right now, and there's an overcorrection of attitude against "AI". But I also see that I get more and more workflows working for me, more or less tailored, more or less adapted for me and my uses. That's cool. And it's powered by the same underlying tech.
The thing is, what is the actual point of this approach? Is it for learning? I strongly believe there’s no learning without immersion and practice. Is it for automation? The whole idea of automation is to not think about the thing again unless there’s a catastrophic error; it’s not about babysitting a machine. Is it about judgment? Judgment is something you hone by experiencing stuff and then deciding whether it’s bad or not. It’s not something you delegate lightly.
The problem isn't that AI slop is doing something new. Phishing, blogspam, time wasting PRs, website scraping, etc have all existed before.
The problem is that AI makes all of that far, far easier.
Even using tooling to filter articles doesn't scale as slop grows to be a larger and larger percentage of content, and it means I'm going to have to consider prompt injections and running arbitrary code. All of this is a race to the bottom of suck.
The difference is that the cost of slop has decreased by orders of magnitude. What happens when only 1 in 10,000 of those tutorials you can find is any good, from someone actually qualified to write it?
What happens when the monkeys stop getting bananas to work on the typewriters? More stories?
One instance of definite benefit of AI is AI-summary web search. Searching for answers to simple questions and not having to cut through SEO slop is such an improvement.
The summary is often incorrect in at least some subtle details, which is invisible to a lot of people who do not understand LLM limitations.
Now, we can argue that a typical SEO-optimized garbage article is not better, but I feel like a typical person's trust in them was lower on average.
I don't think searching for answers to simple questions was a problem until Google nerfed their own search engine.
I don't understand this position, do you have direct evidence that Google actively made search worse? Before I'm misunderstood I do want to clarify that IMO, the end user experience for web searching on Google is much worse in 2025 than it was in say 2000. But, the web was also much much smaller, less commercial and the SNR was much better in general.
Sure, web search companies moved away from direct keyword matching to much more complex "semantics-adjacent" matching algorithms. But we don't have the counterfactual keyword-based Google search algorithm from 2000 on data from 2025 to claim that it's just search getting worse, or the problem simply getting much harder over time and Google failing to keep up with it.
In light of that, I'm much more inclined to believe that it's SEO spam becoming an industry that killed web search instead of companies "nerfing their own search engines".
Pretty sure Google attempting to curb SEO tactics is what led to whatever nerfing you are talking about.
There was a time, before SEO slop, when web search was really valuable.
We're fighting slop with condensed slop
Hard disagree. AI summaries are useless for the same reason AI summaries from Google and DDG are useless: it's almost always missing the context. The AI page summaries typically take the form of "here's the type of message that the author of this page is trying to convey" instead of "here's what the page actually says". Just give me the fucking contents. If I wanted AI slop I'd ask my fucking doorknob.
I think you have some of your wires crossed; asking Google for "here's the type of message that the author of this page is trying to convey" is not what most people think of as a simple question (asking Google to reprint copyrighted material is also a non-starter). Asking Google "what is the flag for preserving metadata using scp" and getting the flag name, instead of an SEO article with a misleading title going on about some third-party program that you can download that does exactly that and never actually telling you the answer, is ridiculous, and I am happy AI has helped reduce the clickbait.
"here's the type of message that the author of this page is trying to convey" is not what most people think is a simple question
It's also not the question I asked. I'm literally trying to parse out what question was asked. That's what makes AI slop so infuriating: it's entirely orthogonal to the information I'm after.
Asking Google "what is the flag for persevering Metadata using scp" and getting the flag
name instead of a SEO article with the a misleading title go on about so third party program
that you can download that does exactly that and never actually tell you the answer is
ridiculous and I am happy AI has help reduce the click bait
Except that the AI slop Google and Microsoft and DDG use for summaries masks whether or not a result is SEO nonsense. Instead of using excerpts of the page the AI summary simply suggests that the SEO garbage is answering the question you asked. These bullshit AI summaries make it infinitely harder to parse out what's actually useful. I suppose that's the goal though. Hide that most of the results are low quality and force you to click through to more pages (ad views) to find something relevant. AI slop changes the summaries from "garbage in, garbage out" to simply "garbage out".
> We are entering the era of cheap spam of everything with little incentive for quality
Correction -- sadly, we're already well within this era
I was searching a specific niche on Youtube today, and scrolled endlessly trying to find something that wasn't AI generated. Youtube is being completely spammed.
But some webdev said they are 10x faster now so it can't be bad for humanity /s
Quick! Tell me what this does or you're not a real programmer, not a craftsman, and are a total hack who understands nothing about quality (may not even be a good person, want to lay off bread-and-butter red-blooded american programmers)
    .global main
    .text
    main:
    mflr 27
    mr 13,3
    mr 14,4
    addi 3,3,48
    bl putchar
    li 3,10
    bl putchar
    next:
    lwz 3,0(14)
    bl puts
    addi 14,14,4
    addi 13,13,-1
    cmpwi 13,0
    bgt next
    lis 3,zone@ha
    addi 3,3,zone@l
    bl puts
    mtlr 27
    mr 3,13
    blr
    .data
    .align 2
    zone:
    .string "Bonjour"
Upvoting because it's a salient point and downvoters are mad. HN has a complex lately.
> the era of small, low-value libraries like blob-util is over.
Thankfully (nothing against blob-util specifically, because I've never intentionally used it). I wouldn't completely blame LLMs either, since languages like Go never had this dependency hell.
npm is a security nightmare not just because of npm the package manager, but because the culture of the language rewards behavior such as "left-pad".
Instead of writing endless utilities for other projects to re-use, write actual working things instead - that's where the value/fun is.
But as Go puts it:
"A little copying is better than a little dependency."
https://go-proverbs.github.io/
Copying is just as much a dependency, you just have to do maintenance through manual find-and-replace now.
> you just have to do maintenance through manual find-and-replace now
Do you? It doesn't seem even remotely like an apples-to-apples comparison to me.
If you're the author of a library, you have to cover every possible way in which your code might be used. Most of the "maintenance" ends up being due to some bug report coming from a user who is not doing things in the way you anticipated, and you have to adjust your library (possibly causing more bugs) to accommodate, etc.
If you instead imagine the same functionality being just another private thing within your application, you only need to make sure that functionality works in the one single way you're using it. You don't have to make it arbitrarily general-purpose. You can do error handling elsewhere in your app. You can test it only against the range of inputs you've already ensured are the case in your app, etc. The amount of "maintenance" is tiny by comparison to what a library maintainer would have to be doing.
It seems obvious to me that "maintenance" means a much more limited thing when talking about some functionality that the rest of your app is using (and which you can test against the way you're using it), versus a public library that everyone is using and needs to work for everyone's usage of it.
> If you're the author of a library, you have to cover every possible way in which your code might be used.
You don't actually. You write the library for how you use it, and you accept pull requests that extend it if you feel it has merit.
If you don't, people are free to fork it and pull in your improvements periodically. Or their fork gets more popular, and you get to swap in a library that is now better-maintained by the community.
As long as you pin your package, you're better off. Replicating code pretty quickly stops making sense.
It would be healthy if this became more common; in fact, the privately-owned public garden model of the Valetudo project [1] is the sanest way for FOSS maintainers to look at their projects.
Usually these types of things never change. I understand that all code is a liability, but npm takes this way too far. Many utility functions can be left untouched for many years, if not forever.
It's not NPM. It's JS culture. I've spent a lot of time programming in TypeScript, and it never fails that in JS programmer circles they are constantly talking about updating all their packages, completely befuddled why I'd be using some multiple-year-old version of a library in production, etc.
Meanwhile Java goes the other way: twenty-year-old packages that are serious blockers to improved readability. Running Java that doesn't even support Option (or Maybe or whatever it's called in Java).
Java writes to a bytecode spec that has failed to keep up with reality, to its detriment. Web development keeps up with an evolving spec pushed forward by compatibility with what users are actually using. This is "culture" only in the most distant, useless sense of the word. It is instead context, which welcomes it back into the world of just fucking developing software, no matter how grey-haired HN gets with rage while the world moves on.
EDIT: Obvious from the rest of your responses in this thread that this is trolling, leaving this up for posterity only
Most of these util libraries require basically no changes ever. The problem is the package maintainers getting hacked and malicious versions getting pushed out.
If you use an LLM to generate a function, it will never be updated.
So why not do the same thing with a dependency? Install it once and never update it (and therefore hacked and malicious versions can never arrive in your dependency tree).
You're a JS developer, right? That's the group who thinks a programmer's job includes constantly updating dependencies to the latest version.
You're not a web developer, right? See my other comment about context if you want to learn more about the role of context in software development in general. If you keep repeating whatever point you're trying to make about some imaginary driving force to pointlessly update dependencies in web dev, you'll probably continue to embarrass yourself, but it's not hard to understand if you read about it instead of repeating the same drivel under every comment in this thread.
Yeah it's the main thing I really dislike about this - how do you make sure you know where it's from? (ie licensing) What if there are updates you need? Are you going to maintain it forever?
For some definition of "small piece of code" that may be ok, but also sometimes this is more than folks consider
Do you know that you can just add a small text file or a comment explaining that a module is vendored code? And updates are handled the same way as the rest of the code. And you will be “maintaining” it as long as you need to. Libraries are not “here be dragons” territory best left to the adventurous.
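Something as simple as a header comment does the job (a hypothetical example; the repo URL is blob-util's real home, the rest is illustrative):

    // Vendored from blob-util (https://github.com/nolanlawson/blob-util); see upstream for license.
    // We only use the base64 conversion; diff against upstream manually if fixes land there.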
If I vendor a dependency that currently works for what my program does, I only have to care about it again if a security hole is discovered in it or if my program changes and the dependency is insufficient in some way. I don't have to worry about the person I'm importing code from going weird or introducing a bug that affects me.
Yep, something like blob-util could be a blog post or a gist (or several stack overflow answers). And a lot of NPM libraries fall under that. People always complain about the anemic standard library of JavaScript, forgetting that the C standard library is even smaller.
I am sure I am not the only one who thinks these micro-dependencies are worthless anyway. You'd be better off just listing the functions in a markdown file for people to copy over than ship an entire package for it.
This isn't "small" open source, "small" would be something you put together in a week or weekend. These are like "micro" projects, where more work goes into actually publishing and maintaining the repository than actually writing the library.
I like the approach C sometimes takes, with the "tiny header file" type of libraries. Though I guess that also stems from the lack of a central build system.
> I’m still trying to figure out what kinds of open source are worth writing in this new era
Is there any upside to opensourcing anything anymore? Anything published today becomes training data for the next model, with no attribution to the original work.
If the goal is to experiment, share ideas, or let others learn from the work, maybe the better default now is "source available", instead of FOSS in the classic sense. It gives people visibility while setting clearer boundaries on how the work can be used.
I learned most of what I know thanks to FOSS projects so I'm still on the fence on this.
I keep seeing this attitude and I don't really understand it at all; there's no upside to publishing open source work because it might be utilized by more people, is that correct?
Or is it the attribution? There are many many libraries I have used and continue to use and I don't know the author's internet handle or Christian name. Does that matter? Why?
I have written a lot of code that my name is no longer attached to. I don't care and I don't know why anyone does. If it were valuable I would have made more money off of it in the first place, and I don't have the ego to just care that people know it's my code either.
I want the things I do today to have an upside for people in the future. If that means I write code that gets incorporated into a model that people use to build things N number of years from now, that's great. That's awesome. Why the hell is that apparently so demotivating to some people?
You're not obligated to give away your mind for free. You're free to share, of course. But sharing implies reciprocity, a back and forth. The internet used to be like that, but if the environment changes, you adapt your behavior accordingly.
In the long run I think it's time to starve the system of input until its attitude reverts to reciprocal. It's not what I'd want, but it seems necessary. People learn from consequences, not from words alone.
Code is only useful if it's used. I could write a ton of code and be buried with it, or publish it for people (or AI software, or dolphins or aliens) to use. Who has the energy to have Anubis measure whether my code, or yours, is ethical enough? I'm going to die someday!
This is kinda how I've felt for months. I don't have any interest in continuing existing open source projects and don't want to create any new ones.
What's the point?
All of my personal projects for the past few months have been entirely private, I don't even host them on Github anymore, I have a private Forgejo instance I use instead.
I also don't trust any new open source project I stumble upon anymore unless I know it was started at least a year ago.
I don’t think open source is going anywhere. It’s poised to get significantly stronger — as the devs who care about it learn how to leverage AI tools to make things that corporate greasemonkeys never had the inspiration to. Low-quality code spammers are just marketing themselves for jobs where they can be themselves: soulless and devoid of creative impulse.
That’s the thing: open source is the only place where the true value (or lack of value) of these tools can be established — the only place where one can test mettle against metal in a completely unconstrained way.
Did you ever want to build a compiler (or an equally complex artifact) but got stuck on various details? Try now. It’s going to stand up something half-baked, and as you refine it, you will learn those details — but you’ll also learn that you can productively use AI to reach past the limits of your knowledge, to make what’s beyond a little more palatable.
All the things people say about AI are true to some degree: my take is that some people are rolling the slots to win a CRUD app, and others are trying to use it to do things that they could only imagine before — and open source tends to be the home of the latter group.
True innovation will come from open source for sure, as the developers don't have the same economic incentives to be "safe", "ethical", "profitable" or whatever. Large corporations know this and fear this development. That's why I expect significant lobbying to take hold in the USA that will try to make local AI systems illegal. And I think they will be very convincing to the government, because the government also fears the "peasants" and giving them any true semblance of real AGI-like systems. I bet very soon we will start seeing various classifications that will define what is legal and what is not for a citizen to possess or use.
> That's why I expect significant lobbying to take hold in the USA that will try to make local AI systems illegal.
I think they're going to be using porn and terrorism (as usual) to do that, but also child suicide. I also think they're going to leverage this rhetoric to lock down OSes in general, by making them uninstallable on legally-available hardware unless approved, because approved OSes will only be able to run approved LLMs.
Meaning that I think LLMs/generative AI will be the lever to eliminate general-purpose computing. As mobile went, so will desktop.
I think this is inevitable. The real question for me is whether China will partner with the west on this, or whether we will be trading Chinese CPUs with each other like contraband in order to run what we want.
> any true semblance of real AGI-like systems.
This is the only part I don't agree with. This isn't going to happen, but I'm not even sure it would be more useful than what we have. We have billions of full AGI machines walking around, and most of them aren't great. I'm talking about restrictions on something technically barely better than what we have now; maybe only a significant bit more compute-efficient. Training techniques will probably be where we get the most improvements.
It's really not. Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue about what is involved in working on long-running, difficult projects or how offensive it is to just slather some slop on a bug report and demand it is given scrutiny.
Even at the 10,000 feet view it has wasted people's time because they have to sit down and have a policy discussion about whether to accept AI submissions, which involves people reheating a lot of anecdotal claims about productivity.
Having learned a bit about how to write compilers, I know enough to guarantee you that an AI cannot help you solve the difficult problems that compiler-building tools and existing libraries cannot solve.
It's the same as it is with any topic: the tools exist and they could be improved, but instead we have people shoehorning AI bollocks into everything.
This isn't an AI issue. It is a care issue. People shouldn't submit PRs to a project where they don't care enough to understand the project they are submitting to or the code they are submitting. This has always been a problem; there is nothing new. The thing that is new is that more people can get to a point where they can submit, regardless of their care or understanding. A lot of people are trying to gild their resume by saying they contributed to a project. Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.
Sounds like a lot of FUD to me — if major projects balk at the emergence of new classes of tools, perhaps the management strategy wasn’t resilient in the first place?
Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.
In such a setting, you’re working within a trusted party — and for a major project, that likely means extremely competent maintainers and contributors.
I don’t think these people will have any difficulty adapting to the usage of these tools …
> Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.
It is a waste of time for large-scale volunteer-led projects who now have to deal with tons of shit — when the very topic is "how do we fend off this stuff that we do not want, because our project relies on much deeper knowledge than these submissions ever demonstrate?"
yeah we are getting lots of "I don't know how to do this and AI gave me this code that doesn't work, can you fix it" or "AI said it can do this" and the feature doesn't exist... some people will even argue and say "but AI said it doesn't take long, why won't you add it"
It weaponises incompetence, carelessness and arrogance at every turn.
AI, to me, is a character test: I'm regularly fascinated by finding out who fails it.
For example, in my personal life I have been treated to AI-generated comms from someone that I would never have expected it from. They don't know I know, and they don't know that I think less of them, and I always will.
This author assumes that open sourcing a package only delivers value if it is added as a dependency. Publicly sharing code with a permissive license is still useful and a radical idea.
Yeah if I find some (small) unmaintained code I need, I will just copy it (then add in my metrics and logging standards :)
It shouldn't be a radical idea, it is how science overall works.
Also, as per the educational side, I find in modern software ecosystem, I don't want to learn everything. Excellent new things or dominantly popular new things, sure, but there are a lot of branching paths of what to learn next, and having Claude code whip up a good enough solution is fine and lets me focus on less, more deeply.
(Note: I tried leaving this comment on the blog but my phone keyboard never opened despite a lot of clicking, and on mastodon but hit the length limit).
A copyleft license is much better because it ensures that it will remain open and in most cases large companies won't use it, making sure they will have to shell out the money to hire someone to do it instead.
Yep. Even just sharing code with any license is valuable. Much I have learned from reading an implementation of code I have never run even once. Solutions to tough problems are an under-recognized form of generosity.
This is a point where the lack of alignment between the free beer crowd and those they depend on is all too clear. The free beer enthusiast cannot imagine benefiting from anything other than a finished work. They are concerned about the efficient use of scarce development bandwidth without consciousness of why it is scarce or that it is not theirs to direct. They view solutions without a hot package cache as a form of waste, oblivious to how such solutions expedite the development of all other tools they depend on, commercial or free.
I do agree with this, but there are some caveats. At the end of the day it is time people invest into a project. And that is often unpaid time.
Now, that does not mean it has no value, but it is a trade-off. After about 14 years, for instance, I retired permanently from rubygems.org in 2024 due to the 100k download limit (and now I wouldn't use it after the shameful moves RubyCentral did, as well as the new shiny corporate rules, which I couldn't operate within anyway; it is now a private structure owned by Shopify. Good luck finding people who want to invest their own unpaid spare time into anything tainted by corporations here).
> Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Use of an AI to write your code is also a form of dependency. When the LLM spits out code and you just dump it in your project with limited vetting, that's not really that different from vendoring a dependency. It has a different set of risks, but it still has risks.
Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.
Not including the unused features both makes the code you are adding easier to read and understand, and may make it more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
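For example, if all an app ever does is turn a Blob into a base64 payload, the inlined version is a dozen lines (a sketch assuming a browser FileReader; blob-util itself covers many more conversions than this):

    // Narrow inline replacement for a single blob-util use case
    function blobToBase64(blob: Blob): Promise<string> {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => {
          // reader.result looks like "data:<mime>;base64,<payload>"; keep only the payload
          const dataUrl = reader.result as string;
          resolve(dataUrl.slice(dataUrl.indexOf(',') + 1));
        };
        reader.onerror = () => reject(reader.error);
        reader.readAsDataURL(blob);
      });
    }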
But in a lot of cases you can't know all the dependencies, so you lean on the community trusting that a package solves the problem well enough that you can abstract it.
You can pin the dependency and review the changes for security reasons, but fully grasping the logic is non-trivial.
Smaller dependencies are fine to copy at first, but at some point the codebase becomes too big, so you abstract it and at that point it becomes a self-maintained dependency. Which is a fair decision, but it is all about tradeoffs and sometimes too costly.
You'd get those benefits from traditional dependencies if you copy them in and never update. Is an AI dependency going to have the equivalent of "upstream fixes"?
> Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.
In decent ecosystems there should be low or zero overhead to that.
> Not including the unused features both makes the code you are adding easier to read and understand, but it also may be more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
Maybe. I find generic code is often easier to read than specialised custom implementations, because there is necessarily a proper separation of concerns in the generic version.
Right, but you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?" or "will the author be compromised in a supply-chain attack, or do a deliberate protestware attack?" etc. As for performance, a lot of npm packages don't have proper tree-shaking, so you might be taking on extra bloat (or installation cost). Your point is well-taken, though.
> you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?"
This is not a worry with NPM. You can just specify a specific version of a dependency in your package.json, and it'll never be updated ever.
I have noticed for years that the JS community is obsessed with updating every package to the latest version no matter what. It's maddening. If it's not broke, don't fix it!
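For reference, pinning is just an exact version with no ^ or ~ range in package.json (npm's --save-exact flag writes this for you; the version number here is illustrative):

    {
      "dependencies": {
        "blob-util": "2.0.2"
      }
    }

Note that transitive dependencies still float unless you also commit a lockfile.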
Wouldn't call it a risk in itself, but part of the benefit of using a library, a good and tailored one at least, is that it'll get modernised without my intervention. Even if the code produced for you was state-of-the-art at the moment of inclusion, will it remain that way 5 years from now?
> and you just dump it in your project with limited vetting
Well yes, there’s your problem. But people have been doing this with random snippets found on the internet for a while now. The truth is that irresponsible developers will produce irresponsible code, with or without LLMs.
> The truth is that irresponsible developers will produce irresponsible code, with or without LLMs
True. But the difference is the scale and ease of doing it with code generators. With a few clicks you can add hundreds of lines of code which supposedly do the right thing. While in the past, you would get code snippets for a particular aspect of the problem you were trying to solve. You still had to figure out how to add them to your code base and somehow make them “work”.
Surely in any responsible development environment those hundreds of lines of code still have to be reviewed.
Or don't people do code review any more? I suppose one could outsource the code review to an AI, preferably not the one that wrote it though. But if you do that surely you will end up building systems that no one understands at all.
Agree. Any reasonable team should have code reviews in place, but an irresponsible coder would push the responsibility for code quality and correctness onto code reviewers. They were doing it earlier too, but the scale and scope were much smaller.
AI offering the solution for a small problem that probably doesn’t deserve yet another dependency suggests to me that there’s a middle ground that we’ve failed to sufficiently cover: how to socialize code snippets that you’re meant to just inline into your project. Stack Overflow is probably the closest we’ve gotten to a generalized solution and it doesn’t exactly feel like a good one.
I came across this once before in the form of a react hooks library that had no dependency to install. It was just a website and when you found the hook you wanted you were meant to paste it into your project.
More likely, what we will see is the decline of low-effort projects. The JavaScript/TypeScript ecosystem has been plagued with such packages. But that’s more a peculiarity of the JS community than a systemic problem with open source in general.
So if fewer people are including silly dependencies like isEven or leftPad, then I see that as a positive outcome.
Right now I tend to not use an external library unless the code would be large (e.g. an http server) and/or the library is extremely popular.
Otherwise, writing it myself is much better. It's more customizable and often much smaller in size. This is because the library has to generalize, and it comes with bloat.
Using AI to help write it is great, because I should understand that code anyway, whether AI writes it or not, or whether it's in an external library.
One example recently is that I built a virtual list myself. The code is much smaller and simpler compared to other popular libraries. But of course it's not as generalizable.
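The core of a hand-rolled virtual list really is small; the windowing math fits in a few lines (a sketch assuming fixed row heights, which is exactly the generality a library would add back):

    // Which rows to render for the current scroll position
    function visibleRange(scrollTop: number, viewportHeight: number,
                          rowHeight: number, totalRows: number, overscan = 3) {
      const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
      const last = Math.min(totalRows - 1,
        Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
      return { first, last, offsetY: first * rowHeight }; // offsetY positions the rendered slice
    }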
Chances are, even if you deliberately and strategically pick to work on an OSS project that you are positively sure an LLM can’t just spit out on command, it will be capable of doing so by the time you are close to completion. In that sense, one has to either be not inclined to question “what’s the point” or have a bit of gambling mentality in order to work on anything substantial.
That’s not automatically a problem, however. The problem is that even if you do come up with a really cool idea that an LLM is not capable of autocompleting, and you release it under a copyleft license (to ensure the project survives and volunteer contributors’ work is not adopted and extinguished by some commercial interest), it will get incorporated into a training dataset regardless of the licensing. Thereafter the LLM will be capable of spitting it out, and its large corporate operator will be able to monetise your code (allowing anyone with money to build a commercial product based on it).
I suppose some people would see this as progress: fewer dependencies, more robust code (even if it’s a bit more verbose), quicker turnaround time than the old “search npm, find a package, read the docs, install it” approach.
Why would randomized code be more robust?
Also, how is searching/reading the docs slower than checking the correctness of a randomized function?
> ….but I do think it’s a future where we prize instant answers over teaching and understanding
It depends. For stuff I don’t care about I’m happy to treat it as a black box. Conversely, AI now allows me to do deep dive on essentially anything I’m interested in, which has been a massive boon to my learning ability.
If you're not learning to code, then you want efficient code, so the comments are wasted bytes (ok, not a huge expense, but still).
If you are learning to code, or just want to understand how this code works, then asking an LLM is going to get a lot better result. LLMs are fantastic tutors. Endlessly patient, go down any rabbit hole with you, will continue explaining a concept until you get it, etc. I think they're going to revolutionise education, especially for practical subjects like coding.
Respect to the author for trying to educate, though.
LLMs give us the opportunity to work on more complex projects and gain fuller understanding of the problem space and concepts. Or create tons of slop. Take your pick.
Small open source is still valuable, but the bar is higher. If your project is something that's trivial and nobody just thought to do it before you and bothered to do it after, that's probably not going to survive, but if your project is a small focused tool that handles something difficult really well, it's 100% got a future.
Earlier this year I wrote an interpreter for a niche, proprietary binary format. Someone asked if I could open source it so that they could (more easily) run it on NixOS. I declined as I'm just that strongly opposed to my work being used to train AI models and further entrench the enshittification of the internet.
TLDR: [AI promises] a future where we prize instant answers over teaching and understanding
But what this article and the comments don't say: open source is mainly a quality metric. I re-use code from popular open-source repos in part because others have used it without complaints (or documented the bugs), in part because people are embarrassed to write poor-quality open source so it's above-par code, and in part because if there are issues in this corner of the world, the dependency will solve them over time (and I can watch and wait when I don't have time to fix and contribute).
The quality aspect drives me to prefer dependencies over AI when I don't want full ownership, so I'll often ask AI to show open-source projects that do something well.
(As an aside, this article is about AI, but AI is so pervasive now as an issue that it doesn't even need saying in the title.)
> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
I look at it as why not have the best of both worlds? The docs for my JS framework all have the option of being returned as LLM-friendly text [1].
When I utilize this myself, it's to get help fleshing out skeleton code inside of an app built w/ my framework (e.g., Claude Sonnet w/ these docs in context builds a nearly 90-100% accurate implementation for most stuff I throw at it—anything from little lib stuff up to full-blown API endpoint design and even framework-level implementations of small stuff like helping to refactor the built-in middleware loading). It's not for a lack of desire to read, but rather a lack of desire to build every little thing from scratch when I know an LLM is perfectly capable (and much faster than me).
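For reference, the llms.txt proposal (llmstxt.org) is just markdown with a fixed shape: a title, a one-line summary, and curated links. A sketch (names and URLs are illustrative):

    # MyFramework
    > One-paragraph summary an agent can ingest before following links.

    ## Docs
    - [Quick start](https://example.com/docs/quick-start.md): install and first app
    - [API reference](https://example.com/docs/api.md): function-level docs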
Open source exists because coding was a significant effort and code was a thing of high value. Unsurprisingly companies hesitated to make the code public and free. All of this is changing now as coding has suddenly become trivial. So, yes, the mission of open source, in general, will be challenged.
Why wouldn't you? Your codebase (if you're a business) exists to make you money, people being able to copy some unknown portions of it without further license if they somehow legally get their hands on a copy of it seems entirely irrelevant.
PS. I think this is much less clear and much less settled law than you are suggesting.
It's more nuanced. If I can prove even a few lines are mine, those parts are copyrightable, in the same way Pride and Prejudice is public domain but Pride and Prejudice and Zombies is copyrighted.
1. Reducing dependencies is a wrong success metric. You just end up doing more work yourself, except you can't be an expert in everything, so your code is often strictly worse.
2. Regenerating the same solutions with a probabilistic machine will produce bugs a certain percentage of the time. Dependencies are always the same code (when versioned).
3. Cognitive overhead for human review is higher with LLM-generated libs, for no additional benefit.
> Reducing dependencies is a wrong success metric. You just end up doing more work yourself
Except it's just not true in many cases, because of social systems we've built. If I want to ship software to Debian, I have to make sure that every single one of my 3rd-party dependencies is registered and packaged as a proper Debian package - a lot of the time it will take much less work to rewrite some code than to get 25 100-lines-of-code micro-libraries accepted into Debian.
> Claude’s version is pretty close to the blob-util version (unsurprising, since it was probably trained on it!).
AI are thieves!
> I don’t know which direction we’re going in with AI (well, ~80% of us; to the remaining holdouts, I salute you and wish you godspeed!), but I do think it’s a future where we prize instant answers over teaching and understanding.
Google ruined its search engine years ago before AI already.
The big problem I see is that we have become WAY too dependent on these mega-corporations. Which browser are people using? Typically chrome. An evil company writes the code. And soon it will fire the remaining devs and replace them with AI. Which is kind of fitting.
> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
Documentation in general sucks. But documentation is also a hard problem.
I love examples. Small snippets. FAQs. Well, many projects barely have these.
Look at ruby webassembly/wasm or ruby opal. Their documentation is about 99% useless. Or, even worse - rack in ruby. And I did not notice this in the past, in part because e. g. StackOverflow still worked, there were many blogs which helped fill up missing information too. Well all of that is largely gone now or has been slurped up by AI spam.
> the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see node:glob, structuredClone, etc.), but LLMs are the final nail in the coffin.
I still think they have value, but watching organisations such as rubygems.org disrupt the ecosystem and bleed it dry by kicking out small hobbyists, I think there is indeed a trend towards eliminating the silly solo devs who think their unpaid spare time is not worth anything at all, while the big organisations eagerly throw down more and more restrictions onto them. My favourite example is the arbitrary 100k download limit for gems hosted at rubygems.org, but look at the new shiny corporate rules on rubygems.org - this is what happens when corporations take over the infrastructure and control it. Ironically this also happened to pypi, and they admit it indirectly: https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f... - of course they deny that corporations control pypi now, but the denial is itself telling, because this is how hobbyists get eliminated: just throw more and more restrictions at them without paying them. Sooner or later they decide to do something better with their time.
He almost got it right. It's not just the fate of small open source. It's the fate of all programmers now. Why hire a programmer when an LLM costs less, works faster and makes less mistakes (OP compliments better error handling, read the article).
Unless you are a product owner and you have paying clients that love you and your product and won't simply ditch it in favour of a new clone, you are really screwed.
"when an LLM costs less, works faster and makes less mistakes"... indeed, but it doesn't follow at all that it's the fate of all programmers _now_... at least in my experience none of these things are true ATM.
Well, at the very least it costs less than asking an intern to look for a lib doing something particular and give some examples... still about as accurate as the intern tho.
So far I've yet to see a non-programmer release production-grade code using only LLMs. There's just so much to care about (security, deployments, secret management, event-driven architectures, and a long list besides) that "just" providing a prompt to create an "app" doesn't cut it. You need infra, and non-engineers just don't know shit about infra (even if it's 99% managed); you need to deploy your llm-generated code in that infra, and that should probably happen in a ci/cd pipeline. And what about migrations? Git? Who's setting up the api gateway? I don't mean to say that LLMs don't know how to do that, but you need to instruct them to do so, and even then, they will make silly mistakes and you need to re-instruct them or fix it.
Prompting is just 50% of the work (and the easy part actually). Ask the Head of Product or whoever is there to deploy something valuable to production and maintain it for 6 months while not losing money. It's just not going to happen, not even with true AGI.
An LLM might be able to replace the majority of the code Sindre Sorhus has put out there, but it's probably a stretch to think that it could replace someone like John Carmack.
Trivial NPM libraries were never needed, but LLMs really are the nail in the coffin for them even when it comes to the most incompetent programmers because now they can literally just ask an LLM to spit out the exact same thing.
That type check is honestly not pointless at all. You can never be certain of your inputs in a web app. The likelihood of that parameter being something other than an arraybuffer is non-zero, and you generally want to have code coverage for that kind of undefined behavior. TypeScript doesn't complain without a reason.
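Concretely, the pattern being defended is a compile-time signature plus a runtime guard, since TypeScript types vanish at runtime and plain-JS callers can pass anything (a sketch; the function name mirrors blob-util's API, the body is illustrative):

    function arrayBufferToBlob(buffer: ArrayBuffer, type?: string): Blob {
      // The TS signature can't protect against untyped callers, so check at runtime too
      if (!(buffer instanceof ArrayBuffer)) {
        throw new TypeError('expected an ArrayBuffer, got ' +
          Object.prototype.toString.call(buffer));
      }
      return new Blob([buffer], type ? { type } : undefined);
    }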
I see this as an absolute win. The state of micro dependencies of js was a nightmare that only happened because a lot of undereducated developers flooded the market to get that sweet faang money.
Now that both have dried up I hope we can close the vault door on js and have people learn how to code again.
The best outcome was things like jquery and then lodash where a whole collection of small util functions get rolled in to one package.
Oh god, without tree shaking, lodash is such a blight.
I've seen so many tiny packages pull in lodash for some little utility method so many times. 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've also seen plenty of projects which somehow include multiple copies of lodash in their dependency tree.
Its such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. Javascript includes almost everything you need these days anyway. And when it doesn't, the kind of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that manually rather than pull in some 70kb dependency.
Unless you're part of the demoscene or your webpage is being loaded by Voyager II, why is 70kb of source code a problem?
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
I agree the JS standard library includes most of the stuff you need these days, rendering jquery and half of lodash irrelevant now. But there's still a lot of useful utilities in lodash, and potentially a new project could curate a collection of new, still relevant utilities.
Can you give some examples? I’ve written js / ts for 10-15 years and I’ve never reached for lodash in my life.
_.any, some, every, keyBy.
_.any doesn’t seem to exist.
Some and every are also in JS (with the same names even!).
keyBy is just Array.map -> Object.fromEntries
The problem with helper functions is that they're often very easy to write, but very hard to figure out the types for.
Take a generic function that recursively converts snake_case object keys to pascalCase. That's about 10 lines of Javascript, you can write that in 2 mins if you're a competent dev. Figuring out the types for it can be done, but you really need a lot of ts expertise to pull it off.
Not really familiar with TS, but what would be so weird with the typing? Wouldn't it be generic over `T -> U`, with T the type with snake_case fields and U the type with pascalCase fields?
Turns out in TypeScript you can model the conversion of the keys themselves from snake_case to pascalCase within the type system[0]. I assume they meant that this was the difficult part.
[0]: https://stackoverflow.com/questions/60269936/typescript-conv...
Your first sentence cheers that we're moving from NPM micropackages to LLM-generated code, and then you say this will result in people having to learn to code again.
I don't see how the conclusion follows from this.
There will be many LLM-generated functions purporting to do the same thing, and a bug in one of them that gets fixed means only one project gets fixed instead of every project using an NPM package as a dependency.
There will be many LLM-generated functions doing the same thing, in the same project, unless the human pays attention.
I've been playing a lot recently with various models, lately with the expensive claude models (API, for the large context windows), and in every attempt things are really impressive at the beginning, and start going south once the codebase reaches about 10k to 15k lines of code. Even with tools split out into a separate library and separate documentation at that point it has a tendency to generate tool functions again in the module it's currently working on over taking the already defined one in the helper library.
You mention that you don’t see how this could result in developers that write code. Then you immediately follow that with fixing bugs.
Yes, use of LLMs is still developers not writing original code, but it’s still an improvement (a minor one) of the copy/paste of micro dependencies.
Ultimately developers aren’t going to figure out writing original code until they are forced to do so from changed conditions within their employer.
Now you got vibe coder
I don't quite understand your argument. Wasn't the post about how users might replace transparent dependencies with transparent LLM drop-ins? I don't see how having an LLM to do the same job would enable someone to learn more. They're probably the kind of person who will ask the LLM to perform a refactor when problems arise, so they won't learn that much through osmosis.
Less incentive to write small libraries. Less incentive to write small tutorials on your own website. Unless you are a hacker or a spammer where your incentives have probably increased. We are entering the era of cheap spam of everything with little incentive for quality. All this for the best case outcome of most people being made unemployed and rolling the dice on society reorganising to that reality.
> or a spammer where your incentives have probably increased.
Slight pushback on this. The web has been spammed with subpar tutorials for ages now. The kind of medium "articles" that are nothing more than "getting started" steps + slop that got popular circa 2017-2019 is imo worse than the listy-boldy-emojy-filled articles that the LLMs come up with. So nothing gained, nothing lost imo. You still have to learn how to skim and get signals quickly.
I'd actually argue that now it's easier to winnow the slop. I can point my cc running in a devcontainer to a "tutorial" or lib / git repo and say something like "implement this as an example covering x and y, success condition is this and that, I want it to work like this, etc.", and come back and see if it works. It's like a litmus test of a tutorial/approach/repo. Can my cc understand it? Then it'll be worth my time looking into it. If it can't, well, find a different one.
I think we're seeing the "low hanging fruit" of slop right now, and there's an overcorrection of attitude against "AI". But I also see that I get more and more workflows working for me, more or less tailored, more or less adapted for me and my uses. That's cool. And it's powered by the same underlying tech.
The thing is, what is the actual point of this approach? Is it for leaning? I strongly believe there’s no learning without inmersion and practice. Is it for automation? The whole idea of automation is to not think about the thing again unless there’s a catastrophic error, it’s not about babysitting a machine. Is it about judgment? Judgment is something you hone by experiencing stuff then deciding whether it’s bad or not. It’s not something you delegate lightly.
The problem isn't that AI slop is doing something new. Phishing, blogspam, time wasting PRs, website scraping, etc have all existed before.
The problem is that AI makes all of that far, far easier.
Even using tooling to filter articles doesn't scale as slop grows to be a larger and larger percentage of content, and it means I'm going to have to consider prompt injections and running arbitrary code. All of this is a race to the bottom of suck.
The difference is that the cost of slop has decreased by orders of magnitude. What happens when only 1 in 10,000 of those tutorials you can find is any good, from someone actually qualified to write it?
What happens when the monkeys stop getting bananas to work on the typewriters? More stories?
One instance of definite benefit of AI is AI summary web search. Searching for answers to simple questions and not having to cut though SEO slop is such an improvement
The summary is often incorrect in at least some subtle details, which is invisible to a lot of people who do not understand LLM limitations.
Now, we can argue that a typical SEO-optimized garbage article is not better, but I feel like the trust score for them was lower on average from a typical person.
I don't think searching for answers to simple questions was a problem until Google nerfed their own search engine.
I don't understand this position, do you have direct evidence that Google actively made search worse? Before I'm misunderstood I do want to clarify that IMO, the end user experience for web searching on Google is much worse in 2025 than it was in say 2000. But, the web was also much much smaller, less commercial and the SNR was much better in general.
Sure, web search companies moved away from direct keyword matching to much more complex "semantics-adjacent" matching algorithms. But we don't have the counterfactual keyword-based Google search algorithm from 2000 on data from 2025 to claim that it's just search getting worse, or the problem simply getting much harder over time and Google failing to keep up with it.
In light of that, I'm much more inclined to believe that it's SEO spam becoming an industry that killed web search instead of companies "nerfing their own search engines".
Pretty sure Google attempting to curb SEO tactics is what led to whatever nerfing you are talking about.
There was a time before SEO slop when web search was really valuable.
We're fighting slop with condensed slop
Hard disagree. AI summaries are useless for the same reason AI summaries from Google and DDG are useless: it's almost always missing the context. The AI page summaries typically take the form of "here's the type of message that the author of this page is trying to convey" instead of "here's what the page actually says". Just give me the fucking contents. If I wanted AI slop I'd ask my fucking doorknob.
I think you have some of your wires crossed; asking Google for "here's the type of message that the author of this page is trying to convey" is not what most people mean by a simple question (and asking Google to reprint copyrighted material is also a non-starter). Asking Google "what is the flag for preserving metadata using scp" and getting the flag name, instead of an SEO article with a misleading title that goes on about some third-party program you can download that does exactly that and never actually tells you the answer, is a real improvement. I am happy AI has helped reduce the clickbait.
> We are entering the era of cheap spam of everything with little incentive for quality
Correction -- sadly, we're already well within this era
I was searching a specific niche on Youtube today, and scrolled endlessly trying to find something that wasn't AI generated. Youtube is being completely spammed.
But some webdev said they are 10x faster now, so it can't be bad for humanity /s
Quick! Tell me what this does or you're not a real programmer, not a craftsman, and are a total hack who understands nothing about quality (may not even be a good person, and want to lay off bread-and-butter red-blooded American programmers)
  .global main
  .text
  main:
      mflr 27
      mr 13,3
      mr 14,4
      addi 3,3,48
      bl putchar
      li 3,10
      bl putchar
  next:
      lwz 3,0(14)
      bl puts
      addi 14,14,4
      addi 13,13,-1
      cmpwi 13,0
      bgt next
      lis 3,zone@ha
      addi 3,3,zone@l
      bl puts
      mtlr 27
      mr 3,13
      blr
  .data
  .align 2
  zone:
      .string "Bonjour"
Upvoting because it's a salient point and downvoters are mad. HN has a complex lately.
> the era of small, low-value libraries like blob-util is over.
Thankfully so (nothing against blob-util specifically, because I've never intentionally used it). I wouldn't completely blame LLMs either, since languages like Go never had this dependency hell.
npm is a security nightmare not just because of npm the package manager, because the culture of the language rewards behavior such as "left-pad".
Instead of writing endless utilities for other projects to re-use, write actual working things instead - that's where the value/fun is.
But as Go puts it:
“A little copying is better than a little dependency.”
https://go-proverbs.github.io/
Copying is just as much dependency, you just have to do maintenance through manual find-and-replace now.
> you just have to do maintenance through manual find-and-replace now
Do you? It doesn't seem even remotely like an apples-to-apples comparison to me.
If you're the author of a library, you have to cover every possible way in which your code might be used. Most of the "maintenance" ends up being due to some bug report coming from a user who is not doing things in the way you anticipated, and you have to adjust your library (possibly causing more bugs) to accommodate, etc.
If you instead imagine the same functionality being just another private thing within your application, you only need to make sure that functionality works in the one single way you're using it. You don't have to make it arbitrarily general purpose. You can do error handling elsewhere in your app. You can test it only against the range of inputs you've already ensured are the case in your app, etc. The amount of "maintenance" is tiny by comparison to what a library maintainer would have to be doing.
It seems obvious to me that "maintenance" means a much more limited thing when talking about some functionality that the rest of your app is using (and which you can test against the way you're using it), versus a public library that everyone is using and needs to work for everyone's usage of it.
> If you're the author of a library, you have to cover every possible way in which your code might be used.
You don't actually. You write the library for how you use it, and you accept pull requests that extend it if you feel it has merit.
If you don't, people are free to fork it and pull in your improvements periodically. Or their fork gets more popular, and you get to swap in a library that is now better-maintained by the community.
As long as you pin your package, you're better off. Replicating code pretty quickly stops making sense.
It's a rare developer (or human for that matter) who can just shrug and say "fork off" when asked for help with their library.
It would be healthy if that became more common; in fact the privately-owned public garden model of the Valetudo project [1] is the sanest way for FOSS maintainers to look at their projects.
[1]: https://github.com/Hypfer/Valetudo#valetudo-is-a-garden
Copied text does not inject bitcoin mining malware three months after I paste it.
Neither does a dependency you don't update, though, which is isomorphic to copied code you never update.
Usually these types of things never change. I understand that all code is a liability, but npm takes this way too far. Many utility functions can be left untouched for many years, if not forever.
It's not NPM. It's JS culture. I've spent a lot of time programming in TypeScript, and it never fails that in JS programmer circles they are constantly talking about updating all their packages, completely befuddled as to why I'd be using some multiple-year-old version of a library in production, etc.
Meanwhile Java goes the other way: twenty-year-old packages that are serious blockers to improved readability. Running Java that doesn't even support Optional (Java's name for Option/Maybe).
Java writes to a bytecode spec that has failed to keep up with reality, to its detriment. Web development keeps up with an evolving spec pushed forward by compatibility with what users are actually using. This is "culture" only in the most distant, useless sense of the word. It is instead context, which welcomes it back into the world of just fucking developing software, no matter how grey-haired HN gets with rage while the world moves on.
EDIT: Obvious from the rest of your responses in this thread that this is trolling, leaving this up for posterity only
Most of these util libraries require basically no changes ever. The problem is the package maintainers getting hacked and malicious versions getting pushed out.
If you use an LLM to generate a function, it will never be updated.
So why not do the same thing with a dependency? Install it once and never update it (and therefore hacked and malicious versions can never arrive in your dependency tree).
You're a JS developer, right? That's the group who thinks a programmer's job includes constantly updating dependencies to the latest version.
You're not a web developer, right? See my other comment about context if you want to learn more about the role of context in software development in general. If you keep repeating whatever point you're trying to make about some imaginary driving force to pointlessly update dependencies in web dev, you'll probably continue to embarrass yourself, but it's not hard to understand if you read about it instead of repeating the same drivel under every comment in this thread.
> Install it once and never update it (and therefore hacked and malicious versions can never arrive in your dependency tree).
Huh? What if your once-off installation or vendoring IS a hacked and malicious version, and you never realise and never update it? That's worse.
Hardly worth responding to, from other comments they're defending Java. They're not used to updates.
Keyword: little.
Dependencies need to pull their own weight.
Shifting responsibilities is a risk that the value added needs to offset.
Yeah it's the main thing I really dislike about this - how do you make sure you know where it's from? (ie licensing) What if there are updates you need? Are you going to maintain it forever?
For some definition of "small piece of code" that may be ok, but also sometimes this is more than folks consider
Do you know that you can just add a small text file or a comment explaining that a module is vendored code? And updates are handled the same way as for the rest of the code. And you will be “maintaining” it as long as you need to. Libraries are not “here be dragons” territory best left to the adventurous ones.
If I vendor a dependency that currently works for what my program does, I only have to care about it again if a security hole is discovered in it or if my program changes and the dependency is insufficient in some way. I don't have to worry about the person I'm importing code from going weird or introducing a bug that affects me.
Yep, something like blob-util could be a blog post or a gist (or several Stack Overflow answers), and a lot of NPM libraries fall under that. People always blame the anemic standard library of JavaScript, forgetting that the C standard library is even smaller.
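For a sense of scale, the kind of helper at issue really is gist-sized. A minimal sketch of one such function (my own illustration of the pattern, not the actual blob-util source), assuming a browser environment:

    // Convert a base64 string to a Blob in the browser - roughly the
    // shape of helper that blob-util-style packages provide.
    function base64StringToBlob(base64: string, type = 'application/octet-stream'): Blob {
      const binary = atob(base64);                 // decode base64 to a binary string
      const bytes = new Uint8Array(binary.length);
      for (let i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);           // copy each byte value
      }
      return new Blob([bytes], { type });
    }

Ten lines, no dependencies; whether that lives in a package, a gist, or your own repo is exactly the question this thread is circling.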
> since languages like Go never had this dependency hell
What is the feature of Go that this is referring to?
It's a cultural thing.
And libraries try harder not to have absurd dependencies, than finished products (correctly, IMO).
I am sure I am not the only one who thinks these micro-dependencies are worthless anyway. You'd be better off just listing the functions in a markdown file for people to copy over than ship an entire package for it.
This isn't "small" open source, "small" would be something you put together in a week or weekend. These are like "micro" projects, where more work goes into actually publishing and maintaining the repository than actually writing the library.
I like the approach C sometimes takes, with the "tiny header file" type of libraries. Though I guess that also stems from the lack of a central build system.
Why aren't those tiny header file libraries just part of the standard C library?
Wait sorry, I don't mean that. I read too many bog-standard HN comments about NPM above.
What's your copy& paste solution to security updates?
> I’m still trying to figure out what kinds of open source are worth writing in this new era
Is there any upside to open-sourcing anything anymore? Anything published today becomes training data for the next model, with no attribution to the original work.
If the goal is to experiment, share ideas, or let others learn from the work, maybe the better default now is "source available", instead of FOSS in the classic sense. It gives people visibility while setting clearer boundaries on how the work can be used.
I learned most of what I know thanks to FOSS projects so I'm still on the fence on this.
I keep seeing this attitude and I don't really understand it at all; the claim is that there's no upside to publishing open source work, even though it might be utilized by more people, is that correct?
Or is it the attribution? There are many many libraries I have used and continue to use and I don't know the author's internet handle or Christian name. Does that matter? Why?
I have written a lot of code that my name is no longer attached to. I don't care and I don't know why anyone does. If it were valuable I would have made more money off of it in the first place, and I don't have the ego to just care that people know it's my code either.
I want the things I do today to have an upside for people in the future. If that means I write code that gets incorporated into a model that people use to build things N number of years from now, that's great. That's awesome. Why the hell is that apparently so demotivating to some people?
You're not obligated to give away your mind for free. You're free to share, of course. But sharing implies reciprocity, a back and forth. The internet used to be like that, but if the environment changes, you adapt your behavior accordingly.
In the long run I think it's time to starve the system of input until its attitude reverts to reciprocal. It's not what I'd want, but it seems necessary. People learn from consequences, not from words alone.
Sorry but source-available is probably going to get slurped up for training data as well
Microsoft already did this for all code in every public repo.
When are they going to start doing it for private repos too...
Staying true to free software principles. It's unethical to publish nonfree code or binaries.
Code is only useful if it's used. I could write a ton of code and be buried with it, or publish it for people (or AI software, or dolphins or aliens) to use. Who has the energy to have Anubis measure whether my code, or yours, is ethical enough? I'm going to die someday!
This is kinda how I've felt for months. I don't have any interest in continuing existing open source projects and don't want to create any new ones.
What's the point?
All of my personal projects for the past few months have been entirely private, I don't even host them on Github anymore, I have a private Forgejo instance I use instead.
I also don't trust any new open source project I stumble upon anymore unless I know it was started at least a year ago.
I don’t think open source is going anywhere. It’s poised to get significantly stronger — as the devs who care about it learn how to leverage AI tools to make things that corporate greasemonkeys never had the inspiration to. Low quality code spammers are just marketing themselves for jobs where they can be themselves: soulless and devoid of creative impulse.
That’s the thing: open source is the only place where the true value (or lack of value) of these tools can be established — the only place where one can test mettle against metal in a completely unconstrained way.
Did you ever want to build a compiler (or an equally complex artifact) but got stuck on various details? Try now. It’s going to stand up something half-baked, and as you refine it, you will learn those details — but you’ll also learn that you can productively use AI to reach past the limits of your knowledge, to make what’s beyond a little more palatable.
All the things people say about AI are true to some degree: my take is that some people are rolling the slots to win a CRUD app, and others are trying to use it to do things that they could only imagine before — and open source tends to be the home of the latter group.
True innovation will come from open source for sure, as the developers don't have the same economic incentives to be "safe", "ethical", "profitable" or whatever. Large corporations know this and fear this development. That's why I expect significant lobbying to take hold in the USA that will try to make local AI systems illegal. And I think they will be very convincing to the government, because the government also fears the "peasants" and giving them any true semblance of real AGI-like systems. I bet very soon we will start seeing various classifications that will define what is legal and what is not for a citizen to possess or use.
> That's why i expect a significant lobbying to take hold in USA that will try and make local AI systems illegal.
I think they're going to be using porn and terrorism (as usual) to do that, but also child suicide. I also think they're going to leverage this rhetoric to lock down OSes in general, by making them uninstallable on legally-available hardware unless approved, because approved OSes will only be able to run approved LLMs.
Meaning that I think LLMs/generative AI will be the lever to eliminate general-purpose computing. As mobile went, so will desktop.
I think this is inevitable. The real question for me is whether China will partner with the west on this, or whether we will be trading Chinese CPUs with each other like contraband in order to run what we want.
> any true semblance of real AGI like systems.
This is the only part I don't agree with. This isn't going to happen, but I'm not even sure it would be more useful than what we have. We have billions of full AGI machines walking around, and most of them aren't great. I'm talking about restrictions on something technically barely better than what we have now; maybe just significantly more compute-efficient. Training techniques will probably be where we get the most improvements.
> It’s posed to get significantly stronger
It's really not. Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue about what is involved in working on long-running, difficult projects or how offensive it is to just slather some slop on a bug report and demand it is given scrutiny.
Even at the 10,000 feet view it has wasted people's time because they have to sit down and have a policy discussion about whether to accept AI submissions, which involves people reheating a lot of anecdotal claims about productivity.
Having learned a bit about how to write compilers I know enough to know that I can guarantee you that an AI cannot help you solve the difficult problems that compiler-building tools and existing libraries cannot solve.
It's the same as it is with any topic: the tools exist and they could be improved, but instead we have people shoehorning AI bollocks into everything.
This isn't an AI issue. It is a care issue. People shouldn't submit PRs to a project when they don't care enough to understand the project they are submitting to or the code they are submitting. This has always been a problem; there is nothing new here. The thing that is new is that more people can get to the point where they can submit, regardless of their care or understanding. A lot of people are trying to gild their resume by saying they contributed to a project. Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.
This is an AI issue because people, including the developers of AI tools, don't care enough.
The Tragedy Of The Commons is always about this: people want what they want, and they do not care to prevent the tragedy, if they even recognise it.
> Project owners should instead be working ways to filter out careless code more efficiently.
Great. So the industry creates a burden and then forces people to deal with it — I guess it's an opportunity to sell some AI detection tools.
Sounds like a lot of FUD to me — if major projects balk at the emergence of new classes of tools, perhaps the management strategy wasn’t resilient in the first place?
Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.
In such a setting, you’re working within a trusted party — and for a major project, that likely means extremely competent maintainers and contributors.
I don’t think these people will have any difficulty adapting to the usage of these tools …
> Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.
It is a waste of time for large-scale volunteer-led projects who now have to deal with tons of shit — when the very topic is "how do we fend off this stuff that we do not want, because our project relies on much deeper knowledge than these submissions ever demonstrate?"
yeah we are getting lots of "I don't know how to do this and AI gave me this code that doesn't work, can you fix it" or "AI said it can do this" and the feature doesn't exist... some people will even argue and say "but AI said it doesn't take long, why won't you add it"
It weaponises incompetence, carelessness and arrogance at every turn.
AI, to me, is a character test: I'm regularly fascinated by finding out who fails it.
For example, in my personal life I have been treated to AI-generated comms from someone that I would never have expected it from. They don't know I know, and they don't know that I think less of them, and I always will.
This author assumes that open sourcing a package only delivers value if it is added as a dependency. Publicly sharing code with a permissive license is still useful, and a radical idea.
Yeah if I find some (small) unmaintained code I need, I will just copy it (then add in my metrics and logging standards :)
It shouldn't be a radical idea, it is how science overall works.
Also, as per the educational side, I find in modern software ecosystem, I don't want to learn everything. Excellent new things or dominantly popular new things, sure, but there are a lot of branching paths of what to learn next, and having Claude code whip up a good enough solution is fine and lets me focus on less, more deeply.
(Note: I tried leaving this comment on the blog but my phone keyboard never opened despite a lot of clicking, and on mastodon but hit the length limit).
A copyleft license is much better because it ensures that it will remain open and in most cases large companies won't use it, making sure they will have to shell out the money to hire someone to do it instead.
Yep. Even just sharing code with any license is valuable. Much I have learned from reading an implementation of code I have never run even once. Solutions to tough problems are an under-recognized form of generosity.
This is a point where the lack of alignment between the free beer crowd and those they depend on is all too clear. The free beer enthusiast cannot imagine benefiting from anything other than a finished work. They are concerned about the efficient use of scarce development bandwidth without consciousness of why it is scarce or that it is not theirs to direct. They view solutions without a hot package cache as a form of waste, oblivious to how such solutions expedite the development of all other tools they depend on, commercial or free.
I do agree with this, but there are some caveats. At the end of the day it is time people invest into a project. And that is often unpaid time.
Now, that does not mean it has no value, but it is a trade-off. After about 14 years, for instance, I retired permanently from rubygems.org in 2024 due to the 100k download limit (and now I wouldn't use it after the shameful moves RubyCentral did, as well as the new shiny corporate rules with which I couldn't operate within anyway; it is now a private structure owned by Shopify. Good luck finding people who want to invest their own unpaid spare time into anything tainted by corporations here).
> Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Use of an AI to write your code is also a form of dependency. When the LLM spits out code and you just dump it in your project with limited vetting, that's not really that different from vendoring a dependency. It has a different set of risks, but it still has risks.
Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.
Not including the unused features not only makes the code you are adding easier to read and understand, but may also be more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
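To make that concrete (the type and names here are hypothetical, purely for illustration): a generic deep-clone library has to handle Dates, Maps, cycles, and more, while an in-app helper only has to handle the one shape you actually store.

    // Hypothetical: the only thing this app ever clones is its Settings
    // object, so "clone" can be three lines instead of a generic dependency.
    interface Settings { theme: string; fontSize: number; tags: string[] }

    function cloneSettings(s: Settings): Settings {
      return { theme: s.theme, fontSize: s.fontSize, tags: [...s.tags] };
    }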
But in a lot of cases you can't know all the dependencies, so you lean on the community trusting that a package solves the problem well enough that you can abstract it.
You can pin the dependency and review the changes for security reasons, but fully grasping the logic is non-trivial.
Smaller dependencies are fine to copy at first, but at some point the codebase becomes too big, so you abstract it and at that point it becomes a self-maintained dependency. Which is a fair decision, but it is all about tradeoffs and sometimes too costly.
You'd get those benefits from traditional dependencies if you copy them in and never update. Is an AI dependency going to have the equivalent of "upstream fixes"?
Probably? LLMs will train on the fixes, so if you run the code through the LLM again it may pick them up.
> Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.
In decent ecosystems there should be low or zero overhead to that.
> Not including the unused features not only makes the code you are adding easier to read and understand, but may also be more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
Maybe. I find generic code is often easier to read than specialised custom implementations, because there is necessarily a proper separation of concerns in the generic version.
Right, but you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?" or "will the author be compromised in a supply-chain attack, or do a deliberate protestware attack?" etc. As for performance, a lot of npm packages don't have proper tree-shaking, so you might be taking on extra bloat (or installation cost). Your point is well-taken, though.
You can avoid all those worries by vendoring the code anyway. You only 'need' to update it if you are pulling it in as a separate dependency.
> you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?"
This is not a worry with NPM. You can just specify a specific version of a dependency in your package.json, and it'll never be updated ever.
I have noticed for years that the JS community is obsessed with updating every package to the latest version no matter what. It's maddening. If it's not broke, don't fix it!
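Concretely, pinning just means an exact version with no semver range operator (the version number below is illustrative):

    {
      "dependencies": {
        "blob-util": "2.0.2"
      }
    }

With no ^ or ~ prefix, npm will never float this to a newer release on install; combined with a committed lockfile, the resolved tree stays identical until you change it yourself.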
Wouldn't call it a risk in itself, but part of the benefit of using a library, a good and tailored one at least, is that it'll get modernised without my intervention. Even if the code produced for you was state-of-the-art at the moment of inclusion, will it remain that way 5 years from now?
> and you just dump it in your project with limited vetting
Well yes, there’s your problem. But people have been doing this with random snippets found on the internet for a while now. The truth is that irresponsible developers will produce irresponsible code, with or without LLMs.
> The truth is that irresponsible developers will produce irresponsible code, with or without LLMs

True. But the difference is the scale and ease of doing it with code generators. With a few clicks you can add hundreds of lines of code which supposedly do the right thing. In the past, you would get code snippets for a particular aspect of the problem you were trying to solve; you still had to figure out how to add them to your code base and somehow make them “work”.
Surely in any responsible development environment those hundreds of lines of code still have to be reviewed.
Or don't people do code review any more? I suppose one could outsource the code review to an AI, preferably not the one that wrote it though. But if you do that surely you will end up building systems that no one understands at all.
Agree. Any reasonable team should have code reviews in place, but an irresponsible coder would push the responsibility of code quality and correctness to code reviewers. They were doing it earlier too, but the scale and scope was much smaller.
The main point isn't about dependencies but about losing the mindset of learning from small domain problems.
AI offering the solution for a small problem that probably doesn’t deserve yet another dependency suggests to me that there’s a middle ground that we’ve failed to sufficiently cover: how to socialize code snippets that you’re meant to just inline into your project. Stack Overflow is probably the closest we’ve gotten to a generalized solution and it doesn’t exactly feel like a good one.
I came across this once before in the form of a React hooks library that had no dependency to install. It was just a website, and when you found the hook you wanted you were meant to paste it into your project.
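A hypothetical example of that copy-paste pattern (my own sketch, not taken from the library mentioned above): a debounce hook small enough to inline rather than install.

    import { useEffect, useState } from 'react';

    // Returns `value`, but only after it has stopped changing for `delayMs`.
    // Meant to be pasted into a project, not published as a package.
    function useDebouncedValue<T>(value: T, delayMs = 300): T {
      const [debounced, setDebounced] = useState(value);
      useEffect(() => {
        const id = setTimeout(() => setDebounced(value), delayMs);
        return () => clearTimeout(id); // cancel if value changes before the delay elapses
      }, [value, delayMs]);
      return debounced;
    }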
More likely, what we will see is the decline of low-effort projects. The JavaScript/TypeScript ecosystem has been plagued with such packages, but that’s more an anomaly of the JS community than a systemic problem with open source in general.
So if fewer people are including silly dependencies like isEven or leftPad, then I see that as a positive outcome.
Right now I tend to not use an external library unless the code would be large e.g. http server and/or the library is extremely popular.
Otherwise, writing it myself is much better. It's more customizable and often much smaller in size, because a library has to generalize, and that comes with bloat.
Using AI to help write it is great, because I need to understand that code anyway, whether AI writes it, I write it, or it comes from an external library.
One example recently is that I built a virtual list myself. The code is much smaller and simpler compared to other popular libraries. But of course it's not as generalizable.
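For a sense of how small the core of a hand-rolled virtual list can be, here is a sketch assuming fixed-height rows (the parent comment's actual implementation may well differ):

    // Given the scroll position, compute which rows are visible.
    // Render only items[start..end), each offset by index * itemHeight.
    function visibleRange(
      scrollTop: number,
      viewportHeight: number,
      itemHeight: number,
      totalItems: number,
    ): { start: number; end: number } {
      const start = Math.max(0, Math.floor(scrollTop / itemHeight));
      const end = Math.min(totalItems, Math.ceil((scrollTop + viewportHeight) / itemHeight));
      return { start, end };
    }

The general-purpose libraries earn their size by also handling variable row heights, horizontal lists, sticky items, and so on; if you need none of that, the tailored version stays this small.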
Chances are, even if you deliberately and strategically pick an OSS project to work on that you are positively sure an LLM can’t just spit out on command, it will be capable of doing so by the time you are close to completion. In that sense, one has to either be disinclined to ask “what’s the point” or have a bit of a gambling mentality in order to work on anything substantial.
That’s not automatically a problem, however. The problem is that even if you do come up with a really cool idea that an LLM is not capable of autocompleting, and you release it under a copyleft license (to ensure the project survives and volunteer contributors’ work is not adopted and extinguished by some commercial interest), it will get incorporated into a training dataset regardless of the licensing. Thereafter the LLM will be capable of spitting it out, and its large corporate operator will be able to monetise your code, allowing anyone with money to build a commercial product based on it.
I suppose some people would see this as progress: fewer dependencies, more robust code (even if it’s a bit more verbose), quicker turnaround time than the old “search npm, find a package, read the docs, install it” approach.
Why would randomized code be more robust? Also, how is searching/reading the docs slower than checking the correctness of a randomized function?
> ….but I do think it’s a future where we prize instant answers over teaching and understanding
It depends. For stuff I don’t care about I’m happy to treat it as a black box. Conversely, AI now allows me to do deep dive on essentially anything I’m interested in, which has been a massive boon to my learning ability.
I don't buy the education angle.
If you're not learning to code, then you want efficient code, so the comments are wasted bytes (ok, not a huge expense, but still).
If you are learning to code, or just want to understand how this code works, then asking an LLM is going to get a lot better result. LLMs are fantastic tutors. Endlessly patient, go down any rabbit hole with you, will continue explaining a concept until you get it, etc. I think they're going to revolutionise education, especially for practical subjects like coding.
Respect to the author for trying to educate, though.
> so the comments are wasted bytes
Is there any modern compiler where the output code has anything to do with the comments in the source?
LLMs give us the opportunity to work on more complex projects and gain fuller understanding of the problem space and concepts. Or create tons of slop. Take your pick.
Small open source is still valuable, but the bar is higher. If your project is trivial and its only claim is that nobody thought to do it before you, it's probably not going to survive. But if your project is a small, focused tool that handles something difficult really well, it's 100% got a future.
Earlier this year I wrote an interpreter for a niche, proprietary binary format. Someone asked if I could open source it so that they could (more easily) run it on NixOS. I declined as I'm just that strongly opposed to my work being used to train AI models and further entrench the enshittification of the internet.
TLDR: [AI promises] a future where we prize instant answers over teaching and understanding
But what this article and the comments don't say: open source is mainly a quality metric. I re-use code from popular open-source repos in part because others have used it without complaints (or document the bugs), in part because people are embarrassed to write poor-quality open source, so it's above-par code, and in part because if there are issues in this corner of the world, this dependency will solve them over time (and I can watch and wait when I don't have time to fix and contribute).
The quality aspect drives me to prefer dependencies over AI when I don't want full ownership, so I'll often ask AI to show open-source projects that do something well.
(As an aside, this article is about AI, but AI is so pervasive now as an issue that it doesn't even need saying in the title.)
> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
I look at it as why not have the best of both worlds? The docs for my JS framework all have the option of being returned as LLM-friendly text [1].
When I utilize this myself, it's to get help fleshing out skeleton code inside of an app built w/ my framework (e.g., Claude Sonnet w/ these docs in context builds a nearly 90-100% accurate implementation for most stuff I throw at it—anything from little lib stuff up to full-blown API endpoint design, and even framework-level implementations of small stuff like helping to refactor the built-in middleware loading). It's not for a lack of desire to read, but rather a lack of desire to build every little thing from scratch when I know an LLM is perfectly capable (and much faster than me).
[1] https://docs.cheatcode.co/joystick/ui/component/dynamic-page...
> I don’t know which direction we’re going in with AI
Maybe programming languages will be designed for AIs in the future. Maybe they'll have features that make grafting unknown generated code easier.
Open source exists because coding was a significant effort and code was a thing of high value. Unsurprisingly companies hesitated to make the code public and free. All of this is changing now as coding has suddenly become trivial. So, yes, the mission of open source, in general, will be challenged.
In the U.S., anything machine generated is uncopyrightable.
Why would you put uncopyrightable code into your codebase?
Why wouldn't you? Your codebase (if you're a business) exists to make you money, people being able to copy some unknown portions of it without further license if they somehow legally get their hands on a copy of it seems entirely irrelevant.
PS. I think this is much less clear and much less settled law than you are suggesting.
It's more nuanced. If I have even a few lines I can prove are mine, those parts are copyrightable, in the same way Pride and Prejudice is public domain but Pride and Prejudice and Zombies is copyrighted.
Even worse... unmaintained code. Only the human-written one has a maintainer. The other one, plagiarised by AI, is instant legacy code.
> The other one, plagiarised by AI, is instant legacy code
I have used this "instant legacy code" concept before. It's absolutely true, IMO. But people really, really, really hate hearing it.
Autocomplete has been around for decades
I think if you dig a little deeper you will find that the answer is not so black and white.
I would have, and did, write this instead of including it anyway. These small npm packages that take more time to look up than to write are a pest.
It would be a cool feature of AI to include only the subset of the library you use and nothing else.
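Bundlers already get part of the way there when a library cooperates; for example, lodash ships per-method entry points, and its ES-module build lets tree shaking drop what you don't import:

    // Pulls in only the debounce module graph rather than all of lodash.
    import debounce from 'lodash/debounce';

    // Or, with the ES-module build, a tree-shakeable named import:
    import { throttle } from 'lodash-es';

An AI that extracts just the code paths you actually exercise would go a step further than either.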
Several issues:
1. Reducing dependencies is a wrong success metric. You just end up doing more work yourself, except you can't be an expert in everything, so your code is often strictly worse.
2. Regenerating the same solutions with a probabilistic machine will produce bugs a certain percentage of the time. Dependencies are always the same code (when versioned).
3. Cognitive overhead for human review is higher with LLM-generated libs, for no additional benefit.
> Reducing dependencies is a wrong success metric. You just end up doing more work yourself
Except it's just not true in many cases, because of the social systems we've built. If I want to ship software to Debian, I have to make sure that every single one of my 3rd-party dependencies is registered and packaged as a proper Debian package - a lot of the time it will take much less work to rewrite some code than to get 25 100-lines-of-code micro-libraries accepted into Debian.
> Claude’s version is pretty close to the blob-util version (unsurprising, since it was probably trained on it!).
AI are thieves!
> I don’t know which direction we’re going in with AI (well, ~80% of us; to the remaining holdouts, I salute you and wish you godspeed!), but I do think it’s a future where we prize instant answers over teaching and understanding.
Google ruined its search engine years ago, well before AI.
The big problem I see is that we have become WAY too dependent on these mega-corporations. Which browser are people using? Typically chrome. An evil company writes the code. And soon it will fire the remaining devs and replace them with AI. Which is kind of fitting.
> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
Documentation in general sucks. But documentation is also a hard problem.
I love examples. Small snippets. FAQs. Well, many projects barely have these.
Look at Ruby WebAssembly/wasm or Ruby Opal. Their documentation is about 99% useless. Or, even worse - Rack in Ruby. And I did not notice this in the past, in part because e.g. StackOverflow still worked, and there were many blogs which helped fill in the missing information too. Well, all of that is largely gone now or has been slurped up by AI spam.
> the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see node:glob, structuredClone, etc.), but LLMs are the final nail in the coffin.
I still think they have value. But looking at organisations such as rubygems.org disrupting the ecosystem and bleeding it dry by kicking out small hobbyists, I think there is indeed a trend towards eliminating the silly solo devs who think their unpaid spare time is not worth anything at all, while the big organisations eagerly throw down more and more restrictions onto them. My favourite example is the arbitrary 100k download limit for gems hosted at rubygems.org, but look at the new shiny corporate rules on rubygems.org - this is what happens when corporations take over the infrastructure and control it. Ironically this also happened to pypi, and they admit it indirectly: https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f... - of course they deny that corporations control pypi now, but to me the denial is itself an admission, because this is how hobbyists get eliminated: just throw more and more restrictions at them without paying them. Sooner or later they decide to do something better with their time.
He almost got it right. It's not just the fate of small open source; it's the fate of all programmers now. Why hire a programmer when an LLM costs less, works faster and makes fewer mistakes? (The OP compliments its better error handling; read the article.)
Unless you are a product owner with paying clients who love you and your product and won't simply ditch it in favour of a new clone, you are really screwed.
"when an LLM costs less, works faster and makes less mistakes"... indeed, but it doesn't follow at all that it's the fate of all programmers _now_... at least in my experience none of these things are true ATM.
Well, at the very least it costs less than asking an intern to look for a lib that does something particular and give some examples... still about as accurate as the intern, tho.
How many times has it happened that a company actually asked an intern for a library?
Um, isn't this what an intern is for? Or do you let them try to contribute to your core project?
So far I've never seen a non-programmer release production-grade code using only LLMs. There's just so much to care about (security, deployments, secret management, event-driven architectures, and a long list besides) that "just" providing a prompt to create an "app" doesn't cut it. You need infra, and non-engineers just don't know shit about infra (even if it's 99% managed). You need to deploy your LLM-generated code on that infra, and that should probably happen in a CI/CD pipeline. And what about migrations? Git? Who's setting up the API gateway? I don't mean to say that LLMs don't know how to do that, but you need to instruct them to do so, and even then they will make silly mistakes and you need to re-instruct them or fix it.
Prompting is just 50% of the work (and the easy part, actually). Ask the Head of Product or whoever is there to deploy something valuable to production and maintain it for 6 months while not losing money. It's just not going to happen, not even with true AGI.
An LLM might be able to replace the majority of the code Sindre Sorhus has put out there, but it's probably a stretch to think that it could replace someone like John Carmack.
Trivial NPM libraries were never needed, but LLMs really are the nail in the coffin for them even when it comes to the most incompetent programmers because now they can literally just ask an LLM to spit out the exact same thing.
He also points out a pointless type check in a type-checked language...
Your name is very accurate I must say.
That type check is honestly not pointless at all. You can never be certain of your inputs in a web app. The likelihood of that parameter being something other than an arraybuffer is non-zero, and you generally want to have code coverage for that kind of undefined behavior. TypeScript doesn't complain without a reason.
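A minimal sketch of why such a guard survives even in typed code (the function shape here is illustrative, not the actual blob-util source): TypeScript types are erased at runtime, so a value crossing an API boundary from untyped JavaScript, or coming through a careless cast, can still be the wrong thing.

    // The compile-time type does not stop an untyped caller from passing
    // garbage; the runtime check turns a confusing downstream failure
    // into an immediate, clear TypeError.
    function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
      if (!(blob instanceof Blob)) {
        return Promise.reject(new TypeError('expected a Blob'));
      }
      return blob.arrayBuffer();
    }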