> The fear is that these [AI] tools are allowing companies to create much of the software they need themselves.
AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, support, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
A lot of these companies are not small monthly fees. And if you’ve ever worked with them, you’ll know that many of the tools they sell are an exact match for almost nobody’s needs.
So what happens is a corporation ends up spending a lot of money for a square tool that they have to hammer into a circle hole. They do it because the alternative is worse.
AI coding does not yet let you build anything even mildly complex with no programmers. But it does reduce by an order of magnitude the amount of money you need to spend on programming a solution that would work better.
Another thing AI enables is significantly lower switching costs. A friend of mine owned an in-person and online retail business that was early to the game, having come online in the late 90s. I remember asking him, sometime around 2010, when his store had become very difficult to use, why he didn't switch to a more modern selling platform, and the answer was that it would have taken him years to get his inventory moved from one system to another. Modern AI probably could've done almost all of the work for him.
I can't even imagine what would happen if somebody like Ford wanted to get off of their SAP or Oracle solution. A lot of these products don't withhold access to your data, but they also won't provide it in any format that can be used without a ton of work, which until recently would've required a huge number of man-hours.
I have a prime example of this where my company was able to save $250/usr/mo for 3 users by having Claude build a custom tool for updating ancient (80's era) proprietary manufacturing files to modern ones. It's not just a converter, it's a GUI with the tools needed to facilitate a quick manual conversion.
There is only one program that offers this ability, but you need to pay for the entire software suite, and the process is painfully convoluted anyway. We went from doing maybe 2-3 files a day to doing 2-3 files an hour.
I have repeated ad nauseam that the magic of LLMs is the ability to build the exact tool you need for the exact job you are doing. No need for the expensive and complex 750k LOC full tool shed software suite.
Is this a tool deployed into an internal production environment? Or is it more of “dev” or “business user desktop” type app that isn’t deployed formally.
I’m working with a company now that thinks that AI is great until you need to deploy to Prod. Probably true in some cases, especially for tools built with Prod environments as targets.
But I’m using Claude Code for a tool that doesn’t absolutely require that sort of environment. It helps a company map data (insurance risk exposure data) to a predefined intermediate layout and column schema.
I know that I’ll run into resistance once I say “this could be deployed to Prod” but I think AI is a major win for Prod-like things.
My professional world largely lives in spreadsheets and relational databases. Neither going anywhere anytime soon. And spreadsheets are the currency of the business and industry in so many ways. They are very prod-like in my opinion.
Was the custom tool developed by copying how the existing software worked? Copying existing functionality is not always possible, and doesn't capture the real costs.
No, it is incredibly streamlined because it is tailored specifically to achieve this modernization.
The paid program can do it because it can accept these files as input, and then you can use the general toolset to work towards the same goal. But the program is clunky and convoluted as hell.
To give an example, imagine you had tens of thousands of pictures of people posing, and you needed to change everyone's eye color based on the shirt color they were wearing.
You can do this in Photoshop, but it's a tedious process and you don't need all $250/mo of Photoshop to do it.
Instead make a program that auto grabs the shirt color, auto zooms in on the pupils, shows a side window of where the object detection is registering, and tees up the human worker to quickly shade in the pupils.
Dramatically faster, dramatically cheaper, tuned exactly for the specific task you need to do.
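A purely illustrative sketch of the pre-processing half (assuming OpenCV, a folder of JPEGs, and the stock Haar eye cascade; this is nobody's actual tool):

```python
import glob
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

for path in sorted(glob.glob("photos/*.jpg")):
    img = cv2.imread(path)
    if img is None:
        continue
    h = img.shape[0]

    # Rough "shirt color": mean BGR of the lower third of the frame.
    shirt_bgr = img[int(h * 2 / 3):, :].reshape(-1, 3).mean(axis=0).astype(int)
    print(path, "shirt BGR:", shirt_bgr.tolist())

    # Auto-detect eyes so the human doesn't have to hunt for them.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Side window: show where detection registered on the full photo.
    preview = img.copy()
    for (x, y, ew, eh) in eyes:
        cv2.rectangle(preview, (x, y), (x + ew, y + eh), (0, 255, 0), 2)
    cv2.imshow("detections", preview)

    # Tee up the work area: a zoomed crop of the first detected eye.
    if len(eyes) > 0:
        x, y, ew, eh = eyes[0]
        zoom = cv2.resize(img[y:y + eh, x:x + ew], (ew * 6, eh * 6),
                          interpolation=cv2.INTER_NEAREST)
        cv2.imshow("work area", zoom)

    # Any key advances to the next photo; 'q' quits.
    if cv2.waitKey(0) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
```

The human still does the judgment call (the actual shading); the script just removes the tedious navigation between photos.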
I think use cases like that will be where "AI" has the biggest wins.
That's a task that I could automate as a developer, but other than LLM "vibe coding", I don't know that there's a good way for a lay person to automate it.
There are two forms of business software gen AI coding is 100% going to eat:
1. Simple CRUD apps
2. Long-tail / low-TAM apps
Because neither of these make economic sense for commercial companies to develop targeted products for.
Consequently, you got "bundled" generalized apps that sort of did what you wanted (GP's example) or fly-by-night one-off solutions that haven't been updated in decades.
The more interesting questions are (a) who is going to develop these new solutions and (b) who is going to maintain these new solutions? In-house dev/SRE or newly more-efficient (even cheaper) outsourced? I'd bet on in-housing, as requirements discovery / business problem debugging is going to quickly dominate delivery/update time. It already did and that was before we boosted simple app productivity.
> It's not just a converter, it's a GUI with the tools needed to facilitate a quick manual conversion.
is this like a meta-joke?
> I have a prime example of this where my company was able to save $250/usr/mo for 3 users by having Claude build a custom tool for updating ancient (80's era) proprietary manufacturing files to modern ones.
The funny thing about examples like this is that they mostly show how dumb and inefficient the market is with many things. This has been possible for a long time with, you know, people, just a little more expensive than a Claude subscription, but would have paid for itself many times over through the years.
It's not just a joke, it's a meta-joke! To address the substance of your comment, it's probably an opportunity cost thing. Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.
Now with Claude, it's easy to make a quick and dirty tool to do this without derailing other efforts, so it gets done.
> Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.
Agreed absolutely, but that's also what I'm talking about. It's very clear it was a bad tradeoff. Not only $250/month x three seats, but also apparently whatever the opportunity cost just of personnel tied up doing "2-3 files a day" when they could have been doing "2-3 files an hour".
Even if we take at face value that there are no "programmers" at this company (with an employee commenting on hacker news, someone using Claude to iterate on a GUI frontend for this converter, and apparently enough confidence in Claude's output to move their production system to it), there are a million people you could have hired over the last decade to throw together a file conversion utility.
And this happens all the time in companies where they don't realize which side of https://xkcd.com/1205/ they're on.
It's great if, like personal projects people never get started on, AI shoves them over the edge and gets them to do it, but we can also be honest that they were being pretty dumb for continually spending that money in the first place.
The problem with this reasoning is it requires assuming that companies do things for no reason.
However possible it was to do this work in the past, it is now much easier to do it. When something is easier it happens more often.
No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
> There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
I wonder how much of the benefit of AI is just companies permitting it to bypass their process overhead. (And how many will soon be discovering why that process overhead was there)
Sure, there's a lot of process that is entirely justified, but there's also a whole lot of process that exists for reasons that are no longer relevant or simply because there are a lot more people whose job it is to make process than whose job it is to stop people from making too much process.
> No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
I mean, I'm absolutely familiar with how company decision making and inertia can lead to these things happening, it happens constantly, and the best time to plant a tree is today and all that, but the ex post facto rationalizations ring pretty hollow when the solution was apparently vibecoded with no programmers at the company, immediately saved them $750 a month and improved their throughput by 8x.
Clearly it was a very bad call not to have someone spend a couple of days looking into the feasibility of this 10 years ago.
Our company just went through an ERP transition and AI of all kinds was 0% helpful for the same reason it’s difficult for humans to execute: little to no documentation and data model mismatches.
Exploring a codebase tells you WHAT it's doing, but not WHY. In older codebases you'll often find weird sections of code that solved a problem that may or may not still exist. Like maybe there was an import process that always left three carriage returns at the end of each record, so now you've got some funky "let's remove up to three carriage returns" function that probably isn't needed. But are you 100% sure it's not needed?
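Concretely, the kind of function I mean (an invented example, not from any real repo):

```python
def strip_import_artifacts(record: str) -> str:
    # The old nightly import always left up to three carriage returns on the
    # end of each record. Nobody remembers whether that feed still exists,
    # so nobody dares delete this.
    for _ in range(3):
        if record.endswith("\r"):
            record = record[:-1]
    return record
```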
Same story with data models, let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
Adding that this just gets worse when databases are peppered with direct access by vibe-coded applications that don’t look at production data or gather these insights before deciding “yeah this sounds like the format of text that should go in the column with this name, and that’s the column I should use.”
And now there’s an example in the codebase of what not to do, and other AI sessions will see it, and follow that pattern blindly, and… well, we all know where this goes.
How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).
IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.
If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.
What's with this assumption that there's no human involvement? I don't just say "hey, scan this 2M LOC repo and give me some docs"... that would be insane.
The AI is there to do the easy part: scan a giant spaghetti bowl and label each noodle. The human's job is to attach descriptions to those noodles.
Sometimes I forget that people on this site simply assume the worst in any given situation.
I don't find this surprising. Code and data models encode the results of accumulated business decisions, but nothing about the decision-making process or rationale. Most of the time, this information is stored only in people's heads, so any automated tool is necessarily blind.
This succinctly captures one of the key issues with (current) AI actually solving real problems outside of small "sandboxes" where it has all the information.
When an AI can email/message all the key people who have the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution, then human jobs are in real trouble.
Until then, AI is just a productivity boost for us.
The AI will also have to be trained to be diplomatic and maybe even cunning, because, as I can personally attest, answering questions from an AI is an extremely grating and disillusioning experience.
There are plenty of workers who refuse to answer questions from a human until it's escalated far enough up the chain to affect their paycheck / reputation. I'm sure that the intelligence being artificial will only multiply the disdain / noncompliance.
But then maybe there will be strategies for masking from where requests are coming, like a system that anonymizes all requests for information. Even so, I feel like there would still be a way that people would ping / walk up to their colleague in meatspace and say “hey that request came from me, thanks!”
i love the assumption by default that "ai generated" automatically excludes "human verified".
see, i actually read and monitor the outputs. i check them against my own internal knowledge. i trial the results with real troubleshooting and real bug fixes/feature requests.
when it's wrong, i fix it. when it's right, great, we now have documentation where none existed before.
dogfood the documentation and you'll know if it's worth using or not.
If it is not a small fee, I do wonder: is there still an advantage to having a provider one can take to court if something goes wrong? To what extent might liability and security vetting through scaled usage still hedge against AI, in your view?
It's generally a tradeoff decision between the comparative advantage of a vendor and the combination of price and contracting cost.
Contracting cost is the difference in cost between a contract across companies and a purely internal project. This could involve the lawyers on both sides, the time taken to negotiate which party is responsible for what deliverable/risk, the cost to enforce the contract, the time taken for negotiations/iterations, etc.
One efficient company doing it internally is obviously efficient. Two inefficient companies negotiating a contract is obviously inefficient. The interesting questions are the other 2 quadrants, where the answer may change between the LLM case and non-LLM case.
Well in that case the provider is likely paying for insurance and charging you a mark up, so you could likely just buy the insurance and save the markup anyway.
I worked on a product that had to integrate with Salesforce because virtually all of our customers used it. It must have been a terrible match for their domain, because they had all integrated differently, and all the integrations were bad. There was virtually no consistency from one customer to next in how they used the Salesforce data model. Considering all of these customers were in the same industry and had 90% overlapping data models, I gave up trying to imagine how any of them benefited from it. Each one must have had to pay separately for bespoke integrations to third-party tools (as they did with us) because there was no commonality from one to the next.
One thing that's interesting is that their original Salesforce implementations were so badly done that I could imagine them being done with an LLM. The evergreen stream of work that requires human precision (so far, anyway) is all of the integration work that comes afterwards.
> So what happens is a corporation ends up spending a lot of money for a square tool [SaaS] that they have to hammer into a circle hole.
You are assuming that corporations have the capability to design the software they need.
There are many benefits to SaaS software, and some significant costs (e.g. integration).
One major benefit of SaaS is domain knowledge and most people underestimate the complexity of even well known domains (e.g. accounts).
Companies also underestimate the difficulty of aligning diverging political needs within the business, and they underestimate the expense of distraction on a non-core area that there is no business advantage to becoming competent at. As a vendor sometimes our job was simply to be the least worst solution.
This is true in many cases and not in many others. Another true one is payments: it's complex AF and no one will sit down and vibe-code it. A CRM? Easy in many cases. Some workflow tool? Easy, they know the exact workflow.
So, sure, some products will go the way of the dodo and some will not.
>But it does reduce by an order of magnitude the amount of money you need to spend on programming a solution that would work better
Could you share any data on this? Are there any case studies you could reference or at least personal experience? One order of magnitude is 10x improvement in cost, right?
I'm not sure it's a perfect example, but at least it's a very realistic example from a company that really doesn't have time and energy for hype or fluff:
We are currently sunsetting our use of Webflow for content management and hosting, and are replacing it with our own solution, which Cursor & Claude Opus helped us build in around 10 days.
And the big advantage for us is two things: our content marketers now have a "Cursor-light" experience when creating landing pages, as this is a "text-to-landing-page" LLM-powered tool with a chat interface from their point of view; no fumbling around in the Webflow WYSIWYG interface anymore.
And from the software engineering department's point of view, the results of the work done by the content marketers are simply changes/PRs in a git repository, which we can work on in the IDE of our choice — again, no fumbling around in the Webflow WYSIWYG interface anymore.
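Roughly, the flow looks like the sketch below (the LLM call is stubbed out and the branch/path conventions are invented; a simplified illustration rather than the actual implementation):

```python
import os
import re
import subprocess

def generate_page_markdown(prompt: str) -> str:
    """Turn a marketer's brief into page markup with whatever LLM you use."""
    raise NotImplementedError  # placeholder, not a real API call

def publish_as_pr(prompt: str, slug: str) -> None:
    slug = re.sub(r"[^a-z0-9-]+", "-", slug.lower()).strip("-")
    branch = f"landing/{slug}"
    body = generate_page_markdown(prompt)

    # The marketer's output becomes an ordinary branch + commit + push,
    # which engineers review like any other PR.
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    os.makedirs("content/landing", exist_ok=True)
    with open(f"content/landing/{slug}.md", "w") as f:
        f.write(body)
    subprocess.run(["git", "add", "content/landing"], check=True)
    subprocess.run(["git", "commit", "-m", f"landing page: {slug}"], check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
```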
Why waste your time on something that isn't your core business when, presumably, the SaaSes of the world will use the new tech and lower prices as well?
Oh, but that doesn't matter. SaaS tools aren't bought by the people who have to use them. Entire groups in big companies (HR & co) are delegating the majority of their job to SaaS, and all failures are blamed on the people who have to interact with those tools, even though the tools are entirely ancillary to their actual job.
> Modern AI probably could’ve done almost all of the work for him.
No way. We're not talking about a standalone AI-created program for a single end user, but an entire integrated e-commerce enterprise system that needs to work at scale and volume. Way harder.
I also have pretty hefty skepticism that AI is going to magically account for the kinds of weird-ass edge cases that one encounters during a large data migration.
It's not that AI is magically going to do it, it's that the human running the migration now has better tools to generate code that does account for those one-off edge cases.
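For example, the kind of one-off edge-case handling that piles up in a migration script, which is exactly the sort of thing an LLM is good at churning out (the formats and sentinel values here are invented):

```python
# Hypothetical: normalize dates from a legacy export that used several
# formats and a few "creative" sentinel values over the years.
from datetime import date, datetime

LEGACY_DATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%m-%d-%y", "%d-%b-%Y"]
SENTINELS = {"", "0000-00-00", "N/A", "UNKNOWN"}

def parse_legacy_date(raw: str) -> date | None:
    raw = raw.strip()
    if raw in SENTINELS:
        return None                     # treat as missing, not as an error
    for fmt in LEGACY_DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date()
        except ValueError:
            continue
    # Anything else goes to a human: better a review queue than silent garbage.
    raise ValueError(f"unrecognized legacy date: {raw!r}")
```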
I was interviewing with a company that has done ETL migration, interop and management tools for the healthcare space, and is just dipping their toes into the "could AI do this for us or help us?" question.
Their initial answer/efforts seem to be a qualified but very qualified "Possibly" (hah).
They talked of pattern matching and recognition being a very strong point, but yeah, the edge cases trip things up, whether corrupt data or something very obscure.
Somewhat like the study of MRIs and CTs of people who had no cancer diagnosis but would later go on to develop cancer (i.e. they were sick enough that imaging and testing was being ordered but there were no/insufficient markers for a radiologist/oncologist to make the diagnosis, but in short order they did develop those markers). AI was very good at analyzing the data set and with high accuracy saying "this person likely went on to have cancer", but couldn't tell you why or what it found.
Bespoke software development is a huge market. It is incredibly expensive. AI coding agents are not perfect, but they are just usable enough to be coached through by a semi-IT-literate business person to write some scripts or a little CRUD site that's needed for a specific LOB task (typically a $30k-$250k range project).
The alternatives before were to propose the case to IT and, if lucky, have it put on the plan, outsourced to consultants, and delivered 18 months from now for an astronomical investment in both time and cost. Or to go at it yourself with Excel and VBA.
The AI thing will be a just 'good enough' barely working clutch of ugly code. Then again, so was most of the consultant produced code.
This exactly. How many small businesses were running some sort of Access database built by someone who knew just enough to meet their business process needs? "Brenda always orders these parts on Tuesday so make sure there is a Tuesday ordering table."
Eventually some SaaS solution came in and they evolved their business process because of the proposed benefits.
I expect we are going to see a resurgence of good-enough software created for the real-world hodgepodge of business practices. Not a bad thing in the short term, as it will create adequate efficiency, but long term we are going to reap the technical debt.
My prediction is that the next SaaS evolution is going to be platforms for these random solutions. Just as Salesforce captured the CRM market, someone is going to capture the AI agent market, where users can write and host their LOB code agnostic to the rest of their infrastructure stack and 'slightly' reduce that technical debt.
Hmm, I wonder if it would be cheaper to hire a couple of software engineers to vibe-code custom SaaS apps on top of the company's existing data layer instead of paying for a hundred different SaaS subscriptions.
Financial considerations aside, one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...
I'm at a large company that is building connections between all of its different financial systems. The primary problem being faced is NOT speed to code things, the primary problem at large companies is getting business aligned with tech (communication) and getting alignment across all the different orgs on data ownership, access, and security. AI currently doesn't solve any of this. Throw in needing to deal with regulation/SOX compliance and all the progress you think AI might make, just doesn't align with the problem domains.
Agreed. The SWEs already receive a steady supply of conflicting demands from every possible business unit; the value add for these teams is a working PMO to prioritize the requests coming in.
Totally makes sense. Turns out that a lot of what Palantir's "Forward Deployed Engineers" do is navigating these bureaucratic and political obstacles to get access to the data: https://nabeelqu.co/reflections-on-palantir -- which may be Palantir's real secret sauce, rather than the tech itself.
I haven't seen what you're suggesting from a CEO at a large company whose primary business is non-software-related. At some point in a business's life there's an accumulation of so many disparate needs and systems that there can be many, many layers of cross-org needs for fulfilling business processes. This stuff is messy.
I think I saw it asserted that it's easier for a new company, which definitely makes sense as you don't carry along all the baggage.
I work in large projects like this, the CEO doesn't get involved in the little "computer project" except during the project kickoff. Even then, it's just to "say a few words about the people I admire on this team". In large global companies these projects are delegated 3 or 4 levels below the CEO at the highest.
Makes me wonder if they are getting ripe for disruption. Not by a new business model, but a new operating model where a CEO will be tech/ai-aware and push through all these kinds of things.
This is also generally true for all mid to large businesses I've ever worked at.
The code they write is highly domain-specific, implementation speed is not the bottleneck, and their payroll for developers is nothing compared to the rest of the business.
> one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...
Many larger enterprises do both – buy multiple SaaS products, and then have an engineering team to integrate all those SaaS products together by calling their APIs, and build custom apps on the side for more bespoke requirements.
To give a real world example: the Australian government has all these complex APIs and file formats defined to integrate with enterprises for various purposes (educational institutions submitting statistics, medical records and billing, taxation, anti-money laundering for banks, etc). You can't just vibe code a client for them – the amount of testing and validation you have to do with your implementation is huge–and if you get it wrong, you are sending the government wrong data, which is a massive legal risk. And then, for some of them, the government won't let you even talk to the API unless you get your product certified through a compliance process which costs $$$.
Or, you could just buy some off-the-shelf product which has already implemented all of that, and focus your internal engineering efforts on other stuff.
And consider this is just one country, and dozens of other countries worldwide do the same thing in slightly different ways. But big SaaS vendors are used to doing all that, they'll have modules for dealing with umpteen different countries' specific regulations and associated government APIs/file formats, and they'll keep them updated since they are forever changing due to new regulations and government policies. And big vendors will often skip some of the smaller countries, but then you'll get local vendors who cover them instead.
> AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, support, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
But it's also much cheaper to develop an alternative SaaS offering, one that is perhaps more custom, nimble, cheaper than the general SaaS out there today.
In the past, maybe it might have taken 2,000 engineers to build a Figma equivalent. Today, it might take 20.
Software will be cheap to develop so competition will be extremely high. Therefore, SaaS companies should not command a high PE ratio in the stock market anymore.
It's physical companies that should command a higher PE ratio. Energy, materials, and chip companies to be exact. This was reversed pre-LLM era.
Which part of Figma can you build with 20 engineers? I think we're still not close to a small team building real-time collaboration software at scale that actually works at the quality level that customers expect.
>It is much cheaper to pay a small monthly fee to a SaaS company.
It's not that cut and dried - it all depends on what your company needs from SaaS and how big it is. SaaS companies like Salesforce don't charge a "small monthly fee" - they charge 10s of millions of dollars per month for large corporations. It's not hard at all to push that money towards AI development and have a better solution built in-house now. Yes, it still takes serious project management skills, but so does integrating Salesforce or other large SaaS software.
> It is much cheaper to pay a small monthly fee to a SaaS company.
How often do software projects underestimate their cost and timeframe?
The LLM-written replacement may prove to be more expensive, but may appear to be cheaper initially when all of the costs and budgets are hidden in the “fog of war”.
How effective will vendor messaging warning clients away from writing their own vendor replacements be? Of course a vendor will play up their strengths and will try to play up the clients’ weaknesses. It’s like asking a used car salesman to give you their opinion of a car on their lot — they aren’t seen as a neutral observer.
The cost of Atlassian tools for a client like mine (hundreds of employees) can easily cover the expense of internalizing them. It's mostly Jira plus Confluence; it's not rocket science.
And that's just Atlassian.
Start adding stuff that costs many, many yearly salaries (special software for managing inventories and warehouses) and it starts making sense to prototype alternatives internally.
I came to the conclusion that unless it's Teams/SharePoint, or the moat is on the extreme legal-complexity side (e.g. payroll), you can at least think about building an alternative that is good enough without needing to be perfect.
Ugh, you are aware that Atlassian offered an on-prem edition for years?
You also know how neglected those on-prem instances were?
No one updated them, no one wanted to pay for more CPU/RAM or file storage; I know people who got random requests to clean up files from projects because the company wouldn't buy more hard drives. Everyone nagged the sysadmins that they were doing a bad job, and nagged Atlassian that JIRA sucks.
That is mostly why Atlassian pulled its on-premise offering: companies would not update at all, wanted all the new features, and also wouldn't pay for the file storage, RAM, and CPU to make it work well.
Don't forget you will still need dedicated employees to deal with an AI-built solution, because existing employees have work to do.
What we'd save on JIRA and Confluence would never offset the fact that we pay and it just works: NOT A SINGLE EMPLOYEE CARES, as they have their own job to do.
Don't forget the salary for every dev team's Atlassian Jira jockey, who messes around with the board all day making sure the next 7 epics' worth of tickets are in the 9 columns and in prioritised order.
Jira Premium is $15/mo/user; for 300 users that's about $54k a year ($15 x 300 x 12). You're saying roughly $50k can cover developing the app inclusive of integrations, maintaining it, providing 24/7 service and three 9s of uptime (per the SLA)? Don't forget compliance and security. Maybe the logic is everyone can be fired and replaced with agents?
All the instances I remember seeing were neglected, not updated, and running on the lowest amount of resources possible. Everyone in the company nagged about how slow it was, but no one wanted to share budget to improve it.
So for me, the experiment "it will be better and cheaper to build our own JIRA" has already been run. It is going to be a cost center that no one will want to throw money at.
There is no way you would get anything close to as good as JIRA. Your best bet with that budget would be trying to integrate an existing open source on-prem solution (not sure what that alternative is for JIRA).
Yes, you wouldn't get something near as complicated as JIRA, but that would be a good thing! Look, it's enterprise software, so I'm sure there's somewhere that needs the overcomplicated permissions system, otherwise contractors are going to steal everything that isn't bolted down, but most places I've been don't need, and thus don't use, most of that crap. If the ticket can only go from planned to done by a certain group of users, backed by LDAP... let's just say I'm not going to miss configuring which group gets which permissions.
JIRA's the perfect example of disruption, too. Everyone's got their bespoke workflow, and JIRA has to be customizable to suit all of them. Bespoke software just doesn't have to be, in the same way.
I told my colleague that it would take less time for me to vibe-code Jira than it would take him to configure it. Sounds crazy? Not so much: factor in the part of Jira you actually use (maybe 10%), the many choices and dimensions you have to configure, the time it takes, and the complexity it brings. On the other side, the vibe-coded version has only the fields you want, most of the logic hard-written in code (i.e. epic > story > task...), and you can use any role and any authentication scheme you want.
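A minimal sketch of what I mean, with the hierarchy and workflow hard-coded rather than configurable (the field and status names are invented, just whatever you happen to need):

```python
from dataclasses import dataclass, field

WORKFLOW = ["todo", "in_progress", "review", "done"]   # one flow, no admin UI

@dataclass
class Task:
    title: str
    status: str = "todo"

    def advance(self) -> None:
        # Move to the next workflow state; no per-project configuration.
        i = WORKFLOW.index(self.status)
        if i < len(WORKFLOW) - 1:
            self.status = WORKFLOW[i + 1]

@dataclass
class Story:
    title: str
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Epic:                      # epic > story > task, and nothing else
    title: str
    stories: list[Story] = field(default_factory=list)

# Usage: one epic, one story, one task walked through the whole flow.
epic = Epic("Q3 launch", [Story("Landing page", [Task("Write copy")])])
task = epic.stories[0].tasks[0]
while task.status != "done":
    task.advance()
```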
This reminds me of all the "I could turn the spreadsheet into a web app in a weekend" type comments. Sure, you could get CRUD and a datatable working, but then a user is going to ask you for a custom field and you'll say "ok, let me vibe-code that, update the database, and then deploy", and the user will say "well, in Jira I could just do that myself...". Then the next thing they're going to ask for is some kind of custom workflow utility, which you'll go to work vibe-coding, and they'll say again "...but in Jira that was already there". Meanwhile, they'll ask you why they can't change the validation criteria on the custom field from before: they said it should be required, but now there's a case where it's optional.
Pretty soon you're just re-implementing Jira while your users wait and get pissed because they could have just been using Jira all along. It's just like turning a spreadsheet into a web app: inevitably you end up trying to re-implement Excel.
1. AI being able to code well seems like it would also get it pretty close to doing basically everything else you described. If coding is a game of reasoning, then once you solve that you have effectively solved reasoning, and you can likely map it to most other problems provided you have a sufficiently good harness and tool-calling setup.
2. Let's assume AI won't replace everyone as point (1) assumes, and it just replaces _most_ people. Under this assumption, we will likely see large swathes of layoffs. Many SaaS companies have a pay-per-seat model. Fewer people employed at companies = fewer seats being paid for = less SaaS revenue.
So not only is there a threat of companies just vibe-coding various SaaSes in-house, but there is also a threat that the TAM of many SaaS products (which is typically proportional to the number of employees there are) will actually _shrink_ in size.
I think the main class of SaaS companies that will remain in the medium term are the ones in legally touchy or compliance-heavy industries - think healthcare, finance, and security (Workday, for example). But even Workday will be affected by point (2) above. Overall, I think the mid-to-long-term outlook for SaaS, especially "SaaS", is not great.
"Small Monthly Fee" is a very loaded term here.
I'm in the negotiations for these platforms; the price that many of these companies command for their products will very often pay the salaries of a whole software department.
Add to this the quality of support being the lowest possible option above "nonexistent" and I would say the risk to these SaaS companies is real.
The real benefit of these types of SaaS offerings was their ubiquity across multiple industries and verticals. If a company bought Salesforce, they could very readily find employees able to quickly onboard, since they would likely have used it at previous companies.
AI software generation is changing this: more and more of the software being created is bespoke and increasingly one-of-a-kind, with these tools allowing companies to create software that fits their unique and specific needs.
My hot take here is that the moats previously enjoyed by SaaS companies will increasingly vanish as smaller and smaller teams can assemble "good enough" solutions that companies will adopt instead of paying giant chunks of their budget on pre-built SaaS tools that will increasingly demand more training to Onboard.
There is one big argument against these "good enough" solutions: commercial business software providers need to put a lot of R&D into finding generalized workflows that apply to as many clients as possible. Effectively, they find and encode current standard practices into their products. This is valuable from a business operations perspective in two ways: it's a good bet that transitioning the customer's operations to match the software is cleaning up internal processes, and it makes onboarding new employees easier because the tools and workflows should be much more familiar right from the start.
SaaS was always fueled by B2B buying within the same investor circle: Sequoia companies buying from other Sequoia companies, SoftBank companies buying from other SoftBank companies. Without this circular buying and selling of software, the whole B2B software market crashes.
>My hot take here is that the moats previously enjoyed by SaaS companies will increasingly vanish as smaller and smaller teams can assemble "good enough" solutions that companies will adopt instead of paying giant chunks of their budget on pre-built SaaS tools that will increasingly demand more training to Onboard.
Why do people pay Red Hat/IBM for RHEL? They earn pretty good margins too. To the parent's point: software != code.
If AI can code, why do you think it cannot handle building, testing, debugging, securing, monitoring, supporting, incident handling, and so on?
Consider incident handling. What if your AI sets up monitoring that detects errors or outages, wakes up an agent, gives it the problem context, then sets it to work so it can debug the issue, produce a fix, then deploy it? You now have an end-to-end system that works 24/7. Many issues will probably be resolved before you've even noticed them.
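As a sketch (everything below is stubbed out and hypothetical, not any real monitoring or agent API), the loop itself is simple; the open question is whether the agent call in the middle gets good enough:

```python
import time

def fetch_new_alerts() -> list[dict]:
    """Poll whatever monitoring backend you use; stubbed out here."""
    return []  # e.g. [{"service": "billing", "error": "...", "logs": "..."}]

def run_agent(incident: dict) -> dict:
    """Hand the incident context to a coding agent; stubbed out here."""
    return {"fixed": False, "patch": None}

def deploy(patch: str) -> None:
    """Ship the fix through the normal CI/CD pipeline; stubbed out here."""

def escalate(incident: dict) -> None:
    """Page the on-call human when the agent gives up; stubbed out here."""

while True:
    for incident in fetch_new_alerts():
        result = run_agent(incident)   # agent debugs with logs + repo access
        if result["fixed"]:
            deploy(result["patch"])    # resolved before anyone is paged
        else:
            escalate(incident)
    time.sleep(60)
```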
If your response is, AIs won't ever be smart or capable enough to do this as well as humans, how has that same prediction worked out for coding?
The next generation of AI "coding" tools will essentially be SaaS companies in a box. Agents will code the app, but they'll also test it, debug it, support it, etc. And this will happen in months, not years.
> Consider incident handling. What if your AI sets up monitoring that detects errors or outages, wakes up an agent, gives it the problem context, then sets it to work so it can debug the issue, produce a fix, then deploy it? You now have an end-to-end system that works 24/7. Many issues will probably be resolved before you've even noticed them.
Have you ever been actually involved in trying to fix an error or outage? Like actually on an on-call rotation where you had to deal with reported issues?
Hyper-custom software can allow your business flows to sync together a lot better than the alternative: using Zapier to glue together a bunch of mostly poor fits and ending up with Frankenstein processes.
Also, it allows you to pick and choose what you want from where.
We've just completed the first month of our internal CRM, which has replaced about $500 a month in subs with something that flows much better and enforces our own internal processes.
Multiple of what? There is maybe one software company trading above 30x revenue: Palantir. Many companies growing at 20% trade at single-digit revenue multiples.
That is really not the case in software; people commonly go back and forth between EV/S and EV/FCF for high-growth names. Also, "earnings" could mean GAAP earnings, non-GAAP earnings, EBITDA, EBITA, FCF, or FCF ex share-based comp.
Or, for a more charitable comment: I think the issue people struggle with right now is how much non-AI software will be replaced by AI-native versions. And it's not even a 1:1 mapping; we may see 5 different small companies replaced by a single AI interface. All TBD, but there's merit to avoiding that risk right now if you can just allocate to NVDA and GOOG instead.
Lol. ServiceNow, Oracle, Workday, etc are not small monthly fees. That's what the market is shitting on. (Oracle is different, given the corruption and OpenAI grift angle.)
My buddy works for a company like these. He landed a $5M contract last year, which netted him almost $800k. There's a lot of fat to be cooked out of this stuff, and AI will help smaller entrants attack those margins.
AI-based startups like Vanta make it much easier for companies to meet the compliance bullshit the large companies require. Again, it will drive more competition == better values for customers.
> There's a lot of fat to be cooked out of this stuff, and AI will help smaller entrants attack
Truer words have never been spoken. I work in some of these platforms and lead teams of developers who write customizations. These customizations are not rocket surgery: basic CRUD with some logic requirements that can't be met OOTB. It's very time-consuming and therefore expensive for clients. The significant moat incumbents have is brand recognition and trust. On the other hand, a hot new "Agent First" consulting firm at 1/3 the price would be hard for a director not to at least experiment with.
AI "generated" code requires a large base of training data to draw from. If we all stop writing code then there will no new code written. Just rehashes of stolen ideas. There is no long tail to this industry or ideal.
> That's very expensive.
As long as you convince someone else to pay the bill, who cares? The real problem is: are you losing your competitive edge? If everyone else can crank out the same stolen crap you can, then there is no reason for you to even exist.
We've let models do this before. They devolve into an incomprehensible mess in very short order. Errors multiply. Lossy weight compression on erroneous material makes it all that much worse.
Foundational software requires that, but the foundation is pretty much completely built at this point. The workforce required to keep it running is but a tiny fraction of what was required to build it. The past has shown that innovation in hardware can push for the foundations to be rebuilt, but we've also already got computers basically everywhere now. There may not be some new innovation that requires the foundations to be completely rewritten again.
The little one-off programs that we thought would keep developers busy forevermore don't require engineers. They often don't even require code. LLMs can natively do a lot of things that historically would have required software.
Such black and white thinking. Even the little tools fall apart at first sight of an edge case if they are fully vibed. Neither Opus nor codex are good at architecture, and it’s not clear they ever will be.
You still seem to be thinking about code in the traditional sense. A lot of the software I am using now in my non-tech business isn't rooted in code at all. It is simply asking an LLM to carry out a task. Still programming, of course, but not with a traditional programming language. The LLM will produce the expected results in real time. The intermediary step of building an executable is totally unnecessary.
In the olden days it would have taken considerable engineering effort to produce a comparable tool. That is no longer the case.
Software developers programmed themselves out of a job. They created a huge and growing set of free, tested, high-quality software in the form of open source that can be used for pretty much anything. LLMs will automate a lot of the remaining pieces.
Stock prices are very forward-looking, so if half the hype being sold about AI is true, I would expect most software-centric companies to be devalued by Wall Street (as test, deploy, and support should be automated in the coming years... according to the AI CEOs).
However, if I were a Wall Street analyst and believed the AI dreams, I would further be concerned that software companies aren't taking advantage of the last remnants of value before software (and maybe labor) values go to zero.
If you've got a gold mine and have recently built the most efficient shovels in the world, why are they not bringing in mass amounts of workers to utilize these shovels before all the neighboring mines. Once all that gold is on the market, the price crashes so it's better to be one of the first mines to get in and dig out all possible value first.
I think you either don't believe in the AI hype, which means a lot of silicon valley companies are tremendously overvalued. Or you do, in which case another huge part of silicon valley is overvalued especially when they are not looking to out-innovate their peers (as evidenced by downsizing), but just riding the wave of AI until what they are selling has no marginal value over some guy coding alone in his bedroom. SV is putting itself into a weird position, but still has some time for financial buffoonery before the party stops.
>If you've got a gold mine and have recently built the most efficient shovels in the world, why are they not bringing in mass amounts of workers to utilize these shovels before all the neighboring mines
Because they are completely consumed by the need to increase margins, which they think they will be able to do with AI by laying off a lot of people. But the SaaS economy is interconnected and based on per-user pricing, so as layoffs continue, the SaaS economy is showing its biggest weakness. SaaS companies also seem to embrace AI so much that they would rather add another summarise button than actually make something which can't be copied easily by competitors.
Haha, maybe… you can stick your head in the sand all you want, but everyone I know whose output is code is delegating 100% of their work to Claude Code today; I cannot see this magically drawing a line at people whose output is configs and emails…
> > The fear is that these [AI] tools are allowing companies to create much of the software they need themselves.
If that's their fear, they don't know much about how your typical big business functions.
You've dealt with a large consumer bank? Many of them still run on IBM mainframes. The web front end is driven by pushing buttons and screen-scraping 3270 terminal emulators. You would think a bank, with all its resources, could easily build its IT infrastructure and then manage all the technology transitions we've gone through over the past few decades. Clearly, they don't and can't. What they actually do is notice they have to adapt to the newfangled IT threat, hire hordes of contractors to do the work, then fire them when done. After it's done they go back to banking and forget all the lessons they've learnt about building and managing IT infrastructure.
If you want to see how banking and computers should be combined, look at the fintechs, not the banks. But for some reason I don't understand, traditional banks still outcompete fintechs. Maybe getting your head around both banking and running an IT business is too much for one human mind?
That same pattern is repeated everywhere. Why was everyone so scared of Huawei? It wasn't because they built the gear. It's because the phone telcos have devolved into marketing and finance companies who purchase the gear from companies like Huawei and rent it out. Amazingly, they don't know how to run the gear they purchased; instead they get the supplier to install it and maintain it. But that meant what some eyes viewed as an organ of the Chinese Communist Party was running the country's phones, with full access to every SMS and voice call. (Interestingly, IBM pulled the same stunt with the banks back in the day: you didn't buy a mainframe, you leased/rented it from IBM, and they maintained it.)
It's the same story everywhere I look. These big firms stick to the knitting. If you want to see total, utter incompetence in IT, go work somewhere whose core business doesn't revolve around IT for a while. These are the firms that still choose Microsoft, despite having seen Sony's Microsoft-based IT infrastructure torn apart so badly by North Korea that for a while they didn't know who their employees were, how much they owed creditors, or how much debtors owed them. Why do they choose Microsoft? Look around: who else allows you to outsource the know-how of connecting millions of computing devices in thousands of offices to a redundant cloud infrastructure that lets them share data while providing a centralised authentication/authorisation infrastructure? There is only one choice, apart from developing it themselves, which is out of the question.
If those businesses did start using what passes for AI today to manage and develop their own IT infrastructure, the result would not be pretty. But for all the shit I'm throwing at them here, I'm confident they are smarter than that. They know their limitations, they haven't done it before, and they won't start doing it now.
I disagree. For a long time now, the business of banking has been very much related to software. Software companies need a license to fully manage their money.
Excel spreadsheets have little to no validation logic to confirm you're actually getting a good result, unless you have a secondary check (most spreadsheets are structured as "single entry" accounting, so they lack the checks).
A prime example of this was the Reinhart/Rogoff paper advocating austerity that was widely quoted, and then it was discovered that the spreadsheet used had errors that invalidated the conclusions.
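To make the single-entry point concrete, this is the kind of cross-check double-entry bookkeeping gives you and a typical spreadsheet silently lacks (numbers invented):

```python
# Invented numbers; the point is the structural check, not the accounting.
journal = [
    {"account": "cash",    "debit": 1200, "credit": 0},
    {"account": "revenue", "debit": 0,    "credit": 1200},
]

total_debits = sum(entry["debit"] for entry in journal)
total_credits = sum(entry["credit"] for entry in journal)

# A single-entry spreadsheet has no equivalent of this invariant.
assert total_debits == total_credits, "books don't balance: something was keyed wrong"
```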
The point is not that people will be using specifically Excel, but that most businesses only pay for software because it is the tool that gives them the most power to automate their processes. They don't need high availability, they don't need standards compliance, they don't need extensive automated tests, they won't need cloud engineers and SREs... all they need is some tool that can get the results they are looking for right now.
Academia already works like this. Software written for academic purposes is notoriously "bad" because it is not engineered, but that doesn't matter because it is good enough to deliver the results that researchers need. Corporate IT will also start looking like this, even at mid-sized companies.
An academic paper needs to deliver its output once, for the research. Maybe someone will try to replicate it later, but that's someone else's problem (and fairly often proves the original output to be wrong).
Some stuff in companies might be similar, but there's a lot of things that people use every day, in a lot of different ways, and the software needs to work correctly regardless. You can't just drop it like a hot potato once you've built processes around it.
As always, the first 80% takes 20% of the time/effort, the last 20% takes the other 80%.
I don't disagree with anything you say here - using a tool that lacks guardrails is fine for a lot of tasks, but if that's the only tool, and it's used where those guardrails go from "nice to haves" to something more critical, that's where the problem is.
I've been in ops for a long time and have encountered far too many "our IP addressing plan is just a spreadsheet with manual reconciliation".
I truly wonder if Excel and all its predecessors and direct clones (Google Sheets, etc.) are holding back industry from making something truly better and more reliable.
> holding back industry from making something truly better and more reliable.
What "industry"?
If you are talking about the software industry, then I'd say you are engaging in circular reasoning. If you are talking about all the other things that we actually need to do and which only incidentally have become too reliant on software, then see my original point again: people don't need "better and more reliable" software to keep running their businesses.
If running your business to '90s standards is acceptable, sure, you can use AI to automate your manual processes with the same error rate and keep doing the same thing indefinitely.
But if the competitors have real software engineers and have used them to actually improve reliability, you'll be left behind.
What software engineers are being hired to work on:
- A facilities management company
- A bar/restaurant with a staff of 8
- An architecture office
- A law firm with 10 associates
- A day care
- A car repair shop
- A cement factory
- A family-owned hotel
- A conference/event organizer
- A video production crew
- A roofing company
Ok, but if your competitors are getting/using software from a supplier who has real software engineers, and using that to operate at a higher level of reliability, then the same argument goes through.
If you go down the value chain, then by definition the software is less valuable and easier to commoditize. The automation is not going to help just the manager-turned-vibecoder, it's also going to help professionals to create FOSS alternatives that can be robust enough.
It's not going to happen overnight, but the trend is there.
> If you go down the value chain, then by definition the software is less valuable and easier to commoditize.
I'm not sure that holds for what we're talking about - high-value software can afford to be somewhat flaky because it delivers enough value when it works to make up for it, software that's only marginally worthwhile needs to be reliable because if it isn't then it's not worth the bother. Commoditized fields are more competitive.
> The automation is not going to help just the manager-turned-vibecoder, it's also going to help professionals to create FOSS alternatives that can be robust enough.
Not convinced. In my experience these tools don't really help with creating high-quality software. Maybe they'll get there eventually (at which point we're all out of a job), but right now they can't "hit the high notes".
Doesn't that also lead to the conclusion that "software engineers" are going to lose their ability to command high salaries, if the real value is in the domain expertise and not in the ability of optimizing some part of the business process?
> Doesn't that also lead to the conclusion that "software engineers" are going to lose their ability to command high salaries, if the real value is in the domain expertise and not in the ability of optimizing some part of the business process?
I mean the job has always required both - just being good at leetcode isn't enough to get paid well (except perhaps where there is a dysfunctional interview process), the key skill is being able to translate back and forth between the world of software and the world of business. Regular folk seemingly still find it difficult to think rigorously, in the way that fully correct automation requires, and AI hasn't actually helped with that any, so I think people with that skill will still command a premium. Work that doesn't benefit from rigour - being able to slap together a quick marketing site on wordpress or what have you - will pay badly if at all, but that was already the low end of the industry I think.
That's a software engineer who is limited to a mostly untyped macro language, with worse version control and poor tooling. It's not that software can't be written as an Excel spreadsheet; it's just that it is inefficient and failure-prone.
I guess that's technically true, because "most businesses" are sole proprietorships without any employees... but they could get by just fine with a checkbook and a note pad.
But the reasons the business software sector grew far beyond Excel of the 1990s is because of the inherent limitations in scaling solutions built by business analysts inside of Excel. There's a vague cutoff somewhere in the middle of the SMB market where software architecture starts to matter and the consequences for fuckup are higher than the cost of paying for professionally made software with, importantly, a vendor on the hook for making sure it doesn't fuck up.
Uh, no. The main reason the software sector grew in the 90s was a particularly potent combination of FOMO, kickbacks, and strategically deployed cocaine.
Reminds me of that time when Nintendo's stock exploded after the launch of Pokemon go, only for traders to discover a few hours later that Niantic actually owned the game and correct the situation.
Nintendo is one of three one-third owners of The Pokemon Company and an early investor in Niantic. I'd be curious how much was actually confusion as to who was making money vs just a normal hype cycle. I'd also be curious how much the game itself made the various owners vs what Nintendo made on the general interest in other Pokemon games/merchandise from the launch.
At the very least, the stock looks to have shot up for most of the launch month, with the peak not occurring until July 18th and the stock still being significantly higher at the end of the month https://finance.yahoo.com/quote/NTDOY/history/?period1=14673...
I think the funniest bit of pure confusion was "Zoom Technologies (ZOOM)" being mistaken for "Zoom Video (ZM)" at the start of the pandemic, to the point the SEC halted its trading on concerns that the confusion was the only reasonable driver. https://markets.businessinsider.com/news/stocks/zoom-technol...
The Zoom story is even crazier. The idea of the SEC needing to step in because traders were absolute idiots throwing their money away is laughable.
Let the traders burn their cash if they're too stupid to Google a stock's name, I say. Let them prove that the markets are intelligently driven and not just gambling for those under the influence of cocaine on the job.
A quick way to make me not respect someone's economic opinion is to mention the "Efficient Market Hypothesis." This should be buried 7 palms underground.
I would say that MS here is undervalued. They do not offer some small software package for a given business problem but the whole shebang - the OS, mail, calendar, office suite, IAM, cloud, etc. + support for each and the whole integration.
You can't realistically replace that with some LLM solution (in the near-term at least) and they can use the AIs to reduce their costs which is mostly people.
Microsoft has consistently proven over the last five years that they have zero ability to execute. It's an astounding failure after failure to do anything right.
It was so ridiculously shortsighted of them to decide, as a strategy, to underpay all their employees compared to the industry standard, especially considering their ambitions are still fairly unbounded (it's not as if they said everything they do will be easier than at Google or Meta, so they don't need to compete for the same pool of talent).
But maybe such a decision was inevitable in their culture. And now it's very difficult to correct.
Most of those companies make huge margins by suckering large organizations into outrageous contracts. I don't see how AI moves the needle on this one way or the other.
ServiceNow isn't really an "AI" company - they're one of the silent ITSM and Security companies that are nigh impossible to tear out, and are making silent moves into the OT Security space.
And that makes their "AI" pivot much more sustainable imo - they're already such a giant from a cashflow perspective that if some sort of AI valuation shakeup occurs, they have the dry powder to execute on M&A.
NOW's closest comparables are CRWD and PANW - basically an Arora-style platformization play.
So one thing that hasn't changed is that the marginal cost of software is still effectively zero. That's where most of the money was being made, b/c if you were a monopoly or oligopoly then each additional unit sold was an absolute increase in revenue and you spread out your fixed costs.
What has changed most dramatically is the "fixed" cost of writing the software to begin with. Given that the costs were being spread out over so many units beforehand, it's not entirely clear to me how that changes a lot of the economics.
For the comments about the "SaaS vs build your own", we can use a home services metaphor. Sure, I can do a lot of what my plumber does. But they do it faster, know all of the issues that go wrong with the work and I can pay them a yearly fee to check my boiler to make sure it doesn't fail etc. The time saved by calling the plumber can then be spent with kids, more work or a combo of the two.
The value of software is going down; this much is clear to most people.
It will continue to demand proper engineering for its creation and operation.
But AI will lead to an increase of unique one-of-a-kind systems created by very small teams. And the world will increasingly rely on these unique systems.
SaaS companies need to start reading the writing on the wall: the massive valuations they enjoyed when software was harder to create will need to be justified.
Everyone says that but I don't see anyone cooking up the next Photoshop and selling it at $3/month. Why are we not seeing more options for every tool? Most SaaS companies are sales companies at their core rather than software companies. And those salespeople are so good that they can sell a todo list for millions.
In the case of Photoshop it is the software itself that is becoming useless. In a few years, using Photoshop will be viewed the same as developing physical film: a process from a bygone era that is still possible, but impractical.
This is an extremely bold claim and I think that it completely overlooks how Photoshop is used by professionals in practice. Professional users want extremely fine grained and precise control over their tools to achieve the specific results that they want. AI "image editing" is incapable of providing anything remotely similar.
I've recently re-instated a Photoshop subscription and it's now part of my core AI-generated-asset workflow. AI is fantastic at art direction but it needs minor adjustments to make it production ready, e.g. putting real screenshots in with correct placement, smoothing, editing out artefacts, etc.
I can't imagine the lengths I'd have to go to to instruct an LLM to do these tasks with words.
How much of what you do with Photoshop could be done with open source tools instead (GIMP, ImageMagick, etc), versus how much do you really need Photoshop for?
One technique I've used for cleaning up AI-generated images was a Python script driving ImageMagick, and an LLM helped write the Python script (although it took a few iterations, because the LLM's first attempt didn't actually work).
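To make that concrete, here is a minimal sketch of that kind of Python-drives-ImageMagick script (not the actual script from this comment): it assumes ImageMagick 7's "magick" binary is on the PATH, and the folder names and cleanup flags are purely illustrative.

    # Sketch: batch-clean AI-generated images by shelling out to ImageMagick.
    # Assumes ImageMagick 7 ("magick" on PATH); folders and flags are illustrative.
    import subprocess
    from pathlib import Path

    SRC = Path("ai_output")   # hypothetical input folder
    DST = Path("cleaned")
    DST.mkdir(exist_ok=True)

    for img in SRC.glob("*.png"):
        out = DST / img.name
        subprocess.run(
            ["magick", str(img),
             "-strip",            # drop metadata
             "-despeckle",        # soften small artefacts
             "-resize", "1024x",  # normalize width
             "-unsharp", "0x1",   # light sharpening after resize
             str(out)],
            check=True,
        )
        print(f"cleaned {img.name} -> {out}")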
OpenAI releases an Electron slog of an app, while they have basically unlimited computing power compared to anyone else except their direct competitors. Why aren't they just pumping out proper software built by their own AI/Codex...
> I don't see anyone cooking up the next photoshop and selling it at $3/month.
That's not the situation we're talking about though. It's someone saying "hmm, I need to edit this picture. Can I get ChatGPT to do it?" where 3 years ago they would have had to buy Photoshop and learn how to use it.
Similarly, if they need a tool to batch-convert a thousand images, they're getting an LLM to construct the specific tool they need in a couple of hours and then running that, rather than buying a software product that can do it.
You don't need a whole dev team to build a one-off tool for a specific job, which is probably 90% of the demand for those software products. LLMs are becoming the general-purpose tool for a lot of use cases.
ChatGPT can't do the precise Photoshop tasks, not even close; in fact the quality of the output is almost always worse than the quality of the input. Of course we live in this low-quality internet now, so you may already be used to terribly edited images by AI.
>You don't need a whole dev team to build a one-off tool for a specific job, which is probably 90% of the demand for those software products. LLMs are becoming the general-purpose tool for a lot of use cases.
No, all of these tools have 90+% of revenue coming from B2B sales; consumers don't buy software products anyway. All of the software purchases are tax deductible, so corporations buy even if they use very little of it.
>they're getting an LLM to construct the specific tool they need in a couple of hours and then running that
This is something I really hope takes off for the common person. ChatGPT is perfect for bespoke little programs that do one thing and can be discarded after use.
> bespoke little programs that do one thing and can be discarded after use.
That's my best-case scenario as well: LLMs are scripting languages for a broader audience. They just barely automate busywork, but are not a reliable foundation.
The output is good enough for your consumption (or internal consumption) but not good or general purpose enough to sell to anyone. Like DIY projects done by a new homeowner with access to YouTube.
> Everyone says that but I don't see anyone cooking up the next photoshop and selling it at $3/month.
Yup, same reason you can't throw manpower at a software project and expect a proportional outcome (Brooks's Law). AI amplifies what's already there; it doesn't conjure taste or product vision out of thin air.
> Everyone says that but I don't see anyone cooking up the next photoshop and selling it at $3/month. Why are we not seeing more options of every tool?
I expect the markets are reflecting that soon there will be more competition.
It'll take time, and as LLMs improve, it'll take even less time.
> I expect the markets are reflecting that soon there will be more competition.
> It'll take time, and as LLMs improve, it'll take even less time.
People have written great software in ed(1). We have tools like uxn[0] written on potato computers, and billions of dollars and years later, we still have to hope for AI output.
I don't actually think it changes the economics of software as a service much. What's true for the small scale is true for the large scale. Sure, it's easier to build your own HR platform now, but it's also easier to write and maintain it at scale with all your domain knowledge, legal infrastructure, etc. This seems true for inventory management, document signing, ecommerce, expensing, CRM, training, accounting, etc. Why wouldn't the offerings from services providers get better and cheaper (relatively)?
The stuff you do in-house is probably still going to be tied deeply to your internal processes. Admin dashboards, special workflows integrating with different systems, etc.
Consider it in the realm of supply and demand.
The economics of software will change simply because the tool enables more software to be written.
In a way, the barrier to entry into the space of selling software has been lowered. It hasn't vanished, but there will be many more entrants and offerings as a result, thus more competition for the existing SaaS companies.
I don't see how the economics of SaaS will remain the same when their value is formed of the capital and labor expended, both of which are needed in smaller amounts now, so please explain how this doesn't lead to an increase in supply and downward pressure on value.
There are more computers now than there ever have been. More people in more parts of the world have them than ever before. If you have this perspective you may just be locked in a first-world corporate nightmare that has stolen from you all vision and imagination.
Perhaps it would be better described as "commodification" of software, which still gets my point across.
Software is absolutely more ubiquitous than ever before, this I can agree upon. But now we have the tools to create more of it, and therefore software is less valuable simply because it is less rare.
I don't mean to say that software is valueless, but rather that it enjoyed inflated value when the amount of capital and effort required to build a software product was much greater.
> Of course, ai will continue to improve. But it is also likely to get more expensive…Microsoft’s share price took a beating last week as investors winced at its enormous spending on the data centres underpinning the technology. Eventually these companies will need to demonstrate a return on all that investment, which is bound to mean higher prices.
That’s not how it works, and I’m surprised this fallacy made it into the Economist.
Commodity producers don’t get to choose the price they charge by wishful thinking and aspirational margins on their sunk costs. Variable cost determines price. If all the cloud companies spend trillions on GPUs, GPU rental price (and model inference cost) will continue going down.
Indeed, cheaper and cheaper AI for the same level of performance has been even more consistent empirically than improvement in frontier model performance.
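A toy calculation (all numbers made up) illustrates the point: once the data-centre capex is sunk, the competitive price gravitates toward the variable cost of serving one more token, not toward whatever margin would recoup the spend.

    # Toy illustration with made-up numbers: sunk capex vs. marginal-cost pricing.
    capex = 1e12            # hypothetical total data-centre spend, in dollars
    tokens_served = 1e15    # hypothetical lifetime tokens served
    marginal_cost = 2e-6    # hypothetical power + depreciation per token

    wished_for_price = capex / tokens_served + marginal_cost  # what would recoup the sunk cost
    competitive_price = marginal_cost * 1.1                   # what undercutting rivals allow

    print(f"price needed to recoup capex: ${wished_for_price:.2e} per token")
    print(f"price competition allows:     ${competitive_price:.2e} per token")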
What an odd article that is just designed to hype the software creation aspect, which doesn't really affect MAGAF.
MSFT went down because of overexposure in AI and because it is clear that people do not want it.
AI weariness is a thing, and if people go off the Internet or advertisers question whether humans or AI swarms are "watching" their ads it is over for the big players.
Trying to salvage the situation by hyping the relatively small code generation (theft) aspect is quite a poor analysis.
Yeah, the article doesn't make a lot of sense to me. Guess who's writing software with AI? Software companies.
They mention sites like Base44 and Lovable. Sure, if tons of business was rotating out of software into no-code AI solutions, the article would have a point. But has a large portion of market cap moved out of AI into a few little no-code startups? Are Salesforce, ServiceNow, and SAP being replaced with no-code applications? No. Absolutely not. These are small, niche companies. It does not explain a large downward movement in an entire industry.
What an odd account that is just created to try sway opinions on hot button topics.
Where have I seen this before? Oh right, the entire site, for months now. Nothing suspicious about that at all, I'm sure a swarm of other brand new accounts will reassure me 0.1 microseconds after being created. You guys sure do type fast!
Yes, it continues to grow as a problem. It doesn't detract from the site as a whole, but there is more very, very low-value noise infiltrating in a way there hasn't been in the past - even in the relatively short time I've been here.
I was one of the naysayers but right now I am convinced.
That being said, it still requires some engineering background to come up with interesting ideas and solutions with the help of LLMs but even that might be replaced.
The iShares Expanded Tech-Software Sector ETF (IGV) seems to back up what the article is saying. It isolates software firms from the IT industry. It is down about 10% in the last week, and 20% over the past six months. The IT sector as a whole didn't lose much.
This article I think is about a very specific subset of software stocks.
If you're holding, say, a total market index fund, the S&P 500, or even QQQ or many tech index funds, this would get hidden by the so-far very good growth of other tech stocks.
Other than smaller SaaS companies who offer things that are easily replaceable, I don't think many of the bigger ones can be replaced by AI; if anything, it might make them better. For instance I can't see us replacing our ticket management/support software, hosting, manufacturing/sales/stock software, accounting software, etc., but it would be great if we could leverage all those tools better via AI (some are already easy to leverage).
The interesting thing I've noticed is that software library authors could take a beating, though. Quite a few libs in the .NET world have gone down the monetized path; for all of the ones I've been using, I've just had AI remove them and implement native solutions. But none of these are large listed companies.
There are many stories on r/ClaudeCode of developers realizing the power of that particular model.
Think of all the solo developer webapp/mobile "hot" startups (FB, Craigslist, Instagram, SnapChat, Pinterest, etc.).
It is no joke that a seasoned developer can build similar apps with Claude in...days. Not just the app, but the entire infrastructure (e.g. scalable on AWS using Terraform). This includes setting up domain registration, Elastic IPs, provisioning the instances, and setting up keys/ELBs/email server/Twilio/etc.
What is astonishing is how well Claude can plan. You write a spec, and it will give you an entire plan including ASCII UML diagrams, infrastructure changes, database updates, the code itself, user stories, and test cases. It will then do all the work, including "tricky" things like SSHing over a bastion to run the scripts that it wrote on an instance behind a VPC.
The main obstacle now is the context window. If it were to increase 100x or so, Claude could probably manage an entire software company's codebase at scale.
I'm sorry to say, but there's no way you could use Claude for a month or so without feeling, "We are going to need way fewer software engineers."
There's only so much investable capital available; if it is going to hardware stocks, it's got to be coming from somewhere else. It's just a substitution toward hardware tech stocks. Economics 101.
If I were confident it were AI as the cause I'd be seriously tempted to buy right now
Take Adobe for example: they're getting pummeled and there's no way it's due to AI
No-one's building an in-house Photoshop clone to replace them
AI also isn't letting any competitors in. Geez, if it were as easy as cloning their product, you'd have a mile-high mountain of VC money to fund it even pre-AI.
It's not that AI will replace software. It's that AI will replace people using software. Fewer workers means lower sales. There will be exceptions but a big chunk of the B2B space is basically going away.
Some 7-15% down in a trading day is a lot for an established corporation. I consider Salesforce dropping 7% without some obvious trigger to be at least somewhat newsworthy, and from the first sentences in the article I get the impression that The Economist is sitting on more examples like that.
A lot of people are tense about the AI venture ouroboros and what it might mean for future software, especially people with money and little to no experience actually deploying software.
Edit: At the time I saw some memes claiming that roughly 1.5 trillion dollars in market value had evaporated, which if true is not a small sum.
This article crystallizes something I witnessed firsthand last week.
Overheard a guy at a restaurant explaining how he builds phone apps with AI and no coding experience. When asked how he verifies the code works, he said he pastes it into a different AI to explain it.
That's the "slopware" problem in action. The code compiles. It might even work. But there's no understanding of what it's actually doing, no ability to debug when it breaks in production, no awareness of the technical cruft accumulating with every prompt. That's a problem for people creating software for others and is a huge opportunity for software developers to take prototypes and build real stuff.
Does anyone remember the RAD days of the 90s?
On the flip side, people making software to solve THEIR problems don't need to make anything production quality. It's for a single user: themselves! Maybe the LLMs are good enough now that people don't need to buy or subscribe to software that solves trivial problems, as they can build their own solutions. Maybe the dream of Smalltalk, HyperCard, and even the early web, where anyone can use the computer for what it was meant for, is finally here?
Meanwhile, in the real world, as a software developer who uses every possible AI coding agent I can get my hands on, I still have to watch it like a hawk. The problem is one of trust. There are some things it does well, but it's oftentimes impossible to tell when it will make some mistake. So you have to treat every piece of code produced as suspect and with skepticism. If I could have automated my job by now and been on a beach, I would have done it. Instead of writing code by hand, I now largely converse with LLMs, but I still have to be present, watching them and verifying their outputs.
Yeah but just look at what happened within the last 2 years. I was not convinced about the AI revolution but I bet in another 2 years, we won't be looking at the output..
Not so sure; there are idiosyncrasies now within the various models, I suspect all this is the result of RLHF, and they cause side effects. I'm not sure that more attention-is-all-you-need is necessarily going to give us another step change - maybe more general intelligence, but not more focus. Possibly we also soon end up with grokked AIs on all sides, pushing their agenda whatever you asked... Gemini: "no this won't work with Cloudflare, I created your GCP account, there you go" OpenAI: "I am certain you really wanted me to do all these other tasks and I have done them, you should upgrade your tokens plan" etc. (you know how to fill in for DeepSeek and Grok already, right)
I've been coming around to the view that the time spent code-reviewing LLM output is better spent creating evaluation/testing rigs for the product you are building. If you're able to highlight errors in tests (unit, e2e, etc.) and send the detailed error back to the LLM, it will generally do a pretty good job of correcting itself. It's a hill-climbing system; you just have to build the hill.
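As a sketch of what "building the hill" can look like in practice: the test runner below is real pytest, while ask_llm_to_fix is a hypothetical stand-in for whatever agent or API you drive.

    # Sketch of a test-feedback loop: run the suite, feed failures back to the LLM, repeat.
    # Only subprocess/pytest are real here; ask_llm_to_fix is a hypothetical placeholder.
    import subprocess

    def run_tests() -> tuple[bool, str]:
        result = subprocess.run(
            ["pytest", "-x", "--tb=long"],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr

    def ask_llm_to_fix(error_report: str) -> None:
        """Hypothetical: send the failing test output to your coding agent of choice."""
        raise NotImplementedError

    for attempt in range(5):   # bound the loop; it's hill climbing, not magic
        ok, report = run_tests()
        if ok:
            print("all tests green")
            break
        ask_llm_to_fix(report)
    else:
        print("still failing after 5 attempts; time for a human")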
The crash is indiscriminate, which is really disheartening. Even infra software is getting demolished; no LLM is going to replace something like MongoDB, but it's all traded under the same umbrella.
It has already started; freelance developers are making a lot of money already.
I remember watching a developer having to fix a company's entire codebase because they went full AI and now nobody knows shit anymore.
If you are a good senior developer, now is indeed the best time for you to become a millionaire.
Company after company can no longer fix things; they are bringing in external developers to stay afloat.
Remember the tweet that was popular last year about the AI bot deleting the production database and setting the company back to zero?
Google "Project Genie": it allegedly can take your input and the AI will make it rain, which drove investors into panic mode.
They think that you can create GTA6 like that.
It was the perfect storm: clueless investors + the whole AI bubble already bursting if you are following non-biased news.
Or it could be just the good old cyclical stock market correction after years of entire segments of the indices' growth being driven purely by software.
Of course such an angle would require much more research on the part of the journalists for much less spotlight than what you would get for free by just singing to the tune of AI doomerism.
... because they've been driven by years of bad leadership, monopolistic scheming, and investor speculation?
AI is just the latest symptom, IMO.
We normalized growth over revenue. Governments around the world have been pressured by Big Tech to dismantle anti-trust and regulation. We glorified shipping slop, suppressing unions, and pretending like programmers were temporarily embarrassed founders.
The stocks are dropping because our system can't sustain these practices, IMO.
AI replacing vendors feels like a strange risk, though I'm not sure vendors view things through a technical lens. Security concerns and service maintenance alone, IMO, make writing internal software a large proposition - one for which I would want a trusted vendor, if it weren't a hobby project and I could afford that. Particularly if that data being lost or broken would severely harm a business.
There are also already frameworks in languages like Python that make putting up an internal website very, very simple. If you don't need production grade, you might have already had a pretty low barrier to entry: if you have the skills to figure out how to host the service you just vibe coded, you can probably figure out some basic Django to throw data into its ORM, or find libraries that do the work for you.
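For a sense of scale, a minimal sketch of the "basic Django plus its ORM" point; this would live in an app's models.py/admin.py inside a normal Django project, and the model and fields are made up.

    # Sketch: one made-up model plus Django's built-in admin gives a CRUD internal site.
    # Lives in an app's models.py / admin.py inside a normal Django project.
    from django.contrib import admin
    from django.db import models

    class InventoryItem(models.Model):
        sku = models.CharField(max_length=64, unique=True)
        description = models.TextField(blank=True)
        quantity = models.IntegerField(default=0)
        updated_at = models.DateTimeField(auto_now=True)

        def __str__(self) -> str:
            return f"{self.sku} ({self.quantity})"

    # Registering the model is all it takes to get list/search/edit screens under /admin.
    admin.site.register(InventoryItem)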
AI does feel in those technical ways to be an overstated risk, to me at least.
Far more worrying to me is the breakdown of the USA and its role. We are going to have blocs of software and hardware entirely from competing geopolitical regions, which may not be able or authorized to communicate with one another. Any businesses in the USA with significant CA or EU marketshare right now will decline in value to the degree client companies choose, or are told, to stop using USA systems.
(My own governor in California outright antagonized the Europeans at Davos calling them "pathetic" while telling them to get tough on Trump, which means in practice, stop using US, meaning yes California, tech goods and services. A lot of revenue from tech comes from overseas, and we are going to lose at least some portion of that. Particularly in California which already has budget problems with what revenue it's got. Stunning how even The Guardian treated those remarks as "tough" and not insane and self-destructive... sadly it's nothing compared to the worst of the US right now.)
So, where do you throw investment right now? To the US where the marketshares will likely decline, and the political and trade environment is insanely uncertain, but there is momentum on AI and generally decent hardware design, and the existing software companies and knowledge? To the EU or Canada where maybe a nascent software industry will take hold, or perhaps American companies will relocate talent if the USA collapses into civil conflict? To China, if they end up becoming a hegemon, given their strength in hardware and their growing efforts to invest in software alternatives?
I suppose I've read that markets don't react to "tensions," and maybe this is unprecedented in modern memory, but I think about these things more than AI.
I would add: open source throws additional curveballs. The EU wants to push for open source, and that is admirable, but I wonder what the sustainable funding model would be, and how that could attract attention. I wonder about business models and ability to generate return on investment.
I would think the saner solution is allowing proprietary companies, but imposing technical standards which companies collaborate on, enabling interoperation. Am I mistaken, that the EU is trying to do this with the DMA? I have heard general overtones, but I haven't looked at it very closely, and our media doesn't cover EU tech regulations in much detail in the US, though in a decent world it would, I wish it would.
It always amuses me because the people complaining about stocks going down are always the same people who are causing them to go down. Losing money was a choice that those people collectively made. They could have chosen to act differently, in light of the optimistic long-term future.
No doubt! Collective action is a solved problem! Why do people do things other than the obvious Right Things we can all agree on? Must be some kind of mass psychosis…
Software will be easy to create, which will kill moats and margins on existing products. The game is up for pure SaaS. Smart money started pricing this in one year ago.
For a lot of SaaS firms, a big part of their value is the domain knowledge and best practices encoded in the software.
Current AIs often do a bad job of that. Sure, they know a lot of it. But they also get a lot of it wrong, and can’t tell the difference between genuinely good advice, and advice that sounds good but is practically worthless or even harmful.
(Of course I’m biased since I work for a SaaS firm. But I’m talking about them in general, not just my current employer.)
I'm not sure how realistic it is to expect AIs to get detailed hands-on domain knowledge. A lot of this stuff humans learn by doing and by experience. AI models don't learn anything by doing and experience. A model vendor can't possibly encode all that experience into their training data, and even if they try, the problem is a lot of it will be vertical-specific, country/region-specific, and it is forever changing. SaaS firms have professional services and sales consulting teams who are constantly talking to customers about their actual business problems, and they feed that accumulated wisdom back to product management and data science, who in turn help engineering encode it into the product.
From what I've personally seen in SaaS AI agent development – if you try to build an AI agent to give customers advice in a particular business domain, you need to do a huge amount of work validating the answer quality with actual domain experts, and adjusting the prompts / RAG documents / tool design / etc to make sure it is giving genuinely useful advice. It is really easy to build a system which generates output which sounds superficially good, but an actual domain expert will consider wrong or worthless.
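A minimal sketch of that validation loop (run_agent is a hypothetical stand-in for the prompt/RAG/tool pipeline under test, and the file names are made up): generate answers over an expert-curated question set and queue them for expert review, rather than trusting how plausible they sound.

    # Sketch: run the agent over expert-curated questions and queue answers for review.
    # run_agent is a hypothetical placeholder; file names are made up.
    import csv, json

    def run_agent(question: str) -> str:
        """Hypothetical: call the domain agent being evaluated and return its answer."""
        raise NotImplementedError

    with open("expert_questions.json") as f:   # curated by domain experts, not the model
        cases = json.load(f)                   # e.g. [{"id": 1, "question": "..."}, ...]

    with open("review_queue.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "question", "agent_answer", "expert_verdict"])
        for case in cases:
            answer = run_agent(case["question"])
            # Experts fill in the last column: correct / plausible-but-wrong / worthless.
            writer.writerow([case["id"], case["question"], answer, ""])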
Was the hard part ever really the software, though? It's the Service part of SaaS that seems to provide the moat. Lock-in, habits, workflows, integrations, and trust. And don't discount the appeal of making some part of your operations "someone else's problem." Could you hire engineers or use an LLM to make your own Google Docs? Probably, yeah, but would that be worth the headache of being responsible for a bespoke internal document system?
You might think you can, for a while. Been there, done that. But you probably can not do so sustainably in most cases. Even if you could, would you really be better off building vs. buying? Outsourcing development, operations, and maintenance is almost always the better choice, letting you focus on the things you do uniquely, differentiably, or meaningfully better.
"We have this awesome internal version of Docs that we're responsible for fixing, upgrading, and doing support for" is not the flex "AI can code anything!" aficionados think it is. Especially when you also have similar internal versions of Sheets, Jira, Slack, GitHub, Linux, Postgres, and 100 other tools.
Making your own Google Docs is stupid unless your company's core business is document management.
OTOH Replacing SAP with a bespoke system will make a lot of sense for many companies.
SAP is already the worst of both worlds. It'll have been highly customized for your flow so you've got all of the headaches of bespoke software and all of the headaches of SaaS. And unlike Google Docs, it'll be highly integral to your core business.
Companies pay millions and millions to get away from bespoke software, but not simply because of the costs. Companies want to do their core business, they don't want to also be a software enterprise, and assume all the risks that entails. Even if AI makes creating software 10 times less expensive, that doesn't really change.
You are aware of the long history of organizations being absolutely screwed by bad ERP implementations, right? Nike's 2001 issue, the horrific Birmingham Oracle implementation, Avon, etc.
The problem is an AI can figure out habits and workflows pretty seamlessly. Lock-in is artificial and loses power when it's really easy to make a competing app for large swaths of web apps.
Integration is likely the most valuable part of the puzzle, but it's also prone to disruption.
I think all that's left are like <50 apps, each with their own very bespoke and "power user"-ready interface.
Even then, I would expect most orgs would want to contract out to a company that manages an instance of that open source software. That management company could undercut bigger players because they don't need as many engineers working on features. I don't see where the LLM comes in and shifts the calculus here.
Can't wait for every hospital to create their own patient record system, every accounting office to create their own accounting software, every car service to create their own timebooking solution, etc.
I wonder when a "virtual person" will be able to replace carefully-coded business software?
I.e., before software automates a business process, it's typically done by hand, by a real person.
What if someone sells a "virtual person" that's capable of doing the job? What if that "virtual person" is harder to train than a real person, but orders of magnitudes easier than writing custom software or custom business rules?
More importantly: What if the "virtual person" can explain the job they do much better than trying to read source code? That's very useful in ~30ish years when the "virtual person" understands the business process better than the people in the company, and someone is trying to update / streamline processes.
No, they don't distrust AI; they now may start to distrust all the big service providers that are likely to eventually be eradicated by AI now that everyone can prompt a browser. This perhaps will also finally kill Microsoft Access, which has been the closest thing to AI doing the work instead of you for so long. Then all the do-it-yourself enterprise-grade systems became SaaS, so it's right for Salesforce and friends to go fck themselves once in a lifetime for standing in the way of actual software ownership.
I know, you are saying - they will adapt. Perhaps, while also cutting 40% (if not more) of personnel during the pivot, and perhaps also facing more challenges from faster-moving competition.
Like, look for a second - why didn't Google create what the Perplexity newsfeed is, given they actually did, like, 10 years ago and then close to nobody was using it. The equilibrium seems super unstable. What happens if a smart kid devises a way to compress this information 10x faster? That immediately means neural chips stall.
This volatility is something, not a joke. The second-order effects may be unforeseeable in an unparalleled way. Besides, the Luddites organize much better in 2026, given Reddit etc.
https://archive.ph/37Hwn
No, it is incredibly streamlined because it is tailored specifically to achieve this modernization.
The paid program can do it because it can accept these files as an input, and then you can use the general toolset to work towards the same goal. But the program is clunky and convoluted as hell.
To give an example, imagine you had tens of thousands of pictures of people posing, and you needed to change everyone's eye color based on the shirt color they were wearing.
You can do this in Photoshop, but it's a tedious process and you don't need all $250/mo of Photoshop to do it.
Instead make a program that auto grabs the shirt color, auto zooms in on the pupils, shows a side window of where the object detection is registering, and tees up the human worker to quickly shade in the pupils.
Dramatically faster, dramatically cheaper, tuned exactly for the specific task you need to do.
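As a sketch of how small such a tool can be (Pillow only; the bounding boxes are hypothetical fixed regions standing in for real detection, and the folder names are made up):

    # Sketch: sample the shirt colour, crop/zoom the eye region, and queue it for a
    # quick manual pass. Bounding boxes are made-up stand-ins for real detection.
    from pathlib import Path
    from PIL import Image, ImageStat

    SHIRT_BOX = (300, 600, 700, 900)   # hypothetical torso region (left, top, right, bottom)
    EYE_BOX = (400, 150, 600, 250)     # hypothetical eye region

    out_dir = Path("review_queue")
    out_dir.mkdir(exist_ok=True)

    for path in Path("poses").glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        shirt_rgb = tuple(int(c) for c in ImageStat.Stat(img.crop(SHIRT_BOX)).mean)
        eyes = img.crop(EYE_BOX).resize((600, 300))   # zoomed-in crop for the human to touch up
        eyes.save(out_dir / f"{path.stem}_eyes.png")
        print(f"{path.name}: shirt colour ~ {shirt_rgb} -> pick matching eye colour")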
I think use cases like that will be where "AI" has the biggest wins.
That's a task that I could automate as a developer, but other than LLM "vibe coding", I don't know that there's a good way for a lay person to automate it.
There are two forms of business software gen AI coding is 100% going to eat:
Because neither of these makes economic sense for commercial companies to develop targeted products for. Consequently, you got "bundled" generalized apps that sort of did what you wanted (GP's example) or fly-by-night one-off solutions that haven't been updated in decades.
The more interesting questions are (a) who is going to develop these new solutions and (b) who is going to maintain these new solutions? In-house dev/SRE or newly more-efficient (even cheaper) outsourced? I'd bet on in-housing, as requirements discovery / business problem debugging is going to quickly dominate delivery/update time. It already did and that was before we boosted simple app productivity.
> It's not just a converter, it's a gui with the tools needed to facilitate a quick manual conversion.
is this like a meta-joke?
> I have a prime example of this were my company was able to save $250/usr/mo for 3 users by having Claude build a custom tool for updating ancient (80's era) proprietary manufacturing files to modern ones.
The funny thing about examples like this is that they mostly show how dumb and inefficient the market is with many things. This has been possible for a long time with, you know, people, just a little more expensive than a Claude subscription, but would have paid for itself many times over through the years.
It's not just a joke, it's a meta-joke! To address the substance of your comment, it's probably an opportunity cost thing. Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.
Now with Claude, it's easy to make a quick and dirty tool to do this without derailing other efforts, so it gets done.
> Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.
Agreed absolutely, but that's also what I'm talking about. It's very clear it was a bad tradeoff. Not only $250/month x three seats, but also apparently whatever the opportunity cost just of personnel tied up doing "2-3 files a day" when they could have been doing "2-3 files an hour".
Even if we take at face value that there are no "programmers" at this company (with an employee commenting on hacker news, someone using Claude to iterate on a GUI frontend for this converter, and apparently enough confidence in Claude's output to move their production system to it), there are a million people you could have hired over the last decade to throw together a file conversion utility.
And this happens all the time in companies where they don't realize which side of https://xkcd.com/1205/ they're on.
It's great if, like personal projects people never get started on, AI shoves them over the edge and gets them to do it, but we can also be honest that they were being pretty dumb for continually spending that money in the first place.
We have no programmers on staff; we are not a tech company.
I know we are in a bubble here, but AI has definitely made its way out of Silicon Valley.
The problem with this reasoning is it requires assuming that companies do things for no reason.
However possible it was to do this work in the past, it is now much easier to do it. When something is easier it happens more often.
No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
> There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
I wonder how much of the benefit of AI is just companies permitting it to bypass their process overhead. (And how many will soon be discovering why that process overhead was there)
Sure, there's a lot of process that is entirely justified, but there's also a whole lot of process that exists for reasons that are no longer relevant or simply because there are a lot more people whose job it is to make process than whose job it is to stop people from making too much process.
> No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in house such that you need a very obvious ROI before you attempt it especially since in house efforts can fail.
I mean, I'm absolutely familiar with how company decision making and inertia can lead to these things happening, it happens constantly, and the best time to plant a tree is today and all that, but the ex post facto rationalizations ring pretty hollow when the solution was apparently vibecoded with no programmers at the company, immediately saved them $750 a month and improved their throughput by 8x.
Clearly it was a very bad call not to have someone spend a couple of days looking into the feasibility of this 10 years ago.
>The problem with this reasoning is it requires assuming that companies do things for no reason
Experience shows that that's the case at least 50% of the time
Our company just went through an ERP transition and AI of all kinds was 0% helpful for the same reason it’s difficult for humans to execute: little to no documentation and data model mismatches.
surprising considering you just listed two primary use cases (exploring codebases/data models + creating documentation)
Exploring a codebase tells you WHAT it's doing, but not WHY. In older codebases you'll often find weird sections of code that solved a problem that may or may not still exist. Like maybe there was an import process that always left three carriage returns at the end of each record, so now you got some funky "lets remove up to three carriage returns" function that probably isn't needed. But are you 100% sure it's not needed?
Same story with data models, let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
Adding that this just gets worse when databases are peppered with direct access by vibe-coded applications that don’t look at production data or gather these insights before deciding “yeah this sounds like the format of text that should go in the column with this name, and that’s the column I should use.”
And now there’s an example in the codebase of what not to do, and other AI sessions will see it, and follow that pattern blindly, and… well, we all know where this goes.
> creating documentation
How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).
IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.
If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.
What's with this assumption that there's no human involvement? I don't just say "hey, scan this 2M LOC repo and give me some docs"... that would be insane.
The AI is there to do the easy part: scan a giant spaghetti bowl and label each noodle. The human's job is to attach descriptions to those noodles.
Sometimes I forget that people on this site simply assume the worst in any given situation.
I don't find this surprising. Code and data models encode the results of accumulated business decisions, but nothing about the decision-making process or rationale. Most of the time, this information is stored only in people's heads, so any automated tool is necessarily blind.
This succinctly captures one of the key issues with (current) AI actually solving real problems outside of small "sandboxes" where it has all the information.
When an AI can email/message all the key people who have the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution - then human jobs are in real trouble.
Until then, AI is just a productivity boost for us.
The AI will also have to be trained to be diplomatic and maybe even cunning, because, as I can personally attest, answering questions from an AI is an extremely grating and disillusioning experience.
There are plenty of workers who refuse to answer questions from a human until it's escalated far enough up the chain to affect their paycheck / reputation. I'm sure that the intelligence being artificial will only multiply the disdain / noncompliance.
But then maybe there will be strategies for masking from where requests are coming, like a system that anonymizes all requests for information. Even so, I feel like there would still be a way that people would ping / walk up to their colleague in meatspace and say “hey that request came from me, thanks!”
Please don't feed people LLM generated docs
i love the assumption by default that "ai generated" automatically excludes "human verified".
see, i actually read and monitor the outputs. i check them against my own internal knowledge. i trial the results with real troubleshooting and real bug fixes/feature requests.
when it's wrong, i fix it. when it's right, great: we now have documentation where none existed before.
dogfood the documentation and you'll know if it's worth using or not.
If it is not a small fee, I do wonder - is there still an advantage to having a provider one may take out a lawsuit against if something goes wrong? To what extent might liability, and the security vetting that comes with scaled usage, still hedge against AI, in your view?
It's generally a tradeoff decision between the comparative advantage of a vendor versus {price cost + contracting cost}.
Contracting cost is the difference in costs for a contract across companies versus a purely internal project. This could involve the lawyers on both sides, the time taken to negotiate which party is responsible for what deliverable / risk, the cost to enforce the contract, the time taken for negotiations / iterations, etc.
One efficient company doing it internally is obviously efficient. Two inefficient companies negotiating a contract is obviously inefficient. The interesting questions are the other 2 quadrants, where the answer may change between the LLM case and non-LLM case.
Well in that case the provider is likely paying for insurance and charging you a mark up, so you could likely just buy the insurance and save the markup anyway.
I worked on a product that had to integrate with Salesforce because virtually all of our customers used it. It must have been a terrible match for their domain, because they had all integrated differently, and all the integrations were bad. There was virtually no consistency from one customer to next in how they used the Salesforce data model. Considering all of these customers were in the same industry and had 90% overlapping data models, I gave up trying to imagine how any of them benefited from it. Each one must have had to pay separately for bespoke integrations to third-party tools (as they did with us) because there was no commonality from one to the next.
One thing that's interesting is that their original Salesforce implementations were so badly done that I could imagine them being done with an LLM. The evergreen stream of work that requires human precision (so far, anyway) is all of the integration work that comes afterwards.
> So what happens is a corporation ends up spending a lot of money for a square tool [SaaS] that they have to hammer into a circle hole.
You are assuming that corporations have the capability to design the software they need.
There are many benefits to SaaS software, and some significant costs (e.g. integration).
One major benefit of SaaS is domain knowledge and most people underestimate the complexity of even well known domains (e.g. accounts).
Companies also underestimate the difficulty of aligning diverging political needs within the business, and they underestimate the expense of distraction on a non-core area that there is no business advantage to becoming competent at. As a vendor sometimes our job was simply to be the least worst solution.
At least that's what I saw.
This is true in many cases and not in many others. Another true one is payments - it's complex AF and no one will sit down and vibe code it. A CRM? Easy in many cases. Some workflow tool? Easy, they know the exact workflow.
So, sure, some products will go the way of the dodo and some will not.
Only the still-meek vibe code payments; the truly brave simply install the Stripe MCP directly in their on-site chat window.
>But it does reduced by an order of magnitude the amount of money you need to spend on programming a solution that would work better
Could you share any data on this? Are there any case studies you could reference or at least personal experience? One order of magnitude is 10x improvement in cost, right?
I'm not sure it's a perfect example, but at least it's a very realistic example from a company that really doesn't have time and energy for hype or fluff:
We are currently sunsetting our use of Webflow for content management and hosting, and are replacing it with our own solution which Cursor & Claude Opus helped us build in around 10 days:
https://dx-tooling.org/sitebuilder/
https://github.com/dx-tooling/sitebuilder-webapp
Thanks for the link.
So, basically you made a replacement for webflow for your use case in 10 days, right?
That's fair to say, yes, with the important caveat that it isn't a 1:1 replacement of Webflow, which is exactly the point.
I’m not sure the world needed yet another CMS
It doesn't. The person is saying they built just the functionality they needed. Probably 25% of a CMS. That's the point.
Exactly.
And the big advantage for us is two things: Our content marketers now have a "Cursor-light" experience when creating landing pages, as this is a "text-to-landing-page" LLM-powered tool with a chat interface from their point of view; no fumbling around in the Webflow WYSIWYG interface anymore.
And from the software engineering department's point of view, the results of the work done by the content marketers are simply changes/PRs in a git repository, which we can work on in the IDE of our choice; again, no fumbling around in the Webflow WYSIWYG interface anymore.
This is the benefit few understand properly. The storage layer is where you get a lot of benefits.
Why waste your time on something that isn't your core business when, presumably, the SAASes of the world will use the new tech and lower prices as well?
In most cases they can't. The cost they face is sales and marketing. Acquiring a customer costs money. Churn happens.
Oh, but that doesn't matter. SaaS tools aren't bought by the people that have to use them. Entire groups in big companies (HR & co) are delegating the majority of their job to SaaS and all failures are blamed on the people who have to interact with them while they are entirely ancillary to their job.
> Modern AI probably could've done almost all of the work for him.
No way. We're not talking about a standalone AI-created program for a single end user, but an entire integrated e-commerce enterprise system that needs to work at scale and volume. Way harder.
I also have pretty hefty skepticism that AI is going to magically account for the kinds of weird-ass edge cases that one encounters during a large data migration.
Just like with coding, the AI can reach out to a human for clarification on what to do.
It's not that AI is magically going to do it, it's that the human running the migration now has better tools to generate code that does account for those one-off edge cases.
I was interviewing with a company that has done ETL migration, interop, and management tools for the healthcare space, and is just dipping their toes into the "Could AI do this for us or help us?" question.
Their initial answer/efforts seem to be a qualified - very qualified - "Possibly" (hah).
They talked of pattern matching and recognition being a very strong point, but yeah, the edge cases tripping things up, whether corrupt data or something very obscure.
Somewhat like the study of MRIs and CTs of people who had no cancer diagnosis but would later go on to develop cancer (i.e. they were sick enough that imaging and testing was being ordered but there were no/insufficient markers for a radiologist/oncologist to make the diagnosis, but in short order they did develop those markers). AI was very good at analyzing the data set and with high accuracy saying "this person likely went on to have cancer", but couldn't tell you why or what it found.
Bespoke software development is a huge market. It is incredibly expensive. AI coding agents are not perfect, but they are just usable enough to be coached through by a semi-IT-literate business person to write some scripts or a little CRUD site that's needed for a specific LOB task (typically a 30k-250k range project).
The alternatives before were to propose the case to IT and, if lucky, have it put on the plan, outsourced to consultants, and delivered 18 months from now for an astronomical investment in both time and cost. Or go at it yourself with Excel and VBA.
The AI thing will be just a 'good enough', barely working clutch of ugly code. Then again, so was most of the consultant-produced code.
This exactly. How many small businesses were running some sort of Access database built by someone who knew just enough to meet their business process needs? "Brenda always orders these parts on Tuesday so make sure there is a Tuesday ordering table." Eventually some SaaS solution came in and they evolved their business process because of the proposed benefits. I expect we are going to see a resurgence of good-enough software created for the real-world hodgepodge of business practices. Not a bad thing in the short term, as it will create adequate efficiency, but long term we are going to reap the technical debt. My prediction is that the next SaaS evolution is going to be platforms for these random solutions. Just as Salesforce captured the CRM market, someone is going to capture the AI agent market where users can write and host their LOB code agnostic to the rest of their infrastructure stack and 'slightly' reduce that technical debt.
Hmm, I wonder if it would be cheaper to hire a couple of software engineers to vibe-code custom SaaS apps on top of the company's existing data layer instead of paying for a hundred different SaaS subscriptions.
Financial considerations aside, one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...
I'm at a large company that is building connections between all of its different financial systems. The primary problem being faced is NOT speed to code things; the primary problem at large companies is getting business aligned with tech (communication) and getting alignment across all the different orgs on data ownership, access, and security. AI currently doesn't solve any of this. Throw in needing to deal with regulation/SOX compliance, and all the progress you think AI might make just doesn't align with the problem domains.
Agreed. The SWEs already receive a steady supply of conflicting demands from every possible business unit; the value add for these teams is a working PMO to prioritize the requests coming in.
Totally makes sense. Turns out that a lot of what Palantir's "Forward Deployed Engineers" do is navigating these bureaucratic and political obstacles to get access to the data: https://nabeelqu.co/reflections-on-palantir -- which may be Palantir's real secret sauce, rather than the tech itself.
> getting business aligned with tech (communication) and getting alignment across all the different orgs
This is what a CEO is supposed to do. I wonder if CEOs are the ones OK with their data being used and sent to large corps like MS, Oracle, etc.
I haven't seen what you're suggesting from a CEO at a large company whose primary business is non-software related. At some point in a business's life there's an accumulation of so many disparate needs and systems that there can be many, many layers of cross-org needs for fulfilling business processes. This stuff is messy.
I think I saw it asserted that it's easier for a new company, which definitely makes sense as you don't carry along all the baggage.
I work in large projects like this, the CEO doesn't get involved in the little "computer project" except during the project kickoff. Even then, it's just to "say a few words about the people I admire on this team". In large global companies these projects are delegated 3 or 4 levels below the CEO at the highest.
Makes me wonder if they are getting ripe for disruption. Not by a new business model, but a new operating model where a CEO will be tech/ai-aware and push through all these kinds of things.
There's definitely a market for on-prem solutions that don't involve sending all your data to someone else, while reaping the benefits.
This is also generally true for all mid to large businesses I've ever worked at.
The code they write is highly domain-specific, implementation speed is not the bottleneck, and their payroll for developers is nothing compared to the rest of the business.
AI would just increase risk for no reward.
> one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...
Many larger enterprises do both – buy multiple SaaS products, and then have an engineering team to integrate all those SaaS products together by calling their APIs, and build custom apps on the side for more bespoke requirements.
To give a real world example: the Australian government has all these complex APIs and file formats defined to integrate with enterprises for various purposes (educational institutions submitting statistics, medical records and billing, taxation, anti-money laundering for banks, etc). You can't just vibe code a client for them: the amount of testing and validation you have to do with your implementation is huge, and if you get it wrong, you are sending the government wrong data, which is a massive legal risk. And then, for some of them, the government won't let you even talk to the API unless you get your product certified through a compliance process which costs $$$.
Or, you could just buy some off-the-shelf product which has already implemented all of that, and focus your internal engineering efforts on other stuff. And consider this is just one country, and dozens of other countries worldwide do the same thing in slightly different ways.
But big SaaS vendors are used to doing all that. They'll have modules for dealing with umpteen different countries' specific regulations and associated government APIs/file formats, and they'll keep them updated since they are forever changing due to new regulations and government policies. And big vendors will often skip some of the smaller countries, but then you'll get local vendors who cover them instead.
In the past, maybe it might have taken 2,000 engineers to build a Figma equivalent. Today, it might take 20.
Software will be cheap to develop so competition will be extremely high. Therefore, SaaS companies should not command a high PE ratio in the stock market anymore.
It's physical companies that should command a higher PE ratio. Energy, materials, and chip companies to be exact. This was reversed pre-LLM era.
Which part of Figma can you build with 20 engineers? I think we're still not close to a small team building real-time collaboration software at scale that actually works at the quality level that customers expect.
>It is much cheaper to pay a small monthly fee to a SaaS company.
It's not that cut and dried - it all depends on what your company needs from SaaS and how big it is. SaaS companies like Salesforce don't charge a "small monthly fee" - they charge 10s of millions of dollars per month for large corporations. It's not hard at all to push that money towards AI development and have a better solution built in-house now. Yes, it still takes serious project management skills, but so does integrating Salesforce or other large SaaS software.
> It is much cheaper to pay a small monthly fee to a SaaS company.
How often do software projects underestimate their cost and timeframe?
The LLM-written replacement may prove to be more expensive, but may appear to be cheaper initially when all of the costs and budgets are hidden in the “fog of war”.
How effective will vendor messaging warning clients away from writing their own vendor replacements be? Of course a vendor will play up their strengths and will try to play up the clients' weaknesses. It's like asking a used car salesman for their opinion of a car on their lot: they aren't seen as a neutral observer.
What a client like mine (hundreds of employees) pays for Atlassian tools can easily cover the expense of internalizing it. It's Jira plus Confluence mostly, it's not rocket science.
And that's just atlassian.
Start adding stuff that costs many, many yearly salaries (special software for managing inventories and warehouses) and it starts making sense to prototype alternatives internally.
I came to the conclusion that if it's not Teams/SharePoint or the moat is on the extreme legal complexity side (e.g. payrolls), you can at least think of building an alternative that is good enough without needing to be perfect.
Ugh, you are aware that Atlassian was providing an on-prem edition for years, right?
You also know how neglected those on-prem instances were?
No one updated them, no one wanted to pay for more CPU/RAM. File storage: I know people who got random requests to clean up files from projects because the company wouldn't buy more hard drives. Everyone was nagging at sysadmins that they were doing a bad job, and at Atlassian that JIRA sucks.
That is mostly why Atlassian pulled on-premise: companies would not update at all, wanted all the new features, and also wouldn't pay for the file storage, RAM, and CPU to make it work well.
Don't forget you will still need dedicated employees to deal with the AI-built solution - because existing employees have work to do.
What we pay for JIRA and Confluence would never offset the fact that we pay and it works; NOT A SINGLE EMPLOYEE CARES, as they have their own job to do.
Don’t forget the salary for every dev team having the Atlassian Jira Jockey to mess around with the board all day and make sure the next 7 epics worth of tickets are in the 9 columns and in prioritised order.
Where would we be without them!?
jira premium is $15/mo/user; for 300 users that's roughly $54k/year. you're saying $50k can cover developing the app inclusive of integrations, maintaining it, providing 24/7 service and 3 9s uptime (per the sla)? don't forget compliance and security. maybe the logic is everyone can be fired and replaced with agents?
Atlassian already had an on-premise option.
All the instances I remember seeing were neglected, not updated, and running on the lowest possible amount of resources. Everyone in the company nagged about how slow it was, but no one wanted to share budget to improve it.
So for me the experiment "it will be better and cheaper to build our own JIRA" was already done. It is going to be a cost center that no one will want to throw money at.
Yes. $50k goes a long way outside of the Bay Area.
There is no way you would get anything close to as good as JIRA. Your best bet with that budget would be trying to integrate an existing open source on-prem solution (not sure what that alternative is for JIRA).
> There is no way you would get anything close to as good as JIRA.
It would be hard to do worse. A packet of crayons and a scrap of paper is better than JIRA.
How many Post-it Notes will $50K buy?
Jira is crap, so the bar is low.
Oh please.
Yes, you wouldn't get something near as complicated as JIRA, but that would be a good thing! Look, it's enterprise software, so I'm sure there's somewhere that needs the overcomplicated permissions system, otherwise contractors are going to steal everything that isn't bolted down, but most places I've been don't need, and thus don't use, most of that crap. If the ticket can only go from planned to done by a certain group of users, backed by LDAP... let's just say, I'm not going to miss configuring which group gets which permissions.
JIRA's the perfect example of disruption, too. Everyone's got their bespoke workflow, and JIRA has to be customizable to suit all of them. Bespoke software just doesn't have to be.
I told my colleague that it would take less time for me to vibe code Jira than it would take him to configure it. Sounds crazy? Not so much: factor in the part of Jira you actually use (maybe 10%), the many choices and dimensions you have to configure, the time it takes and the complexity it brings. On the other side, the vibe-coded version has only the fields you want, most of the logic hard written in code (i.e. epic > story > task...), and you can do whatever you want with roles and authentication schemes.
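To make "hard written in code" concrete, here's a minimal sketch of what I mean (all names hypothetical, not my actual tool): the hierarchy and the workflow are just constants, so there is nothing left to configure.

    from dataclasses import dataclass, field

    # the "configuration" is just code: fixed hierarchy, fixed workflow
    HIERARCHY = ["epic", "story", "task"]              # epic > story > task
    TRANSITIONS = {"planned": ["in_progress"],
                   "in_progress": ["planned", "done"],
                   "done": []}

    @dataclass
    class Issue:
        kind: str                                      # one of HIERARCHY
        title: str
        status: str = "planned"
        children: list = field(default_factory=list)

        def add_child(self, child: "Issue") -> None:
            # a parent may only contain the next level down
            parent = HIERARCHY.index(self.kind)
            if parent + 1 >= len(HIERARCHY) or HIERARCHY.index(child.kind) != parent + 1:
                raise ValueError(f"a {self.kind} cannot contain a {child.kind}")
            self.children.append(child)

        def move(self, new_status: str) -> None:
            if new_status not in TRANSITIONS[self.status]:
                raise ValueError(f"cannot go from {self.status} to {new_status}")
            self.status = new_status

Changing the workflow is a one-line diff instead of an afternoon in admin screens, which is the whole point.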
This reminds me of all the "i could turn the spreadsheet into a webapp in a weekend" type comments. Sure, you could get CRUD and a datatable working, but then a user is going to ask you for a custom field and you'll say "ok let me vibe code that, update the database, and then deploy", but the user will say "well in Jira I could just do that myself...". Then the next thing they're going to ask for is some kind of custom workflow utility, so you'll go to work vibe coding that feature and they'll say again "...but in jira that was already there". Meanwhile they'll ask you why they can't change the validation criteria on the custom field from before; they said it should be required but now there's a case where it's optional.
Pretty soon you're just re-implementing Jira while your users wait and get pissed because they could have just been using Jira all along. It's just like turning a spreadsheet into a webapp, inevitably you just end up trying to re-implement Excel.
> but the user will say "well in Jira I could just do that myself..."
> "...but in jira that was already there"
Must be a different Jira from the one I'm used to, where obvious features are never there and even if you can find the button it doesn't work.
two things:
1. AI being able to code well seems like it would also get pretty close/good at doing basically everything else you described. If coding is a game of reasoning and you can solve that, you have effectively solved reasoning, and you can likely map it to most other problems provided you have a sufficiently good harness and tool-calling setup.
2. Let's assume AI won't replace everyone as point (1) assumes, and it just replaces _most_ people. Under this assumption, we will likely see large swathes of layoffs. Many SaaS companies have a pay-per-seat model. Fewer people employed at companies = fewer seats being paid for = less SaaS revenue.
So not only is there a threat of companies just vibe coding various SaaS-es in house, but there is also a threat that the TAM of many SaaS products (which is typically proportional to the # of employees there are) will actually _shrink_ in size.
I think the main class of SaaS company that will remain in the medium term are the ones in legally touchy or compliance heavy industries - think healthcare, finance and security (workday for example). But even Workday will be affected by point (2) from above. Overall, I think the mid-long term outlook for SaaS, especially "SaaS", is not great.
"Small Monthly Fee" is a very loaded term here. Im in the negotiations for these platforms, the price that many of these companies command for their products will very often pay the salaries of a whole software department. Add to this the quality of support being the lowest possible option above "nonexistant" and I would say the risk to these SaaS companies is real.
The real benefit of these types of SaaS offerings was their ubiquity across multiple industries and verticals. If a company bought Salesforce, they could very readily find employees able to quickly onboard, since they would likely have used it at previous companies. AI software generation is changing this as more and more software being created is bespoke and increasingly one-of-a-kind, with these tools allowing companies to create software that fits their unique and specific needs.
My hot take here is that the moats previously enjoyed by SaaS companies will increasingly vanish as smaller and smaller teams can assemble "good enough" solutions that companies will adopt instead of paying giant chunks of their budget on pre-built SaaS tools that will increasingly demand more training to Onboard.
There is one big argument against these "good enough" solutions: commercial business software providers need to put a lot of R&D into finding generalized workflows that apply to as many clients as possible. Effectively, they find and encode current standard practices into their products. This is valuable from a business operations perspective in two ways: it's a good bet that transitioning the customer's operations to match the software is cleaning up internal processes, and it makes onboarding new employees easier because the tools and workflows should be much more familiar right from the start.
SaaS was always fueled by B2B buying within the same investor circle: Sequoia companies buying from other Sequoia companies, SoftBank companies buying from other SoftBank companies. Without this circular buying and selling of software, the whole B2B software market crashes.
>My hot take here is that the moats previously enjoyed by SaaS companies will increasingly vanish as smaller and smaller teams can assemble "good enough" solutions that companies will adopt instead of paying giant chunks of their budget on pre-built SaaS tools that will increasingly demand more training to Onboard.
why do people pay red hat/ibm for rhel? they earn pretty good margins too. to parent's point on software != code
> why do people pay red hat/ibm for rhel?
Because the guy who signed the purchase order had a good time golfing?
If AI can code, why do you think it cannot handle building, testing, debugging, securing, monitoring, supporting, incident handling, and so on?
Consider incident handling. What if your AI sets up monitoring that detects errors or outages, wakes up an agent, gives it the problem context, then sets it to work so it can debug the issue, produce a fix, then deploy it? You now have an end-to-end system that works 24/7. Many issues will probably be resolved before you've even noticed them.
If your response is, AIs won't ever be smart or capable enough to do this as well as humans, how has that same prediction worked out for coding?
The next generation of AI "coding" tools will essentially be SaaS companies in a box. Agents will code the app, but they'll also test it, debug it, support it, etc. And this will happen in months, not years.
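To spell out the monitoring-wakes-an-agent loop from above: the glue itself is trivial, and a minimal sketch might look like the following (the health endpoint, the log command, and especially the agent CLI are hypothetical; the agent is the part that doesn't exist off the shelf today).

    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "https://internal.example.com/healthz"   # hypothetical endpoint

    def healthy() -> bool:
        try:
            return urllib.request.urlopen(HEALTH_URL, timeout=5).status == 200
        except Exception:
            return False

    def recent_logs(n: int = 200) -> str:
        # grab the last n lines from wherever your logs live (journald here)
        out = subprocess.run(["journalctl", "-n", str(n), "--no-pager"],
                             capture_output=True, text=True)
        return out.stdout

    def wake_agent(context: str) -> None:
        # hypothetical CLI: hand the incident context to a coding agent that
        # can debug, propose a fix, and push a deploy -- the speculative part
        subprocess.run(["incident-agent", "investigate"], input=context, text=True)

    while True:
        if not healthy():
            wake_agent(f"healthz failed at {time.ctime()}\n\n{recent_logs()}")
        time.sleep(60)

Whether you'd trust the agent's fix to ship unattended is the real question; the plumbing isn't the hard part.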
> Consider incident handling. What if your AI sets up monitoring that detects errors or outages, wakes up an agent, gives it the problem context, then sets it to work so it can debug the issue, produce a fix, then deploy it? You now have an end-to-end system that works 24/7. Many issues will probably be resolved before you've even noticed them.
Have you ever been actually involved in trying to fix an error or outage? Like actually on an on-call rotation where you had to deal with reported issues?
Another reason could be that investors think companies are going to lay off a lot of their staff, and that will decrease SaaS revenue anyway.
yes, but this does fit into the head of MBA-bobo-management stylers, who believe ChatGPT will replace everyone :)
Hyper-custom software can allow your business flows to sync together a lot better than the alternative of using Zapier to glue together a bunch of mostly poor fits and ending up with Frankenstein processes.
Also, it allows you to pick and choose what you want from where.
We've just completed the first month of our internal CRM, which has replaced about $500 a month in subs with something that flows much better and enforces our own internal processes.
Yes, it's just that some companies will fail to adapt, but there will be new jobs.
That would justify a good multiple of 5 to 10. Not 30 or above as for high growth companies.
multiple of what? there is maybe one software company trading above 30x revenue - palantir. many companies growing at 20% trade at single digit revenue multiples.
Unqualified, it almost always means earnings (profits).
that is really not the case in software, people commonly go between EV/S and EV/FCF for high growth names. also earnings could mean: GAAP earnings, non-GAAP earnings, ebitda, ebita, FCF, FCF ex share based comp.
yes but investors don't know that
or for a more charitable comment, I think the issue people struggle with right now is how much of non-AI software will be replaced by AI-native versions. and it's not even a 1:1 mapping. we may see 5 different small companies replaced by a single AI interface. all TBD, but there's merit to avoiding that risk right now if you can just allocate to NVDA and GOOG instead
Lol. ServiceNow, Oracle, Workday, etc are not small monthly fees. That's what the market is shitting on. (Oracle is different, given the corruption and OpenAI grift angle.)
My buddy works for a company like these. He landed a $5M contract last year, which netted him almost $800k. There's a lot of fat to be cooked out of this stuff, and AI will help smaller entrants attack those margins.
AI-based startups like Vanta make it much easier for companies to meet the compliance bullshit the large companies require. Again, it will drive more competition == better values for customers.
> There's a lot of fat to be cooked out of this stuff, and AI will help smaller entrants attack
truer words have never been spoken. I work in some of these platforms and lead teams of developers who write customizations. These customizations are not rocket surgery: basic CRUD with some logic requirements that can't be met OOTB. It's very time consuming and therefore expensive for clients. The significant moat incumbents have is brand recognition and trust. On the other hand, a hot new "Agent First" consulting firm at 1/3 the price would be hard for a director not to at least experiment with.
> AI-generated code
AI "generated" code requires a large base of training data to draw from. If we all stop writing code then there will no new code written. Just rehashes of stolen ideas. There is no long tail to this industry or ideal.
> That's very expensive.
As long as you convince someone else to pay the bill, who cares? The real problem is: are you losing your competitive edge? If everyone else can crank out the same stolen crap you can, then there is no reason for you to even exist.
That's something I wonder as well. AI needs data to be usable. But what if all the data it gets is just generated by AI?
It has to annotate it too.
We've let models do this before. They devolve into an incomprehensible mess in very short order. Errors multiply. Lossy weight compression on erroneous material makes it all that much worse.
https://www.youtube.com/watch?v=QEzhxP-pdos
Foundational software requires that, but the foundation is pretty much completely built at this point. The workforce required to keep it running is but a tiny fraction of what was required to build it. The past has shown that innovation in hardware can push for the foundations to be rebuilt, but we've also already got computers basically everywhere now. There may not be some new innovation that requires the foundations to be completely rewritten again.
The little one-off programs that we thought would keep developers busy forevermore don't require engineers. They often don't even require code. LLMs can natively do a lot of things that historically would have required software.
Such black and white thinking. Even the little tools fall apart at the first sight of an edge case if they are fully vibed. Neither Opus nor Codex is good at architecture, and it's not clear they ever will be.
You still seem to be thinking about code in the traditional sense. A lot of the software I am using now in my non-tech business isn't rooted in code at all. It is simply asking an LLM to carry out a task. Still programming, of course, but not with a traditional programming language. The LLM will produce the expected results in real time. The intermediary step of building an executable is totally unnecessary.
In the olden days it would have taken considerable engineering effort to produce a comparable tool. That is no longer the case.
AI-generated code is not copyrightable and therefore cannot be protected through a conventional licensing scheme.
Support for this claim?
In the US, you can only copyright expressions made by humans. Which part of it do you have trouble with?
You can use AI for simple stuff.
For everything else, there’s open source.
Open source doesn't implement, host, and support itself. Some of these software-company stocks are companies selling open source software.
Software developers programmed themselves out of a job. They created a huge and growing set of free, tested, high-quality software in the form of open source that can be used for pretty much anything. LLMs will automate a lot of the remaining pieces.
Programmers programmed themselves out of a job. Software engineers will be just fine.
Programming yourself out of a job, is the final, most important command of a job.
cost will go down 70-90%
SaaS margins too.
Stock prices are very forward looking, so if half the hype being sold about AI is true, I would expect most software-centric companies to be devalued by Wall Street (as the testing, deployment, and support should be automated in the coming years... according to the AI CEOs).
However, if I were a Wall Street analyst and believed the AI dreams, I would further be concerned that software companies aren't taking advantage of the last remnants of value before software (and maybe labor) values go to zero.
If you've got a gold mine and have recently built the most efficient shovels in the world, why are they not bringing in mass amounts of workers to utilize these shovels before all the neighboring mines do? Once all that gold is on the market the price crashes, so it's better to be one of the first mines to get in and dig out all possible value first.
I think you either don't believe in the AI hype, which means a lot of silicon valley companies are tremendously overvalued. Or you do, in which case another huge part of silicon valley is overvalued especially when they are not looking to out-innovate their peers (as evidenced by downsizing), but just riding the wave of AI until what they are selling has no marginal value over some guy coding alone in his bedroom. SV is putting itself into a weird position, but still has some time for financial buffoonery before the party stops.
>If you've got a gold mine and have recently built the most efficient shovels in the world, why are they not bringing in mass amounts of workers to utilize these shovels before all the neighboring mines
Because they are completely consumed by the need to increase margins, which they think they will be able to do with AI by laying off a lot of people. But the SaaS economy is connected and based on per-user pricing, so as layoffs continue, the SaaS economy is showing its biggest weakness. All of the SaaS companies also seem to embrace AI so much that they would rather add another summarise button than actually make something which can't be copied easily by competitors.
Haha, maybe… you can stick your head in the sand all you want, but everyone I know whose output is code is delegating 100% of their work to Claude Code today; I cannot see this magically drawing a line at people whose output is configs and emails…
> > The fear is that these [AI] tools are allowing companies to create much of the software they need themselves.
If that's their fear, they don't know much about how your typical big business functions.
You've dealt with a large consumer bank? Many of them still run on IBM mainframes. The web front end is driven by pushing buttons and screen scraping 3270 terminal emulators. You would think a bank, with all its resources, could easily build its IT infrastructure and then manage all the technology transitions we've gone through over the past few decades. Clearly, they don't and can't. What they actually do is notice they have to adapt to the newfangled IT threat, hire hordes of contractors to do the work, then fire them when done. After it's done they go back to banking and forget all the lessons they've learnt about building and managing IT infrastructure.
If you want to see how banking and computers should be combined, look at the fintechs, not the banks. But for some reason I don't understand, traditional banks still out-compete fintechs. Maybe getting your head around both banking and running an IT business is too much for one human mind?
That same pattern is repeated everywhere. Why was everyone so scared of Huawei? It wasn't because they built the gear. It's because the phone telcos have devolved into marketing and finance companies who buy in the gear from companies like Huawei and rent it out. Amazingly, they don't know how to run the gear they purchased; instead they get the supplier to install it and maintain it. But that meant that what some eyes viewed as an organ of the Chinese Communist Party was running the country's phones with full access to every SMS and voice call. (Interestingly, IBM pulled the same stunt with the banks back in the day: you didn't buy a mainframe, you leased / rented it from IBM, and they maintained it.)
It's the same story everywhere I look. These big firms stick to the knitting. If you want to see total, utter incompetence in IT, go work for a firm whose core business doesn't revolve around IT for a while. These are the firms that still choose Microsoft, despite the fact they've seen Sony's Microsoft-based IT infrastructure torn apart so badly by North Korea that for a while they didn't know who their employees were, how much they owed creditors, or how much debtors owed them. Why do they choose Microsoft? Look around: who else allows you to outsource the know-how of connecting millions of computing devices in 1000s of offices to a redundant cloud infrastructure that lets them share data while providing a centralised authentication / authorisation infrastructure? There is only one choice, apart from developing it themselves, which is out of the question.
If those businesses did start using what passes for AI today to manage and develop their own IT infrastructure, the result would not be pretty. But for all the shit I'm throwing at them here, I'm confident they are smarter than that. They know their limitations, they haven't done it before, and they won't start doing it now.
Banks don't write software for the same reason that software companies don't store their own money
I disagree. For a long time now, the business of banking has been very much related to software. Software companies need a license to fully manage their money.
> AI-generated code still requires software engineers
No, they don't.
A domain expert armed with an Excel spreadsheet and the ability to write VBA macros will be enough for most business.
Excel spreadsheets have little to no validation logic to check that you're actually getting a good result, unless you have a secondary check (most spreadsheets are structured as "single entry" accounting, so they lack the checks)
A prime example of this was the Reinhart/Rogoff paper advocating austerity that was widely quoted, and then it was discovered that the spreadsheet used had errors that invalidated the conclusions:
https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...
Just because technology is in use and "works" doesn't mean it's always correct.
You are taking my comment way too literally.
The point is not that people will be using specifically Excel, but that most businesses only pay for software because it is the tool that gives them the most power to automate their processes. They don't need high availability, they don't need standards compliance, they don't need extensive automated tests, they won't need cloud engineers and SREs... all you need is some tool that can get the results you are looking for right now.
Academia already works like this. Software written for academic purposes is notoriously "bad" because it is not engineered, but that doesn't matter because it is good enough to deliver the results that researchers need. Corporate IT will also start looking like this, even at mid-sized companies.
An academic paper needs to deliver its output once, for the research. Maybe someone will try to replicate it later, but that's someone else's problem (and fairly often proves the output of the former to be wrong)
Some stuff in companies might be similar, but there's a lot of things that people use every day, in a lot of different ways, and the software needs to work correctly regardless. You can't just drop it like a hot potato once you've built processes around it.
As always, the first 80% takes 20% of the time/effort, the last 20% takes the other 80%.
I don't disagree with anything you say here - using a tool that lacks guardrails is fine for a lot of tasks, but the problem is when it's the only tool and it's used somewhere those guardrails go from "nice to haves" to something more critical.
I've been in ops for a long time and have encountered far too many "our IP addressing plan is just a spreadsheet with manual reconciliation".
I truly wonder if Excel and all its predecessors and direct clones (Google Sheets, etc.) are holding back industry from making something truly better and more reliable.
> holding back industry from making something truly better and more reliable.
What "industry"?
If you are talking about the software industry, then I'd say you are engaging in circular reasoning. If you are talking about all the other things that we actually need to do, and which have only incidentally become too reliant on software, then see my original point again: people don't need "better and more reliable" software to keep running their businesses.
If running your business to '90s standards is acceptable, sure, you can use AI to automate your manual processes with the same error rate and keep doing the same thing indefinitely.
But if the competitors have real software engineers and have used them to actually improve reliability, you'll be left behind.
What software engineers are being hired to work on:
Ok, but if your competitors are getting/using software from a supplier who has real software engineers, and using that to operate at a higher level of reliability, then the same argument goes through.
Sorry, but that logic is pure cope.
If you want to go down the value chain, then by definition the less valuable the software is and the easier to be commoditized. The automation is not going to help just the manager-turned-vibecoder, it's also going to help professionals to create FOSS alternatives that can be robust enough.
It's not going to happen overnight, but the trend is there.
> If you want to go down the value chain, then by definition the less valuable the software is and the easier to be commoditized.
I'm not sure that holds for what we're talking about - high-value software can afford to be somewhat flaky because it delivers enough value when it works to make up for it, software that's only marginally worthwhile needs to be reliable because if it isn't then it's not worth the bother. Commoditized fields are more competitive.
> The automation is not going to help just the manager-turned-vibecoder, it's also going to help professionals to create FOSS alternatives that can be robust enough.
Not convinced. In my experience these tools don't really help with creating high-quality software. Maybe they'll get there eventually (at which point we're all out of a job), but right now they can't "hit the high notes".
> Commoditized fields are more competitive.
Doesn't that also lead to the conclusion that "software engineers" are going to lose their ability to command high salaries, if the real value is in the domain expertise and not in the ability of optimizing some part of the business process?
> Doesn't that also lead to the conclusion that "software engineers" are going to lose their ability to command high salaries, if the real value is in the domain expertise and not in the ability of optimizing some part of the business process?
I mean the job has always required both - just being good at leetcode isn't enough to get paid well (except perhaps where there is a dysfunctional interview process), the key skill is being able to translate back and forth between the world of software and the world of business. Regular folk seemingly still find it difficult to think rigorously, in the way that fully correct automation requires, and AI hasn't actually helped with that any, so I think people with that skill will still command a premium. Work that doesn't benefit from rigour - being able to slap together a quick marketing site on wordpress or what have you - will pay badly if at all, but that was already the low end of the industry I think.
That's a software engineer who is limited to a mostly untyped macro language, with worse version control and poor tooling. It's not that software can't be written as an Excel spreadsheet, it's that doing so is just inefficient and failure prone.
I guess that's technically true, because "most businesses" are sole proprietorships without any employees... but they could get by just fine with a checkbook and a note pad.
But the reasons the business software sector grew far beyond Excel of the 1990s is because of the inherent limitations in scaling solutions built by business analysts inside of Excel. There's a vague cutoff somewhere in the middle of the SMB market where software architecture starts to matter and the consequences for fuckup are higher than the cost of paying for professionally made software with, importantly, a vendor on the hook for making sure it doesn't fuck up.
Uh, no. The main reason the software sector grew in the 90s was a particularly potent combination of FOMO, kickbacks, and strategically deployed cocaine.
Do you forget how much different offices looked back in the 90s?
No, no I don't. I also remember what tech sales looked like in the 90s and aughts. Jokes about strippers and blow were a cliche for a reason.
Yeah, because they were in popular TV shows and movies.
Maybe, just maybe, the markets are not as rational as these people think they are
Reminds me of that time when Nintendo's stock exploded after the launch of Pokemon go, only for traders to discover a few hours later that Niantic actually owned the game and correct the situation.
Nintendo is one of the three 1/3 owners in The Pokemon Company and an early investor in Niantic. I'd be curious how much was actually confusion as to who is making money vs just a normal hype cycle. I'd also be curious how much the game itself made the various owners vs what Nintendo made on the general interest in other Pokemon games/merchandise from the launch.
At the very least, the stock looks to have shot up for most of the launch month, with the peak not occurring until July 18th and the stock still being significantly higher at the end of the month https://finance.yahoo.com/quote/NTDOY/history/?period1=14673...
I think the funniest bit of pure confusion was "Zoom Technologies (ZOOM)" being mistaken for "Zoom Video (ZM)" at the start of the pandemic, to the point the SEC halted its trading on concerns that the confusion was the only reasonable driver. https://markets.businessinsider.com/news/stocks/zoom-technol...
I was referring to the day Nintendo stock fell 17% after they published a notice on their website (outside of trading hours) that they did not in fact make Pokemon Go: https://www.theguardian.com/technology/2016/jul/25/pokemon-g...
The Zoom story is even crazier. The idea of the SEC needing to step in because traders were absolute idiots throwing their money away is laughable.
Let the traders burn their cash if they're too stupid to Google a stock's name, I say. Let them prove that the markets are intelligently driven and not just gambling for those under the influence of cocaine on the job.
A quick way to make me not respect someone's economic opinion is to mention the "Efficient Market Hypothesis." This should be buried 7 palms underground.
It is really a type of self serving religious belief in the face of bounded rationality.
On the other hand, these type of companies are really hard to value too.
Is it AI, or just the market realizing that some of these companies were ridiculously overvalued to begin with?
Here are the p/e ratios of companies mentioned in the article, after the said "pummeling":
* ServiceNow - 70.66
* SAP - 28.70
* Salesforce - 28.15
* Workday - 73.16
* Microsoft - 26.53
So they range from "a bit high" to "still completely bonkers".
I would say that MS here is undervalued. They do not offer some small software package for a given business problem but the whole shebang - the OS, mail, calendar, office suite, IAM, cloud, etc. + support for each and the whole integration.
You can't realistically replace that with some LLM solution (in the near term at least), and they can use the AIs to reduce their costs, which are mostly people.
Microsoft has consistently proven over the last five years that they have zero ability to execute. It's an astounding failure after failure to do anything right.
All my Microsoft friends constantly berate the state of their hiring pipelines. Sprinkle on a paltry comp and this is hardly surprising.
It was so ridiculously shortsighted of them to decide, as a strategy, to underpay all their employees compared to the industry standard, especially considering their ambitions are still fairly unbounded (it's not as if they said "everything we do will be easier than Google or Meta, so we don't need to compete for the same pool of talent").
But maybe such a decision was inevitable in their culture. And now it's very difficult to correct.
> I would say that MS here is undervalued.
Windows 11 though...
Yeah, that one is a real gem. BeST wInD0Wz EvAr
Most of those companies make huge margins by suckering large organizations into outrageous contracts. I don't see how AI moves the needle on this one way or the other.
ServiceNow isn't really an "AI" company - they're one of the silent ITSM and Security companies that are nigh impossible to tear out, and are making silent moves into the OT Security space.
And that makes their "AI" pivot much more sustainable imo - they are already such a giant from a cashflow perspective that if some sort of AI valuation shakeup occurs, they have the dry powder to execute on M&A.
ServiceNow's closest comparables are CRWD and PANW - basically an Arora-style platformization play.
So one thing that hasn't changed is that the marginal cost of software is still effectively zero. That's where most of the money was being made, b/c if you were a monopoly or oligopoly, each additional unit sold was an absolute increase in revenue and you spread out your fixed costs.
What has changed most dramatically is the "fixed" cost of writing the software to begin with. Given that the costs were being spread out over so many units beforehand, it's not entirely clear to me how that changes a lot of the economics.
For the comments about the "SaaS vs build your own", we can use a home services metaphor. Sure, I can do a lot of what my plumber does. But they do it faster, know all of the issues that go wrong with the work and I can pay them a yearly fee to check my boiler to make sure it doesn't fail etc. The time saved by calling the plumber can then be spent with kids, more work or a combo of the two.
I think the idea is that SAP/ServiceNow will be subject to more competition, if/when software is cheaper to write.
Their lead has shrunk, and thus so has their value.
The value of software is going down, this much is clear to most people. It will continue to demand proper engineering for its creation and operation. But AI will lead to an increase in unique, one-of-a-kind systems created by very small teams. And the world will increasingly rely on these unique systems.
SaaS companies need to start reading the writing on the wall; the massive valuations they enjoyed when software was harder to create will need to be justified.
Everyone says that but I don't see anyone cooking up the next photoshop and selling it at $3/month. Why are we not seeing more options of every tool? Most SaaS companies are sales companies at their core rather than software companies. And those salespeople are so good that they can sell a todo list for millions.
In the case of Photoshop it is the software itself that is becoming useless. In a few years, using Photoshop will be viewed the same as developing physical film: a process from a bygone era that is still possible, but impractical.
This is an extremely bold claim and I think that it completely overlooks how Photoshop is used by professionals in practice. Professional users want extremely fine grained and precise control over their tools to achieve the specific results that they want. AI "image editing" is incapable of providing anything remotely similar.
Yes, "professional users" need this. The problem is that the group of professional users who need that will shrink really fast in the next few years.
I've recently re-instated a Photoshop subscription and it's now part of my core AI-generated asset workflow. AI is fantastic at art direction but it needs minor adjustments to make it production ready. E.g. putting real screenshots in with correct placement, smoothing, editing out artefacts, etc. I can't imagine the lengths I'd have to go to to instruct an LLM to do these tasks with words.
Some of the LLM crowd is living in lala land.
How much of what you do with Photoshop could be done with open source tools instead (GIMP, ImageMagick, etc), versus how much do you really need Photoshop for?
One technique I've used for cleaning up AI-generated images was a Python script driving ImageMagick, and an LLM helped write the Python script (although it took a few iterations, because the LLM's first attempt didn't actually work)
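For the curious, a minimal sketch of the general shape of such a script (not the one I actually ran; the folder names are hypothetical and it assumes ImageMagick 7's magick binary is on the PATH):

    import subprocess
    from pathlib import Path

    SRC = Path("raw")          # hypothetical input folder of AI-generated PNGs
    DST = Path("cleaned")
    DST.mkdir(exist_ok=True)

    for img in sorted(SRC.glob("*.png")):
        out = DST / img.name
        # real ImageMagick options: drop metadata, light despeckle,
        # and cap the long edge at 1600px without upscaling
        subprocess.run(["magick", str(img), "-strip", "-despeckle",
                        "-resize", "1600x1600>", str(out)], check=True)
        print("cleaned", out)

The point isn't this exact pipeline; it's that an LLM can get you to something like this in minutes and you iterate from there.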
OpenAI releases an electron slog of an app, while they have basically unlimited computing power, compared to anyone else except their direct competitors. Why aren't they just pumping out proper software built by their own AI/Codex...
> I don't see anyone cooking up the next photoshop and selling it at $3/month.
That's not the situation we're talking about though. It's someone saying "hmm, I need to edit this picture. Can I get ChatGPT to do it?" where 3 years ago they would have had to buy Photoshop and learn how to use it.
Similarly, if they need a tool to batch-convert a thousand images, they're getting an LLM to construct the specific tool they need in a couple of hours and then running that, rather than buying a software product that can do it.
You don't need a whole dev team to build a one-off tool for a specific job, which is probably 90% of the demand for those software products. LLMs are becoming the general-purpose tool for a lot of use cases.
ChatGPT can't do the precise Photoshop tasks, not even close; in fact the quality of the output is almost always worse than the quality of the input. Of course, we live in this low-quality internet now, so you may already be used to terribly AI-edited images.
>You don't need a whole dev team to build a one-off tool for a specific job, which is probably 90% of the demand for those software products. LLMs are becoming the general-purpose tool for a lot of use cases.
No, all of these tools have 90+% of revenue coming from B2B sales; consumers don't buy software products anyway. All of these software purchases are tax deductible, so corporations buy even if they use very little of it.
>they're getting an LLM to construct the specific tool they need in a couple of hours and then running that
This is something I really hope takes off for the common person. ChatGPT is perfect for bespoke little programs that do one thing and can be discarded after use.
> bespoke little programs that do one thing and can be discarded after use.
That's my best-case scenario as well: LLMs are scripting languages for a broader audience. They just barely automate busywork, but are not a reliable foundation.
"Where is the output?" remains the giant elephant in the AI room. You can tell because people get mad when you ask that.
The output is good enough for your consumption (or internal consumption) but not good or general purpose enough to sell to anyone. Like DIY projects done by a new homeowner with access to YouTube.
> Everyone says that but I don't see anyone cooking up the next photoshop and selling it at $3/month.
Yup, same reason you can't throw manpower at a software project and expect a proportional outcome (Brooks's Law). AI amplifies what's already there; it doesn't conjure taste or product vision out of thin air.
> Everyone says that but I don't see anyone cooking up the next photoshop and selling it at $3/month. Why are we not seeing more options of every tool?
I expect the markets are reflecting that soon there will be more competition.
It'll take time, and as LLMs improve, it'll take even less time.
> I expect the markets are reflecting that soon there will be more competition.
> It'll take time, and as LLMs improve, it'll take even less time.
People have written great software in ed(1). We have tools like uxn[0] written on potato computers, and yet billions of dollars and years later, we still have to hope for AI output.
[0]: https://100r.co/site/uxn.html
Affinity. GIMP. Just to name two.
Those were made long before LLMs; I was specifically asking for Photoshop alternatives vibe-coded in the last 5 months.
I don't actually think it changes the economics of software as a service much. What's true for the small scale is true for the large scale. Sure, it's easier to build your own HR platform now, but it's also easier to write and maintain it at scale with all your domain knowledge, legal infrastructure, etc. This seems true for inventory management, document signing, ecommerce, expensing, CRM, training, accounting, etc. Why wouldn't the offerings from service providers get better and cheaper (relatively)?
The stuff you do in-house is probably still going to be tied deeply to your internal processes. Admin dashboards, special workflows integrating with different systems, etc.
Consider it in the realm of supply and demand. The economics of software will change simply because the tool enables more software to be written. In a way, the barrier to entry into the space of selling software has been lowered. It hasn't vanished, but there will be many more entrants and offerings as a result, and thus more competition for the existing SaaS companies.
I don't see how the economics of SaaS will remain the same when their value is a function of the capital and labor expended, both of which are needed far less now. So please explain how this doesn't lead to an increase in supply and downward pressure on value?
People could simply expect more of the software.
> this much is clear to most people.
There are more computers now than there ever have been. More people in more parts of the world have them than ever before. If you have this perspective you may just be locked in a first-world corporate nightmare that has stolen from you all vision and imagination.
TRUE! Actually the world around us is already controlled by computers ~ chips: your car, your dishwasher, your metro, your holiday jet travel, etc.
And it becomes "worse": billions and billions of chips ~ computers are produced every year, and the number is increasing.
Billions of people will get access to the stuff that has been around for us "since ever" for the first time in their lives.
Perhaps it would be better described as the "commodification" of software, which still gets my point across. Software is absolutely more ubiquitous than ever before; this I can agree upon. But now we have the tools to create more of it, and therefore software is less valuable simply because it is less rare. I don't mean to say that software is valueless, but rather that it enjoyed inflated value when the amount of capital and effort required to build a software product was much greater.
> Of course, ai will continue to improve. But it is also likely to get more expensive…Microsoft’s share price took a beating last week as investors winced at its enormous spending on the data centres underpinning the technology. Eventually these companies will need to demonstrate a return on all that investment, which is bound to mean higher prices.
That’s not how it works, and I’m surprised this fallacy made it into the Economist.
Commodity producers don’t get to choose the price they charge by wishful thinking and aspirational margins on their sunk costs. Variable cost determines price. If all the cloud companies spend trillions on GPUs, GPU rental price (and model inference cost) will continue going down.
Indeed, cheaper and cheaper AI for the same level of performance has been even more consistent empirically than improvement in frontier model performance.
What an odd article that is just designed to hype the software creation aspect, which doesn't really affect MAGAF.
MSFT went down because of overexposure in AI and because it is clear that people do not want it.
AI weariness is a thing, and if people go off the Internet or advertisers question whether humans or AI swarms are "watching" their ads it is over for the big players.
Trying to salvage the situation by hyping the relatively small code generation (theft) aspect is quite a poor analysis.
Yeah, the article doesn't make a lot of sense to me. Guess who's writing software with AI? Software companies.
They mention sites like Base44 and Lovable. Sure, if tons of business were rotating out of software into no-code AI solutions, the article would have a point. But has a large portion of market cap moved out of those software companies and into a few little no-code startups? Are Salesforce, ServiceNow, and SAP being replaced with no-code applications? No. Absolutely not. These are small, niche companies. It does not explain a large downward movement in an entire industry.
HN crowd did not like AI coding until it got better. How much of the AI hate is because of poor implementation?
I guess you can classify over-implementation as poor implementation.
What an odd account that is just created to try sway opinions on hot button topics.
Where have I seen this before? Oh right, the entire site, for months now. Nothing suspicious about that at all, I'm sure a swarm of other brand new accounts will reassure me 0.1 microseconds after being created. You guys sure do type fast!
yes, it continues to grow as a problem. It doesn't detract from the site as a whole, but there is more very, very low-value noise infiltrating in a way there hasn't been in the past - even in the relatively short time I've been here.
I was one of the naysayers, but right now I am convinced.
That being said, it still requires some engineering background to come up with interesting ideas and solutions with the help of LLMs but even that might be replaced.
So you disagree with the article? Could you explain your reasoning?
The iShares Expanded Tech-Software Sector ETF (IGV) seems to back up what the article is saying. It isolates software firms from the rest of the IT industry. It was down about 10% last week, and is down 20% over the past six months. The IT sector as a whole didn't lose much.
I used to work for ServiceNow.
Market cap more than doubled between 2021 and 2025.
But since the start of 2025, it has lost all of that.
13% in the last week. 20% in the last month. Six months is definitely bleaker than those numbers, 37% down.
Is the pummeling in the room with us right now? YoY looks phenomenal in my portfolio.
This article I think is about a very specific subset of software stocks.
If you're holding say a total market index fund, or s&p500, or even qqq or many tech index funds, this would get hidden by the so-far very good growth on other tech stocks.
I'd look at things beyond the magnificent seven. Economists and traders have noticed that the SP 500 now has a K shape similar to our class wealth distribution. https://fortune.com/2025/11/10/markets-k-shaped-economy-apol...
Other than smaller SaaS companies who offer things easily replaceable, I don't think many of the bigger ones can be replaced by AI, if anything, it might make them better. For instance I can't see us replacing our ticket management/support software, hosting, manufacturing/sales/stock software, accounting software, etc but it would be great if we could leverage all those tools better via AI (some are already easy to leverage).
The interesting thing I've noticed is that software library authors could take a beating, though. Quite a few libs in the .NET world have gone down the monetized path; for all of the ones I've been using, I've just had AI remove them and implement native solutions. But none of these are large listed companies.
There are many stories on r/ClaudeCode of developers realizing the power of that particular model.
Think of all the solo developer webapp/mobile "hot" startups (FB, Craigslist, Instagram, SnapChat, Pinterest, etc.).
It is no joke that a seasoned developer can build similar apps with Claude in... days. Not just the app, but the entire infrastructure (e.g. scalable on AWS using Terraform). This includes setting up domain registration and Elastic IPs, provisioning the instances, and setting up keys/ELBs/email servers/Twilio/etc.
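The real setup used Terraform, but purely to illustrate how small one of those steps is, here is the Elastic IP piece as a hypothetical Python/boto3 sketch (the region, AMI id, and key name are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")      # placeholder region

    # launch one instance (AMI id and key name are placeholders)
    instance_id = ec2.run_instances(
        ImageId="ami-XXXXXXXX", InstanceType="t3.micro",
        KeyName="my-key", MinCount=1, MaxCount=1,
    )["Instances"][0]["InstanceId"]

    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # allocate an Elastic IP and attach it to the instance
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId=instance_id, AllocationId=alloc["AllocationId"])
    print("public IP:", alloc["PublicIp"])

Multiply that by DNS, load balancers, keys, and email, and it is still exactly the kind of rote work an agent chews through.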
What is astonishing is how well Claude can plan. You write a spec, and it will give you an entire plan including ASCII UML diagrams, infrastructure changes, database updates, the code itself, user stories, and test cases. It will then do all the work, including "tricky" things like SSHing over a bastion to run the scripts that it wrote on an instance behind a VPC.
The main obstacle now is the context window. If it were to increase 100x or so, Claude could probably manage an entire software company's codebase at scale.
I'm sorry to say, but there's no way you could use Claude for a month or so without feeling, "We are going to need way fewer software engineers."
There's only so much investable capital available; if it is going to hardware stocks, it's got to be coming from somewhere else. It's just a substitution toward hardware tech stocks. Economics 101.
If I were confident it were AI as the cause I'd be seriously tempted to buy right now
Take Adobe for example: they're getting pummeled and there's no way it's due to AI
No-one's building an in-house Photoshop clone to replace them
AI also isn't letting any competitors in. Geez, if it were as easy as cloning their product, you'd have had a mile-high mountain of VC money to fund it even pre-AI
Code has never been much of a barrier to entry
It's not that AI will replace software. It's that AI will replace the people using software. Fewer workers means lower sales. There will be exceptions, but a big chunk of the B2B space is basically going away.
QQQ is up 20% over the last year.
GOOG is up 70% over the last year.
"Pummelled" seems extremely sensational...
GOOG is now also an AI company, so not exactly a fair comp as it doesn't fit neatly into the "software" bucket
MSFT is only up 3% over the last year
Some 7-15% down in a trading day is a lot for an established corporation. I consider Salesforce dropping 7% without some obvious trigger to be at least somewhat newsworthy, and from the first sentences in the article I get the impression that The Economist is sitting on more examples like that.
A lot of people are tense about the AI venture ouroboros and what it might mean for future software, especially people with money and little to no experience actually deploying software.
Edit: At the time I saw some memes claiming that roughly 1.5 trillion dollars in market value had evaporated, which if true is not a small sum.
You didn't read the article. The first graph plots Workday, Salesforce, SAP, and ServiceNow. Google isn't mentioned.
Maybe they should've said ERP stocks are getting "pummelled".
It basically does
> The value of listed American enterprise-software companies is down by 10% over the past year.
This article crystallizes something I witnessed firsthand last week.
Overheard a guy at a restaurant explaining how he builds phone apps with AI and no coding experience. When asked how he verifies the code works, he said he pastes it into a different AI to explain it.
That's the "slopware" problem in action. The code compiles. It might even work. But there's no understanding of what it's actually doing, no ability to debug when it breaks in production, no awareness of the technical cruft accumulating with every prompt. That's a problem for people creating software for others and is a huge opportunity for software developers to take prototypes and build real stuff.
Does anyone remember the RAD days of the 90s?
On the flip side, people making software to solve THEIR problems don't need to make anything production quality. It's for a single user: themselves! Maybe the LLMs are good enough now that people don't need to buy or subscribe to software that solves trivial problems, because they can build their own solutions. Maybe the dream of Smalltalk, HyperCard, and even the early web, where anyone can use the computer for what it was meant for, is finally here?
That's fine. I'll charge him 500 bucks an hour to fix it if it takes off and he finds he can't maintain it.
Meanwhile, in the real world, as a software developer who uses every possible AI coding agent I can get my hands on, I still have to watch it like a hawk. The problem is one of trust. There are some things it does well, but it's often impossible to tell when it will make some mistake. So you have to treat every piece of code produced as suspect and with skepticism. If I could have automated my job by now and been on a beach, I would have done it. Instead of writing code by hand, I now largely converse with LLMs, but I still have to be present, watching them and verifying their outputs.
Yeah, but just look at what happened within the last 2 years. I was not convinced about the AI revolution, but I bet in another 2 years we won't be looking at the output.
Not so sure. There are idiosyncrasies now within the various models; I suspect all this is the result of RLHF, and they cause side effects. I'm not sure that more attention-is-all-you-need is necessarily going to give us another step change; maybe more general intelligence, but not more focus. Possibly we also soon end up with grokked AIs on all sides, pushing their agenda whatever you asked... Gemini: "no, this won't work with Cloudflare; I created your GCP account, there you go." OpenAI: "I am certain you really wanted me to do all these other tasks and I have done them; you should upgrade your token plan." Etc. (You know how to fill in for DeepSeek and Grok already, right?)
Tech can always hit a plateau; here's to hoping Anthropic and OpenAI run out of money.
I've been coming around to the view that the time spent code-reviewing LLM output is better spent creating evaluation/testing rigs for the product you are building. If you're able to highlight errors in tests (unit, e2e, etc.) and send the detailed error back to the LLM, it will generally do a pretty good job of correcting itself. It's a hill-climbing system; you just have to build the hill.
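A minimal sketch of that kind of hill-climbing loop, assuming pytest as the test runner; ask_llm_for_patch is a hypothetical stand-in for whatever coding agent or API you actually use.

    # Sketch of a test-driven feedback loop for an LLM coding agent.
    # ask_llm_for_patch() is hypothetical; everything else is standard library.
    import subprocess

    def run_tests() -> tuple[bool, str]:
        # Run the test suite and capture the detailed failure output.
        result = subprocess.run(
            ["pytest", "-x", "--tb=short"],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr

    def ask_llm_for_patch(error_report: str) -> None:
        # Hypothetical: send the failure text to your agent and let it edit files.
        raise NotImplementedError("wire this to your coding agent of choice")

    def hill_climb(max_rounds: int = 5) -> bool:
        for _ in range(max_rounds):
            passed, report = run_tests()
            if passed:
                return True            # the hill has been climbed
            ask_llm_for_patch(report)  # feed the error back, let the LLM correct itself
        return False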
Take a look at the 5-year trend.
It's just correction.
But it is not THE correction.
The crash is indiscriminate, which is really disheartening. Even infra software is getting demolished; no LLM is going to replace something like MongoDB, but it's all traded under the same umbrella.
Whatever the reason, the result is probably more layoffs.
investors are panicking.
However, those system-of-record apps will outlast most "AI" companies, since the effective data sits within their systems.
The cleanup needed after this by senior developers will be epic.
Needed??
It has already started; freelance developers are making a lot of money already.
I remember watching a developer having to fix a company's entire codebase because they went full AI and now nobody knows shit anymore. If you are a good senior developer, now is the best time for you to become a millionaire.
Company after company can no longer fix things; they are bringing in external developers to stay afloat. Remember the tweet that went around last year about the AI bot deleting the production database and setting the company back to zero?
So....
Most of these companies won't survive their own stupidity anyway.
Investors do not understand how coding works.
Google "Project Genie" which allegedly can take your input and the AI will make it rain, drove investors into panic mode. They think that you can create GTA6 like that.
It was the perfect storm: clueless investors + the whole AI bubble already bursting if you are following non-biased news.
Or it could just be the good old cyclical stock market correction after years in which entire segments of the indices' growth were driven purely by software.
Of course, such an angle would require much more research on the part of the journalists for much less spotlight than what you get for free by just singing to the tune of AI doomerism.
... because they've been driven by years of bad leadership, monopolistic scheming, and investor speculation?
AI is just the latest symptom, IMO.
We normalized growth over revenue. Governments around the world have been pressured by Big Tech to dismantle anti-trust and regulation. We glorified shipping slop, suppressing unions, and pretending like programmers were temporarily embarrassed founders.
The stocks are dropping because our system can't sustain these practices, IMO.
Certainly no investor, but my own feelings:
AI replacing vendors feels like a strange risk, though I'm not sure if vendors view things through a technical lens. Security concerns and service maintenance alone, IMO, make writing internal software a large proposition, one for which I would want a trusted vendor if it weren't just a hobby project and I could afford that. Particularly if that data being lost or broken would severely harm a business.
There are also already frameworks in languages like Python that make putting up an internal website very, very simple. If you don't need production grade, the barrier to entry might already have been pretty low: if you have the skills to figure out how to host the service you just vibe-coded, you can probably figure out some basic Django to throw data into its ORM, or find libraries that do the work for you.
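As a rough sketch of how little Django that actually is, assuming a project already created with django-admin startproject: define a model and register it with the admin, and you get a CRUD UI for free. The InventoryItem model and its fields are made up for illustration.

    # models.py -- minimal internal data app; model and field names are made up.
    from django.db import models

    class InventoryItem(models.Model):
        name = models.CharField(max_length=200)
        quantity = models.IntegerField(default=0)
        updated_at = models.DateTimeField(auto_now=True)

    # admin.py -- registering the model gives you list/edit/search screens for free.
    from django.contrib import admin
    from .models import InventoryItem

    admin.site.register(InventoryItem)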
AI does feel in those technical ways to be an overstated risk, to me at least.
Far more worrying to me is the breakdown of the USA and its role. We are going to have blocs of software and hardware entirely from competing geopolitical regions, which may not be able or authorized to communicate with one another. Any businesses in the USA with significant CA or EU marketshare right now will decline in value to the degree client companies choose, or are told, to stop using USA systems.
(My own governor in California outright antagonized the Europeans at Davos calling them "pathetic" while telling them to get tough on Trump, which means in practice, stop using US, meaning yes California, tech goods and services. A lot of revenue from tech comes from overseas, and we are going to lose at least some portion of that. Particularly in California which already has budget problems with what revenue it's got. Stunning how even The Guardian treated those remarks as "tough" and not insane and self-destructive... sadly it's nothing compared to the worst of the US right now.)
So, where do you throw investment right now? To the US where the marketshares will likely decline, and the political and trade environment is insanely uncertain, but there is momentum on AI and generally decent hardware design, and the existing software companies and knowledge? To the EU or Canada where maybe a nascent software industry will take hold, or perhaps American companies will relocate talent if the USA collapses into civil conflict? To China, if they end up becoming a hegemon, given their strength in hardware and their growing efforts to invest in software alternatives?
I suppose I've read that markets don't react to "tensions," and maybe this is unprecedented in modern memory, but I think about these things more than AI.
I would add: open source throws additional curveballs. The EU wants to push for open source, and that is admirable, but I wonder what the sustainable funding model would be, and how that could attract attention. I wonder about business models and ability to generate return on investment.
I would think the saner solution is allowing proprietary companies, but imposing technical standards which companies collaborate on, enabling interoperation. Am I mistaken, that the EU is trying to do this with the DMA? I have heard general overtones, but I haven't looked at it very closely, and our media doesn't cover EU tech regulations in much detail in the US, though in a decent world it would, I wish it would.
Because the bubble has begun to burst.
It always amuses me because the people complaining about stocks going down are always the same people who are causing them to go down. Losing money was a choice that those people collectively made. They could have chosen to act differently, in light of the optimistic long-term future.
No doubt! Collective action is a solved problem! Why do people do things other than the obvious Right Things we can all agree on? Must be some kind of mass psychosis…
Software will be easy to create, which will kill moats and margins on existing products. The game is up for pure SaaS. Smart money started pricing this in a year ago.
For a lot of SaaS firms, a big part of their value is the domain knowledge and best practices encoded in the software.
Current AIs often do a bad job of that. Sure, they know a lot of it. But they also get a lot of it wrong, and can’t tell the difference between genuinely good advice, and advice that sounds good but is practically worthless or even harmful.
(Of course I’m biased since I work for a SaaS firm. But I’m talking about them in general, not just my current employer.)
ai will know the domain knowledge
I'm not sure how realistic it is to expect AIs to get detailed hands-on domain knowledge. A lot of this stuff humans learn by doing and by experience. AI models don't learn anything by doing and experience. A model vendor can't possibly encode all that experience into their training data, and even if they try, the problem is a lot of it will be vertical-specific, country/region-specific, and it is forever changing. SaaS firms have professional services and sales consulting teams who are constantly talking to customers about their actual business problems, and they feed that accumulated wisdom back to product management and data science, who in turn help engineering encode it into the product.
From what I've personally seen in SaaS AI agent development – if you try to build an AI agent to give customers advice in a particular business domain, you need to do a huge amount of work validating the answer quality with actual domain experts, and adjusting the prompts / RAG documents / tool design / etc to make sure it is giving genuinely useful advice. It is really easy to build a system which generates output which sounds superficially good, but an actual domain expert will consider wrong or worthless.
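A minimal sketch of what such a validation harness can look like, under the assumption that the expert-reviewed cases live in a JSON file; agent_answer and the keyword-based grading rubric are hypothetical stand-ins for your actual agent call and whatever checks the domain experts sign off on.

    # Sketch of an answer-quality eval harness; agent_answer() and grade() are
    # hypothetical stand-ins for your agent call and your expert-derived rubric.
    import json

    def agent_answer(question: str) -> str:
        raise NotImplementedError("call your agent (prompts + RAG + tools) here")

    def grade(answer: str, expert_notes: dict) -> bool:
        # e.g. required facts must appear; known-bad advice must not.
        must = all(t.lower() in answer.lower() for t in expert_notes["must_mention"])
        banned = any(t.lower() in answer.lower() for t in expert_notes["must_not_mention"])
        return must and not banned

    def run_eval(path: str = "expert_cases.json") -> float:
        # Each case: {"question": ..., "must_mention": [...], "must_not_mention": [...]}
        cases = json.load(open(path))
        passed = sum(grade(agent_answer(c["question"]), c) for c in cases)
        return passed / len(cases)   # fraction judged acceptable by the expert rubric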
not if we don't tell it.
Was the hard part ever really the software, though? It's the Service part of SaaS that seems to provide the moat. Lock-in, habits, workflows, integrations, and trust. And don't discount the appeal of making some part of your operations "someone else's problem." Could you hire engineers or use an LLM to make your own Google Docs? Probably, yeah, but would that be worth the headache of being responsible for a bespoke internal document system?
You might think you can, for a while. Been there, done that. But you probably can not do so sustainably in most cases. Even if you could, would you really be better off building vs. buying? Outsourcing development, operations, and maintenance is almost always the better choice, letting you focus on the things you do uniquely, differentiably, or meaningfully better.
"We have this awesome internal version of Docs that we're responsible for fixing, upgrading, and doing support for" is not the flex "AI can code anything!" aficionados think it is. Especially when you also have similar internal versions of Sheets, Jira, Slack, GitHub, Linux, Postgres, and 100 other tools.
The article is about SAP, Salesforce, etc.
Making your own Google Docs is stupid unless your company's core business is document management.
OTOH, replacing SAP with a bespoke system will make a lot of sense for many companies.
SAP is already the worst of both worlds. It'll have been highly customized for your flow so you've got all of the headaches of bespoke software and all of the headaches of SaaS. And unlike Google Docs, it'll be highly integral to your core business.
Companies pay millions and millions to get away from bespoke software, but not simply because of the costs. Companies want to do their core business, they don't want to also be a software enterprise, and assume all the risks that entails. Even if AI makes creating software 10 times less expensive, that doesn't really change.
SAP is bespoke software.
You are aware of the long history of organizations being absolutely screwed by bad ERP implementations, right? Nike's 2001 issue, the horrific Birmingham Oracle implementation, Avon, etc.
Those are examples of very good reasons to ditch i2, Oracle, SAP, et cetera.
What do you mean, SAP? Like the ERP system?
I would absolutely NEVER steal or rewrite that. So much financial stuff is baked into the business logic that impacts finance, regulations, HR, etc.
No, do not roll your own ERP core.
Roll everything else
The problem is an AI can figure out habits and workflows pretty seamlessly. Lock-in is artificial and loses power when it's really easy to make a competing app for large swaths of web apps.
Integration is likely the most valuable part of the puzzle, but it's also prone to disruption.
I think all that's left are like <50 apps, each with their own very bespoke and "power user"-ready interface.
Yes, if it takes one month to build something that previously took 9 months, it completely changes your go-to-market strategy.
> Could you hire engineers or use an LLM to make your own Google Docs
Or you can just ask your LLM to install https://github.com/CollaboraOnline/online
Between open source, LLMs, and SaaS vendors getting greedy and privacy invasive, the total pain minimization calc might shift for some orgs.
Even then, I would expect most orgs would want to contract out to a company that manages an instance of that open source software. That management company could undercut bigger players because they don't need as many engineers working on features. I don't see where the LLM comes in and shifts the calculus here.
What would be some examples of some current software products for which you expect the game is up?
Are there any software products that you think will survive?
I totally agree with you and I think it's weird how people are downvoting you.
Software is much easier to create. It used to take 100 people to create a competitor SaaS product. Now it might take 3 people.
Traditional SaaS companies that don't have a data moat are in trouble.
Physical companies will dominate more than SaaS companies in the future. By physical, I mean energy and chips companies.
Can't wait for every hospital to create their own patient record system, every accounting office to create their own accounting software, every car service to create their own timebooking solution, etc.
We go from left-pad to everything vibe coded. No one vets deps, no one vets vibes. Zero common sense.
> Can't wait for every hospital to create their own patient record system
Having worked in healthcare, this is the current state (per provider, not physical building).
I wonder when a "virtual person" will be able to replace carefully-coded business software?
I.e., before software automates a business process, it's typically done by hand, by a real person.
What if someone sells a "virtual person" that's capable of doing the job? What if that "virtual person" is harder to train than a real person, but orders of magnitudes easier than writing custom software or custom business rules?
More importantly: What if the "virtual person" can explain the job they do much better than trying to read source code? That's very useful in ~30ish years when the "virtual person" understands the business process better than the people in the company, and someone is trying to update / streamline processes.
No, they don't distrust AI; they may now start to distrust all the big service providers that are likely to eventually be eradicated by AI now that everyone can prompt a browser. This will perhaps also finally kill Microsoft Access, which for so long has been the closest thing to AI doing the work instead of you. Then all the do-it-yourself enterprise-grade systems became SaaS, so it's right for Salesforce and friends to go fck themselves once in a lifetime for standing in the way of actual software ownership.
I know, you are saying they will adapt. Perhaps, while also cutting 40% (if not more) of personnel during the pivot, and perhaps also while facing more challenges from faster-moving competition.
Like, look for a second: why didn't Google build what the Perplexity newsfeed is, given that they actually did, like, 10 years ago, and then close to nobody was using it? The equilibrium seems super unstable. What happens if a smart kid devises a way to compress this information 10x faster? That immediately means neural chips stall.
This volatility is something, not a joke. The second-order effects may be unforeseeable in an unparalleled way. Besides, the Luddites organize much better in 2026, given Reddit etc.