Is this article an accurate reflection of people's experience, or more generic LinkedIn clickbait? I'm assuming the latter, with content like
>Substantial and ongoing improvements in AI’s capabilities, along with its broad applicability to a large fraction of the cognitive tasks in the economy and its ability to spur complementary innovations, offer the promise of significant improvements to productivity and implications for workforce dynamics.
I keep waiting for the industry-shifting changes to materialise, or at least begin to materialise. I see promise in the coding tools, and personally find that tools like Claude and Cursor warrant some of the general hype, but when I look around for similar changes in other, tangentially related roles, I draw a blank. Some of the Microsoft meeting-minute summaries are good, while the transcripts are abysmal. These are helpful, but not necessarily game changing.
Hallucinations, or even the risk of hallucinations, seem like a fundamental showstopper for some domains where this could otherwise be useful. Is this likely to be overcome in the near future? I'd assume it's a core area of research, but I know nothing of this area, so any insights would be enlightening.
What other domains are currently being uplifted in the same way as coding?
I think the analysis is forward looking.
> This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.
The "small" benefits you list are in fact unprecedented and periodically improving (in my experience).
The generality and breadth of information these models are incorporating was science-fiction-level fantasy just two years or so ago. The expanding generality and context windows would seem to be a credible indicator of a threat to workers.
So it is not unreasonable to worry about where all this is quickly going.
> The "small" benefits you list are in fact unprecedented and periodically improving (in my experience).
It's only the mechanism that's unprecedented, cementing these new approaches as a state-of-the-art evolution for code completion, automatic summarization/transcription/translation, image analysis, music generation, etc. -- all of which were already commercialized and making regular forward strides for a long while. You may not have been aware of the state of all those things before, but that doesn't make them unprecedented.
We actually haven't seen many radical or unprecedented achievements at commercial scale at all yet, with reliability proving to be the biggest impediment to commercializing anything that can't rely on immediate human supervision.
Even if we get stuck here, where human engagement remains needed, there's a lot of fun engineering to do and a number of industries we can expect to see reconfigured. So it's not nothing. But there's really no evidence of revolution or catastrophe just yet.
> It's only the mechanism that's unprecedented
I think this is correct and also the point.
Neural networks, deep learning models, have been reliably improving year over year for a very long time. Even in the '90s on CPUs, the combination of CPU improvements and training-algorithm improvements translated into a noticeable upward arc.
However, they were not yet suitable for anything but small boutique problems. The computing power, speed, RAM, etc. just wasn't there until GPU computing took off.
Since then, compounding GPU power and relatively simple changes in architecture have let deep learning rapidly become relevant in... well, every field where data is relevant. And progress has not just been reliable; it has noticeably accelerated every few years over the last two decades.
So while you are right that today's AI ranges from interesting to moderately helpful rather than Earth-shattering in many areas, that is what happens when a new wave of technology first crosses the threshold of usability.
Past example: "Cars are really not much better than horses, and very finicky." But the cars were on a completely different arc of improvement.
The limitations of current AI models aside, their generality of expertise (flawed as it might be) is unprecedented. Multi-modal systems, longer context windows, and systems for taming glitchy behavior are a given, and will make big quality differences. Those are obvious requirements with relatively obvious means.
We are going to get more than that going forward, just as these models have often been surprisingly useful (at much lower levels and in much narrower contexts) in both the distant and the recent past.
This train has been accelerating for over three and a half decades. It isn't going to suddenly slow down because it just passed "Go". The opposite.
As a layman (as with most people here), I think this is a good article that summarises the current research on AI's impact on labour markets. The website itself seems like a reliable source.
These points made sense to me: it is impossible to predict what will actually happen, we need better pro-level tools for AI assistance (e.g. Copilot, writer autocomplete, ControlNet) rather than AI as a full replacement, and we need better and clearer paths to retraining and job mobility.
I disagreed with only one point in there: that research is needed into ways to compensate people for the use of their creative works. But that is solely because of my pro-free-culture moral views. The rest of the article is still good.
> pro-free-culture moral views
Mike Monteiro would like a word:
"Who in this room is now, or has at some time, been in creative services?
"Who here has, at some time, had trouble getting paid by a client for work they were doing?
"Raise your hand if any of these are familiar to you:
"'We ended up not using the work.'
"'It's really not what we wanted after all.'
"Alright. Who's familiar with Goodfellas?
"Alright. 'We got somebody internal to do it instead.'
"'Fuck you. Pay me.'"
"'Fuck you. Pay me.'"
https://www.youtube.com/watch?v=jVkLVRt6c1U
Hmm. I recognise the similarities ("pay me") but I also see differences ("pay me for work done", as in the video vs "pay me for replicating my work", as in IP laws).
Free culture isn't against the former (therefore this video doesn't actually address the point), but is against the latter, as being restricted from replicating work harms culture and innovation as a whole (e.g. memes and fan art being technically illegal), and imposes a large cost on the public.
That said, I'm not fully against IP laws; I just think they should be limited to 14 years, and should apply only where they are necessary for the work to be produced in the first place (e.g. articles behind paywalls). I believe I have a right to an opinion on this as a member of the public, as IP law is a compromise between the public and creators. It's not some natural human right.
In this moral view, if AI trains on my HN comment, for example, copyright shouldn't come into play, because I didn't need copyright as an incentive to produce the comment. I had other incentives to write it.
As a counter-example, no one cares about statistical analysis (which is what AI is) when it's just building a corpus, doing classification, or even generating GPT-2-level text. It's only when it becomes a threat to jobs that people panic. This reveals the real problem: it is about jobs, not data. And so the solution: financial support, equal education, and job retraining. Not expanding copyright law to cover analysis as well.
I'm pretty certain that one of the first things we'll see is more jobs recording worker activity (computer activity, calls, video recording) as training data for future automation. Data from teleoperation of robots would be especially useful for physical tasks.
That data is valuable beyond just future training. You can automate a lot of management using that information.
I think it’s not management but the source of that data that will get eliminated first.
Speculation can be enjoyable, but given the rapid pace of AI advancements, where today's capabilities may be obsolete within a year, it's wise to approach any claims with a healthy dose of skepticism.
Are any products using LLMs on the horizon, except for code completion? I have been a power user, hoping my workflows would improve. Just about every workflow got slower with statistical AI, and I am back to using logical AI like WolframAlpha and Bayesian methods.
There are entire categories of SaaS and enterprise vendors that are about to be completely blown away.
For example -- not long ago, when you wanted to do l10n/i18n for your business, you'd have to go through a pretty painful process of integrating with, e.g., translations.com. If you're running an e-commerce site with a lot of new products (and product descriptions) coming online quickly, that whole process would be painful and expensive.
Fast forward to today -- a well-crafted prompt to Llama 3.1 within a product pipeline makes that vendor completely obsolete. Now, you could argue that this kind of automation isn't new; you could have done it with an API call to Google Translate or something similar, and sure, that's possible. But now you have one single interface into a very broad, capable brain to carry out any number of tasks.
If I were a vendor whose business was at all centered around language, or data ETL, or anything that involves taking text and doing something with it, I would be absolutely terrified of someone writing a 20-line Python script with a good system prompt that would make my entire business's reason for being evaporate.
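For concreteness, a minimal sketch of what that kind of script might look like, assuming an OpenAI-compatible endpoint (e.g. vLLM or Ollama) serving Llama 3.1 locally; the endpoint URL, model name, and sample product text are illustrative assumptions, not anyone's production setup:

    # Hedged sketch: batch-translate product descriptions through a locally
    # hosted Llama 3.1 behind an OpenAI-compatible API. URL and model name
    # are assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    SYSTEM_PROMPT = (
        "You are a professional e-commerce translator. Translate the user's "
        "product description into {lang}. Preserve brand names, SKUs, and "
        "measurements exactly. Return only the translation."
    )

    def translate(description: str, lang: str) -> str:
        """Translate one product description into the target language."""
        resp = client.chat.completions.create(
            model="meta-llama/Llama-3.1-8B-Instruct",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT.format(lang=lang)},
                {"role": "user", "content": description},
            ],
            temperature=0.2,  # stay close to the source text
        )
        return resp.choices[0].message.content.strip()

    if __name__ == "__main__":
        for lang in ("German", "Japanese", "Brazilian Portuguese"):
            print(lang, "->", translate("Stainless steel water bottle, 750 ml.", lang))

That really is roughly the 20 lines described; the hard parts (terminology consistency, QA, liability) live outside the script, as the replies below argue.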
That's not the state of today at all, and probably doesn't represent the near or medium future.
Using the unmonitored output of an LLM translation service for your commercial content, in languages you can't read, means a big reduction in quality assurance. It greatly increases the risk of brand embarrassment, and possibly even product misrepresentation, while leaving you with no recourse to blame or shift liability.
> If I were a vendor whose business was at all centered around language, or data ETL, or anything that involves taking text and doing something with it, I would be absolutely terrified of someone writing a 20-line Python script with a good system prompt that would make my entire business's reason for being evaporate.
The more likely future is that existing translation houses will increasingly turn to LLM assistance to raise efficiency and lower the skill threshold for their staff, who still deliver the actual key values: quality assurance and accountability. This will likely drive prices down and greatly reduce how many people work as translators at these firms, but it's an opportunity for them, not a threat.
LLMs don't seem to be on track to become the foolproof end-user tools that the early hype promised. They don't let us magically do everything ourselves, and (like crypto being incompatible with necessary regulation) they don't offer the other assurances that orgs need when they hire vendors. But they can very likely accelerate trained people in certain cases, and still have an impact on industry through specialty vendors that build internal workflows around them.
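To make the "opportunity, not threat" shape concrete, here is a minimal sketch of that human-in-the-loop workflow; draft_translation stands in for any LLM call (such as the pipeline sketched above), and the review heuristic is purely illustrative:

    # Hedged sketch: LLM drafts, human QA gates publication. All names and
    # the length heuristic are illustrative assumptions, not a vendor's API.
    from dataclasses import dataclass, field

    @dataclass
    class TranslationJob:
        source: str
        target_lang: str
        draft: str = ""
        approved: bool = False
        notes: list[str] = field(default_factory=list)

    def draft_translation(text: str, lang: str) -> str:
        # Placeholder for the actual LLM call.
        return f"[{lang} draft of: {text}]"

    def reviewer_pass(job: TranslationJob) -> TranslationJob:
        # A human reviewer approves, annotates, or escalates each draft;
        # this step is where quality assurance and accountability live.
        if len(job.draft) < 0.5 * len(job.source):  # crude red flag, illustrative
            job.notes.append("suspiciously short; escalate to senior translator")
        else:
            job.approved = True
        return job

    job = TranslationJob(source="Stainless steel water bottle, 750 ml.", target_lang="de")
    job.draft = draft_translation(job.source, job.target_lang)
    job = reviewer_pass(job)
    print("publish" if job.approved else "hold", job.notes)

The LLM lowers the cost per draft; the firm's value remains the gate, exactly as described above.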
You may be right, but I would approach this with an open mind. Whether the trajectory of AI development remains an asymptote to human intelligence or surpasses it entirely, the increasing investment, involvement of diverse stakeholders, and growing stakes suggest that virtually every job role may face disruption or, at the very least, re-evaluation.
> That's not the state of today at all, and probably doesn't represent the near or medium future.
Thank you for saying this. I briefly wondered if my particular company is just way behind, or particularly dysfunctional and disorganized (a possibility, for sure). I do agree with your observation on LLMs effectively lowering the entry-level skill required: yesterday I was able to dissect an XML file, despite that not being something I could normally do without prep work, and despite some mildly unusual (I thought) formatting choices by the vendor. There was still a fair amount of back and forth to reach what some enthusiasts would call the 'perfect prompt', and interesting bugs that had to be addressed; and having seen the daily mess at my company, I am not exactly a full-blown evangelist. I see it more as a way to get to the wrong answer faster. That is the part that concerns me.
This sums up my view on AI and machine autonomy in general. The human added value is accountability. For a similar reason, outsourcing to faceless offshore companies usually does not work out.
There is nothing to suggest that AI will not still require an expert in the loop in the future. Every single one of these products carries a disclaimer that it may produce false or misleading results.
Of course, there are only so many experts needed for a given problem domain to fulfill all the demand, but that is true even without automation.
Phone self-service systems, tutoring services, contract review, recruiting, just to name a few.
> contract review
Yeah, no.
As part of a hilariously bad set of actions by a corporation that I had to threaten with legal action, I decided to see what ChatGPT had to say, knowing in advance all the problems with it in this field. It was pretty much what I expected: enough to be interesting and get the general vibe right, but basically every specific detail that I could look up independently, without legal training of my own, was a citation of something that didn't exist.
I'd just about trust them on language tutoring, but even then only on the best supported languages.
Use them as enthusiastic free interns-to-juniors depending on the field. At some point, they'll be better, but not in predictable ways or on a predictable schedule.
But they are pretty general in their abilities — not perfectly general, but pretty general — so when they can do any of what you've suggested to a standard where they can make those categories unemployable, they're likely to perform all other text-based (not language-based, has to be text but doesn't have to be words) economic activity to a similar standard.
Have you tried Claude (3.5 Sonnet)?
No, just Haiku 3.5, but I do like Anthropic's trained choice of personality more than the one I get from ChatGPT.
One word: spam.
AI has absolutely revolutionized spam and spam detection. Spammers can now generate absolutely unheard-of amounts of complete bullshit. And on the other side, spam detection services and algorithms are getting better and better at detecting it, sorting it, and filtering it based on user preferences. Tons of people are enjoying openly AI-generated content; and the content that isn't enjoyed by people is instead enjoyed by other AI bots, driving up engagement rates. That behavior, too, is monitored by still other AI, which prompts spammers to improve their AI so they can evade the detection AI and get their stuff seen by the engagement AI.
So we have server farms full of computers that are making complete shit that is then thoroughly enjoyed by other server farms full of computers to drive up engagement numbers while still other server farms full of computers are working to detect the fraud and remove it.
Meanwhile, in the real world, we're still hurtling towards climate collapse. But that's okay, we're finally looking into building nuclear reactors again. To power more data centers.
The future is fucking stupid.
AI is spam itself. If you go on Reddit, you can see a lot of bots working for an agenda.
AI is just one part of a larger and longstanding conversation about the future of work in an era of automation. We've long speculated that at some point we won't need the entire population to do all the work. Economists have talked about 20% of the population doing the work.
This can go one of two ways:
1. Fewer jobs will be used to further suppress wages. What little people earn will go toward essentially subsistence living. The extreme end of this looks like the brick kiln workers of Pakistan, India, and Bangladesh. A lot of people, myself included, call this neofeudalism, because you will be a modern-day serf. The wealth concentration here will be even more extreme than it is now. We're also starting to see this play out in South Korea; or
2. The created wealth will elevate the lowest among us, so that work becomes not a requirement but a bonus if you want extras. The key element here is the removal of the coercive element of capitalism.
To put this in perspective, total US corporate profits are approaching $4T per year. That's roughly $15,000 per US adult, every year. Some would call that the exploited surplus labor value.
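A quick back-of-the-envelope check of those figures; the ~$3.8T annualized profit rate and ~260M adult population are ballpark assumptions, not exact statistics:

    # Back-of-the-envelope check; both inputs are ballpark assumptions.
    annual_profits = 3.8e12  # ~$3.8T per year, annualized corporate profits
    us_adults = 260e6        # ~260 million US adults
    print(f"per adult per year: ${annual_profits / us_adults:,.0f}")  # ~$14,615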
Here's another number: we've spent something like $10T on the War on Terror since 9/11. What could $10T buy? Quite literally everything in the United States of America other than the land.
What's depressing is that roughly half the country is championing and celebrating our neofeudalist future even though virtually none of them will benefit from it.