How many LLM subscriptions are required for Coinbase to drop Saturday meetings?
To quote Office Space:
> Well, I thought I remembered you saying that you wanted to express yourself
A/V reference, for those inclined: https://www.youtube.com/watch?v=F7SNEdjftno
I'm an extremely pro-AI person and use it heavily at home. (I actually just finished the most recent rewrite of my agent software that uses MCP. It's really amazing. A lot of people say the AI bubble is popping; I think it's only just starting. People have no idea how far this stuff can go.)
I suspect I was just laid off for not using any of the AI tools at work. Here's why I didn't.
1) They were typically very low quality. Often just more hosted chatbots (and of course they pick the cheapest hosted models) with bad RAG on a good day.
2) It wasn't clear to me that my boss wasn't able to read my correspondence with the chatbots, the way he could with my other coworkers, which creates a kind of chilling effect. I don't reflexively ask it casual questions the way I do at home.
3) Most of my blockers were administrative, not technical. Not only could AI tools not help me with that, but in typical corporate fashion, trying to use the few sanctioned tools actually generated more administrative work for me.
Oh well. I'm kind of over corporate employment anyway and moving on to my own thing. Just another insane misfeature of that mode of socialization at that scale.
Brian Armstrong is an unabashed ahole and here he's doing more ahole things. Nothing surprising.
This is not a smart move. AI is over hyped.
It's not even about AI. That's just the tag they using for "justifying" these terminations. Gotta blame something other than management.
"It’s hard to find programmers these days who aren’t using AI coding assistants in some capacity"
I don't think this is true at all. In fact there are major tech companies that ban the use of AI when coding and those folks do it for their job everyday without an llm.
A mature CEO would characterize the "not good reasons" engineers had for not onboarding.
I think most people would agree that engineers outright refusing to comply with what was asked of them would be a "not good" reason for not onboarding.
But Brian Armstrong is playing Strong CEO for the podcast circuit. So he can't admit that engineers were let go for potentially justifiable reasons. He has to leave room for speculation. Speculation that maybe some engineers were let go for trivial reasons, because Brian is tough, and tough Brian demands compliant employees.
The people who didn't comply because they were on vacation and then had to go to a Saturday meeting to explain themselves think Brian is something -- but I guarantee it's not that he's tough.
We've all seen this playbook before. This is the incredibly dumb, Idiocracy-emulating world in which we now live.
At a Saturday meeting…
This is not a story about AI.
If I had money entrusted to Coinbase, I'd be concerned about:
1. The idea of them using AI coding tools in a forced way like this. (Meticulous code quality, and perfect understanding of every detail, are critical.)
2. The culture implications of insta-firing someone whose explanation you didn't like, for why they hadn't started using AI tools yet.
3. Scheduling the firing call for a Saturday. Are they in some kind of whip-cracking forced march, and staff going to be fatigued and stressed and sick and making mistakes?
Going to agree somewhat with the sibling comment on this: (2) & (3) are of significantly greater concern here than (1).
Sure, code quality is important everywhere, and even more so in finance. But if you're going through this world believing the mean standard across financial tech is high, even before considering the likely rot of coin-brained companies on their engineers' standards, then you need to readjust your skepticism.
On the other hand, the cultural implication of feeling my superiors even have any level of granular interest in monitoring the individual tools I personally use to generate outputs that benefit the company... outside of obvious security/endpoint concerns, there's no world in which that's an environment conducive to quality.
4. He probably fired the best engineers who were concerned that a financial application actually works correctly instead of toying and vibing around with Copilot.
Yes, this is probably right.
> Meticulous code quality, and perfect understanding of every detail
I'm sorry but there's just no fucking way. Even before AI these crypto coins companies were absolute clown factories. There's no way they ever had it.
I'd describe myself as a crypto "true believer", but I've been mostly quiet about that since ~2014. That was about the time I interviewed at a couple of startup exchanges, which was enough for me to realize that none of the exchanges popular at the time were anywhere near "professional".
While I think that's probably true for many of them, Coinbase is the most prominent crypto company, obviously a major target, and yet we haven't seen any real meaningful hacks of them. I think that says something.
> I think that says something.
I've worked on the triaging side of large corporate bug bounty programmes & trust me when I say that security-by-obscurity is far more impactful in keeping our world (incidentally) secure than any active measure. Absence of exploit does not equal absence of vulnerability.
I think it's unfair to say that Coinbase has effectively "gotten lucky" in not being hacked. Security by obscurity makes sense when you have a low-traffic, low-stakes site, but a place like Coinbase can't rely on that. They are a huge target, and they have to take into account the possibility of disgruntled or bribed ex- or current employees abusing their knowledge of CB's security systems. They host coins like Monero and others that can obscure where the money is sent.
Right...so the code that manages your money in Coinbase is Vibe Coded?
Because consumer investing is where we should "appreciate mediocrity." And yes, that's a real quote from a pro-vibe-coding blog post whose URL ends with /youre-all-nuts/ [1]. Absolutely wild, lol
[1]: https://fly.io/blog/youre-all-nuts/
Toxic
> meeting on Saturday with everybody who hasn’t done it
Even more toxic
Not surprising. This is one of the CEOs who wants "special economic zones", i.e. a place where they can be feudal lords.
This is really, well... douchey. Emptying anything I have in Coinbase asap (and yes I read the whole thing)
I wonder how likely it is for CEO roles to get taken over by a sophisticated LLM at this point. I’d wager we’d see a 20x increase in value. I use and value LLMs in my coding and research workflows already, but firing people for careful and slow adoption speaks very poorly to individual and company maturity.
I could have understood it in a month - onboard, try something out and decide if it helps. But a week and then a meeting on a Saturday to explain why or get fired?
Management is hard so I’m generally a little more patient with managerial missteps. But this is a different level of unreasonable. Heck a lot of developers in the finance world adopt slowly because they’ve worked with compliance departments and it becomes a habit.
Based on the article, my understanding is that the people fired were unwilling to "onboard".
I assume "onboard" means something like "set up an account and get it working locally".
One week is a big part of the idiocy here. Anyone half busy with anything is going to miss miscellaneous bullshit from management like this, especially when management is prone to these "flights of fancy" with random tooling or the new hot thing.
I personally ignore or delay things all the time because of actual work I'm doing. If running some random AI tool is more important than "keep the company working," that's a really sick and fragile culture.
That is worrying.
If I were working at a finance or finance-adjacent company, I'd be very hesitant to use anything that might send data outside the company.
Are there any anti-AI religions yet? Maybe that'd provide an easy way to push back on such extreme mandates.
>> Are there any anti-AI religions yet?
No. The Butlerian Jihad (https://dune.fandom.com/wiki/Butlerian_Jihad) hasn't happened yet.
I would guess that something similar could exist in the Terminator / Skynet timeline, but I am not aware of the religious beliefs of the humans struggling there.
I've tried AI for coding in my own time since the very early days of GPT-3.5, but I didn't try it in my day job until much later, when Cursor came out. Now I use Claude Code frequently.
IMO this was the optimal approach, trying in my own time and not risking damaging the company codebase until it was safe... But I might have been fired, by the sounds of it.
Kind of reminds me of Bezos’ API mandate.
So, Armstrong, how did your company get off the ground without "AI" in the first place?
Is he returning a favor for all the goodies that "crypto" is getting from this administration? Like Tether being legitimized in El Salvador by best friend forever Bukele and having its finances and (alleged) USDT backing handled by Lutnick's Cantor Fitzgerald?
It makes me imagine engineers developing crypto trading bots based on LLM prompting.
> “It’s clear that it is very helpful to have AI helping you write code. It’s not clear how you run an AI-coded code base,” he commented. Armstrong replied, “I agree.”
Can’t say I’m surprised that a crypto CEO, in an industry totally overflowing with contradictions, is completely unfazed when confronted with yet another contradiction.
It's not a contradiction. AI is now an important tool to have for software engineers. It's not at the stage where you can just let it do the work without supervising and refining what it outputs.
Hand-waving negative externalities is maybe the biggest part of the job.
I get that everyone wants to dunk on the crypto/AI guy, but their claim boils down to "AI is helpful but it shouldn't do all the work for you," which is a stance almost everyone has.
I don't think the CEO should know whether or not I've used AI though nor do I think it's fair to fire people for it.
I guess I could maybe see a case for catching someone saying "I don't care about trying out new tools" - it's a position held by some of the least productive people I've ever worked with. But there are other reasons why someone might not have picked up new tools yet, like "I'm just trying to get my damn work done" or "I tried this tool but it just seemed distracting".
I fall into those camps all the time wrt new tools.
Yes, but in this case, the CEO said, “everybody try this, it is important to me,” and the ones who dragged their feet on trying it got canned.
From my reading of the article, you don’t have to think AI is useful or great to keep your job there, you just have to try the tool out because the CEO said to.
Brian is known for adopting whatever appears to be hot. Sometimes he reads the latest fad book and starts projecting it, other times there's a hiring or retention trend that he adopts to remain competitive for hiring. I don't get why he'd fire people over this though, seems like he's undifferentiated.
I've known him for bucking trends, actually. Back when every company was tripping over themselves to say how "Black Lives Matter" they were, he sent a memo to the company saying "if you support that stuff, that's fine, but do it on your own time. When you're at work, the focus is the company's mission. If you don't like this, here's a nice severance package." Only a low single-digit percentage of employees took the severance offer. It was a pretty brave move when the prevailing wind was towards all these social justice issues.
And he also bucks the trend by running a crypto company in the US instead of some random island in the Caribbean and actually talking to regulators in the hopes of getting regulatory clarity.
IMO, an engineer who refuses to explore a technology when paid and directed to do so isn't likely to be someone I'd want to work with.
It's... "uncurious" is the best way I can think of to describe it.
Right, then it's fortunate when a company has something more objective than "I don't happen to like them at the moment".
As someone who hates AI but also wants to succeed in job interviews, what is the best way to lie about it when asked? What should I say that will pass without being too enthusiastic (and possibly being detected as lying that way)?
As an interviewer: basically nobody gets 'caught lying' in an interview, because you'd have to lie about something that can and might be checked within the confines of the hiring process. Sometimes people fail a vibe check, though. It is well worth taking an afternoon to learn the workflows and the terminology. Confine yourself to claims where the output is cloaked in your previous employer's IP (or better, confined to your head) and is robust to various degrees of success. Whether you used a tool to translate a fragment from an unfamiliar language, summarize a codebase, do internet research, or develop a high-level design for a feature is essentially an unknowable fact.
It's just a tool, right? I'm not an AI enthusiast, but I use it quite a bit, from googling to learning and checking my understanding of stuff.
So I would steer the conversation towards ways of using AI for dev-related tasks other than coding: understanding codebases, confirming my intuition/hypotheses. For example, I find LLMs pretty shitty at modifying larger codebases, but it's been helpful to, say, point Codex at the github repo of a large unfamiliar codebase that I had to learn and run my ideas about it by the LLM as I was learning.
Also, you probably don't want to succeed at job interviews with managers that will insist on your using AI in ways you don't like. A job interview is a two way process and all that.
As someone who is very much "pro-AI", I'd say:
"I'm interested in the technology and have been paying attention to its development, but it's not yet to the point that I believe it will be worth integrating into my workflow."
I'm not a fan of AI coding either, but you shouldn't be lying in your job interview.
That wasn't my question though.
Nevertheless that’s my answer.
I suggest honestly, generally. That will also help you select a place that fits you better.
Though I will say, if you copy and paste that question into ChatGPT, it can give you some options to respond in a diplomatic way ;)
A crypto ceo jumping on the latest fad in an overdramatic way to generate clicks??! What will they think of next!?
Seriously, why anyone listens to crypto ceos is beyond me. Modern day snake oil salesmen.
What's telling is that, even if his story is only remotely true, his target audience for this tale is people who won't blink at potentially destroying people's lives just for corporate point scoring or, worse, at terrorising surviving employees into blindly obeying questionable actions. Anyone with an ounce of empathy would think how terrible it would be to be fired and maybe lose your home or end up in debt because your insane boss bought into the latest fad.
So all engineers who have already been using better AI tools and had no reason to downgrade to copilot and cursor were fired?
I think it’s more likely, “engineers who didn’t try the tool their CEO specifically asked them to, warning them that there would be consequences if they didn’t.”
I don’t think it’s unreasonable at all to ask. I wonder if the reaction was similar when word processors were introduced in the workplace
If something increases your productivity 50%, you don’t need to ask people to use it. They’ll be beating down your door begging to use it.
The few people who don’t will be forced out naturally when they can’t keep up.
I know I shouldn't judge a book by its cover, but that picture of him looking like a cult leader from an episode of Star Trek is all the explanation I need.
So he's bald? From everything I've read about him he actually seems like a pretty conscientious guy and a good leader. He's been in crypto for almost 15 years, which is a lot more than you can say about some other folks in crypto.
I don't agree with firing the engineer, but if your company is paying for a tool that could help your productivity and you don't even try it, I would like to know the reason for that. It's a job, they are paying your time and they are paying for a tool that can help you. At least try it.
They had one week to “onboard” and then were fired with no warning. The fact is, coding assistants show promise but just aren’t that helpful for most experienced engineers at this point, so installing the tool is just another time-wasting task. Maybe they didn’t get it done that week for the same reason people put off TPS reports and other management-driven make-work?
> “I said, ‘AI is important. We need you to all learn it and at least onboard. You don’t have to use it every day yet until we do some training, but at least onboard by the end of the week. And if not, I’m hosting a meeting on Saturday with everybody who hasn’t done it and I’d like to meet with you to understand why.’”
They would rather go to a Saturday meeting than do the thing their CEO explicitly asked them to do in the very reasonable timeframe they were asked to do it.
My gut reaction is that becoming dependent on an employer to provide access to proprietary tooling is a step backwards for worker empowerment.
My take on the past 20 years or so is that programmers gained enough market clout to demand a fair amount of agency over things like tooling and working conditions. One result, for instance, is that there are no more proprietary programming languages of any importance that I'm aware of. Even the great Microsoft open-sourced their flagship language, C#.
Non-developers like myself looked to the programming world with a bit of admiration or perhaps even envy. I use programming tools in my job, and would not choose a proprietary tool even if offered. The engineering disciplines that depended on proprietary tooling tend to be lower paying, with less job mobility.
Maybe the tables have turned, and employers have the upper hand once again, so this may all be a moot point, or a period to look back on with fondness.
AI is just a tool, and depending on one's seniority, a tool of possibly low value. Sacking people who elect not to commit to a tool, a tool THEY are best placed to evaluate in terms of value, is just as unhinged as sacking someone who uses VSCode over Rider. This is just another example of an idiot CEO, and it's depressing to think anyone might consider it anything else.
What happens when you try it and it sucks?
You give that feedback?
Because I have plenty of opportunities that don't involve dancing for a micromanaging sociopath.
Companies always dole out slop enterprise software their employees don’t use. Ours pays for G Suite and MS products that serve identical purposes, just to keep up with expectations of what enterprise software purchasing looks like. Am I to use both Outlook and Gmail simultaneously in order not to shirk the provided tooling?
Amen. Ours recently pushed a platform for comparing prices between the duopolies in our country to help us employees out with CoL. These duopolies are widely credited with contributing to the CoL problem. You just have to laugh.
People that don't/won't use AI models are unironically lacking the creative capacity to understand the different ways the models can be used.
I agree for "won't", but not for "don't".
Just because I've found it to be very helpful doesn't mean everyone will.