I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
sshine's reply above comes from a very conflictual mindset. "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"
I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.
It’s not about being wrong, it’s about being ironic. We have LLMs shoved down our throats as this new way to communicate—we are encouraged to ask them to make our writing “friendlier” or “more professional”—and then one of the companies creating such a tool asks the very people most interested in it to not use it for the exact purpose we’ve been told it’s good at. They are asking you pretty please to not do the bad thing they allow and encourage everyone to do. They have no issue if you do it to others, but they don’t like it when it’s done to them. It is funny and hypocritical and pulls back the curtain a bit on these companies.
It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
You're wrapping all the AI companies up in a single box, but:
* Most of the AI you get shoved down your throat is by the providers of services you use, not the AI companies.
* Among the AI companies, Anthropic in particular has had a very balanced voice that doesn't push for using AI where it doesn't belong. Their marketing page can barely be called that [0]. Their Claude-specific page doesn't mention using it for writing at all [1].
You seem to be committing the common fallacy of treating a large and disparate group of people and organizations as a monolith and ascribing cognitive dissonance where what you're actually seeing is diversity of opinion.
The LLM companies have always been against this kind of thing.
Sam Altman (2023): "something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points"
3 years ago people were poking fun at how restrictive the terms were - you could get your API key blocked if you used it to pretend to be a human. Eventually people just used other AIs for things like that, so they got rid of these restrictions that they couldn't enforce anyway.
Interesting that this quote really contains "sender" where "recipient" was intended, but it had absolutely no impact on any reader. (I even asked Claude and ChatGPT if they noticed anything strange in the sentence, and both needed additional prompting to spot that mistake.)
Wow I completely didn’t notice that until I read your comment. My brain must have automatically filled in the correct word. I had to go back and re-read it to confirm.
Well, English is not my first language, so I probably tend to go through text more slowly and/or scan the text differently, so I have a higher chance of stumbling upon these oddities. (I can confirm that I sometimes see an unexpected amount of misspelt homophones or uses of strangely related words.) Seeing two distinct LLM chats gloss over this particular nuance in an almost identical way was really interesting.
Grok, on the other hand, has absolutely no problem with the concept of the sender both expanding and compressing the message, and with the absence of a recipient. Even after a super-painstaking discussion, where Grok identified the strange absence of the "recipient", when I asked him to correct the sentence, he simply changed the word "sender" to the word "themselves":
> something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and *themselves* using ChatGPT to condense it into the key bullet points
> It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.
Most pro-gun organizations are heavily into gun safety. The message is that guns aren't unsafe if they're being used correctly. Most of the time, this means that most guns should be locked up in a safe, with ammo in a separate safe, except when being transported to a gun range, for hunting, or similar. When being used there, one should follow a specific set of procedures for keeping those activities safe as well.
It's a perfect analogy for the LLM here too. Anthropic encourages it for many uses, but not for the one textbox. Irony? Yes. Wrong? Probably not.
Huge miss on the gun analogy. The likes of the NRA are pushing for 50-state constitutional carry. Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
There’s probably actually some other hidden factor though, like the venue not allowing it.
Edit: FWIW those late night TV shows are nothing but rage bait low brow “comedy” that divides the country. But the above remains true.
> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
That's not what the NRA is pushing for, any more than there are Democrats pushing for mandatory sex changes for all kids (yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it). Pushing for a right doesn't mean 100% of the population will exercise that right.
And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.
One of the interesting pieces of data is looking at 2nd amendment support versus distance to the nearest police station / police officer / likely law enforcement response times. It explains a lot about where support / opposition comes from.
>And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.
Can you give me one example of a valid "reasonable self-defence argument"? Legit question.
> yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it
Can you give an example? Of course you can find 2 people in the US who believe it, and they held 2 comedy shows where it was said, and it's technically true, but I don't think I've ever seen anything like this said.
it's interesting to me how easily you can fact check the statement:
> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
yet, you claim that it's the late night TV that divides us, while making sure to double down on your misleading statement.
The NRA doesn't "ban guns at their conferences"; guns have been banned at small parts of a multi-day conference, e.g. where Trump was speaking, because that was a rule established by the Secret Service, and they complied for a small part of the conference.
When the majority of a conference allows guns, it's simply a lie to claim that guns were banned. An unintentional lie, I'm sure, but it seems likely to be the result of you believing some headline or tweet and accepting something wholesale as truth because it fit your narrative. I'm guilty of the same, it happens, but hopefully we can both get better about portraying easily fact checked things as the truth.
These are the same people that insist we arm elementary school teachers and expect those teachers to someday pull the trigger on a child instead of having proper gun laws.
If you think proper gun laws would keep guns away from evil people in the US, please explain to me why the war against illegal drugs in the US has been losing since the day it started.
Sure, some places in the world can successfully limit gun access. Those places aren't close to, let alone bordered by, the most active cartels in the world.
Just as a fun thought exercise, consider what it takes to grow the plant necessary to produce just a single drug, cocaine, for the country every year: at least 300,000 acres, or roughly the size of Los Angeles. That's after decades of optimizations to reduce the amount of land needed. It's also only for one drug among a vast number that are regularly consumed in the US.
Relatedly, you can 3D print guns at home. Successful builds have been made on some of the cheapest 3D printers you can find.
They do, but safety and social control go hand in hand.
In any case, it's not as if your kid is safer at a private school. Kids are violent, no matter where they are; maybe you remember going through school yourself?
My guess is it's more due to insurance at the venue. I don't know who pays in those situations, but I would imagine they require "no guns" posted and announced. And if there is any form of gunshot injury they have very strong teeth to dodge the claim.
It's sardonic rather than ironic - irony is sarcastic humor devised to highlight a contradiction or hypocrisy; while sardonic is disdainful, bitter and scornful.
It'd have been ironic if Anthropic had asked the applicant not to use AI for the sake of originality and authenticity; but if the applicant felt compelled to use it anyway, then it had better rock and wow them into hiring the applicant sight unseen.
It's sardonic because Anthropic is implying use of AI on them is an indication of fraud, deceit, or incompetence; but it's a form of efficiency, productivity or cleverness when used by them on the job!
I don't see the cognitive dissonance here. If a model was applying for a position with a cosmetics company, they might want to see what the blank canvas looks like.
Being able to gauge a candidate's natural communication skills is highly useful. If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.
> If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.
We are quickly moving into a world where most communications are at best assisted by AI and more often have little human input at all. There’s nothing inherently “wrong” about that (we all have opinions there), but “natural” (quotes to emphasize that they’re taught and not natural anyway) communication skills are going to be less and less common as time marches on, much like handwriting, typewriting, calligraphy, etc.
The same could be said about a lot of things, like being able to write a functional solution to a leet code puzzle on a black board in front of an audience.
IMHO, an effective interview process should attempt to mimic the position for which a person is applying. Making a candidate jump through hoops is a bit disrespectful.
Making your comms friendlier or whatever is one of the myriad ways to use LLMs. Maybe you personally have "LLMs shoved down your throat" by your corporate overlords. No one in their right mind can say that LLMs were created for such a purpose, it just so happens you can use it in this way.
LLMs aren’t making your comms friendlier; they’re just making them more vapid. When I see the language that ChatGPT spits out when you tell it to be friendly, I immediately think ‘okay, what is this person trying to sell me?’
I'm with you. I'm very surprised by the number of arguments which boil down to, "Well I can cheat and get away with it, so therefore I should cheat".
I have read that people are getting more selfish[1], but it still shocks me how much people are willing to push individualism and selfishness under the guise of either, "Well it's not illegal" or "Well, it's not detectable".
I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
I guess that puts me at a serious disadvantage in the job market, but I am okay with that, I've always been okay with that. 20 years ago my cohort were doing what I thought were selfish things to get ahead, and I'm fine with not doing those things and ending up on a different lesser trajectory.
But that doesn't mean I won't also air my dissatisfaction with just how much people seem to justify selfishness, or don't even regard it as selfish to ignore this request.
> I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
No, what you are is ignoring the context.
This request comes from a company building, promoting, and selling the very thing they are asking you not to use.
Yes, asking you not to use AI is indeed a polite request. It is one you should respect. "The zeitgeist" has as many people in favour of AI as against it, and picking either camp doesn't make anyone special. Either stance is bound to be detrimental in some companies and positive in others.
But none of that matters, what makes this relevant is the context of who’s asking.
The ones paying are, in their vast majority, the most selfish of them all. For example, it would be reasonable to say that Jeff Bezos is one of the most selfish people on the planet. So in the end it doesn't boil down to "Well I can cheat and get away with it, so therefore I should cheat" but more like "Well I can cheat, get away with it, and the victim is just another cheater, so therefore I should cheat".
Bezos, Musk, Zuckerberg and many many others do everything in their power to reduce costs, including paying less taxes, which includes using tax havens and tax loopholes that they themselves make sure to keep open by "lobbying" politicians. So effectively, to work in general means to work mostly for cheaters, and there is no way to avoid it. Sure, you can stay unemployed and stay clean of the moral corruption that living in a capitalist system entails, but many don't consider that an option; and it's not like buying from them is any better, morally speaking, for the exact same reasons.
I agree with your sentiment. But coming from a generative AI company that says "career development" and "communication" are two of their most popular use cases... That's like a tobacco company telling employees they are not permitted to smoke tobacco.
Well, they probably aren't permitted to smoke tobacco indoors.
I honestly fail to see even the irony. "Company that makes hammers doesn't want you to use hammers all the time". It's a tool.
But if I squint, I _can_ see a mean-spirited "haha, look at those hypocrites" coming from people who enjoy tearing others down for no particular reason.
It is a very sensible position and I think the quote is a bit out of context, but the important part here is who it is coming from -- the company that makes money both on cheating in the job application process (which harms employers) and on replacing said jobs with AI, or at least creating another excuse for layoffs (which harms the employees).
In a sense, they poisoned the well and don't want to drink from it now. Looking at it from this perspective justifies (in the eyes of some people at least) said cheating. Something something catalytic converter from the company truck.
> if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
Sorry, the thought process of thinking that using an LLM for a job application, esp. for a field which requests candid input about one's motivation is acceptable, is beyond me.
It's not that there's anything wrong with this in particular. It's just that the general market seems much more optimistic about AI impact, than the AI companies themselves.
They don't want a motivation letter to be written by an LLM (because it's specifically about the personal motivation of the human candidate) - as far as I can see, this does not reflect either positively or negatively on their level of optimism about AI impact in general.
Companies, especially large ones, are not interested in fads unless directly invested in them. The higher and steeper the initial wave, the bigger the disappointment, or at least the unfulfilled expectations; not always, but surprisingly often.
This is just experience and seniority in general, nothing particular about LLMs. For most businesses, I would behave the same.
> If you're looking to score, sure, it's somewhat unethical but it works.
Observation/Implication/Opinion:
Think reciprocal forces and trash-TV ethics, in both closed and open systems. The consequences are continuously diminished AND unvarying returns, professionally as well as personally, for all parties involved. Stimulating, inspiring, motivating factors, as well as the ability to perceive and "sense", all degrade. But compensation and cheating continue to work, even though the quality of the game, the players, and their output decreases.
Nothing and nobody is resilient "enough" to the mechanism of force and 'counter'-force, so you'd better pick the right strategy. Waiting/processing for a couple of days, lessons, and honest attempts yield exponentially better results than cheating.
Companies should beware of this if they expect results that are qualitatively AND "honestly" safe & sound. This has been ignored in the past decades, which is why we are "here". Too much work, too many jobs, and way too many enabling outs have been lost almost irreversibly, on the individual level as well as in nano-, micro-, and macro-economics.
Applicants using AI is fine but applicants not being able to make that output usefully THEIRS is a problem.
Can I request that they not use AI when evaluating my application, and expect them to respect my wishes? Highly doubtful. Respect is a 2-way street. This is not a collaboration, but a hierarchical mandate in one direction, for protecting themselves from the harms of the very tools they peddle.
I also don't find anything wrong with their stance. Ironic, sure, but I think to judge someone they need to have filters and in this case, the filter is someone who is able to communicate without AI assistance.
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.
There are two backwards things with this:
1) You can't ask people to not use AI when careful, responsible use is undetectable.
It just isn't a realistic request. You'll have great replies without AI use and great replies with AI use, and you won't be able to tell whether a great reply used AI or not. You will just be able to filter sludge and dyslexia.
2) This is still the "AI is cheating" approach, and I had hoped Anthropic would be thought leaders on responsible AI use:
In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
If AI is making your final product and you're none the wiser, it didn't really help you, it just made you addicted to it.
Can't disagree more. Talent is built and perfected upon thousands of hours of practice; LLMs just make you lazy. One thing people with seniority in the field (as I guess you are) don't realize is that LLMs don't help develop "muscle memory" in young practitioners; they just make them miserable, often caged in an infinite feedback loop of bug fixing or trying to untangle a code mess. They may extract some value by using them for studying, but I doubt it, and it only goes so far. When I started, I remember being able to extract so much knowledge just by reading a book about algorithms, trying to reimplement things, breaking them, and so on. Today I can use an LLM because I'm wise enough to spot wrong answers, but I still feel I'm becoming a bit lazy.
I strongly agree with this comment. Anecdotal evidence time!
I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.
In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: We'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him in the wrong direction.
He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But then he hit a plateau really hard soon after, because he was running into bugs and issues he couldn't get solutions from the AI for and he just wasn't able to connect the dots himself.
He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.
As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.
To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.
It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.
You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.
On the other hand, when you had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.
> "every line a hard-fought battle" style that really makes you understand why and how something works
Absolutely true. However:
The real value of AI will be to *be aware* when it is at that local optimum, and then - if unable to find a way forward - at least reliably notify the user that that is indeed the case.
Bottom line, the number of engineering "hard-fought battles" is finite, and they should be chosen very wisely.
The performance multiplier that LLM agents brought changed the world, at least as much as the consumer web did in the 90s, and there will be no turning back.
This is like a computer company around 1980 hiring engineers but forbidding access to computers for some numerical task.
Funny, it reminds me of the reason Konami MSX1 games look the way they do compared to most of the competition: having access to superior development tools - their HP hardware emulator workstations.
If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)
This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.
A big problem was that he couldn't attain a mental model of how the code was behaving at runtime, in particular the lifetimes of data and objects - what would get created or destroyed when, exist at what time, happen in what sequence, exist for the whole runtime of the program vs. what's a temporary resource, that kind of thing.
The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he was and the project just sort of growing organically that way. This is likely also what make the ChatGPT outputs themselves less useful over time: He wasn't aware of enough context to prompt the model with it, so it didn't have much to work with. There wasn't a lot of emerging intelligence a la provide what the client needs not what they think they need.
These days tools like aider end up prompting the model with a repo map etc. in the background transparently, but in 2023/24 that infra didn't exist yet and the context window of the models at the time was also much smaller.
In other words, the evolving nature of these tools might lead to different results today. On the other hand, if it had back then chances are he'd become even more reliant on them. The open question is whether there's a threshold there where it just stops mattering - if the results are always good, does it matter the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.
I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy and almost but not exactly right applications, processes and even laws that people simply have to live with.
If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then sure, we as a society can live in the suboptimal state of everything digital being broken 10% of the time instead of half a percent.
Years back I worked somewhere where we had to PDF documents to e-fax them to a supplier. We eventually found out that on their end it was just being received digitally and auto-converted to PDF.
It never became paper. So we asked if we could just email the PDF instead of paying for this fax service they wanted.
This seems to be the way of things. Oral traditions were devastated by writing, but the benefit is another civilization can hold on to all your knowledge while you experience a long and chaotic dark age so you don't start from 0 when the Enlightenment happens.
There is no 'learning' in the abstract. You learn something. Doing tutorials teaches you how to do the thing you do in them.
It all comes down to what you wanna learn. If you want to acquire skills doing the things you can ask AI to do, probably a bad idea to use them. If you want to learn some pointers on a field you don't even know what key words are relevant to take to a library, LLMs can help a lot.
If you wanna learn complex context dependent professional skills, I don't think there's an alternative to an experienced mentor.
Not sure! My own path was very mentor-dependent. Participating in open source communities worked for me to find my original mentors as well. The other participants are incentivized to mentor/coach because the main thing you're bringing is time and motivation--and if they can teach you what you need to know to come back with better output while requiring less handholding down the road, their project wins.
It's not for everyone because open source tends to require you to have the personality to self-select goals. Outside of more explicit mentor relationships, the projects aren't set up to provide you with a structured curriculum or distribute tasks. But if you can think of something you want to get done or attempt in a project, chances are you'll get a lot of helping hands and eager teachers along the way.
Mostly by reading a good book to get the fundamentals down, then taking on a project to apply the knowledge and filling the gaps with online resources. There are good books and nice open source projects out there. You can get far with these by just studying them with determination. Later you can move on to the theoretical and philosophical parts of the field.
How do you know what a good book is? I've seen recommendations in fields I'm knowledgeable about that were hot garbage. Those were recommendations by reputed people for reputed authors. I don't know how a beginner is supposed to start without trying a few and learning some bad habits.
So, so, so many people have learnt to code on their own without a mentor. It requires a strong desire to learn and perseverance but it’s absolutely possible.
That you can learn so much about programming from books and open source and trial and error has made it a refuge for people with extreme social anxiety, for whom "bothering" a mentor with their questions would be unthinkable.
Failing for a bit, thinking hard and then somehow getting to the answer - for me it was usually tutorials, asking on stackoverflow/forums, finding a random example on some webpage.
The fastest way for me to learn something new is to find working code, or code that I can kick for a bit until it compiles/runs. Often I'll comment out everything and make it print hello world, and then from there try to figure out which bits are essential to bring back in, or simplify/mock, etc., until it works again.
I learn a lot more by forming a hypothesis "to make it do this, I need that bit of code, which needs that other bit that looks like it's just preparing this/that object" - and the hypothesis gets tested every time I try to compile/run.
Nowadays I might paste the error into chatgpt and it'll say something that will lead me a step or two closer to figuring out what's going on.
Why is modifying working code you didn't write better than having an AI help write code with you? Is it that the modified code doesn't run until you fix it? It still bypasses the 'hard won effort' criteria though?
Use LLM. But do not let it be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.
I have a rule for myself as a non-native English speaker: Any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023). Just to prevent LLM from dominating my language comprehension.
You perfectly encapsulated my view on this. I'm utterly bewildered with people who take the opposing position that AI is essentially a complete replacement for the human mind and you'd be stupid not to fully embrace it as your thought process.
I drove cars before the sat nav systems and when I visited somewhere, I'd learn how to drive to there. The second drive would be from memory. However, as soon as I started relying on sat navs, I became dependent on them. I can not drive to a lot of places that I visited more than once without a sat nav these days (and I'm getting older, that's a part of it too).
I wonder if the same thing will happen with coding and LLMs.
I can even feel it in my own coding. I've been coding almost my entire life all the way back to C64 Basic and ever since I am relying on Copilot for most of my regular work I can feel my non AI assisted coding skills get rusty.
I hear this argument all the time, and I think “this is exactly how people who coded in assembly back in the day thought about those using higher level programming languages.”
It is a paradigm shift, yes. And you will know less about the implementation at times, yes. But will you care when you can deploy things twice, three times, five times as fast as the person not using AI? No. And also, when you want to learn more about a specific bit of the AI written code, you can simply delve deep into it by asking the AI questions.
The AI right now may not be perfect, so yes you still need to know how to code. But in 5 years from now? Chances are you will go in your favorite app builder, state what you want, tweak what you get and you will get the product that you want, with maybe one dev making sure every once in a while that you’re not messing things up - maybe. So will new devs need to know high level programming languages? Possibly, but maybe not.
1. We still teach assembly to students. Having a mental model of what the computer is doing is incredibly helpful. Every good programmer has such a model in my experience. Some of them learned it by studying it explicitly, some picked it up more implicitly. But the former tends to be a whole lot faster without the stop on the way where you are floundering as a mid level with a horribly incorrect model for years (which I’ve seen many many times).
2. Compilers are deterministic. You can recompile the source code and get the same assembly a million times.
You can also take a bit of assembly then look at the source code of the compiler and tell exactly where that assembly came from. And you can change the compiler to change that output.
3. Source code is written in a formal unambiguous language.
I’m sure LLMs will be great at spitting out green field apps, but unless they evolve to honest to goodness AGI, this won’t get far beyond existing low code solutions.
No one has solved or even proposed a solution for any of these issues beyond “the AI will advance sufficiently that humans won’t need to look at the code ever. They’ll never need to interact with it in any way other than through the AI”.
But to get to that point will require AGI and the AI won’t need input from humans at all, it won’t need a manager telling it what to build.
The point of coding is not to tell a machine what to do.
The point of coding is to remove ambiguity from the specs.
"Code" is unambiguous, deterministic and testable language -- something no human language is (or wants to be).
LLMs today make many implementation mistakes where they confuse one system with another, assume some SQL commands are available in a given SQL engine when they aren't, etc. It's possible that these mistakes will be reduced to almost zero in the future.
But there is a whole other class of mistakes that cannot be solved by code generation -- even less so if there's nobody left capable of reading the generated code. It's when the LLM misunderstands the question, and/or when the requirements aren't even clear in the head of the person writing the question.
I sometimes try to use LLMs like this: I state a problem, a proposed approach, and ask the LLM to shoot holes in the solution. For now, they all fail miserably at this. They recite "corner cases" that don't have much or anything to do with the problem.
Only coding the happy path is a recipe for unsolvable bugs and eventually, catastrophe.
You seem so strongly opinionated and sure of what the future holds for us, but I must remind you that in your example, "from assembly to higher level programming languages", the demand for programmers didn't go down, it went up, and as companies were able to develop more, more development and more investments were made, more challenges showed up, new jobs were invented and so on... You get where I'm going... The thing I'm questioning is how lazy new technologies make you; many programmers even before LLMs had no idea how a computer works and only programmed in higher level languages. It was a disaster before, with many people claiming software was bad and the industry going down a road where software quality matters less and less. Well, that situation turbo-boosted by LLMs, because "doesn't matter, I can deploy 100x a day", disrupting user experience, imo won't lead us far.
I think the same kind of critical thinking that was required to reimplement and break algorithms must now be used to untangle AIs answers. In that way, it's a new skill, with its own muscle memory. Previously learnt skills like debugging segfaults slowly become less relevant.
Let me give you an example from yesterday. I was learning tailwind and had a really long class attribute on a div which I didn't like. I wanted to split it and found a way to do it using my JavaScript framework (the new way to do it was suggested by deepseek). When I started writing by hand the list of classes in the new format copilot gave me an auto complete suggestion after I wrote the first class. I pressed tab and it was done.
I showed this to my new colleague, who is a bit older than me and sort of had similar attitudes to yours. He told me he can do the same with some multi-cursor shenanigans, and I'll be honest, I wasn't interested in his approach. It seems like he would've taken more time to solve the same problem even though he had a superior technique to mine. He said sure, it takes longer, but I need to verify by reading the whole class list and that's a pain; but I just reloaded the page and it was fine. He still wasn't comfortable with me using copilot.
So yes, it does make me lazier, but you could say the same about using go instead of C or any higher level abstraction. These tools will only get better and more correct. It's our job to figure out where it is appropriate to use them and where it isn't. Going to either extreme is where the issue is.
I wouldn’t say it’s laziness. The thing is that every line of code is a burden as it’s written once, but will be read and edited many times. You should write the bare amount that makes the project work, then make it readable and then easily editable (for maintenance). There are many books written about the last part as it’s the hardest.
When you take all three into consideration, an LLM won't really matter unless you don't know much about the language or the libraries. When people go on about Vim or Emacs, it's just that they make the whole thing go faster.
Remember though that laziness, as I learned in computing, is kinda "doing something later": you might have pushed the change/fix faster than your senior fellow programmer, but you still need to review and test that change, right? Maybe the change you're talking about was really trivial and you just needed to refresh your browser to see it, but when it's not, being lazy about a change will only make you suffer more when reviewing a PR and testing that the non-trivial change works for thousands of customers with different devices.
The problem is he wasn't comfortable with my solution even though it was clearly faster and it could be tested instantly. It's a mental block for him and a lot of people in this industry.
I don't advocate blindly trusting LLMs. I don't either and of course test whatever it spits out.
Testing usually isn't enough if you don't understand the solution in the first place. Testing is a sanity check for a solution that you do understand. Testing can't prove correctness; it can only find (some) errors.
LLMs are fine for inspiration in developing a solution.
I think it's a lot more complicated than that. I think it can be used as a tool for people who already have knowledge and skills, but I do worry how it will affect people growing up with it.
Personally I see it more like going to someone who (claims) to know what they're doing and asking them to do it for me. I might be able to watch them at work and maybe get a very general idea of what they're doing but will I actually learn something? I don't think so.
Now, we may point to the fact that previous generations railed at the degeneration of youth through things like pocket calculators or mobile phones, but I think there is a massive difference between those things and so-called AI. Where those things remained obligatorily tools (if you give a calculator to someone who doesn't know any formulae, it will be useless to them), I think so-called AI can just jump straight to giving you the answer.
I personally believe that there are necessary steps that must be passed through to really obtain knowledge and I don't think so-called AI takes you through those steps. I think it will result in a generation of people with markedly fewer and shallower skills than the generations that came before.
AI will let some people conquer skills otherwise out of their reach, with all the pros and cons of that. It is exactly like the example someone else brought up of not needing to know assembly anymore with higher level languages: true, but those who do know it and can internalize how the machines operate have an easier time when it comes to figuring out the real hard problems and bugs they might hit.
Which means that you only need to learn machine language and assembly superficially, and you have a good chance of being a very good programmer.
However, where I am unsure how the things will unfold is that humans are constantly coming up with different programming languages, frameworks, patterns, because none of the existing ones really fit their mental model or are too much to learn about. Which — to me at least — hints at what I've long claimed: programming is more art than science. With complex interactions between a gazillion of mildly incompatible systems, even more so.
As such, for someone with strong fundamentals, AI tools never provided much of a boon to me (yet). Incidentally, neither did StackOverflow ever help me: I never found a problem that I struggled with that wasn't easily solved with reading the upstream docs or upstream code, and when neither was available or good enough, SO was mostly crickets too.
These days, I rarely do "gruntwork" programming, and only get called in on really hard problems, so the question switches to: how will we train the next generation of software engineers who are going to be called in for those hard problems?
Because let's admit it, even today, not everybody can handle them.
Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.
The use of AI is not just a labour saving device, it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training dataset, and come to realise they have no real foundation to rely on.
Code generative AI, as it currently exists, is a poisoned chalice.
It is if the way to learn is doing it without a tool. Imagine using a robot to lift weights if you want to grow your own muscle mass. "Robot is a tool"
"Growing your own muscle mass" is an artificial goal that exists because of tools. Our bodies evolved under the background assumption that daily back-breaking labor is necessary for survival, and rely on it to stay in good operating conditions. We've since all but eliminated most of that labor for most people - so now we're forced to engage in otherwise pointless activity called "exercise" that's physically hard on purpose, to synthesize physical exertion that no longer happens naturally.
So obviously, your goal is strictly to exert your body, you have to... exert your body. However, if your goal is anything else, then physical effort is not strictly required, and for many people, for many reasons, is often undesirable. Hence machines.
And guess what, people's overall health and fitness have declined. Obesity is at an all time high. If you're in the US, there is a 40% chance you are obese. Your body likely contains very little muscle mass, and you are extremely likely to die of side effects of metabolic syndrome.
People are seeing the advent of machines replacing all physical labor and transportation, not gradually like in the 20th century, but within the span of a decade: going from the average physical exertion of 1900 to the average modern lack of physical exertion - take a car every day, do no manual labor, do no movement.
They are saying that you need exercise to replace what you are losing, that you need to train your body to keep it healthy and can't just rely on machines/robots to do everything for you, because your body needs that exertion - and your answer is to say "now that we have robots there is no need to exercise even for exercise's sake". A point that's pretty much wrong, as modern-day physical health shows.
You've completely twisted what the parent post was saying, and I can't but laugh out loud at claims like:
> there is a 40% chance you are obese.
Obesity is not a random variable — "darn, so unlucky for me to have fallen in the 40% bucket of obese people on birth": you fully (except in rare cases) control the factors that lead to obesity.
A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs (or be under when you are trying to lose weight). While you can achieve that by increasing your energy needs (exercise) and maintain energy input, you don't strictly have to.
Your link is also filled with funny "science" like the following:
> Neck circumference of more than 40.25 cm (15.85 in) for men ... is considered high-risk for metabolic syndrome.
Darn, as a 195cm / 6'5" male and neck circumference of 41cm (had to measure since I suspected I am close), I am busted. Obviously it correlates, just like BMI does (which is actually "smarter" because it controls for height), but this is just silly.
Since you just argued a point someone was not making: I am not saying there are no benefits to physical activity, just that obesity and physical activity — while correlated, are not causally linked. And the problems when you are obese are not the same as those of being physically inactive.
>And guess what, people's overall health and fitness have declined.
Have you seen what physical labor does to a man's body? Go to a developing country to see it. Their 60 year olds look like our 75 year olds.
Sure, we're not as healthy as we could be with proper exercise and diet. But in the long run, sitting on your butt all day is better for your body than hard physical labor.
It could be a simple lifestyle that makes you "fit" (lots of walking, working a not-too-demanding physical job, a physical hobby, biking around...).
The parent post is saying that technological advance has removed the need for physical activity to survive, but all of the gym rats have come out of the woodwork to complain how we are all going to die if we don't hit the gym, pronto.
- Physical back-breaking work has not been eliminated for most people.
- Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people (arguable for most people as it is a mammalian trait) ergo it is not undesirable. UK NHS calls physical exercise essential.
> Physical back-breaking work has not been eliminated for most people.
I said most of it for most people specifically to avoid the quibble about mechanization in poorest countries and their relative population sizes.
> Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people
I envy them. I'm not one of them.
> ergo it is not undesirable
Again, I specifically said "and for many people, for many reasons, is often undesirable" as to not have to spell out the obvious: you may like the exercise benefits of a physically hard work, but your boss probably doesn't - reducing the need for physical exertion reduces workplace injuries, allows worker to do more for longer, and opens up the labor pool to physically weaker people. So even if people only ever felt pleasure from physical exertion, the market would've been pushing to eliminate it anyway.
> UK NHS calls physical exercise essential.
They wouldn't have to if people actually liked doing it.
Equally, if you just point to your friend and say "that's Dave, he's gonna do it for me", they won't give you the job. They'll give it to Dave instead.
That much is true, but I've seen a forklift operator face a situation where a pallet of products fell apart and heavy items ended up on the floor. Guess who was in charge of picking them up and manually shelving them?
The claim was that it's lazy to use a tool as a substitute for learning how to do something yourself. But when the tool entirely obviates the need for doing the task yourself, you don't need to be able to do it yourself to do the job. It doesn't matter if a forklift driver isn't strong enough to manually carry a load, similarly once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
> once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
Once AI is that good, the developer won't have a job any more.
The whole question is if the AI will ever get that good.
All evidence so far points to no (just like with every tool — farmers are still usually strong men even if they've got tractors that are thousands of times stronger than any human), but that still leaves a bunch of non-great programmers out of a job.
The point he's making is, we still have to learn to use tools, no? There still has to be some knowledge there, or else you're just sat sifting through all the crap the AI spits out endlessly for the rest of your life. The op wrote his comment like it's a complete replacement rather than an enhancement.
Copying and pasting from stack overflow is a tool.
It’s fine to do in some cases, but it certainly gets abused by lazy incurious people.
Tool use in general certainly can be lazy. A car is a tool, but most people would call an able bodied person driving their car to the end of the driveway to get the mail lazy.
Tools help us to put layers of abstraction between us and our goals. when things become too abstracted we lose sight of what we're really doing or why. Tools allow us to feel smart and productive while acting stupidly, and against our best interests. So we get fascism and catastrophic climate change, stuff like that. Tools create dependencies. We can't imagine life without our tools.
"We shape our tools and our tools in turn shape us" said Marshall McLuhan.
For learning it can very well be. And it really depends on the tool and the task. A calculator is a fine tool, but a symbolic solver might be a few steps too far if you don't already understand the process, and possibly the start and end points.
Problem with AI is that it is often black box tool. And not even deterministic one.
AI as applied today is pretty deterministic. It does get retrained and tuned often in most common applications like ChatGPT, but without any changes, you should expect a deterministic answer.
Learning comes from focus and repetition. Talent comes from knowing which skill to use. Using AI effectively is a talent. Some of us embrace learning new skills while others hold onto the past. AI is here to stay, sorry. You can either learn to adapt to it or you can slowly die.
The argument that AI is bad and anyone who uses it ends up in a tangled mess is only your perspective and your experience. I’m way more productive using AI to help me than I ever was before. Yes, I proofread the result. Yes, I can discern a good response from a bad one.
AI isn’t a replacement for knowing how to code, but it can be an extremely valuable teacher to those orgs that lack proper training.
Any company that has the position that AI is bad, and lacks proper training and incentives for those that want to learn new skills, isn’t a company I ever want to work for.
Yeah, and I'd like to emphasize that this is qualitatively different from older gripes such as "calculators make kids lazy in math."
This is because LLMs have an amazing ability to dream up responses stuffed with traditional signals of truthfulness, care, engagement, honesty etc... but that ability is not matched by their chances of dreaming up answers and ideas that are logically true.
This gap is inevitable from their current design, and it means users are given signals that it's safe for their brains to think-less-hard (skepticism, critical analysis) about what's being returned at the same moments when they need to use their minds the most.
That's new. A calculator doesn't flatter you or pretend to be a wise professor with a big vocabulary listening very closely to your problems.
That's not what my comment implies. I'm just saying that relying solely on LLMs makes you lazy, like relying just on google/stackoverflow or whatever; it doesn't shift you from a resource that can be laid off to a resource that can't. You must know your art, and use the tools wisely.
It's 2025, not 2015. 'google it and add the word reddit' is a thing. For now.
Google 'reflections on trusting trust'. Your level of trust in software that purports to think for you out of a multi-gig stew of word associations is pretty intense, but I wouldn't call it pretty sensible.
Spot on. I'm not A Programmer(TM), but I have dabbled in a lot of languages doing a lot of random things.
Sometimes I have qwen2.5-coder:14b whip up a script to do some little thing where I don't want to spend a week doing remedial go/python just to get back to learning how to write boilerplate. All that experience means I can edit it easily enough because recognition kicks in and drags the memory kicking and screaming back into the front.
I quickly discovered it was essentially defaulting to "absolute novice." No error handlers, no file/folder existence checking, etc. I had to learn to put all that into the prompt.
>> "Write a python script to scrape all linked files of a certain file extension on a web page under the same domain as the page. Follow best practices. Handle errors, make strings OS-independent, etc. Be persnickety. Be pythonic."
I'm far from an expert and my memory might be foggy, but that looks like a solid script. I can see someone with less practice trying the first thing that comes out without all the extra prompting, doing battle with debuggers, hitting errors, and not having any clue.
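Out of curiosity, here's a rough sketch of my own (not the model's actual output) of the kind of script that prompt asks for, with the error handling and OS-independent paths spelled out. The example URL, extension, and output directory are placeholders, and it assumes the requests and beautifulsoup4 packages are installed:

    import os
    import sys
    from urllib.parse import urljoin, urlparse

    import requests                      # assumes `requests` is installed
    from bs4 import BeautifulSoup        # assumes `beautifulsoup4` is installed

    def scrape(page_url, extension, out_dir="downloads"):
        """Download every same-domain link on page_url that ends with `extension`."""
        os.makedirs(out_dir, exist_ok=True)        # don't assume the folder exists
        base_host = urlparse(page_url).netloc
        try:
            resp = requests.get(page_url, timeout=30)
            resp.raise_for_status()
        except requests.RequestException as exc:
            sys.exit(f"Could not fetch {page_url}: {exc}")
        soup = BeautifulSoup(resp.text, "html.parser")
        with requests.Session() as session:        # reuse one connection pool
            for tag in soup.find_all("a", href=True):
                url = urljoin(page_url, tag["href"])
                parsed = urlparse(url)
                if parsed.netloc != base_host or not parsed.path.endswith(extension):
                    continue                        # other domain or other file type
                target = os.path.join(out_dir, os.path.basename(parsed.path))
                try:
                    with session.get(url, stream=True, timeout=30) as dl:
                        dl.raise_for_status()
                        with open(target, "wb") as fh:
                            for chunk in dl.iter_content(chunk_size=8192):
                                fh.write(chunk)
                    print(f"Saved {target}")
                except (requests.RequestException, OSError) as exc:
                    print(f"Skipping {url}: {exc}", file=sys.stderr)

    if __name__ == "__main__":
        # Placeholder invocation; swap in the real page and extension.
        scrape("https://example.com/downloads.html", ".pdf")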
For example: I wrote a thing that pulled a bunch of JSON blobs from an API. Fixing the "out of handles" error is how I learned about file system and network default limits on open files and connections, and buffering. Hitting stuff like that over and over was educational and instilled good habits.
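To make the "out of handles" lesson concrete, here's a minimal before/after sketch of the usual kind of fix: reuse one pooled connection and close each file promptly, instead of opening a fresh socket per request and leaving file objects for the garbage collector. The endpoint and blob IDs are made up for illustration:

    import requests

    API = "https://api.example.com/blobs/{}"    # hypothetical endpoint
    blob_ids = range(10_000)

    # Naive version (the sort of thing that can exhaust OS limits on open
    # sockets and file descriptors): a new connection per request, and file
    # objects left for the garbage collector to close.
    #
    #   for blob_id in blob_ids:
    #       open(f"{blob_id}.json", "wb").write(requests.get(API.format(blob_id)).content)

    # Keeping one pooled session and closing files as soon as they're written
    # keeps the number of open handles flat.
    with requests.Session() as session:
        for blob_id in blob_ids:
            resp = session.get(API.format(blob_id), timeout=30)
            resp.raise_for_status()
            with open(f"{blob_id}.json", "wb") as fh:   # closed when the block exits
                fh.write(resp.content)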
Feeling "Lazy" is just an emotion, which to me has nothing to do with how productive you are as a human. In fact the people not feeling lazy but hyped are probably more effective and productive. You're just doing this to yourself because you have assumptions on how a productive/effective human should function. You could call it "stuck in the past".
I have the idea that there are 2 kinds of people, those avidly against AI because it makes mistakes (it sure does) and makes one lazy and all other kinds of negative things, and those that experiment and find a place for it but aren't that vocal about it.
Sure, you can go too far. I've heard someone in Quality Control proclaim "ChatGPT just knows everything, it saves me so much time!", to which I asked if they had heard about hallucinations, and they hadn't; they'd just been copying whatever it said into their reports. Which is certainly problematic.
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
At least in theory that’s not what homework is. Homework should be exercises to allow practicing whatever technique you’re trying to learn, because most people learn best by repeatedly doing a thing rather than reading a few chapters of a book. By applying an LLM to the problem you’re just practicing how to use an LLM, which may be useful in its own right, but will turn you into a one trick pony who’s left unable to do anything they can’t use an LLM for.
In the context of homework, how likely is someone still in school, who probably considers homework to be an annoying chore, going to do this?
I can't really see an optimistic long-term result from that, similar to giving kids an iPad at a young age to get them out of your hair: shockingly poor literacy, difficulty with problem solving or critical thinking, exacerbating the problems with poor attention span that 'content creators' who target kids capitalise on, etc.
I'm not really a fan of the concept of homework in general but I don't think that swapping brain power with an OpenAI subscription is the way to go there.
It was the same way I think a lot of us used textbooks back in the day. Can’t figure out how to solve a problem, so look around for a similar setup in the chapter.
If AI is just a search over all information, this makes that process faster. I guess the downside is there was arguably something to be learned searching through the chapter as well.
Homework problems are normally geared to the text book that is being used for the class. They might take you through the same steps, developing the knowledge in the same order.
Using another source is probably going to mess you up.
Depends. Do they care about the problem? If so, they'll quickly hit diminishing returns on naive LLM use, and be forced to continue with primary sources.
Um, get with the times, luddite. You can use an LLM for everything, including curing cancer and fixing climate change.
(I still mentally cringe as I remember the posts about Disney and Marvel going out of business because of Stable Diffusion. That certainly didn't age well.)
It would be great if all technologies freed us and gave us more time to do useful or constructive stuff instead. But the truth is, and AI is a very good example of this, a lot of these technologies are just making people dumb.
I'm not saying they are essentially bad, or that they are not useful at all, far from that. But it's about the use they are given.
> You can use an LLM for everything, including curing cancer and fixing climate change.
Maybe, yes. But the danger is rather in all the things you no longer feel you have a need to do, like learning a language, or how to properly write, or read.
LLM for everything is like the fast-food of information. Cheap, unhealthy, and sometimes addicting.
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
Well, no. Homework is an aid to learning and LLM output is a shortcut for doing the thinking and typing yourself.
Copy and pasting some ChatGPT slop into your GCSE CS assignment (as I caught my 14yo doing last night…) isn’t learning (he hadn’t even read it) - it’s just chucking some text that might be passable at the examiner to see if you can get away with it.
Likewise, recruitment is a numbers game for underqualified applicants. Using the same shortcuts to increase the number of jobs you apply for will ultimately “pay off”, but you’re only getting a short-term advantage. You still haven’t really got the chops to do the job.
I've seen applicants use AI to answer questions during the 'behavioral' Q&A-style interviews. Those applicants are 'cheating', and it defeats the whole purpose as we want to understand the candidate and their experience, not what LLMs will regurgitate.
Thankfully it's usually pretty easy to spot this so it's basically an immediate rejection.
If the company is doing behavioral Q&A interviews, I hope they're getting as many bad applicants as possible.
Adding a load of pseudo-science to the already horrible process of looking for a job is definitely not what we need.
I'll never submit myself to pseudo-IQ tests and word association questions for a job that will 99.9% of the time ask you to build CRUD applications.
The lengths companies go to to avoid doing a proper job of hiring (one of the most important jobs they have), with automatic screening and these types of interviews, are astonishing.
Good on whoever uses AI for that kind of shit. They want bullshit so why not use the best bullshit generators of our time?
You want to talk to me and get a feeling about how I'd behave? That's totally normal and expected. But if you want to get a bunch of written text to then run sentiment analysis on it and get a score on how "good" it is? Screw that.
I think there may be a misunderstanding here. I just want to talk to the applicant and see what they think about working with designers, their thoughts on learning golang as a javascript developer, or how they've handled a last minute "high priority" project.
You could reasonably argue that they're not cheating, indeed they're being very behaviorally revealing and you do understand everything you need to understand about them.
Too bad for them, but works for you…
I'm imagining a hiring workflow, for a role that is not 'specifically use AI for a thing', in which there is no suggestion that you shouldn't use AI systems for any part of the interview. It's just that it's an auto-fail, and if someone doesn't bother to hide it it's 'thanks for your time, bye!'.
And if they work to hide it, you know they're dishonest, also an auto-fail.
Homework is a proxy for your retention of information and a guide to what you should review. That somehow schools started assigning grades to it is as nonsensically barbaric as public bare ass caning was 80 years ago and driven by the same instinct.
I agree on the grades part. And I was just thinking that the university that I went to never gave us grades during the year (the only exception I can think of was when we did practice exam papers so we had an idea how we were doing).
I think homework is more than a guide to what you should review though. It's partly so that the teacher can find out what students have learned/understood so they can adapt their teaching appropriately. It's also because using class/contact time to do work that can be done independently isn't always the best use of that time (at least once students are willing and capable of doing that work independently).
Consider the case where there is a non-native English speaker and they use AI to misrepresent their standard of written English communication.
Assume their command of English is insufficient to get the job ultimately. They've just wasted their own time and the company's time in that situation.
>Hey Claude, translate this to Swahili from English. Ok, now translate my response from Swahili to English. Thanks.
We're close to the point where, using a human -> stt -> llm -> tts -> human pipeline, you can do real-time, high-quality, bidirectional spoken translation on a desktop.
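(A rough sketch of that pipeline shape, assuming the openai-whisper and pyttsx3 packages for the speech ends; llm_translate is a deliberately unimplemented placeholder for whichever local or hosted model you'd call.)

    import whisper   # openai-whisper, runs locally
    import pyttsx3   # offline text-to-speech

    stt_model = whisper.load_model("base")
    tts_engine = pyttsx3.init()

    def llm_translate(text: str, target_lang: str) -> str:
        # Placeholder: call your LLM of choice with a "translate to {target_lang}" prompt.
        return text  # pass-through stub so the pipeline runs end to end

    def relay(audio_path: str, target_lang: str) -> None:
        # human speech -> text
        heard = stt_model.transcribe(audio_path)["text"]
        # text -> translated text
        translated = llm_translate(heard, target_lang)
        # translated text -> speech for the other human
        tts_engine.say(translated)
        tts_engine.runAndWait()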
Why not just send the Swahili and let them MTL on the other end? At least then they have the original if there’s any ambiguity.
I’ve read multiple LLM job applications, and every single time I’d rather have just read the prompt. It’d be a quarter of the length and contain no less information.
I think that's wishful thinking. You're underestimating how much people can tell about other people with the smallest amount of information. Humans are highly attuned to social interactions, and synthetic responses are more obvious than you think.
I was a TA years ago, before there were LLMs one could use to cheat effectively. The professor and I still detected a lot of cheating. The problem was what to do once you've caught it? If you can't prove that it's cheating -- you can't cite the sources copied from -- is it worth the fight? The professor's solution was just to knock down their grades.
At that time just downgrading them was justifiable: though they had copied in someone else's text, they often weren't competent to identify the text that was best to copy, and they had to write some text themselves to stitch it into a coherent whole, which they also weren't competent to do. If they had used LLMs we would have been stuck. We would be sure they had cheated, but their essay would still be better than that of many/most of their honest peers who had tried to demonstrate relevant skill and knowledge.
I think there is no solution except to stop assigning essays. Writing long form text will be a boutique skill like flint knapping, harvesting wild tubers, and casting bronze swords. (Who knows, the way things are going these skills might be relevant again all too soon.)
But this is exactly what we already do. Most exams have a "no cheating" rule, even though it's perfectly possible to cheat. The point is to discourage people from doing so, not to make it impossible.
>In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
No, the homework is a proxy for whatever the homework provider is (hopefully) trying to evaluate. Talent is a vague word, and what we consider talent worth nurturing might be considered worthless from some other perspective.
For example, most schools will happily give you plenty of national myths to learn and evaluate how much of them you can reproduce on demand. They are far less likely to ask you to critique those myths: to find out who created them, with what intentions, and what actual effects they had on people at large scale, judged by the different perspectives and metrics available out there.
It's a warning sign, designed to marginally improve the signal they are interested in. Some n% of applicants will reconsider. That's all it needs to do to make it worth it, because putting that one sentence there required very little effort.
>You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
Isn't that the definition of cheating? Presenting a false level of talent you don't possess?
>AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
There's a difference between knowing how to use a calculator and knowing how to do math. The same goes for AI. Being talented at giving AI prompts doesn't mean you are generally talented or have AI-unrelated talents desired by an employer.
Eh, you can only learn how to do something when you actually do it (generally).
AI just lets you get deeper down the fake-it-until-you-make-it hole. At some point it will actually matter that you know how to do something, and good luck then.
They want to use AI in their hiring process. They want to be able to offload their work and biases to the machine. They just don't want other people to do it.
There's a reason that the EU AI legislation made AI that is used to hire people one of the focal points for action.
I think this gets to the core of the issue: interviews should be conducted by people who deeply understand the role and should involve a discussion, not a quiz.
This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.
LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.
The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.
When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.
This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?
This is an excellent comment and it more-or-less changes my opinion on this issue. I approached it with an "AI bad" mentality which, if truth be told, I'm still going to hold. But you make a very good argument for why AI should be allowed and carefully monitored.
I think it was the spell-checker analogy that really sold me. And this ties in with the whole point that "AI" isn't one thing, it's a huge spectrum. I really don't think there's anything wrong with an interviewee using an editor that highlights their syntax, for example.
Where do you draw the line, though? Maybe you just don't. You conduct the interview and, if practical coding is a part of it, you observe the candidate using AI (or not) and assess them accordingly. If they just behave like a dumb proxy, they don't get the job. Beyond that, judge how dependent they are on AI and how well they can use it as a tool. Not easy, but probably better than just outright banning AI.
Exactly - being transparent about AI usage in interviews makes much more sense. Using AI effectively is becoming a crucial skill, like knowing how to use any other development tool. Using it well can supercharge productivity, but using it poorly can be counterproductive or even dangerous.
It's interesting that most software departments now expect their staff to use AI tools day-to-day, yet many still ban it during interviews. Why not flip this around? Let candidates demonstrate how they actually work with AI. It would be far more valuable to assess someone's judgment and skill in using these tools rather than pretending they don't exist.
If a candidate wants to show how they leverage AI in their workflow, that should be seen as a positive - it demonstrates transparency and real-world problem-solving approaches. After all, you're hiring someone for how they'll actually work, not how they perform in an artificial AI-free environment that doesn't reflect reality.
The key isn't whether someone uses AI, but how effectively they use it as part of their broader skillset. That's what companies should really be evaluating.
I feel very similarly. I'm also an extremely visual thinker who has a job as a programmer, and being able to bounce ideas back and forth between a "gifted intern" and myself is invaluable (in the past I used to use actual interns!)
I regard it as similar to using a text-to-speech tool for a blind person - who cares how they get their work done? I care about the quality of their work and my ability to interact with them, regardless of the method they use to get there.
Another example I would give: imagine there's someone who only works as a pair programmer with their associate. Apart, they are completely useless. Together, they're approximately 150% as productive as any two programmers pairing together. Would you hire them? How much would you pay them as a pair? I submit the right answer is yes, and something north of one full salary split in two. But for bureaucracy I'd love to try it.
I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).
I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview was never just to answer the coding puzzle anyway.
To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment you're not going to pass the interview, no matter if it's done with AI, from memory or with stack overflow.
So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
> So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
Candidates could also have an AI listening to the questions and giving them answers. There are other ways that they could be in the process without copy/pasting blindly.
> To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc).
Exactly, that's why I feel like saying "AI is not allowed" makes it all clearer. As interviewers we want to see these abilities you have, and if candidates use an AI it's harder to know what's them and what's the AI. It's not that we don't think AI is a useful tool, it's that it reduces the amount of signal we get in an interview; and in any case there's the assumption that the better someone performs, the better they could use AI.
You could also learn a lot from what someone is asking an AI assistant.
Someone asking: "solve this problem" vs "what is the difference between array and dict" vs "what is the time complexity of a hashmap add operation", etc.
They give you different nuances on what the candidate knows and how they are approaching the understanding of the problem and its solution.
This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in the past months. It's not the applicants that are the problem, it's the outdated way of doing interviews.
Because as someone who’s interviewing, I know you can use AI — anyone can. It mostly obscures my view of how you'd handle the pitfalls and the design and architecture decisions that proper engineering roles require. Especially for senior and above applications, I want to assess how you think about problems, which gives the candidate a chance to show their experience, their technical understanding, and their communication skills.
We don’t want to work with AI, we are going to pay the person for the person's time, and we want to employ someone who isn’t switching off half their cognition when a hard problem approaches.
> No, not everyone can really use AI to deliver something that works
"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.
Anyone can prompt an AI for answers, it takes skill and knowledge to use those answers in something that works. By prompting AI for simple questions you don't train your skill/knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.
"Put simply, using AI makes you worse at your job - precisely when you need to be better."
I don't follow.
Usually jobs require delivering working things. The more efficiently the worker knows their tools (like AI), the more they will deliver -> the better they are at their job.
If they cannot deliver reliable, working things, because they do not understand the LLM output, then they fail at delivering.
You cannot just reduce programming to "deliver working things", though. For some tasks, sure, "working" is all that matters. For many tasks, though, efficiency, maintainability, and other factors are important.
You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task.
Completely agree. I'm judging the outputs of a process, I really am only interested in the inputs to that process as a matter of curiosity.
If I can't tell the difference, or if the AI helps you write drastically better code, I see it as no more nor no less than, for example, pair programming or using assistive devices.
I also happen to think that most people, right now, are not very good at using AI to get things done, but I also expect those skills to improve with time.
Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc. For the record, I'm not just saying "AI bad"; I've come around to some use of AI being acceptable in an interview, provided it's properly assessed.
> Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc
Agreed, but I as the "end user" care not at all whether you're running a local LLM that you fine tune, or storing it all in your eidetic memory, or writing it down on post it notes that are all over your workspace[1]. Anything that works, works. I'm results oriented, and I do care very much about the results, but the methods (within obvious ethical and legal constraints) are up to you.
[1] I've seen all three in action. The post-it notes guy was amazing though. Apparently he had a head injury at one point and had almost no short term memory, so he coated every surface in post-its to remind himself. You'd never know unless you saw them though.
I think we're agreeing on the aim—good results—but disagreeing on what those results consist of. If I'm acting as a 'company', one that wants a beneficial relationship with a productive programmer for the long-term, I would rather have [ program that works 90%, programmer who is 10% better at their job having written it ] as my outputs than a perfect program and a less-good programmer.
I take epistemological issue with that, basically, because I don't know how you measure those things. I believe fundamentally that the only way to measure things like that is to look at the outputs, and whether it's the system improving or the person operating that system improving I can't tell.
What is the difference between a "less good programmer" and a "more good programmer" if you can't tell via their work output? Are we doing telepathy or soul gazing here? If they produce good work they could be a team of raccoons in a trench coat as far as I'm aware, unless they start stealing snacks from the corner store.
There is also a skill in prompting the AI for the right things in the right way in the right situations. Just like everyone can use google and read documentation, but some people are a lot better at it than others.
You absolutely can be a great developer who can't use AI effectively, or a mediocre developer who is very good with AI.
> not everyone can really use AI to deliver something that works.
That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.
And that it's easier to test you when you're working by yourself.
I see this line of "I need to assess your thinking, not the AI's" thinking so often from people who claim they are interviewing, but they never recognize the elephant in the room for some reason.
If people can AI their way into the position you are advertising, then at least one of the following two things have to be true:
1) the job you are advertising can be _literally_ solved by AI
2) you are not tailoring your interview process properly to the actual job that the candidate will need to do, hence the handwave-y "oh well harder problems will come up later that the AI will not be able to do". Focus the interview on the actual job that the AI can't do, and your worries will disappear.
My impression is that the people who are crying about AI use in interviews are the same people who refuse to make an effort themselves. This is just the variation of the meme where you are asked to flip a red black tree on a whiteboard, but then you get the job, and your task is to center a button with CSS. Make an effort and focus your interview on the actual job, and if you are still worried people will AI their way into it, then what position are you even advertising? Either use the AI to solve the problem then, or admit that the AI can't solve this and stop worrying about people using it.
>We don’t want to work with AI, we are going to pay the person for the person's time
If your interview problems are representative of the work that you actually do, and an AI can do it as well as a qualified candidate, then that means that eventually you'll be out-competed by a competitor that does want to work with AI, because it's much cheaper to hire an AI. If an AI could do great at your interview problems but still suck at the job, that means your interview questions aren't very good/representative.
Then they shouldn't use libraries, open source code or even existing compilers. They shouldn't search online (man pages are OK). They should use git plumbing commands and sh (not bash or zsh). They should not have potable water in their house but distill river water.
Very very true! Give them a take home assignment first and if they have a good result on that, give them an easier task, without AI, in person. Then you will quickly figure out who actually understands their work
If the interview consists of the interviewer asking "Write (xyz)", the interviewee opening copilot and asking "Write (xyz)", and accepting the code. What was the point of the interview? Is the interviewee a genius productive 10x programmer because by using AI he just spent 1/10 the time to write the code?
Sure, maybe you can say that the tasks should be complex enough that AI can't do them, but AI systems are constantly changing, collecting user prompts and training to improve on them. And sometimes the candidates aren't deep enough in the hiring process yet to justify spending significant time on a complex task. It's just easier and more effective to simply say "no AI, please".
If an AI can do your test better than a human in 2025 it reflects not much better on your test than if a pocket calculator could do your test better than a human in 1970.
That did happen and the result from the test creators was the same back then: "we're not the problem, the machines are the problem. ban them!"
In the long run it turned out that if you could cheat with a calculator though, it was just a bad test....
I think there is an unwillingness to admit that there is a skill issue here with the test creators, and that if they got better at their job they wouldn't need to ban candidates from using AI.
It's surprising to hear this from anthropic though.
The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.
This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.
If you know, and even encourage, your employees to use LLMs at work you should want to see how well candidates present themselves in that same situation.
I still don't know how to quit Vim without googling for instructions :P
As an anecdote from my time at uni, I can share that all our exams were either writing code with pen on paper for 3-4 hours, or a take-home exam that would make up 50% of the final grade. There was never any expectation that students would use pen and paper on their take-home exams. You were free to use your books and to search the web for help, but you were not allowed to copy any code you found without citing it. Also not allowed to collaborate with anyone.
Halfway through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' questions. Once the questions became more AI-resistant the candidate floundered. Their English language skills and their general reasoning declined catastrophically. These questions had originally been introduced to see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'
Everyone arguing for LLMs as a corrupting crutch needs to explain why this time is different: why the grammar-checkers-are-crutches, don't-use-wikipedia, spell-check-is-a-crutch, etc. etc. people were all wrong, but this time the tool really is somehow unacceptable.
It also depends on what you're hiring for. If you want a proofreader you probably want to test their abilities to both use a grammar checker, and work without it.
For me the difference is that using an LLM requires an insane amount of work from the interviewer. Fair enough that you'd use Copilot day to day, but can you actually prompt it? Are you able to judge the quality of the output (or were you planning on just pawning that off to your code reviewer)? The spell checker is a good example: do you trust it blindly, or are you literate enough to spot when it makes mistakes?
The "being able to spot the mistakes" is what an interviewer wants to know. Can you actually reason about a problem and sadly many cannot.
> While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
Full quote here; seems like most of the comments here are leaving out the first part.
The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.
Disclaimer: I work at Anthropic but these views are my own.
Funny on the tin, but it makes complete sense to me. A sunglasses company would also ask me to take off my sunglasses during the job interview, presumably.
It makes sense. Having the right people with the right merits and motivations will become even more important in the age of AI. Why you might ask? Execution is nothing when AI matures. Grasping the big picture, communicating effectively and possessing domain knowledge will be key. More roles in cognitive work will become senior positions. Of course you must know how to make the most out of AI, but it is more interesting what skills you bring to the table without it.
It's almost like what people have been saying for years: there's the promise of AI and the reality of AI - and they're 2 very different things. They only look similar to a layman without experience in the field.
AI may not yet be as good an engineer as most coders, but it's already absolutely much better at written communication than the average software engineer (or at least more willing to put effort into it).
Anthropic is kind of positioning themselves as the "we want the cream of the crop" company (Dario himself said as much in his Davos interviews), and what I could understand was that they would a) prefer to pick people they already knew b) didn't really care about recruiting outside the US.
Maybe I read that wrong, but I suspect they are self-selecting themselves out of some pretty large talent pools, AI or not. But that application note is completely consistent with what they espouse as their core values.
As someone for whom the answer is always 'money' I learned very quickly that a certain level of -how should I call it- bullshit is necessary to get the HR person to pass my CV to someone competent. As I am not as skilled in bullshit as I am in coding, it would make sense to outsource that irrelevant part of the selection process, no?
This adds another twist, since I'd bet nowadays most CVs are processed (or at least pre-screened) by "AI": we're in a ridiculous situation where applicants feed a few bullet points to AI to generate full-blown polished resumes and motivational letters … and then HR uses different AI to distil all that back to the original bullet points. Interesting times.
This makes me think about adversarial methods of affecting the outcome, where we end up with a "who can hack the metabrain the best" contest. Kind of like the older leet-code system, where obviously software engineering skills were purely secondary to gamesmanship.
It's a bad question. What is actually being tested here is whether the candidate can reel off an 'acceptable' motivation. Whether it is their motivation or not. This is asking questions that incentivize disingenuous answers (boo) and then reacting with pikachu shock when the obvious outcome happens.
I'm sure Anthropic have received too many applications that are obviously AI-generated, and I'm sure that by "non-AI-assisted communication" they mean they don't want "slop" applications that sound like an LLM wrote them. They want some greater proof of human ability.
I expect humans at Anthropic can tell what LLM model was used to rewrite (or polish) applications they get, but if they can't, a basic BERT classifier can (I've trained one for this task, it's not so hard).
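(For the curious, that kind of classifier is a bog-standard fine-tuning job; a minimal sketch with Hugging Face transformers follows, where the two example texts and labels are obvious placeholders for a real corpus of human-written vs. LLM-polished applications.)

    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Placeholder corpus: label 0 = human-written, 1 = LLM-polished.
    data = Dataset.from_dict({
        "text": ["I hacked on compilers for five years ...",
                 "I am deeply passionate about leveraging synergies ..."],
        "labels": [0, 1],
    })

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=256)

    data = data.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="clf", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=data,
    )
    trainer.train()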
How much you wanna bet they're using AI to evaluate applicants and they don't even have a human reading 99% of the applications they're asking people to write?
As someone who has recently applied to over 300 jobs, just to get form letter rejections, it's really hard to want to invest my time to hand-write an application that I know isn't even going to be read by a human.
I would 100% expect a company to not use AI to evaluate candidates and, if they are, I wouldn't want to work there. That's far worse than using AI as the candidate.
This strikes me as similar to job applicants who apply for a position and are told it's hybrid or in-office - and then on the day of the interview, it suddenly changes from an in-person meeting to one held over videoconference, with the other participants using backdrops that look suspiciously like they're working from home.
I recently took their CodeSignal assessment, which is part of their initial screening process.
Oh, wow. I really believe they are missing out on great engineers due to the nature of it.
90 minutes to implement a series of increasingly difficult specs and pass all the unit tests.
There is zero consideration for quality of code. My email from the recruiter said (verbatim), “the CodeSignal screen is intended to test your ability to quickly and correctly write and refactor working code. You will not be evaluated on code quality.”
It was my first time ever taking a CodeSignal assessment and there was really no way to prepare for it ahead of time.
- AI companies ask candidates to not "eat their own dog food"
- AI companies accuse each other of "copying" their IP while they find it legit to use humans' IP for training.
Prepping for an interview a couple weeks ago, I grabbed the latest version of IntelliJ. I wanted to set up a blank project with some tests, in case I got stuck and wanted to bail out of whatever app they hit me with and just have unit tests available.
So lacking any other ideas for a sample project I just started implementing Fizzbuzz. And IntelliJ started auto suggesting the implementation. That seems more problematic than helpful, so it was a good thing I didn’t end up needing it.
I understand why it's amusing, but there is really nothing to see here.
It could be rephrased as:
« The process we use to assess candidates relies on measuring the candidate's ability to solve trivia problems that can easily be solved by AI (or internet search, or impersonation, etc). Please refrain from using such tools until the industry comes up with a better way to assess candidates. »
Actually, since the whole point of those many screening levels during hiring is to avoid the cost of long, in-depth discussions between many experts and each individual candidate, AI will probably be the solution that makes the selection process a bit less reliant on trivia quizzes (a solution that will, no doubt, come with its own set of new issues).
How do you guys do coding assessments nowadays with AI?
I don’t mind if applicants use it in our tech round, but if they do I question them on the generated code and potential performance or design issues (if I spot any) - though I'm not sure if it’s the best approach (I mostly hire SDETs, so I do an ‘easy’ dev round with a few easy/very easy leet code questions that don’t require prep)
Why aren't they dog fooding? Surely if AIs improve output and performance they should readily accept input from them. Seems like they don't believe in their own products.
However, I'm not sure what to think of it. So AI should help people in their job and their interview process, but also not, when it matters? What if you're super good at ML/AI but very bad at writing applications? Would you still have a chance?
On the face of it it's a reasonable request, but the question itself is pointless. An applicant's outside opinion of a company is pretty irrelevant and is subject to a lot of change after starting work.
Whenever someone asks you to not do something that is victimless, you always should think about the power they are taking away from you, often unfairly. It is often the reason why they have power over you at all. By then doing that very thing, you regain your power, and so you absolutely should do it. I am not asking you to become a criminal, but to never be subservient to a corporation.
A much better approach is to ask the candidate about the limitations of AI assistants and the rakes you can step on while walking that path. And the rakes they have already stepped on with AI.
Maybe they are ahead of the curve at finding that hiring people based on ability to exploit AI-augmented reach produces catastrophically bad results.
If so, that's bad for their mission and marketing department, but it just puts them in the realm of a tobacco company, which can still be quite profitable so long as they don't offer health care insurance and free cigarettes to their employees :)
I see no conflict of interest in their reasoning. They're just trying to screen out people who trust their product, presumably because they've had more experience than most with such people. Who would be more likely to attract AI-augmented job applicants and trust their apparent augmented skill than an AI company? They would have far more experience with this than most, because they'd be ground zero for NOT rejecting the idea.
If Alice can do better against Bob when they aren’t using AI, but Bob performs better when both use AI, isn’t it in the company’s best interest to hire Bob, since AI is there to be used in the position's duties?
If graphic designer A can design on paper better than B, but B can design on the computer better than A, and the actual work is done on a computer, why would you hire A?
When applying for a college math professor job, it's understandable that you'll use mathematica/matlab/whatever for most of the actual work, but needing a calculator for simple multiplication-table style calculations would be a red flag. Especially if there is lecturing involved.
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
Exact opposite of our application process at my previous company. We said usage of ChatGPT was expected during the application and interview phase, since we heavily rely on it for work
There are a bunch of subtly different ways to perform a coding interview.
If the interviewer points you at a whiteboard and asks you how to reverse an array, most likely they're checking you know what a for loop is and how to index into an array, and how to be careful of off-by-one errors. Even if your language has a built-in library function for doing this, they'd probably like you to not use it.
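(For illustration, the kind of careful loop that first question is fishing for might look like this in Python.)

    def reverse_in_place(items: list) -> None:
        """Swap from both ends toward the middle; stop before the indices cross."""
        left, right = 0, len(items) - 1   # right starts at the last valid index
        while left < right:               # strict '<' avoids a redundant middle swap
            items[left], items[right] = items[right], items[left]
            left += 1
            right -= 1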
If the interviewer hands you a laptop with a realistic codebase on it and asks you to implement e-mail address validation, they're going for a more real-world test. Probably they'll be fine with you googling for an e-mail address validation regex, what they want to see is that you do things like add unit tests and whatnot.
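(And a sketch of what the second scenario rewards: not a perfect regex, since fully RFC-compliant validation is famously hard, but a reasonable check wired up with tests.)

    import re
    import unittest

    # Deliberately simple pattern; real-world validation usually means
    # "send a confirmation email" rather than a cleverer regex.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(address: str) -> bool:
        return bool(EMAIL_RE.match(address))

    class EmailValidationTests(unittest.TestCase):
        def test_accepts_plain_address(self):
            self.assertTrue(is_valid_email("jane.doe@example.com"))

        def test_rejects_missing_at_sign(self):
            self.assertFalse(is_valid_email("jane.doe.example.com"))

        def test_rejects_spaces(self):
            self.assertFalse(is_valid_email("jane doe@example.com"))

    if __name__ == "__main__":
        unittest.main()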
Makes sense. I've never been asked to make such an exercise in real time, most of the time that would be a take home task - but I understand if someone wants to do that. Still it would be weird to demand that a candidate uses Google, wouldn't it?
I picked e-mail validation as an example precisely because it's something even experienced developers would be well advised to google if they want to get it right :)
Of course, if someone can get it right off the top of their head, more power to them!
I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
sshine reply above is coming from a very conflictual mindset. "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"
I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.
> I don't find anything wrong with this
It’s not about being wrong, it’s about being ironic. We have LLMs shoved down our throats as this new way to communicate—we are encouraged to ask them to make our writing “friendlier” or “more professional”—and then one of the companies creating such a tool asks the very people most interested in it to not use it for the exact purpose we’ve been told it’s good at. They are asking you pretty please to not do the bad thing they allow and encourage everyone to do. They have no issue if you do it to others, but they don’t like it when it’s done to them. It is funny and hypocritical and pulls back the curtain a bit on these companies.
It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
https://youtube.com/watch?v=m2v9z2S5XzQ&t=190
You're wrapping all the AI companies up in a single box, but:
* Most of the AI you get shoved down your throat is by the providers of services you use, not the AI companies.
* Among the AI companies, Anthropic in particular has had a very balanced voice that doesn't push for using AI where it doesn't belong. Their marketing page can barely be called that [0]. Their Claude-specific page doesn't mention using it for writing at all [1].
You seem to be committing the common fallacy of treating a large and disparate group of people and organizations as a monolith and ascribing cognitive dissonance where what you're actually seeing is diversity of opinion.
[0] https://www.anthropic.com/
[1] https://www.anthropic.com/claude
The LLM companies have always been against this kind of thing.
Sam Altman (2023): "something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points"
3 years ago people were poking fun about how restrictive the terms were - you could get your API key blocked if you used it to pretend to be a human. Eventually people just used other AIs for things like that, so they got rid of these restrictions that they couldn't enforce anyway.
Interesting that this quote really contains "sender" where "recipient" was intended, but it had absolutely no impact on any reader. (I even asked Claude and ChatGPT if they noticed anything strange in the sentence, and both needed additional prompting to spot that mistake.)
https://x.com/sama/status/1631394688384270336
Thanks for the heads-up, by the way. I'd missed this particular tweet, but eventually arrived at the exact same observation.
Wow I completely didn’t notice that until I read your comment. My brain must have automatically filled in the correct word. I had to go back and re-read it to confirm.
Well English is not my first language, so I probably tend to go through text more slowly and/or scan the text differently, so I have higher chance stumbling upon these oddities. (I can confirm that sometimes I see unexpected amount of misspelt homophones or usage of strangely related words.) Seeing two distinct LLM chats gloss over this particular nuance in almost identical way was really interesting.
Grok, on the other hand, has absolutely no problem with the concept of the sender both expanding and compressing the message, nor with the absence of the recipient. Even after a super-painstaking discussion, in which Grok identified the strange absence of the "recipient", when I asked him to correct the sentence, he simply changed the word "sender" to the word "themselves":
> something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and *themselves* using ChatGPT to condense it into the key bullet points
https://x.com/i/grok/share/mwFR2jQ9MS6uVemgiHokJKiBd (cringe/pain warning)
Google running the "Dear Sydney" ad is in strong disagreement with that claim.
> It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.
Most pro-gun organizations are heavily into gun safety. The message is that guns aren't unsafe if they're being used correctly. Most of the time, this means that most guns should be locked up in a safe, with ammo in a separate safe, except when being transported to a gun range, for hunting, or similar. When being used there, one should follow a specific set of procedures for keeping those activities safe as well.
It's a perfect analogy for the LLM here too. Anthropic encourages it for many uses, but not for the one textbox. Irony? Yes. Wrong? Probably not.
Huge miss on the gun analogy. The likes of NRA are pushing for 50-state constitutional carry. Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
There’s probably actually some other hidden factor though, like the venue not allowing it.
Edit: FWIW those late night TV shows are nothing but rage bait low brow “comedy” that divides the country. But the above remains true.
> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
That's not what the NRA is pushing for, any more than there are Democrats pushing for mandatory sex changes for all kids (yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it). Pushing for a right doesn't mean 100% of the population will exercise that right.
And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.
One of the interesting pieces of data is looking at 2nd amendment support versus distance to the nearest police station / police officer / likely law enforcement response times. It explains a lot about where support / opposition comes from.
>And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.
Can you give me one example of a valid "reasonable self-defence argument"? Legit question.
> yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it
Can you give an example? Of course you can find 2 people in the US who believe it, and they held 2 comedy shows where it was said, and it's technically true, but I don't think I've ever seen anything like this said.
There are LEOs that were prosecuted by states and the federal government for not taking action while children were being shot by another child.
LEOs are expected to take fire to protect civilians. Protect & Serve is their credo.
I wouldn’t trust LEOs to protect me, so I sure as hell fire am not trusting a low paid rent-a-cop to perform a similar duty.
Nope. I believe that my mindset is prevalent and not an outlier.
It's interesting to me how easily you can fact-check the statement:
> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
yet, you claim that it's the late night TV that divides us, while making sure to double down on your misleading statement.
The NRA doesn't "ban guns at their conferences"; guns have been banned at small parts of multi-day conferences, e.g. where Trump was speaking, because that was a rule established by the Secret Service, and the NRA complied for a small part of the conference.
When the majority of a conference allows guns, it's simply a lie to claim that guns were banned. An unintentional lie, I'm sure, but it seems likely to be the result of you believing some headline or tweet and accepting something wholesale as truth because it fit your narrative. I'm guilty of the same, it happens, but hopefully we can both get better about portraying easily fact checked things as the truth.
These are the same people that insist we arm elementary school teachers and expect those teachers to someday pull the trigger on a child instead of having proper gun laws.
There is no irony.
If you think proper gun laws would keep guns away from evil people in the US, please explain to me why the war against illegal drugs in the US has been losing since the day it started.
Sure, some places in the world can successfully limit gun access. Those places aren't close to, let alone bordered by, the most active cartels in the world.
Just as a fun thought exercise, consider what it takes to grow the plant necessary to produce just a single drug, cocaine, for the country every year: at least 300,000 acres, or roughly the size of Los Angeles. That's after decades of optimizations to reduce the amount of land needed. And that's only one drug among the vast number regularly consumed in the US.
By comparison, you can 3D print guns at home. Successful builds have been made on some of the cheapest 3D printers you can find.
Drugs are more addictive than guns typically. I am not sure your comparison is useful.
I don't expect that, so I won't be using public schools.
I have no illusion that safety or education is an actual concern in public schools in general.
They do, but safety and social control go hand in hand.
In any case, its not as if your kid is safer at a private school. Kids are violent, no matter where they are; maybe you remember going through school yourself?
When have gun laws ever stopped a shooting?
When have criminals EVER followed law, code, rules, or even a suggestion from their fellow citizens?
Believing laws deter criminals is almost criminally insane and beggars all logic.
After all of the accumulated evidence against your belief, you still believe laws deter criminality.
The death penalty doesn’t deter criminals. How could words possibly have an effect?
> When have gun laws ever stopped a shooting?
Have you heard of Australia? https://www.sydney.edu.au/news-opinion/news/2018/03/13/gun-l...
Australia has no land borders. It's one of the easiest places to secure, and thus it makes sense to do so.
My guess is it's more due to insurance at the venue. I don't know who pays in those situations, but I would imagine they require "no guns" posted and announced. And if there is any form of gunshot injury they have very strong teeth to dodge the claim.
> Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.
Like the nuance between sending out your love and doing the Nazi salute? Or different?
It's sardonic rather than ironic - irony is sarcastic humor devised to highlight a contradiction or hypocrisy; while sardonic is disdainful, bitter and scornful.
It'd have been ironic if Anthropic had asked the applicant not to use AI for the sake of originality and authenticity, but added that if the applicant felt compelled to use it anyway, the result had better rock and wow them enough to hire the applicant sight unseen.
It's sardonic because Anthropic is implying use of AI on them is an indication of fraud, deceit, or incompetence; but it's a form of efficiency, productivity or cleverness when used by them on the job!
I don't see the cognitive dissonance here. If a model was applying for a position with a cosmetics company, they might want to see what the blank canvas looks like.
Being able to gauge a candidate's natural communication skills is highly useful. If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.
> If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.
We are quickly moving into a world where most communications are at best assisted by AI and more often have little human input at all. There’s nothing inherently “wrong” about that (we all have opinions there), but “natural” (quotes to emphasize that they’re taught and not natural anyway) communication skills are going to be less and less common as time marches on, much like handwriting, typewriting, calligraphy, etc.
The same could be said about a lot of things, like being able to write a functional solution to a leet code puzzle on a black board in front of an audience.
IMHO, an effective interview process should attempt to mimic the position for which a person is applying. Making a candidate jump through hoops is a bit disrespectful.
Yes, and they should state that they also don't use AI in the selection process.
They don't because they do. However maybe the Anthropic AI isn't performing well on AI generated applications.
I think they will get better results by having applicants talk to an AI during the application process.
> We have LLMs shoved down our throats as this new way to communicate
I don't think that's true at all.
Making your comms friendlier or whatever is one of the myriad ways to use LLMs. Maybe you personally have "LLMs shoved down your throat" by your corporate overlords. No one in their right mind can say that LLMs were created for such a purpose, it just so happens you can use it in this way.
LLMs are sold by corporate overlords to corporate overlords. They all know this is what it will be used for.
The writing was on the wall that the main use will be spam and phishing.
You can say the creators did not intend on this purpose, but it was created with knowledge that this would be the main use case.
LLMs aren’t making your comms friendlier; they’re just making them more vapid. When I see the language that ChatGPT spits out when you tell it to be friendly, I immediately think ‘okay, what is this person trying to sell me?’
Now imagine a world where most kids were raised with this bullshit and it's normal to them.
I'm with you. I'm very surprised by the number of arguments which boil down to "Well I can cheat and get away with it, so therefore I should cheat".
I have read that people are getting more selfish[1], but it still shocks me how much people are willing to push individualism and selfishness under the guise of either, "Well it's not illegal" or "Well, it's not detectable".
I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
I guess that puts me at a serious disadvantage in the job market, but I am okay with that, I've always been okay with that. 20 years ago my cohort were doing what I thought were selfish things to get ahead, and I'm fine with not doing those things and ending up on a different lesser trajectory.
But that doesn't mean I won't also air my dissatisfaction with just how much people seem to justify selfishness, or don't even regard it as selfish to ignore this request.
[1] https://fortune.com/2024/03/12/age-of-selfishness-sick-singl...
> I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
No, what you are is ignoring the context.
This request comes from a company building, promoting, and selling the very thing they are asking you not to use.
Yes, asking you not to use AI is indeed a polite request. It is one you should respect. “The zeitgeist” has as much people in favour of AI as against it, and picking either camp doesn’t make anyone special. Either stance is bound to be detrimental in some companies and positive in others.
But none of that matters, what makes this relevant is the context of who’s asking.
I didn't miss that context, I understand who Anthropic are.
That may be true, but your first response still doesn't seem to account for that fact.
Honesty is not something our modern societies optimize for, although I do wish things were different.
It's not just society; it's this particular company that optimizes for it.
> "Well it's not illegal"
What's good for the gander.... I promise you they will use AI to vet your application.
> I promise you they will use AI to vet your application.
So?
> What's good for the gander....
The ones paying are, in their vast majority, the most selfish of them all. For example, it would be reasonable to say that Jeff Bezos is one of the most selfish people on the planet. So in the end it doesn't boil down to "Well I can cheat and get away with it, so therefore I should cheat" but more like "Well I can cheat, get away with it, and the victim is just another cheater, so therefore I should cheat".
Two wrongs don’t make a right, and it seems weird to me that you’d want to work for such a most-selfish cheater in the first place.
Bezos, Musk, Zuckerberg, and many, many others do everything in their power to reduce costs, including paying less tax, which means using tax havens and tax loopholes that they themselves keep open by "lobbying" politicians. So, effectively, to work at all generally means working for cheaters, and there is no way to avoid it. Sure, you can stay unemployed and stay clear of the moral corruption that living in a capitalist system entails, but many don't consider that an option; and buying from them is no better morally speaking, for the exact same reasons.
I agree with your sentiment. But coming from a generative AI company that says "career development" and "communication" are two of its most popular use cases... That's like a tobacco company telling employees they are not permitted to smoke tobacco.
Well, they probably aren't permitted to smoke tobacco indoors.
I honestly fail to see even the irony. "Company that makes hammers doesn't want you to use hammers all the time". It's a tool.
But if I squint, I _can_ see a mean-spirited "haha, look at those hypocrites" coming from people who enjoy tearing others down for no particular reason.
But it's ok for Anthropic's marketing, sales, and development teams to push the use case (AI for writing, communication, and career development)?
Even when squinting, I can't see a genuine argument for why Anthropic shouldn't be raked over the coals for their sheer hypocrisy.
"Company that makes hammers for nailing wood together doesn't want candidates to use hammers during their wood-nailing test."
A brewery telling their employees to not drink the product while at work?
If only there were many jobs that mandate drinking alcohol to enhance your capabilities...
It's like an un-inked tattoo artist or a teetotaling sommelier.
The optics are just bad. Stand behind your product, or accept that you will be fighting ridicule and suspicion endlessly.
It is a very sensible position, and I think the quote is a bit out of context, but the important part here is who it is coming from: the company that makes money both on cheating in the job application process (which harms employers) and on replacing said jobs with AI, or at least creating another excuse for layoffs (which harms the employees).
In a sense, they poisoned the well and don't want to drink from it now. Looking at it from this perspective justifies (in the eyes of some people at least) said cheating. Something something catalytic converter from the company truck.
> if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
Sorry, the thought process of considering it acceptable to use an LLM for a job application, especially in a field which requests candid input about one's motivation, is beyond me.
It's not that there's anything wrong with this in particular. It's just that the general market seems much more optimistic about AI impact, than the AI companies themselves.
They don't want a motivation letter to be written by an LLM (because it's specifically about the personal motivation of the human candidate). As far as I can see, this doesn't reflect either positively or negatively on their level of optimism about AI impact in general.
Companies, especially large ones, are not interested in fads unless directly invested in them. The higher and steeper the initial wave, the bigger the disappointment, or at least the unfulfilled expectations; not always, but surprisingly often.
This is just experience and seniority in general, nothing particular about LLMs. For most businesses, I would behave the same.
> If you're looking to score, sure, it's somewhat unethical but it works.
Observation/Implication/Opinion:
Think reciprocal forces and trash-TV ethics, in both closed and open systems. The consequences are continuously diminished AND unvarying returns, professionally as well as personally, for all parties involved. Stimulating, inspiring, motivating factors, as well as the ability to perceive and "sense", all degrade. But compensation and cheating continue to work, even though the quality of the game, the players, and their output decreases.
Nothing and nobody is resilient "enough" to the force/'counter'-force mechanism, so you had better pick the right strategy. Waiting/processing for a couple of days, lessons, and honest attempts yield exponentially better results than cheating.
Companies should beware of this if they expect results that are qualitatively AND "honestly" safe and sound. This has been ignored in past decades, which is why we are "here". Too much work, too many jobs, and way too many enabling outs have been lost almost irreversibly, on the individual level as well as in nano-, micro-, and macro-economics.
Applicants using AI is fine but applicants not being able to make that output usefully THEIRS is a problem.
Can I request that they not use AI when evaluating my application, and expect them to respect my wishes? Highly doubtful. Respect is a 2-way street. This is not a collaboration, but a hierarchical mandate in one direction, for protecting themselves from the harms of the very tools they peddle.
I also don't find anything wrong with their stance. Ironic, sure, but to judge candidates you need filters, and in this case the filter is whether someone is able to communicate without AI assistance.
I love it when the "contrarian view" is absolutely not the contrarian view :)
See “contrarian dynamic”.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I did check the conversation thread before I commented. At the time, and without looking very carefully, this particular view seemed missing.
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.
There are two backwards things with this:
1) You can't ask people to not use AI when careful, responsible use is undetectable.
It just isn't a realistic request. You'll have great replies without AI use and great replies with AI use, and you won't be able to tell whether a great reply used AI or not. You will just be able to filter sludge and dyslexia.
2) This is still the "AI is cheating" approach, and I had hoped Anthropic would be thought leaders on responsible AI use:
In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
If AI is making your final product and you're none the wiser, it didn't really help you, it just made you addicted to it.
Teach a man to fish...
Can't disagree more. Talent is built and perfected over thousands of hours of practice; LLMs just make you lazy. One thing people with seniority in the field don't realize, as I guess you are, is that LLMs don't help young practitioners develop "muscle memory"; they just make them miserable, often caged in an infinite feedback loop of bug fixing or of trying to untangle a code mess. They may extract some value by using them for studying, but I doubt it, and it only goes so far. When I started, I remember being able to extract so much knowledge just by reading a book about algorithms, trying to reimplement things, breaking them, and so on. Today I can use an LLM because I'm experienced enough to spot wrong answers, but I still feel myself becoming a bit lazy.
I strongly agree with this comment. Anecdotal evidence time!
I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.
In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: we'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him in the wrong direction.
He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But then he hit a plateau really hard soon after, because he was running into bugs and issues he couldn't get solutions from the AI for and he just wasn't able to connect the dots himself.
He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.
As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.
To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.
It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.
You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.
On the other hand, when you had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.
'"every line a hard-fought battle" style that really makes you understand why and how something works'
I totally agree with this and I really like that way of wording it.
> "every line a hard-fought battle" style that really makes you understand why and how something works
Absolutely true. However:
The real value of AI will be to *be aware* of when it is at that local optimum, and then, if unable to find a way forward, at least reliably notify the user that that is indeed the case.
Bottom line, the number of engineering "hard-fought battles" you can take on is finite, and they should be chosen very wisely.
The performance multiplier that LLM agents brought changed the world. At least as the consumer web did in the 90s, and there will be no turning back.
This is like a computer company around 1980 hiring engineers but forbidding them access to computers for some numerical task.
Funny, it reminds me of the reason Konami MSX1 games look the way they do compared to most of the competition: access to superior development tools, namely their HP hardware emulator workstations.
If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)
This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.
A big problem was that he couldn't attain a mental model of how the code was behaving at runtime, in particular the lifetimes of data and objects - what would get created or destroyed when, exist at what time, happen in what sequence, exist for the whole runtime of the program vs. what's a temporary resource, that kind of thing.
The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he was and the project just sort of growing organically that way. This is likely also what make the ChatGPT outputs themselves less useful over time: He wasn't aware of enough context to prompt the model with it, so it didn't have much to work with. There wasn't a lot of emerging intelligence a la provide what the client needs not what they think they need.
These days tools like aider end up prompting the model with a repo map etc. in the background transparently, but in 2023/24 that infra didn't exist yet and the context window of the models at the time was also much smaller.
In other words, the evolving nature of these tools might lead to different results today. On the other hand, if it had back then chances are he'd become even more reliant on them. The open question is whether there's a threshold there where it just stops mattering - if the results are always good, does it matter the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.
I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy and almost but not exactly right applications, processes and even laws that people simply have to live with.
If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then surely we as a society can live in the suboptimal state of everything digital being broken 10% of the time instead of half a percent.
Using faxes is much more streamlined than the more modern Scan, Email, Print process.
Only if you're starting with paper?
Years back I worked somewhere where we had to PDF documents to e-fax them to a supplier. We eventually found out that on their end it was just being received digitally and auto-converted to PDF.
It was never made paper.. So we asked if we could just email the PDF instead of paying for this fax service they wanted.
They said no.
Found the german!
there shouldn't ever be a print or scan step in the pipeline.
This seems to be the way of things. Oral traditions were devastated by writing, but the benefit is another civilization can hold on to all your knowledge while you experience a long and chaotic dark age so you don't start from 0 when the Enlightenment happens.
What about people who don't have access to a mentor? If not AI then what is their option? Is doing tutorials on your own a good way to learn?
Write something on your own. When stuck, consult the docs, Google the error message, and ask on StackOverflow (in this order).
There's no royal road to learning.
There is no 'learning' in the abstract. You learn something. Doing tutorials teaches you how to do the things you do in them.
It all comes down to what you wanna learn. If you want to acquire skills doing the things you can ask AI to do, probably a bad idea to use them. If you want to learn some pointers on a field you don't even know what key words are relevant to take to a library, LLMs can help a lot.
If you wanna learn complex context dependent professional skills, I don't think there's an alternative to an experienced mentor.
Not sure! My own path was very mentor-dependent. Participating in open source communities worked for me to find my original mentors as well. The other participants are incentivized to mentor/coach because the main thing you're bringing is time and motivation--and if they can teach you what you need to know to come back with better output while requiring less handholding down the road, their project wins.
It's not for everyone because open source tends to require you to have the personality to self-select goals. Outside of more explicit mentor relationships, the projects aren't set up to provide you with a structured curriculum or distribute tasks. But if you can think of something you want to get done or attempt in a project, chances are you'll get a lot of helping hands and eager teachers along the way.
Mostly by reading a good book to get the fundamentals down, then taking on a project to apply the knowledge and fill the gaps with online resources. There are good books and nice open source projects out there. You can get far with these just by studying them with determination. Later you can go on to the theoretical and philosophical parts of the field.
How do you know what a good book is? I've seen recommendations in fields I'm knowledgeable about that were hot garbage. Those were recommendations by reputed people for reputed authors. I don't know how a beginner is supposed to start without trying a few and learning some bad habits.
So, so, so many people have learnt to code on their own without a mentor. It requires a strong desire to learn and perseverance but it’s absolutely possible.
That you can learn so much about programming from books and open source and trial and error has made it a refuge for people with extreme social anxiety, for whom "bothering" a mentor with their questions would be unthinkable.
Failing for a bit, thinking hard and then somehow getting to the answer - for me it was usually tutorials, asking on stackoverflow/forums, finding a random example on some webpage.
The fastest way for me to learn something new is to find working code, or code that I can kick for a bit until it compiles/runs. Often I'll comment out everything and make it print hello world, and then from there try to figure out which essential bits I need to bring back in, or simplify/mock, etc., until it works again.
I learn a lot more by forming a hypothesis "to make it do this, I need that bit of code, which needs that other bit that looks like it's just preparing this/that object" - and the hypothesis gets tested every time I try to compile/run.
Nowadays I might paste the error into chatgpt and it'll say something that will lead me a step or two closer to figuring out what's going on.
Why is modifying working code you didn't write better than having an AI help write code with you? Is it that the modified code doesn't run until you fix it? It still bypasses the 'hard won effort' criteria though?
> listless poking in the mud
https://xkcd.com/1838/
Use LLM. But do not let it be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.
I have a rule for myself as a non-native English speaker: Any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023). Just to prevent LLM from dominating my language comprehension.
You perfectly encapsulated my view on this. I'm utterly bewildered with people who take the opposing position that AI is essentially a complete replacement for the human mind and you'd be stupid not to fully embrace it as your thought process.
This is a straightforward position and it's the one I hold but I had to reply to thank you for stating it so succinctly.
I drove cars before the sat nav systems and when I visited somewhere, I'd learn how to drive to there. The second drive would be from memory. However, as soon as I started relying on sat navs, I became dependent on them. I can not drive to a lot of places that I visited more than once without a sat nav these days (and I'm getting older, that's a part of it too).
I wonder if the same thing will happen with coding and LLMs.
Progress is built on top of an infinite number of dependences.
In many ways people that don't use sat nav are at a disadvantage: real time traffic and redirection, high precision ETA, trip logging, etc.
I can even feel it in my own coding. I've been coding almost my entire life, all the way back to C64 BASIC, and ever since I started relying on Copilot for most of my regular work I can feel my non-AI-assisted coding skills getting rusty.
That's a scary thing
I hear this argument all the time, and I think “this is exactly how people who coded in assembly back in the day thought about those using higher level programming languages.”
It is a paradigm shift, yes. And you will know less about the implementation at times, yes. But will you care when you can deploy things twice, three times, five times as fast as the person not using AI? No. And also, when you want to learn more about a specific bit of the AI written code, you can simply delve deep into it by asking the AI questions.
The AI right now may not be perfect, so yes, you still need to know how to code. But 5 years from now? Chances are you will go into your favorite app builder, state what you want, tweak what you get, and you will get the product that you want, with maybe one dev making sure every once in a while that you're not messing things up. Maybe. So will new devs need to know high-level programming languages? Possibly, but maybe not.
1. We still teach assembly to students. Having a mental model of what the computer is doing is incredibly helpful. Every good programmer has such a model in my experience. Some of them learned it by studying it explicitly, some picked it up more implicitly. But the former tends to be a whole lot faster without the stop on the way where you are floundering as a mid level with a horribly incorrect model for years (which I’ve seen many many times).
2. Compilers are deterministic. You can recompile the source code and get the same assembly a million times.
You can also take a bit of assembly then look at the source code of the compiler and tell exactly where that assembly came from. And you can change the compiler to change that output.
3. Source code is written in a formal unambiguous language.
I’m sure LLMs will be great at spitting out green field apps, but unless they evolve to honest to goodness AGI, this won’t get far beyond existing low code solutions.
No one has solved or even proposed a solution for any of these issues beyond “the AI will advance sufficiently that humans won’t need to look at the code ever. They’ll never need to interact with it in any way other than through the AI”.
But to get to that point will require AGI and the AI won’t need input from humans at all, it won’t need a manager telling it what to build.
The point of coding is not to tell a machine what to do.
The point of coding is to remove ambiguity from the specs.
"Code" is unambiguous, deterministic and testable language -- something no human language is (or wants to be).
LLMs today make many implementation mistakes where they confuse one system with another, assume some SQL commands are available in a given SQL engine when they aren't, etc. It's possible that these mistakes will be reduced to almost zero in the future.
But there is a whole other class of mistakes that cannot be solved by code generation -- even less so if there's nobody left capable of reading the generated code. It's when the LLM misunderstands the question, and/or when the requirements aren't even clear in the head of the person writing the question.
I sometimes try to use LLMs like this: I state a problem, a proposed approach, and ask the LLM to shoot holes in the solution. For now, they all fail miserably at this. They recite "corner cases" that don't have much or anything to do with the problem.
Only coding the happy path is a recipe for unsolvable bugs and eventually, catastrophe.
You seem strongly opinionated and sure of what the future holds for us, but I must remind you that in your example, "from assembly to higher-level programming languages", the demand for programmers didn't go down, it went up. As companies were able to develop more, more development and more investment happened, more challenges showed up, new jobs were invented, and so on... you get where I'm going. The thing I'm questioning is how lazy new technologies make you. Even before LLMs, many programmers had no idea how a computer works and only programmed in higher-level languages. It was already a disaster, with many people claiming software was bad and the industry going down a road where software quality matters less and less. Turbo-boost that situation with LLMs, because "it doesn't matter, I can deploy 100 times a day", disrupting the user experience, and IMO it won't lead us far.
I think the same kind of critical thinking that was required to reimplement and break algorithms must now be used to untangle AIs answers. In that way, it's a new skill, with its own muscle memory. Previously learnt skills like debugging segfaults slowly become less relevant.
Let me give you an example from yesterday. I was learning tailwind and had a really long class attribute on a div which I didn't like. I wanted to split it and found a way to do it using my JavaScript framework (the new way to do it was suggested by deepseek). When I started writing by hand the list of classes in the new format copilot gave me an auto complete suggestion after I wrote the first class. I pressed tab and it was done.
I showed this to my new colleague, who is a bit older than me and has attitudes somewhat similar to yours. He told me he could do the same with some multi-cursor shenanigans, and I'll be honest, I wasn't interested in his approach. It seemed like he would have taken more time to solve the same problem even though he had superior technique. He said sure, it takes longer, but he needs to verify by reading the whole class list and that's a pain, whereas I just reloaded the page and it was fine. He still wasn't comfortable with me using Copilot.
So yes, it does make me lazier but you could say the same about using go instead of C or any higher level abstraction. These tools will only get better and more correct. It's our job to figure out where it is appropriate to use them and where it isn't. Going to either extremes is where the issue is
I wouldn’t say it’s laziness. The thing is that every line of code is a burden as it’s written once, but will be read and edited many times. You should write the bare amount that makes the project work, then make it readable and then easily editable (for maintenance). There are many books written about the last part as it’s the hardest.
When you take all three into consideration, an LLM won't really matter unless you don't know much about the language or the libraries. When people go on about Vim or Emacs, it's just that those tools make the whole thing go faster.
Remember though that laziness, as I learned it in computing, is kind of "doing something later": you might have pushed the change/fix faster than your senior fellow programmer, but you still need to review and test that change, right? Maybe the change you're talking about was really trivial and you just needed to refresh your browser to see it, but when it's not, being lazy about a change only makes you suffer more when reviewing the PR and testing that the non-trivial change works for thousands of customers with different devices.
The problem is he wasn't comfortable with my solution even though it was clearly faster and it could be tested instantly. It's a mental block for him and a lot of people in this industry.
I don't advocate blindly trusting LLMs. I don't either and of course test whatever it spits out.
Testing usually isn’t enough if you don’t understand the solution in the first place. Testing is a sanity check for a solution that you do understand. Testing can’t prove correctness; it can only find (some) errors.
LLMs are fine for inspiration in developing a solution.
AI is a tool, and tool use is not lazy.
I think it's a lot more complicated than that. I think it can be used as a tool for people who already have knowledge and skills, but I do worry how it will affect people growing up with it.
Personally I see it more like going to someone who (claims) to know what they're doing and asking them to do it for me. I might be able to watch them at work and maybe get a very general idea of what they're doing but will I actually learn something? I don't think so.
Now, we may point to the fact that previous generations railed at the degeneration of youth through things like pocket calculators or mobile phones but I think there is a massive difference between these things and so-called AI. Where those things were tools obligatorily (if you give a calculator to someone who doesn't know any formulae it will be useless to them), I think so-called AI can just jump straight to giving you the answer.
I personally believe that there are necessary steps that must be passed through to really obtain knowledge and I don't think so-called AI takes you through those steps. I think it will result in a generation of people with markedly fewer and shallower skills than the generations that came before.
I think you are both right.
AI will let some people conquer skills otherwise out of their reach, with all the pros and cons of that. It is exactly like the example someone else brought up of not needing to know assembly anymore with higher level languages: true, but those who do know it and can internalize how the machines operate have an easier time when it comes to figuring out the real hard problems and bugs they might hit.
Which means that you only need to learn machine language and assembly superficially, and you have a good chance of being a very good programmer.
However, where I am unsure how the things will unfold is that humans are constantly coming up with different programming languages, frameworks, patterns, because none of the existing ones really fit their mental model or are too much to learn about. Which — to me at least — hints at what I've long claimed: programming is more art than science. With complex interactions between a gazillion of mildly incompatible systems, even more so.
As such, for someone with strong fundamentals, AI tools never provided much of a boon to me (yet). Incidentally, neither did StackOverflow ever help me: I never found a problem that I struggled with that wasn't easily solved with reading the upstream docs or upstream code, and when neither was available or good enough, SO was mostly crickets too.
These days, I rarely do "gruntwork" programming, and only get called in on really hard problems, so the question switches to: how will we train the next generation of software engineers who are going to be called in for those hard problems?
Because let's admit it, even today, not everybody can handle them.
Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.
The use of AI is not just a labour saving device, it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training dataset, and come to realise they have no real foundation to rely on.
Code generative AI, as it currently exists, is a poisoned chalice.
It is if the way to learn is doing it without a tool. Imagine using a robot to lift weights if you want to grow your own muscle mass. "Robot is a tool"
Incidentally, there are things like these: https://pmc.ncbi.nlm.nih.gov/articles/PMC6104107/
Your favourite online store is full of devices that'd help there, and they are used in physical therapy too.
"Growing your own muscle mass" is an artificial goal that exists because of tools. Our bodies evolved under the background assumption that daily back-breaking labor is necessary for survival, and rely on it to stay in good operating conditions. We've since all but eliminated most of that labor for most people - so now we're forced to engage in otherwise pointless activity called "exercise" that's physically hard on purpose, to synthesize physical exertion that no longer happens naturally.
So obviously, if your goal is strictly to exert your body, you have to... exert your body. However, if your goal is anything else, then physical effort is not strictly required, and for many people, for many reasons, it is often undesirable. Hence machines.
And guess what, people's overall health and fitness have declined. Obesity is at an all-time high. If you're in the US, there is a 40% chance you are obese. Your body likely contains very little muscle mass, and you are extremely likely to die of the side effects of metabolic syndrome.
People are seeing machines replace all physical labor and transportation, not gradually like in the 20th century, but within the span of a decade: going from the average physical exertion of 1900 to the modern lack of it. Take a car every day, do no manual labor, no movement.
They are saying that you need exercise to replace what you are losing, that you need to train your body to keep it healthy and can't just rely on machines/robots to do everything, because your body needs that exertion. And your answer is to say "now that we have robots there is no need to exercise even for exercise's sake". A point that's pretty much wrong, as modern-day physical health shows.
https://en.m.wikipedia.org/wiki/Metabolic_syndrome
You've completely twisted what the parent post was saying, and I can't but laugh out loud at claims like:
> there is a 40% chance you are obese.
Obesity is not a random variable — "darn, so unlucky for me to have fallen in the 40% bucket of obese people on birth": you fully (except in rare cases) control the factors that lead to obesity.
A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs (or be under when you are trying to lose weight). While you can achieve that by increasing your energy needs (exercise) and maintain energy input, you don't strictly have to.
Your link is also filled with funny "science" like the following:
> Neck circumference of more than 40.25 cm (15.85 in) for men ... is considered high-risk for metabolic syndrome.
Darn, as a 195cm / 6'5" male and neck circumference of 41cm (had to measure since I suspected I am close), I am busted. Obviously it correlates, just like BMI does (which is actually "smarter" because it controls for height), but this is just silly.
Since you just argued a point someone was not making: I am not saying there are no benefits to physical activity, just that obesity and physical activity — while correlated, are not causally linked. And the problems when you are obese are not the same as those of being physically inactive.
>And guess what, people's overall health and fitness have declined.
Have you seen what physical labor does to a man's body? Go to a developing country to see it. Their 60 year olds look like our 75 year olds.
Sure, we're not as healthy as we could be with proper exercise and diet. But in the long run, sitting on your butt all day is better for your body than hard physical labor.
Said no one with even remote knowledge of the health benefits of fitness.
Fitness does not equal physical exercise.
It could be a simple lifestyle that makes you "fit" (lots of walking, working a not-too-demanding physical job, a physical hobby, biking around...).
The parent post is saying that technological advance has removed the need for physical activity to survive, but all of the gym rats have come out of the woodwork to complain how we are all going to die if we don't hit the gym, pronto.
What on earth are you talking about?
- Physical back-breaking work has not been eliminated for most people.
- Physical exercise triggers biological reward mechanisms which make exercise enjoyable and, er, rewarding for many people (arguably for most people, as it is a mammalian trait), ergo it is not undesirable. The UK NHS calls physical exercise essential.
> Physical back-breaking work has not been eliminated for most people.
I said most of it for most people specifically to avoid the quibble about mechanization in poorest countries and their relative population sizes.
> Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people
I envy them. I'm not one of them.
> ergo it is not undesirable
Again, I specifically said "and for many people, for many reasons, is often undesirable" as to not have to spell out the obvious: you may like the exercise benefits of a physically hard work, but your boss probably doesn't - reducing the need for physical exertion reduces workplace injuries, allows worker to do more for longer, and opens up the labor pool to physically weaker people. So even if people only ever felt pleasure from physical exertion, the market would've been pushing to eliminate it anyway.
> UK NHS calls physical exercise essential.
They wouldn't have to if people actually liked doing it.
When you go for a job as a forklift operator, nobody asks you to demonstrate how you can manually carry a load of timber.
Equally, if you just point to your friend and say "that's Dave, he's gonna do it for me", they won't give you the job. They'll give it to Dave instead.
That much is true, but I've seen a forklift operator face a situation where pallet of products fell apart and heavy items ended up on the floor. Guess who was in charge of picking them up and manually shelving them?
You forgot a second sentence that completes the logic chain. Yes, "some tools are useful for some work", so what? That wasn't the original claim...
The claim was that it's lazy to use a tool as a substitute for learning how to do something yourself. But when the tool entirely obviates the need for doing the task yourself, you don't need to be able to do it yourself to do the job. It doesn't matter if a forklift driver isn't strong enough to manually carry a load, similarly once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
> once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
Once AI is that good, the developer won't have a job any more.
The whole question is if the AI will ever get that good.
All evidence so far points to no (just like with every tool — farmers are still usually strong men even if they've got tractors that are thousands of times stronger than any human), but that still leaves a bunch of non-great programmers out of a job.
The point he's making is, we still have to learn to use the tools, no? There still has to be some knowledge there, or else you're just sat sifting through all the crap the AI spits out endlessly for the rest of your life. The OP wrote his comment like it's a complete replacement rather than an enhancement.
You could similarly consider driving a car as "a tool that helps me get to X quicker". Now tell me cars don't make you lazy.
Copying and pasting from stack overflow is a tool.
It’s fine to do in some cases, but it certainly gets abused by lazy incurious people.
Tool use in general certainly can be lazy. A car is a tool, but most people would call an able bodied person driving their car to the end of the driveway to get the mail lazy.
Tools help us to put layers of abstraction between us and our goals. when things become too abstracted we lose sight of what we're really doing or why. Tools allow us to feel smart and productive while acting stupidly, and against our best interests. So we get fascism and catastrophic climate change, stuff like that. Tools create dependencies. We can't imagine life without our tools.
"We shape our tools and our tools in turn shape us" said Marshall McLuhan.
For learning it can very well be. And it also depends on the tool and the task. A calculator is a fine tool, but a symbolic solver might be a few steps too far if you don't already understand the process, and possibly the start and end points.
Problem with AI is that it is often black box tool. And not even deterministic one.
AI as applied today is pretty deterministic. It does get retrained and tuned often in most common applications like ChatGPT, but without any changes, you should expect a deterministic answer.
Tools can be crutches — useful but ultimately inhibitory to developing advanced skill.
Using the wrong tool for the job required isn't lazy but it may be dangerous, inefficient and ultimately more expensive.
Tool use can make you lazy if you're not careful.
AI companies don't think so, is clearly the implication.
I think GP is basically saying the same as you.
Learning comes from focus and repetition. Talent comes from knowing which skill to use. Using AI effectively is a talent. Some of us embrace learning new skills while others hold onto the past. AI is here to stay, sorry. You can either learn to adapt to it or you can slowly die.
The argument that AI is bad and anyone who uses it ends up in a tangled mess is only your perspective and your experience. I’m way more productive using AI to help me than I ever was before. Yes, I proofread the result. Yes, I can discern a good response from a bad one.
AI isn’t a replacement for knowing how to code, but it can be an extremely valuable teacher to those orgs that lack proper training.
Any company that has the position that AI is bad, and lacks proper training and incentives for those that want to learn new skills, isn’t a company I ever want to work for.
Talent is innate. Proficiency requires practice.
> LLMs just make you lazy.
Yeah, and I'd like to emphasize that this is qualitatively different from older gripes such as "calculators make kids lazy in math."
This is because LLMs have an amazing ability to dream up responses stuffed with traditional signals of truthfulness, care, engagement, honesty, etc., but that ability is not matched by their chances of dreaming up answers and ideas that are logically true.
This gap is inevitable from their current design, and it means users are given signals that it's safe for their brains to think-less-hard (skepticism, critical analysis) about what's being returned at the same moments when they need to use their minds the most.
That's new. A calculator doesn't flatter you or pretend to be a wise professor with a big vocabulary listening very closely to your problems.
You sound like the sort of person of old who “Why would you use the internet. Look in an encyclopaedia. ‘Google it’ is just lazy”
This trope is unbecoming of anyone sensible.
That's not what my comment implies. I'm just saying that relying solely on LLMs makes you lazy, like relying just on google/stackoverflow whatever, it doesn't shift you from a resource that can be layed off to a resource that can't. You must know your art, and use the tools wisely
It's 2025, not 2015. 'google it and add the word reddit' is a thing. For now.
Google 'reflections on trusting trust'. Your level of trust in software that purports to think for you out of a multi-gig stew of word associations is pretty intense, but I wouldn't call it pretty sensible.
What on earth does that drivel even say. Did you generate this from a heavily quantised gpt2?
Spot on. I'm not A Programmer(TM), but I have dabbled in a lot of languages doing a lot of random things.
Sometimes I have qwen2.5-coder:14b whip up a script to do some little thing where I don't want to spend a week doing remedial go/python just to get back to learning how to write boilerplate. All that experience means I can edit it easily enough because recognition kicks in and drags the memory kicking and screaming back into the front.
I quickly discovered it was essentially defaulting to "absolute novice." No error handlers, no file/folder existence checking, etc. I had to learn to put all that into the prompt.
>> "Write a python script to scrape all linked files of a certain file extension on a web page under the same domain as the page. Follow best practices. Handle errors, make strings OS-independent, etc. Be persnickety. Be pythonic."
Here's the output: https://gist.github.com/kyefox/d42471893de670a2a4179482d3c8b...
I'm far from an expert and my memory might be foggy, but that looks like a solid script. I can see someone with less practice trying the first thing that comes out without all the extra prompting, doing battle with debuggers, hitting errors, and not having any clue.
For example: I wrote a thing that pulled a bunch of JSON blobs from an API. Fixing the "out of handles" error is how I learned about file system and network default limits on open files and connections, and buffering. Hitting stuff like that over and over was educational and instilled good habits.
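For illustration, here is a minimal sketch of the kind of script that prompt asks for. To be clear, this is a rough reconstruction rather than the actual gist output, and it assumes the requests and beautifulsoup4 packages:

    # Illustrative sketch only (not the gist): download every link with a given
    # extension from one page, staying on the same domain, with basic error handling.
    import os
    import sys
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    def scrape_files(page_url: str, extension: str, out_dir: str = "downloads") -> None:
        os.makedirs(out_dir, exist_ok=True)
        resp = requests.get(page_url, timeout=30)
        resp.raise_for_status()

        soup = BeautifulSoup(resp.text, "html.parser")
        page_host = urlparse(page_url).netloc

        for a in soup.find_all("a", href=True):
            url = urljoin(page_url, a["href"])            # resolve relative links
            parsed = urlparse(url)
            if parsed.netloc != page_host:                # same-domain check
                continue
            if not parsed.path.lower().endswith(extension.lower()):
                continue
            target = os.path.join(out_dir, os.path.basename(parsed.path))
            try:
                with requests.get(url, stream=True, timeout=60) as r:
                    r.raise_for_status()
                    with open(target, "wb") as f:
                        for chunk in r.iter_content(chunk_size=8192):
                            f.write(chunk)
                print(f"saved {target}")
            except requests.RequestException as exc:      # skip files that fail
                print(f"skipping {url}: {exc}", file=sys.stderr)

    if __name__ == "__main__":
        # e.g. python scrape.py https://example.com/reports .pdf
        scrape_files(sys.argv[1], sys.argv[2])

The point stands either way: the error handling and same-domain checks are exactly the parts a novice wouldn't know to ask for.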
Not lazy per se, but you stop thinking and start relying on AI to think for you.
And you must use those brain muscles, otherwise your skills start to degrade fast, like really fast.
As long as you ask llm What - or high level How - you should be good.
As soon as you ask for (more than trivial) code or solutions - you start losing your skill and value as a developer.
Feeling "Lazy" is just an emotion, which to me has nothing to do with how productive you are as a human. In fact the people not feeling lazy but hyped are probably more effective and productive. You're just doing this to yourself because you have assumptions on how a productive/effective human should function. You could call it "stuck in the past".
I have the idea that there are 2 kinds of people, those avidly against AI because it makes mistakes (it sure does) and makes one lazy and all other kinds of negative things, and those that experiment and find a place for it but aren't that vocal about it.
Sure, you can go too far. I've heard someone in Quality Control proclaim, "ChatGPT just knows everything, it saves me so much time!" I asked if they had heard about hallucinations, and they hadn't; they'd just been copying whatever it said into their reports. Which is certainly problematic.
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
At least in theory that’s not what homework is. Homework should be exercises to allow practicing whatever technique you’re trying to learn, because most people learn best by repeatedly doing a thing rather than reading a few chapters of a book. By applying an LLM to the problem you’re just practicing how to use an LLM, which may be useful in its own right, but will turn you into a one trick pony who’s left unable to do anything they can’t use an LLM for.
What if you use it to get unstuck from a problem? Then come back and learn more about what you got stuck on.
That seems like responsible use.
In the context of homework, how likely is someone still in school, who probably considers homework to be an annoying chore, going to do this?
I can't really see an optimistic long-term result from that, similar to giving kids an iPad at a young age to get them out of your hair: shockingly poor literacy, difficulty with problem solving or critical thinking, exacerbating the problems with poor attention span that 'content creators' who target kids capitalise on, etc.
I'm not really a fan of the concept of homework in general but I don't think that swapping brain power with an OpenAI subscription is the way to go there.
But how likely is that?
It was the same way I think a lot of us used textbooks back in the day. Can’t figure out how to solve a problem, so look around for a similar setup in the chapter.
If AI is just a search over all information, this makes that process faster. I guess the downside is there was arguably something to be learned searching through the chapter as well.
Homework problems are normally geared to the text book that is being used for the class. They might take you through the same steps, developing the knowledge in the same order.
Using another source is probably going to mess you up.
> something to be learned searching through the chapter as well
Learning to mentally sort through and find links between concepts is probably the primary benefit of homework
Depends. Do they care about the problem? If so, they'll quickly hit diminishing returns on naive LLM use, and be forced to continue with primary sources.
doesn't sound much different than googling and finding a snippet that gets you unstuck. This might be a shortcut to the same thing.
But they asked how likely it is. My guess is it's a pretty small fraction of problems where you need to get unstuck.
fair enough
Um, get with the times, luddite. You can use an LLM for everything, including curing cancer and fixing climate change.
(I still mentally cringe as I remember the posts about Disney and Marvel going out of business because of Stable Diffusion. That certainly didn't age well.)
AI did my gym workout, still no muscles.
It would be great if all technologies freed us and gave us more time to do useful or constructive stuff instead. But the truth is, and AI is a very good example of this, a lot of these technologies are just making people dumb.
I'm not saying they are essentially bad, or that they are not useful at all, far from that. But it's about the use they are given.
> You can use an LLM for everything, including curing cancer and fixing climate change.
Maybe, yes. But the danger is rather in all the things you no longer feel you have a need to do, like learning a language, or how to properly write, or read.
LLM for everything is like the fast-food of information. Cheap, unhealthy, and sometimes addicting.
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
Well, no. Homework is an aid to learning and LLM output is a shortcut for doing the thinking and typing yourself.
Copy and pasting some ChatGPT slop into your GCSE CS assignment (as I caught my 14yo doing last night…) isn’t learning (he hadn’t even read it) - it’s just chucking some text that might be passable at the examiner to see if you can get away with it.
Likewise, recruitment is a numbers game for under qualified applicants. Using the same shortcuts to increase the number of jobs you apply for will ultimately “pay off” but you’re only getting a short term advantage. You still haven’t really got the chops to do the job.
I've seen applicants use AI to answer questions during the 'behavioral' Q&A-style interviews. Those applicants are 'cheating', and it defeats the whole purpose as we want to understand the candidate and their experience, not what LLMs will regurgitate.
Thankfully it's usually pretty easy to spot this so it's basically an immediate rejection.
If the company is doing behavioral Q&A interviews, I hope they're getting as many bad applicants as possible.
Adding a load of pseudo-science to the already horrible process of looking for a job is definitely not what we need.
I'll never submit myself to pseudo-IQ tests and word association questions for a job that will 99.9% of the time ask you to build CRUD applications.
The lengths that companies go to avoid doing a proper job at hiring people (one of the most important jobs they need to do) with automatic screening and these types of interviews is astonishing.
Good on whoever uses AI for that kind of shit. They want bullshit so why not use the best bullshit generators of our time?
You want to talk to me and get a feeling about how I'd behave? That's totally normal and expected. But if you want to get a bunch of written text to then run sentiment analysis on it and get a score on how "good" it is? Screw that.
I think there maybe a misunderstanding here. I just want to talk to the applicant and see what they think about working with designers, their thoughts on learning golang as a javascript developer, or how they've handled a last minute "high priority" project.
You could reasonably argue that they're not cheating, indeed they're being very behaviorally revealing and you do understand everything you need to understand about them.
Too bad for them, but works for you…
I'm imagining a hiring workflow, for a role that is not 'specifically use AI for a thing', in which there is no suggestion that you shouldn't use AI systems for any part of the interview. It's just that it's an auto-fail, and if someone doesn't bother to hide it it's 'thanks for your time, bye!'.
And if they work to hide it, you know they're dishonest, also an auto-fail.
Homework is a proxy for your retention of information and a guide to what you should review. That somehow schools started assigning grades to it is as nonsensically barbaric as public bare ass caning was 80 years ago and driven by the same instinct.
I agree on the grades part. And I was just thinking that the university that I went to never gave us grades during the year (the only exception I can think of was when we did practice exam papers so we had an idea how we were doing).
I think homework is more than a guide to what you should review though. It's partly so that the teacher can find out what students have learned/understood so they can adapt their teaching appropriately. It's also because using class/contact time to do work that can be done independently isn't always the best use of that time (at least once students are willing and capable of doing that work independently).
Consider the case where a non-native English speaker uses AI to misrepresent their standard of written English communication.
Assume their command of English is insufficient to get the job ultimately. They've just wasted their own time and the company's time in that situation.
I imagine Anthropic is not short of applicants...
>Hey Claude, translate this to Swahili from English. Ok, now translate my response from Swahili to English. Thanks.
We're close to the point where, using a human -> STT -> LLM -> TTS -> human pipeline, you can do real-time, high-quality, bidirectional spoken translation on a desktop.
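Very much a sketch, not real time, but the shape looks roughly like this, assuming the open source openai-whisper package for speech-to-text and pyttsx3 for text-to-speech, with a hypothetical translate() stub standing in for whichever LLM you'd actually call:

    # Rough, file-based sketch of the stt -> llm -> tts pipeline (not real time).
    # Assumes openai-whisper and pyttsx3; translate() is a hypothetical stub.
    import whisper
    import pyttsx3

    def translate(text: str, target_language: str) -> str:
        # Placeholder: send `text` to your LLM of choice with a prompt such as
        # f"Translate the following into {target_language}: {text}" and return its reply.
        raise NotImplementedError

    def speak_translation(audio_path: str, target_language: str) -> None:
        stt_model = whisper.load_model("base")            # speech -> text
        text = stt_model.transcribe(audio_path)["text"]

        translated = translate(text, target_language)     # text -> translated text

        tts = pyttsx3.init()                              # translated text -> speech
        tts.say(translated)
        tts.runAndWait()

    if __name__ == "__main__":
        speak_translation("input.wav", "Swahili")

A real-time version would swap the file for a chunked microphone stream, but the three stages stay the same.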
Why not just send the Swahili and let them MTL on the other end? At least then they have the original if there’s any ambiguity.
I’ve read multiple LLM job applications, and every single time I’d rather have just read the prompt. It’d be a quarter of the length and contain no less information.
> careful, responsible use is undetectable
I think that's wishful thinking. You're underestimating how much people can tell about other people from the smallest amount of information. Humans are highly attuned to social interactions, and synthetic responses are more obvious than you think.
I was a TA years ago, before there were LLMs one could use to cheat effectively. The professor and I still detected a lot of cheating. The problem was what to do once you've caught it? If you can't prove that it's cheating -- you can't cite the sources copied from -- is it worth the fight? The professor's solution was just to knock down their grades.
At that time just downgrading them was justifiable, because though they had copied in someone else's text, they often weren't competent to identify the text that was best to copy, and they had to write some of the text themselves to make it appear a coherent whole and they weren't competent to do that. If they had used LLMs we would have been stuck. We would be sure they had cheated but their essay would still be better than that of many/most of their honest peers who had tried to demonstrate relevant skill and knowledge.
I think there is no solution except to stop assigning essays. Writing long form text will be a boutique skill like flint knapping, harvesting wild tubers, and casting bronze swords. (Who knows, the way things are going these skills might be relevant again all too soon.)
> You can't ask people to not use AI when careful, responsible use is undetectable.
You can't make a rule if people can cheat undetectably?
You can, but it would be pointless since it would just filter out some honest people.
But this is exactly what we already do. Most exams have a "no cheating" rule, even though it's perfectly possible to cheat. The point is to discourage people from doing so, not to make it impossible.
>In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
No, the homework is a proxy for measuring whatever the homework provider is (hopefully) interested in evaluating. Talent is a vague word, and what we consider talent worth nurturing might be considered worthless from some other perspective.
For example, most schools will happily give you plenty of national myths to learn and then evaluate how much of them you can recite on demand. They are far less likely to ask you to critique those myths, to investigate who created them, with what intentions, and what actual effects they had on people at large, judged across the different perspectives and metrics available out there.
It's a warning sign, designed to improve the signal they are interested in marginally. Some n-% of applicants will reconsider. That's all it needs to do, to make it worth it, because putting that one sentence there required very little effort.
>In life there is no cheating
Huh?
>You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
Isn't that the definition of cheating? Presenting a false level of talent you don't possess?
>AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
There's a difference between knowing how to use a calculator and knowing how to do math. The same goes for AI. Being talented at giving AI prompts doesn't mean you are generally talented or have AI-unrelated talents desired by an employer.
> In life there is no cheating
This really rubs me the wrong way because it reflects the shallow, borderline sociopathic stance of today's entrepreneurship.
Obedience to rules and honest attitudes are more than just an annoyance on your way to getting rich; they're the foundation of cooperation -- of our civilization.
You can keep people from using AI by holding the interview in the office.
> In life there is no cheating
Oh, my sweet summer child
Eh, you can only learn how to do something when you actually do it (generally).
AI just lets you get deeper down the fake-it-until-you-make-it hole. At some point it will actually matter that you know how to do something, and good luck then.
Either for you, or for your customers, or both.
If I want to assess a candidate's performance when they can't use AI then I think I'd sit in a room with them and talk to them.
If I ask people not to use AI on a task where using AI is advantageous and undetectable then I'm going to discriminate against honest people.
But they don't want to do that.
They want to use AI in their hiring process. They want to be able to offload their work and biases to the machine. They just don't want other people to do it.
There's a reason that the EU AI legislation made AI that is used to hire people one of the focal points for action.
I think this gets to the core of the issue: interviews should be conducted by people who deeply understand the role and should involve a discussion, not a quiz.
Is it advantageous? AI-generated responses to this question tend to be dull.
It might even give the honest people an advantage by giving them a tip to answer on their own.
This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.
LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.
The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.
When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.
This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?
This is an excellent comment and it more-or-less changes my opinion on this issue. I approached it with an "AI bad" mentality which, if truth be told, I'm still going to hold. But you make a very good argument for why AI should be allowed and carefully monitored.
I think it was the spell-checker analogy that really sold me. And this ties in with the whole point that "AI" isn't one thing, it's a huge spectrum. I really don't think there's anything wrong with an interviewee using an editor that highlights their syntax, for example.
Where do you draw the line, though? Maybe you just don't. You conduct the interview and, if practical coding is a part of it, you observe the candidate using AI (or not) and assess them accordingly. If they just behave like a dumb proxy, they don't get the job. Beyond that, judge how dependent they are on AI and how well they can use it as a tool. Not easy, but probably better than just outright banning AI.
Exactly - being transparent about AI usage in interviews makes much more sense. Using AI effectively is becoming a crucial skill, like knowing how to use any other development tool. Using it well can supercharge productivity, but using it poorly can be counterproductive or even dangerous.
It's interesting that most software departments now expect their staff to use AI tools day-to-day, yet many still ban it during interviews. Why not flip this around? Let candidates demonstrate how they actually work with AI. It would be far more valuable to assess someone's judgment and skill in using these tools rather than pretending they don't exist.
If a candidate wants to show how they leverage AI in their workflow, that should be seen as a positive - it demonstrates transparency and real-world problem-solving approaches. After all, you're hiring someone for how they'll actually work, not how they perform in an artificial AI-free environment that doesn't reflect reality.
The key isn't whether someone uses AI, but how effectively they use it as part of their broader skillset. That's what companies should really be evaluating.
I feel very similarly. I'm also an extremely visual thinker who has a job as a programmer, and being able to bounce ideas back and forth between a "gifted intern" and myself is invaluable (in the past I used to use actual interns!)
I regard it as similar to using a text-to-speech tool for a blind person - who cares how they get their work done? I care about the quality of their work and my ability to interact with them, regardless of the method they use to get there.
Another example I would give: imagine there's someone who only works as a pair programmer with their associate. Apart, they are completely useless. Together, they're approximately 150% as productive as any two programmers pairing together. Would you hire them? How much would you pay them as a pair? I submit the right answer is yes, and something north of one full salary split in two. But for bureaucracy I'd love to try it.
I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).
I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview was never just to answer the coding puzzle anyway.
To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment you're not going to pass the interview, no matter if it's done with AI, from memory or with stack overflow.
So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
> So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
Candidates could also have an AI listening to the questions and feeding them answers. There are other ways it could be part of the process without the candidate copy/pasting blindly.
> To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc).
Exactly, that's why I feel like saying "AI is not allowed" makes it all clearer. As interviewers we want to see these abilities you have, and if candidates use an AI it's harder to know what's them and what's the AI. It's not that we don't think AI is a useful tool, it's that it reduces the amount of signal we get in an interview; and in any case there's the assumption that the better someone performs, the better they could use AI.
You could also learn a lot from what someone is asking an AI assistant.
Someone asking: "solve this problem" vs "what is the difference between array and dict" vs "what is the time complexity of a hashmap add operation", etc.
They give you different nuances on what the candidate knows and how they are approaching the understanding of the problem and its solution.
It's a new spin on the old leetcode problem - if you are good at leetcode you are not necessarily a good programmer for a company.
This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in the past months. It's not the applicants that are the problem, it's the outdated way of doing interviews.
Because as someone who's interviewing, I know you can use AI; anyone can. It likely keeps me from judging the pitfalls, and the design and architecture decisions, that proper engineering roles require. Especially for senior and above positions, I want to assess how you think about problems, which gives the candidate a chance to show their experience, their technical understanding, and their communication skills.
We don't want to work with AI; we are going to pay the person for the person's time, and we want to employ someone who isn't switching off half their cognition when a hard problem approaches.
No, not everyone can really use AI to deliver something that works.
And ultimately, this is what this is about, right? Delivering working products.
> No, not everyone can really use AI to deliver something that works
"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.
Anyone can prompt an AI for answers, it takes skill and knowledge to use those answers in something that works. By prompting AI for simple questions you don't train your skill/knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.
"Put simply, using AI makes you worse at your job - precisely when you need to be better."
I don't follow.
Usually jobs require delivering working things. The better the worker knows his tools (like AI), the more he will deliver -> the better he is at his job.
If he cannot deliver reliable, working things because he does not understand the LLM output, then he fails at delivering.
You cannot just reduce programming to "deliver working things", though. For some tasks, sure, "working" is all that matters. For many tasks, though, efficiency, maintainability, and other factors are important.
You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task.
That is part of "deliver working things".
A car held together by duct tape is usually not considered a working car, or road safe.
Same with code.
"You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task."
Indeed, and if the examiner cannot do that, he might be in the wrong position in the first place.
If I am presented with code, I can ask the person what it does. If the person does not have a clue - then this shows quickly.
Completely agree. I'm judging the outputs of a process, I really am only interested in the inputs to that process as a matter of curiosity.
If I can't tell the difference, or if the AI helps you write drastically better code, I see it as no more and no less than, for example, pair programming or using assistive devices.
I also happen to think that most people, right now, are not very good at using AI to get things done, but I also expect those skills to improve with time.
Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc. For the record, I'm not just saying "AI bad"; I've come around to some use of AI being acceptable in an interview, provided it's properly assessed.
> Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc
Agreed, but I as the "end user" care not at all whether you're running a local LLM that you fine tune, or storing it all in your eidetic memory, or writing it down on post it notes that are all over your workspace[1]. Anything that works, works. I'm results oriented, and I do care very much about the results, but the methods (within obvious ethical and legal constraints) are up to you.
[1] I've seen all three in action. The post-it notes guy was amazing though. Apparently he had a head injury at one point and had almost no short term memory, so he coated every surface in post-its to remind himself. You'd never know unless you saw them though.
I think we're agreeing on the aim—good results—but disagreeing on what those results consist of. If I'm acting as a 'company', one that wants a beneficial relationship with a productive programmer for the long-term, I would rather have [ program that works 90%, programmer who is 10% better at their job having written it ] as my outputs than a perfect program and a less-good programmer.
I take epistemological issue with that, basically, because I don't know how you measure those things. I believe fundamentally that the only way to measure things like that is to look at the outputs, and whether it's the system improving or the person operating that system improving I can't tell.
What is the difference between a "less good programmer" and a "more good programmer" if you can't tell via their work output? Are we doing telepathy or soul gazing here? If they produce good work they could be a team of raccoons in a trench coat as far as I'm aware, unless they start stealing snacks from the corner store.
There is also a skill in prompting the AI for the right things in the right way in the right situations. Just like everyone can use google and read documentation, but some people are a lot better at it than others.
You absolutely can be a great developer who can't use AI effectively, or a mediocre developer who is very good with AI.
> not everyone can really use AI to deliver something that works.
That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.
And that it's easier to test you when you're working by yourself.
I see this line of "I need to assess your thinking, not the AI's" thinking so often from people who claim they are interviewing, but they never recognize the elephant in the room for some reason.
If people can AI their way into the position you are advertising, then at least one of the following two things have to be true:
1) the job you are advertising can be _literally_ solved by AI
2) you are not tailoring your interview process properly to the actual job that the candidate will need to do, hence the handwave-y "oh well harder problems will come up later that the AI will not be able to do". Focus the interview on the actual job that the AI can't do, and your worries will disappear.
My impression is that the people who are crying about AI use in interviews are the same people who refuse to make an effort themselves. This is just a variation of the meme where you are asked to flip a red-black tree on a whiteboard, but then you get the job and your task is to center a button with CSS. Make an effort and focus your interview on the actual job, and if you are still worried people will AI their way into it, then what position are you even advertising? Either use the AI to solve the problem then, or admit that the AI can't solve this and stop worrying about people using it.
>We don’t want to work with AI, we are going to pay the person for the persons time
If your interview problems are representative of the work that you actually do, and an AI can do it as well as a qualified candidate, then that means that eventually you'll be out-competed by a competitor that does want to work with AI, because it's much cheaper to hire an AI. If an AI could do great at your interview problems but still suck at the job, that means your interview questions aren't very good/representative.
Interview problems are never representative of the work that software developers do.
Sounds like the interview process needs to be improved then?
Then they shouldn't use libraries, open source code, or even existing compilers. They shouldn't search online (man pages are OK). They should use git plumbing commands and sh (not bash or zsh). They shouldn't have potable water in their house but should distill river water.
There is a balance to be struck. You obviously don't expect a SWE to begin by identifying rare earth metal mining spots on his first day.
Where the line is drawn is context dependent; drawing the same single line for all possible situations is not possible, and it's stupid to try.
It's not a conundrum, they're selling snake oil. (Come on people, we've been through this many times already.)
Very very true! Give them a take home assignment first and if they have a good result on that, give them an easier task, without AI, in person. Then you will quickly figure out who actually understands their work
If the interview consists of the interviewer asking "Write (xyz)", the interviewee opening Copilot, asking "Write (xyz)", and accepting the code, what was the point of the interview? Is the interviewee a genius, productive 10x programmer because by using AI he just spent 1/10 the time to write the code?
Sure, maybe you can say that the tasks should be complex enough that AI can't do it, but AI systems are constantly changing, collecting user prompts and training to improve on them. And sometimes the candidates aren't deep enough in the hiring process yet to justify spending significant time giving a complex task. It's just easier and more effective to just say no AI please
If an AI can do your test better than a human in 2025 it reflects not much better on your test than if a pocket calculator could do your test better than a human in 1970.
That did happen and the result from the test creators was the same back then: "we're not the problem, the machines are the problem. ban them!"
In the long run it turned out that if you could cheat with a calculator though, it was just a bad test....
I think there is an unwillingness to admit that there is a skill issue here with the test creators, and that if they got better at their job they wouldn't need to ban candidates from using AI.
It's surprising to hear this from anthropic though.
The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.
This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.
If you know, and even encourage, your employees to use LLMs at work you should want to see how well candidates present themselves in that same situation.
It’s hardly that. This is one component of an interview process - not all of it!
I still don't know how to quit Vim without googling for instructions :P
As an anecdote from my time at uni, I can share that all our exams were either writing code with pen on paper for 3-4 hours, or a take-home exam that would make up 50% of the final grade. There was never any expectation that students would use pen and paper on their take-home exams. You were free to use your books and to search the web for help, but you were not allowed to copy any code you found without citing it. Also not allowed to collaborate with anyone.
Kudos to Anthropic. The industry has way too many workers rationalizing cheating with AI right now.
Also, I think that the people who are saying it doesn't matter if they use AI to write their job application might not realize that:
1. Sometimes, application questions actually do have a point.
2. Some people can read a lot into what you say, and how you say it.
Half way through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' Questions. Once the questions became more AI resistant the candidate floundered. There English language skills and there general reasoning declined catastrophically. These question had originally been introduced to see see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'
>There English language skills... declined catastrophically.
Let he who is without sin...
Point taken
> what is your creative philosophy?
Seriously?
Yep. Some candidates really enjoy the question to the point where it becomes difficult to get them to stop answering it.
Everyone arguing for LLMs as a corrupting crutch needs to explain why this time is different: why the grammar-checkers-are-crutches, don't-use-wikipedia, spell-check-is-a-crutch, etc. etc. people were all wrong, but this time the tool really is somehow unacceptable.
It also depends on what you're hiring for. If you want a proofreader you probably want to test their abilities to both use a grammar checker, and work without it.
For me the difference is that a candidate using an LLM requires an insane amount of work from the interviewer. Fair enough that you'd use Copilot day to day, but can you actually prompt it? Are you able to judge the quality of the output (or were you planning on just pawning that off to your code reviewer)? The spell checker is a good example: do you trust it blindly, or are you literate enough to spot when it makes mistakes?
The "being able to spot the mistakes" part is what an interviewer wants to know about. Can you actually reason about a problem? Sadly, many cannot.
You forgot calculators ;)
Definitely agreed. And slide rules, and log tables, and...
> While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
Full quote here; seems like most of the comments here are leaving out the first part.
The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.
Disclaimer: I work at Anthropic but these views are my own.
Funny on the tin, but it makes complete sense to me. A sunglasses company would also ask me to take off my sunglasses during the job interview, presumably.
Thanks for the amusing mental image.
Only if they stop screening us with their shitty AI first. Otherwise it is slop vs slop.
Pretty ironic that they use an automated system called CodeSignal that does the first round of the interviews
It makes sense. Having the right people with the right merits and motivations will become even more important in the age of AI. Why you might ask? Execution is nothing when AI matures. Grasping the big picture, communicating effectively and possessing domain knowledge will be key. More roles in cognitive work will become senior positions. Of course you must know how to make the most out of AI, but it is more interesting what skills you bring to the table without it.
> Grasping the big picture, communicating effectively and possessing domain knowledge will be key
But isn't this all the things AI promises to solve for you?
It's almost like what people have been saying for years: there's the promise of AI and the reality of AI - and they're 2 very different things. They only look similar to a layman without experience in the field.
>communicating effectively
AI may not yet be as good an engineer as most coders, but it's already absolutely much better at written communication than the average software engineer (or at least more willing to put effort into it).
Anthropic is kind of positioning themselves as the "we want the cream of the crop" company (Dario himself said as much in his Davos interviews), and what I could understand was that they would a) prefer to pick people they already knew and b) didn't really care about recruiting outside the US.
Maybe I read that wrong, but I suspect they are self-selecting themselves out of some pretty large talent pools, AI or not. But that application note is completely consistent with what they espouse as their core values.
It's also a personal question, not a "why should someone work here", but a "what motivates YOU"
As someone for whom the answer is always 'money' I learned very quickly that a certain level of -how should I call it- bullshit is necessary to get the HR person to pass my CV to someone competent. As I am not as skilled in bullshit as I am in coding, it would make sense to outsource that irrelevant part of the selection process, no?
Maybe it makes sense for you, however from their perspective, it's not an "irrelevant" part of the selection process, but the most important part.
You can use an AI assistant to help you fix grammar and come up with creative reasons why you should work there.
This adds another twist, since I'd bet nowadays most CVs are processed (or at least pre-screened) by "AI": we're in a ridiculous situation where applicants feed a few bullet points to AI to generate full-blown polished resumes and motivational letters … and then HR uses different AI to distil all that back to the original bullet points. Interesting times.
This makes me think about adversarial methods of affecting the outcome, where we end up with a "who can hack the metabrain the best" contest. Kind of like the older leet-code system, where obviously software engineering skills were purely secondary to gamesmanship.
It's a bad question. What is actually being tested here is whether the candidate can reel off an 'acceptable' motivation. Whether it is their motivation or not. This is asking questions that incentivize disingenuous answers (boo) and then reacting with pikachu shock when the obvious outcome happens.
I'm sure Anthropic gets too many applications that are obviously AI generated, and I'm sure that by "non-AI-assisted communication" they mean they don't want "slop" applications that sound like an LLM wrote them. They want some greater proof of human ability. I expect humans at Anthropic can tell which LLM model was used to rewrite (or polish) the applications they get, but even if they can't, a basic BERT classifier can (I've trained one for this task, it's not so hard).
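For the curious, the training side is bog-standard binary text classification; here's a rough sketch with Hugging Face transformers (not my actual classifier, and the two toy examples stand in for a labelled human/AI dataset you'd have to collect yourself):

    # Sketch: fine-tune BERT to flag LLM-polished text. 0 = human, 1 = AI.
    import torch
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    texts = ["I want to work on interpretability because ...",            # human
             "I am deeply passionate about leveraging synergies to ..."]  # AI slop
    labels = [0, 1]

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    enc = tok(texts, truncation=True, padding=True)

    class AppDataset(torch.utils.data.Dataset):
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="clf", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=AppDataset(enc, labels),
    )
    trainer.train()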
You wouldn't show up drunk to a job interview just because it's at a brewery, would you?
I guarantee you the lawns at the lawnmower manufacturer are not cut with scissors.
Isn't Anthropic's stated goal to make "nice" self-improving general AI? Cut out the middleman and have the AI train the next generation.
Never get high on your own supply ;)
Only if they stop screening us with their shitty AI first.
How much you wanna bet they're using AI to evaluate applicants and they don't even have a human reading 99% of the applications they're asking people to write?
As someone who has recently applied to over 300 jobs, just to get form letter rejections, it's really hard to want to invest my time to hand-write an application that I know isn't even going to be read by a human.
Do they also promise not to use ai to evaluate the answers?
Also, will they be happy to provide 200-400 words of reasoning on how the answer to their question was evaluated, for each and every candidate? Written by a human.
I would 100% expect a company to not use AI to evaluate candidates and, if they are, I wouldn't want to work there. That's far worse than using AI as the candidate.
This strikes me as similar to job applicants who apply for a position and are told it's hybrid or in-office - and then on the day of the interview, it suddenly changes from one in-person to one held over a videoconference, with the other participants with backdrops that look suspiciously like they're working from home.
Cool. Does that mean Anthropic is not using ATS to scan resumes?
Of course it doesn’t…
It’s cause they wanna use the data to train AI on and training AI on AI is useless.
If your test can be done by an LLM maybe you shouldn't be hiring a human being based on that test...
Not new; they had that 5 years ago at least.
Anthropic interview is nebulous. You get a coding interview. Fast paced, little time, 100% pass mark.
Then they chat to you for half an hour to gauge your ethics. Maybe I was too honest :)
I'm really bad at the "essay" subjects vs. the "hard" subjects, so at that point I was dumped.
I recently took their CodeSignal assessment, which is part of their initial screening process.
Oh, wow. I really believe they are missing out on great engineers due to the nature of it.
90 minutes to implement a series of increasingly difficult specs and pass all the unit tests.
There is zero consideration for quality of code. My email from the recruiter said (verbatim), “the CodeSignal screen is intended to test your ability to quickly and correctly write and refactor working code. You will not be evaluated on code quality.”
It was my first time ever taking a CodeSignal assessment and there was really no way to prepare for it ahead of time.
Apparently, I can apply again in 6 months.
You can definitely pass their interview without solving everything.
Plot twist: They are actually looking for the freethinkers who are subversive enough to still use AI assistants.
This probably means they are completely unable to differentiate between AI and non-AI; otherwise they would just discard the AI pile of applications.
So suddenly we're in a state where:
- AI companies ask candidates not to "eat their own dog food".
- AI companies blame each other for "copying" their IP, while they find it legit to use humans' IP for training.
Prepping for an interview a couple weeks ago, I grabbed the latest version of IntelliJ. I wanted to set up a blank project with some tests, in case I got stuck and wanted to bail out of whatever app they hit me with and just have unit tests available.
So lacking any other ideas for a sample project I just started implementing Fizzbuzz. And IntelliJ started auto suggesting the implementation. That seems more problematic than helpful, so it was a good thing I didn’t end up needing it.
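For reference, the whole fallback project amounts to almost nothing; here's a sketch in Python purely for illustration (not what IntelliJ suggested or what I actually wrote):

    # fizzbuzz.py -- the classic warm-up exercise
    def fizzbuzz(n: int) -> str:
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    # A minimal pytest-style test to bail out to if the main exercise goes sideways.
    def test_fizzbuzz():
        assert fizzbuzz(3) == "Fizz"
        assert fizzbuzz(5) == "Buzz"
        assert fizzbuzz(15) == "FizzBuzz"
        assert fizzbuzz(7) == "7"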
Aha: maybe they want to train their AI on their applicant’s / job seekers text submissions :D
It’s always the popular clubs that make the most rules
Relevant (and could probably have been a comment there): https://news.ycombinator.com/item?id=42909166 "Ask HN: What is interviewing like now with everyone using AI?"
> We want to understand your personal interest in Anthropic without mediation through an AI system
Is the application being reviewed with the help of an AI assistant though? If yes, AI mediation is still taking place.
I'd be fine with this if they agree to not use AI to assess you as a candidate.
I understand why it's amusing, but there is really nothing to see here. It could be rephrased as:
« The process we use to assess candidates relies on measuring the candidate's ability to solve trivia problems that can easily be solved by AI (or internet search, or impersonation, etc). Please refrain from using such tools until the industry comes up with a better way to assess candidates. »
Actually, since the whole point of those many screening levels during hiring is to avoid the cost of long, in-depth discussions between many experts and each individual candidate, AI will probably be the solution that makes the selection process a bit less reliant on trivia quizzes (a solution that will, no doubt, come with its own set of new issues).
How do you guys do coding assessments nowadays with AI?
I don’t mind if applicants use it in our tech round, but if they do I question them on the generated code and potential performance or design issues (if I spot any). I'm not sure if it’s the best approach, though (I mostly hire SDETs, so I do an ‘easy’ dev round with a few easy/very easy leetcode questions that don’t require prep).
Why aren't they dog fooding? Surely if AIs improve output and performance they should readily accept input from them. Seems like they don't believe in their own products.
Funny that this massive irony came out just now, as I don’t think I’ll renew my subscription with them because of R1.
TBH it's motivated me to apply with AI and try to somehow get away with it.
(I need to reevaluate my work load and priorities.)
This has a poetic tone to it.
However, I'm not sure what to think of it. So AI should help people in their job and their interview process, but also not? When it matters? What if you're super good at ML/AI, but very bad at doing applications? Would you still have a chance?
Or do you get filtered out?
slop for thee but not for me
don't get high off your own supply
At least someone realizes the soulless unimaginative mediocrity machine makes people sound soulless, unimaginative, and mediocre.
Don't get high on your own supply, like zuck doing the conquistador in Kaua'i
On the face of it, it's a reasonable request, but the question itself is pointless. An applicant's outside opinion of a company is pretty irrelevant and is subject to a lot of change after starting work.
AI for thee but not for me?
I generally trust Anthropic vs others, I think Claude (beyond obligatory censorship) ticks all the right boxes and strikes the right balance
this is a reasonable request, provided there is a human on the other side who is going to read the 200-400 word response, and make a judgment call.
this reminds me of an old interview, years ago, when they asked me to code something "without using Google"...
This insistence of using only human intelligence reminds me of the quest for low-background steel.
So I guess people should not use other available tools? Spell checker? Grammar checker? The Internet? Grammarly?
The issue is that they are receiving excellent responses from everyone and can no longer discriminate against people who are not good at writing.
Neither should the evaluators, but here we are.
>Why do you want to work at Anthropic? (We value this response highly - great answers are often 200-400 words.)
Low lifes
Whenever someone asks you to not do something that is victimless, you always should think about the power they are taking away from you, often unfairly. It is often the reason why they have power over you at all. By then doing that very thing, you regain your power, and so you absolutely should do it. I am not asking you to become a criminal, but to never be subservient to a corporation.
Beyond ridiculous. I'm at a loss for words at how stupid this statement is, coming from the AI company that enables all this crap.
Much better approach is to ask the candidate about the limitations of AI assistants and the rakes you can step on while walking that path. And the rakes you have already stepped on with AI.
Maybe they are ahead of the curve at finding that hiring people based on ability to exploit AI-augmented reach produces catastrophically bad results.
If so, that's bad for their mission and marketing department, but it just puts them in the realm of a tobacco company, which can still be quite profitable so long as they don't offer health care insurance and free cigarettes to their employees :)
I see no conflict of interest in their reasoning. They're just trying to screen out people who trust their product, presumably because they've had more experience than most with such people. Who would be more likely to attract AI-augmented job applicants and trust their apparent augmented skill than an AI company? They would have far more experience with this than most, because they'd be ground zero for NOT rejecting the idea.
Seems reasonable.
Question 1:
Write a program that describes the number of SS's in "Slow Mississippi bass". Then multiply the result by hex number A & 2.
Question 2:
Do you think your peers will haze you week 1 of the evaluation period? [Yes|No]
There are a million reasons to exclude people, and most HR people will filter anything odd or extraordinary.
https://www.youtube.com/watch?v=TRZAJY23xio&t=1765s
Hardly a new issue, =3
If Alice can do better against Bob when they aren’t using AI, but Bob performs better when both use AI, isn’t it in the company’s best interest to hire Bob, since AI is there to be used during his position duties?
If graphic designer A can design on paper better than B, but B can design on the computer better than A, paper or computer, why would you hire A?
That's totally reasonable, imo. You also can't look up the answers using a search engine during your application to work at Google
That's actually not reasonable.
Why not
This probably depends on the questions.
When applying for a college math professor job, it's understandable that you'll use mathematica/matlab/whatever for most of the actual work, but needing a calculator for simple multiplication-table-style calculations would be a red flag. Especially if there is lecturing involved.
application or interview?
this is about application.
Good luck with that
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
Exact opposite of our application process at my previous company. We said usage of ChatGPT was expected during the application and interview phase, since we heavily rely on it for work
> We said usage of ChatGPT was expected during the application and interview phase, since we heavily rely on it for work
You must have missed out on hiring some very good candidates, then.
Wow. I heavily rely on Google for work, wouldn't expect a candidate spend precious interview time googling though.
There are a bunch of subtly different ways to perform a coding interview.
If the interviewer points you at a whiteboard and asks you how to reverse an array, most likely they're checking you know what a for loop is and how to index into an array, and how to be careful of off-by-one errors. Even if your language has a built-in library function for doing this, they'd probably like you to not use it.
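Something like this, say (written in Python here, though the language hardly matters for the exercise):

    def reverse_in_place(xs: list) -> list:
        # Swap from both ends toward the middle. The classic off-by-one trap
        # is the loop bound: len(xs) // 2, not len(xs).
        for i in range(len(xs) // 2):
            j = len(xs) - 1 - i
            xs[i], xs[j] = xs[j], xs[i]
        return xs

    assert reverse_in_place([1, 2, 3, 4]) == [4, 3, 2, 1]
    assert reverse_in_place([1, 2, 3]) == [3, 2, 1]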
If the interviewer hands you a laptop with a realistic codebase on it and asks you to implement e-mail address validation, they're going for a more real-world test. Probably they'll be fine with you googling for an e-mail address validation regex, what they want to see is that you do things like add unit tests and whatnot.
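Concretely, something in this direction (the validation logic is deliberately naive, since getting e-mail validation fully right is notoriously hard, and the function and test names are made up for illustration):

    import re
    import unittest

    # Naive check: something before one "@", and a dot somewhere in the domain.
    # Real RFC 5322 validation is far messier than this.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(address: str) -> bool:
        return bool(EMAIL_RE.match(address))

    class TestEmailValidation(unittest.TestCase):
        def test_accepts_plain_address(self):
            self.assertTrue(is_valid_email("jane.doe@example.com"))

        def test_rejects_missing_domain_dot(self):
            self.assertFalse(is_valid_email("jane@localhost"))

        def test_rejects_missing_at_sign(self):
            self.assertFalse(is_valid_email("jane.example.com"))

    if __name__ == "__main__":
        unittest.main()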
Makes sense. I've never been asked to make such an exercise in real time, most of the time that would be a take home task - but I understand if someone wants to do that. Still it would be weird to demand that a candidate uses Google, wouldn't it?
I picked e-mail validation as an example precisely because it's something even experienced developers would be well advised to google if they want to get it right :)
Of course, if someone can get it right off the top of their head, more power to them!