Article 16: The Anatomy of a Good Prompt in 2025 (Coming Soon)
Article 17: Zero-Shot Prompting: The Foundation (Coming Soon)
Article 18: Few-Shot Prompting: Learning from Examples (Coming Soon)
Article 19: The Power of Instructions: Clarity and Precision (Coming Soon)
Article 20: Using Personas and Role-Playing Effectively (Coming Soon)
Article 21: The Art of Prompt Formatting and Delimiters (Coming Soon)
Article 22: System Messages vs. User Messages: Best Practices (Coming Soon)
Article 23: Prompt Chaining and Sequential Reasoning (Coming Soon)
Article 24: The Art of Iteration: Refining Your Prompts (Coming Soon)
Article 25: A/B Testing Your Prompts for Optimal Performance (Coming Soon)
Article 26: The Impact of Prompt Length on Response Quality (Coming Soon)
Article 27: How to Deal with "I don't know" Responses (Coming Soon)
Article 28: Common Prompting Mistakes to Avoid (Coming Soon)
Article 29: Techniques for Reducing Bias in LLM Outputs (Coming Soon)
Article 30: Understanding and Working with Model Limitations (Coming Soon)
Prompt engineering was never a thing, sorry. Anyone that got paid as one certainly rode the hype train funded by people who didn't understand what they were paying for.
I am sorry to be the bearer of bad news, but the AI hype train has already left prompt engineering in the dust and driven on to the great central context engineering station.
I don't think this is generally true, though maybe I'm just missing the semantic drift. "Context engineering" as I'm familiar with it is something you do when you're building your own agent loops, not simply strategies for how to get the best response from a chatbot.
The chatbot forgets once the context tokens are used up. How do you engineer a solution, then, that ensures that the chatbot never forgets?
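One common answer, sketched here purely as an illustration, is to keep the most recent turns verbatim and compress everything older into a running summary that always travels with the request. All names are invented, and the `summarize` stub stands in for a real LLM summarization call:

```python
# Hypothetical sketch of "never forgetting" inside a fixed context budget:
# recent turns are kept verbatim, older turns get folded into a summary.

def summarize(text: str, limit: int = 80) -> str:
    """Placeholder summarizer: truncate. A real system would call an LLM."""
    return text[:limit]

class RollingMemory:
    def __init__(self, max_recent: int = 4):
        self.summary = ""            # compressed record of everything old
        self.recent = []             # last few turns, kept verbatim
        self.max_recent = max_recent

    def add(self, turn: str) -> None:
        self.recent.append(turn)
        if len(self.recent) > self.max_recent:
            oldest = self.recent.pop(0)
            # fold the evicted turn into the summary instead of dropping it
            self.summary = summarize((self.summary + " " + oldest).strip())

    def context(self) -> str:
        """What actually gets sent to the model on the next request."""
        parts = []
        if self.summary:
            parts.append("Summary so far: " + self.summary)
        parts.extend(self.recent)
        return "\n".join(parts)
```

Whether lossy summarization counts as "never forgetting" is exactly the point under dispute here, of course; the model only ever sees the compressed residue of old turns.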
That's not engineering in any sense of the word, and calling it that is an insult to any real engineer.
What you're describing is poking a thing with a stick to see if it moves.
Isn’t this the same thing folks said in the past when software developers got reclassified into “engineers” alongside civil/electrical/etc engineers who had standards bodies, professional credentials, and licenses?
It’s really convenient that now folks here are in the in-group so that we can turn around and guard the entry gate.
We have to set the lower bar somewhere. Am I a spreadsheet engineer now because I calculate expenses in Excel? No, because that would be ridiculous.
More to the point: If the system you’re interacting with isn’t deterministic, you don’t understand how it really works internally, and you’re prodding it to see what happens if you do this instead of that—then that just isn’t engineering! Or at least not in the sense the word has traditionally been used; not without it becoming a term void of any specific meaning.
> If the system you’re interacting with isn’t deterministic, you don’t understand how it really works internally, and you’re prodding it to see what happens if you do this instead of that—then that just isn’t engineering!
Biological engineering would like a word.
But to press your overall point: why? What bad happens if we discard an attempt at a “lower” bound and let somebody be a spreadsheet engineer or a breakfast engineer or a roulette engineer?
Nothing really bad happens, sure. The term engineer just loses its meaning, becoming a generic way to refer to someone doing… something. I like language that's precise, expressive, and succinct. If someone says they're a baker, I know what they do. If we start applying that label to anyone working in a bakery, even someone who just operates the coffee machine, that no longer holds true: more explanations are needed, more opportunities for misunderstanding arise, and things become less efficient and more error-prone. How can that be desirable?
It doesn’t bother me as much as having an in-group trying to close the door on people they’ve decided are below the bar and don’t get to use a word.
I think engineers will survive just fine.
"Context Engineering" is all the rage now. Picked up by AI Influencers.
Basically, you give context to the AI models. Like, if you want to build a social network, you give the AI all the documentation around it so it can follow best practices, guidelines, etc.
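In code, that idea mostly reduces to packing the most relevant documents into the prompt until a token budget runs out. A minimal sketch, where the whitespace "tokenizer", the budget, and the document names are all assumptions for illustration:

```python
# Hypothetical sketch of context assembly: pack project docs into a prompt
# until a token budget is exhausted. Whitespace splitting stands in for a
# real tokenizer, and the docs themselves are invented for illustration.

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer."""
    return len(text.split())

def build_context(task: str, docs: dict, budget: int = 50) -> str:
    parts = ["Task: " + task]
    used = count_tokens(parts[0])
    # Docs are assumed pre-sorted by relevance, most relevant first.
    for name, body in docs.items():
        section = f"--- {name} ---\n{body}"
        cost = count_tokens(section)
        if used + cost > budget:
            break  # out of room: later (less relevant) docs are dropped
        parts.append(section)
        used += cost
    return "\n\n".join(parts)
```

A real pipeline would rank `docs` by relevance (e.g. with embeddings) and count tokens with the model's actual tokenizer rather than `str.split`, but the shape of the job is the same.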
The main problem with books lagging behind "now" is almost solved
I'm confused, where's the content?
For instance I looked in Meta-Cognitive Techniques and it mentions "Articles in This Series" but the list doesn't have any links...?
https://promptz2h.com/chapter_03_advanced_prompting_techniqu...
Oh, the content? It's coming soon!
Will be ready VERY soon )
> 1. Prompt Engineering: From Zero to Hero (promptz2h.com) 12 points by blackpc 6 hours ago
> 2. .NET: From Zero to Hero (dotnetz2h.com) 4 points by blackpc 12 hours ago
> 3. Python: From Zero to Hero (pythonz2h.com) 4 points by blackpc 17 hours ago
> 4. AI-Powered Financial Companion (pomegra.io) 3 points by blackpc 1 day ago
> 5. Python: From Zero to Hero (pythonz2h.com) 2 points by blackpc 5 days ago
> 6. ReactJS from Zero to Hero Free Online Book (reactz2h.com) 2 points by blackpc 5 days ago
If I wanted this low quality garbage why wouldn't I just generate it myself?
Just do it
Lie? The user testimonials cannot be true; the domain was created yesterday.
Also one of them is an Asian lady named James Wilson, another is Reagan using the name Carlos Martinez?
This book is AI-generated and the author doesn't exist ..., right?