Astroturfing on reddit has been a thing for over a decade and has really accelerated over the last few years. There are several companies whose entire business model is promoting your product or service on reddit. I saw one for sale on acquire.com a while back for seven figures.
For those like me who didn’t know:
Astroturfing is the practice of creating a fake "grassroots" movement to make it look like a cause, product, or candidate has widespread public support when they actually do not.
Also, for people who don't know: if you pay someone to post something (including just giving them a free product), it has to be disclosed. Astroturfing is (in simple terms) a form of fraud, and the FTC does go after companies for it.
I'm curious: where does it have to be disclosed? Like, if a company paid a few legitimate reddit account owners to review their post and upvote it, and disclosed this activity in a DISCLOSURES.txt available on their website, would that be legal?
Where would one find some reddit users willing to do such reviews, by the way?
You should see Higgsfield right now.
They're buying stolen Reddit accounts and spamming over 500 videos a day to various subreddits.
They're also advertising fake "unlimited" plans. Their reseller pricing (they're a reseller) is 1/10th the upstream API pricing, so they're metering and throttling and banning users that cost them money.
They're getting thousands of people to subscribe to $1800 "18 month" plans.
Their unofficial subreddit is full of complaints. Probably a dozen complaint threads a day now.
Highly unethical company.
The conservative subreddits go from hundreds of active users to tens of thousands of active users depending on the talking points of the GOP at any moment. It's very very obvious.
I’m shocked when I come across people who think that sockpuppeting doesn’t happen on social media including HN.
I wish there were laws that required large social media sites to publish data to their end users that indicate the severity of the problem.
HN is such a prime target for sockpuppeting because of its fairly low comment rate but high concentration of users who can make technology decisions at companies.
Sure, but I suspect it's also harder, because it's full of contrarians who will point out at least five better ways of solving the problem, and people who, upon being greeted with anything that looks like disguised marketing copy, will spontaneously combust. And if you do manage to sneak a URL into a discussion without anyone thinking it's marketing, the next four comments will be about whether the design of the landing page is unsuited for people with 2400-pixel-wide monitors or people with noscript enabled.
(Also it's the kind of website where you absolutely can get good responses from "Show HN: A thing you might want to use and here's how much profit I'm making from it already" until a bunch of green usernames say nice things about it)
Contrarianism is an effective sockpuppet tactic, then, in order to buy legitimacy and trust in the marketplace of ideas.
But also easier because those comments quickly become dead.
It's a goldmine for social engineering; it has come in handy for me personally a few times.
Spam, astroturfing, sockpuppetry are just some of the costs of anonymity, as it removes accountability.
It's also the flip side of people feeling free to say what they want under the cover of (pseudo) anonymity.
I wonder if one solution is to partition the web into places where anonymity isn't possible, and places where it is.
It should be possible to use plain old cryptography to prove the following (a toy sketch appears after the list):
1. I am one of the named, publicly accountable people registered as participating in this thing, and the same person who posted under this pseudonym yesterday
2. Provided I'm reasonably careful, you can't tell which one is me unless n of m participants agree to unmask me.
3. I can only post under one name at a time. I can change pseudonym, but then my old one is marked as abandoned, so I can't trivially fake conversations with myself.
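A toy sketch of how those pieces might fit together (purely illustrative, not production crypto, and all names are made up): per-epoch pseudonyms are derived deterministically from a member secret, which gives you point 3, and the identity mapping is escrowed with Shamir secret sharing so that any n of m trustees can jointly unmask, which gives you point 2. The membership proof in point 1 would need something like a group or ring signature, which this sketch skips.

    # Toy sketch only: deterministic per-epoch pseudonyms + n-of-m unmasking escrow.
    import hashlib, hmac, os, secrets

    def derive_pseudonym(member_secret: bytes, epoch: str) -> str:
        # Point 3: one deterministic pseudonym per member per epoch; "changing
        # pseudonym" just means the old epoch tag is retired.
        return hmac.new(member_secret, epoch.encode(), hashlib.sha256).hexdigest()[:16]

    # Shamir secret sharing over a prime field, so any n of m trustees can
    # jointly recover the pseudonym-to-identity mapping (point 2).
    PRIME = 2**127 - 1

    def split_secret(secret: int, n: int, m: int):
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(n - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, m + 1)]

    def recover_secret(shares):
        # Lagrange interpolation at x = 0.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    member_secret = os.urandom(32)
    print(derive_pseudonym(member_secret, "2024-06"))   # today's handle
    identity = int.from_bytes(b"alice", "big")          # escrowed identity
    shares = split_secret(identity, n=3, m=5)           # 5 trustees, any 3 unmask
    assert recover_secret(shares[:3]) == identity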
How do you implement point 3?
Doesn't that then require a centralised (or a hierarchy of centralised) authority to manage point 3?
Who would that be? (Each country issuing its own citizens' IDs?)
If the solution requires you to keep a private key private (to prove who you are), how is the average person going to do that?
How are you going to build your cryptography into all those different systems?
All you need is one link between that pseudonym and some identifying info, like an IP address or a payment, and it's all gone, and you've already built a perfect system for government tracking.
So even if you have built all that successfully I'd still suggest the world would split into sites that would use it and sites that wouldn't.
I got downvoted like mad for suggesting that HN is a marketing outlet for YC.
The people who frequent this forum think they are immune to astroturfing because they all work in ad tech.
Same here. I pointed out HN is the marketing arm of a capitalist investment fund and got flagged to oblivion.
HN gets a lot less sockpuppeting/astroturfing than reddit or twitter, as far as I can tell. There is some of it, but if it gets too big dang et al seem to generally put a stop to it.
I'm seeing a much less sophisticated campaign for this on my city's subreddits recently... Someone will ask a weird or generic question, and either the poster or the top comment is a throwaway account with a spiel about checking $site_name.
It's exhausting, especially since people will write out real advice and corrections about how to deal with rats, bedbugs, neighborhoods, etc. and it all goes into the ether in hopes someone will get scammed. Or maybe it's an SEO thing because the site name is so generic it's un-googleable. I hope it doesn't work.
Youtube has been spammed with these multi-account back-and-forth discussion advertisements for a while. They're frequently for small local attorneys or financial advisors, and I have to wonder if those individuals realise how they're being marketed in bad faith.
I used to co-work next to an SEO specialist back in my freelance days, and he would offer rankings, but the client would not be told that they were getting said rankings via blackhat SEO tactics (that mostly no longer work).
It's all so obvious and standardised that I have to imagine it is part of a toolkit or framework marketers are using without much thought.
This is the same kind of spam that overwhelmed blog comment sections and sent everyone scurrying to Reddit and Facebook 15 years ago. The spam always had a generic comment praising the post and a line below shilling dick pills and Prada bags, but spelled like v_1_@_g_r_@, with the URL similarly obfuscated so it didn't trip a spam filter.
Sockpuppet accounts are literally the foundation of Reddit [1]
[1] https://arstechnica.com/information-technology/2012/06/reddi...
Not a "gotcha"
There's obviously a massive difference between using sockpuppet accounts to:
* Influence perception on a social media platform as a 3rd party
vs.
* Put content on a social media platform that users are looking for so they return to the platform
It doesn't matter who shares a story with you on social media if the goal is to entertain, but it does matter if the goal is to get you to do something [spend money on their courses]
The "value" of Reddit as a platform is knowing that people ORGANICALLY liked content. Otherwise, just go to an aggregator. These sock puppet accounts were there to deceive, no doubt. Obviously, they are "different" in some ways, but deceptive nonetheless.
They didn't fake voting, they faked submissions.
So you could clearly tell if people liked or didn't like something.
It's so painfully obvious too. On my local subreddit: "what's the best ice cream shop in $CITY?" Check their post history, one "lol" on a cat pic on /r/aww 8 months ago.
4 lines of code could catch this.
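Not quite four lines, but a rough sketch of that heuristic, assuming the Reddit API via PRAW (the credentials, username, and threshold below are placeholders):

    # Flag accounts whose visible history is nearly empty, like the
    # "one 'lol' on /r/aww 8 months ago" pattern described above.
    import praw  # pip install praw

    reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                         user_agent="history-check example")

    def looks_like_throwaway(username: str, min_items: int = 5) -> bool:
        redditor = reddit.redditor(username)
        history = list(redditor.comments.new(limit=min_items)) \
                + list(redditor.submissions.new(limit=min_items))
        return len(history) < min_items

    print(looks_like_throwaway("example_user"))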
> Check their post history, one "lol" on a cat pic on /r/aww 8 months ago.
And now Reddit has made it possible to hide your post history.
Probably because of this exact issue.
The funny thing about that is it's extremely simple to bypass. On old or new reddit, search 'author:example' to find posts by /u/example. Or, to see both comments and posts, on new reddit go to the user profile and do a blank search, like a single space character.
That's using reddit's own site, of course there are other methods like Google dorks.
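For illustration, a tiny sketch that just builds those search URLs (the exact search path is an assumption; the 'author:' operator is the one described above):

    from urllib.parse import quote

    def history_search_url(username: str, old_reddit: bool = True) -> str:
        base = "https://old.reddit.com" if old_reddit else "https://www.reddit.com"
        # The author: operator surfaces posts even when the profile hides them.
        return f"{base}/search?q={quote(f'author:{username}')}"

    print(history_search_url("example"))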
Well that and some moderators of large subreddits like to ban people based on participation in other subreddits that “disagree” with whatever flavor echo chamber the moderator happens to live in.
And yet I suspect it's super effective, because of the powerful illusion of it being real people.
There's the classic search "hack" of adding site:reddit.com to any product recommendation search, to find "real" recommendations.
Most of the time this is going to find 5-10 posts, each with only a dozen comments and a dozen up-votes. And yet it feels so much more real than whatever is at the top of Google that many people will trust these reviews.
And the new feature to hide your post and comment history makes it impossible to even guess at whether someone is a "real" person or not.
Report to your state's Attorney General and the FTC. 404media also would be interested in knowing (Signal info on their site).
https://www.naag.org/find-my-ag/
https://reportfraud.ftc.gov/
https://www.404media.co/
Semi-related opinion, but it really boggles me how people can be such a*holes as to leave spam all over the internet and keep turning it into a cesspit. If a company does it in such an obvious way, it has to be systematic, meaning that someone somewhere comes up with the idea, their manager gives the okay to proceed, and then they chop it into tasks and other employees pick them up. There are multiple people involved. All of them know what's happening and that it's unethical. And for what? I doubt most people running/working for YC companies need money that badly. Are they going to buy a boat? A higher Tesla model? Invite their friends to Michelin-starred restaurants to show how successful they are?
"Disruption" comes to the world of junior academia. It was inevitable. Nothing's sacred.
Welcome to Reddit. That, and the code camp thing. Reddit is terrible anyway.
Coincidentally, I just observed that the USAA subreddit is very likely to be astroturfed.
Reddit is the garbage bin of the internet.
Better to have the garbage collected somewhere than to have it strewn about everywhere.
What is this cheatsheets and predicted exam leaks stuff? I don't mean to sound naive but is cheating a significant part of the test prep space?
I looked into founding a company in this space and steered straight back out of it because yes, by far and away the VAST majority of demand in the market of study tools for high/middle schoolers is cheating. Below that, parents are involved and there's a market there (but a bad one, because of double sales where you have to sell through the parent to the child even though those two actors have misaligned incentives).
https://www.gauthmath.com/
This AI cheating app is currently #8 for "education" in the iOS app store.
That is interesting and kind of what I suspected anecdotally. I think it's unfortunate for people who aren't aware of all this. That is what I will say.
A cheat sheet could be a piece of paper you're allowed to bring to an exam. To make a proper cheat sheet you have to understand the material you're working with anyway so it usually doesn't help you.
It usually does help you, by virtue of having written it.
That's true, I meant it doesn't help much as a cheat sheet if you didn't put in the work to make it yourself.
What is astroturfing? It's a deceptive, organized effort to make a message, campaign, or movement look like it's genuine grassroots public support (or opposition) when it is actually planned, funded, and often controlled by a hidden sponsor (such as a corporation, political group, or other organization).
Proposal to change title from "kids" to "teens"?
Teens are kids.
Pickle, another yc backed startup, is also acting really fishy. They claimed they developed a standalone AR device, took money from customers, and now they're saying it requires tethering to your phone. https://x.com/cixliv/status/2008129653467492631
> and now they're saying it requires tethering to your phone
Where are they saying that?
Also what is the second "conclusion" screenshot from? (Who is the "Matthew" and what analysis, mentioned in that screenshot?)
Don't forget about Honey!
YC is full of scams.
They do back a lot of companies. Is there any evidence that they are pushing unethical or illegal business practices on their portfolio companies at a rate higher than non-YC start ups?
They don’t have to say anything. The market speaks for itself: do illegal stuff, don’t get caught, capture enough market share of whatever it is you are pursuing, and you will be rewarded handsomely by investors. The name of the game is capital return at whatever cost necessary. We will be living through the repercussions of this system for decades
Why do you require comparison in order to determine whether or not this practice is unethical?
That's not at all what he said.
no surprise considering YC's fake-it-till-you-make-it and growth-hacking culture
Like the founders telling employees to lie about compliance cuz everyone does it.
There is a line between fake it till you make it and fraud.
Absolutely, I have worked at several.
> Astroturfing: Coordinated campaigns... [to post,] upvote, leave supportive comments, and ask follow-up questions—creating the illusion of organic excitement... Critical comments receive coordinated mass downvotes.
I thought that was the dictionary definition of social media? If it isn't yet, it should be, Reddit is just the tip of the iceberg.
There are laws in the US, and if you're advertising you have to disclose it.
a YC company being unethical, shocking...
I am shocked!
I mean I am shocked that this post didn't get flagged immediately ofc.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Honestly I've posted about some unethical YC companies before and those posts got a lot of attention without being removed. That said we'll see what happens here.
It's OK. Paul Graham is one of the good billionaires.
Does anyone want to make the argument YC does more good than harm? I'd be surprised if that was a tenable position.
This thread will be hidden soon.
Hacker News doesn't memoryhole anti-YC threads.
lol it absolute does
you ever notice how most YC announcements have comments disabled?
Those are job ads (a special type of post), not normal user discussions.
Actual YC announcements do not have comments disabled.
They do however often ask friends and family to upvote and leave comments. Like "we have been very happy using XYZ" which is against HN rules but not that strictly enforced. I feel like it is extra-lax towards YC companies but maybe I'm imagining things.
On the contrary, we particularly tell YC founders not to do this—mostly privately, but there's a public version of it at https://news.ycombinator.com/yli.html (scroll down to "Comments" and see the part in bold. That's me trying to scare them.)
We do tend to be more lenient when there's no evidence of organized manipulation, just friends/fans/users trying to be helpful and not realizing that it's actually unhelpful. What dedicated HN users tend not to realize is that such casual commenters usually have no idea how HN is supposed to work.
But this leniency isn't YC-specific. We're actually less lax when it comes to YC startups, for several reasons.
Okay, I have heard different things from some friends who are YC founders, and I have seen them do it in practice (e.g. posting about their HN post on LinkedIn).
I'm not going to out people here but maybe it helps you to know that not everyone plays by the rules. Tbf I also understand that this is just really hard to enforce.
This is a great way to get your posts buried. We have to warn well-wishers not to do this (autonomously) before we post anything major, from bitter experience.
Can you make a case why you'd be an authority on the matter?
I know a bit how HN works: https://github.com/minimaxir/hacker-news-undocumented
But as noted by freehorse, dang has stated it multiple times and I personally have not seen any threads memoryholed and would call out YC if they were.
Downvoted for not recognizing HN royalty, what a charming community.
You are downvoted because there is no reason to ask for credentials before assuming good faith.
Anyone can make wild suppositions in good faith if they don't assign any value to their words. Do you?
To be fair, a moderator has stated this many times https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I don't see how that's "fair". Moderators are not impartial and since they are paid now they also have a vested interest in lying.
You are free to present examples where they are lying in this. From what I have noticed, I have not seen any censorship in this matter, especially in contrast to other topics that reach flagged status very easily.
Occam's razor is more nuanced than blind naivete. It is the nature of suppression that there is no evidence of it. I do not understand how you stand by your words.
Healthy skepticism plus the maturing industry of online propaganda and persuasion campaigns is where I would put Occam's razor a la "minimal assumptions". Every social media site has been manipulated at all levels, moderation notwithstanding, I see no reason to believe HN is immune to this.
It is not just a question of economics for YC to allow and even administrate this kind of manipulation, but of second- and third-order goals like consent/consensus manufacturing, reputation-building, shoring up investments by building "viral" interest, etc. These are immediate logical deductions from the patterns of behavior by humans and the bots that imitate them that are present everywhere on the internet these days.
I do not disagree that tactics like "building viral interest" may happen here (not necessarily by mods or with their help though), as they happen also in reddit, as OP shows, and in general obviously all around the internet, and it is safe to assume that it does happen from time to time. For me the most disturbing part of the post is that the manipulating tactics target teenagers.
But talking about repressive behaviour by mods against YC-related criticism specifically, I do not see that. I understand the prior would be that a popular forum run by YC would want to censor in order to protect the interests of the companies they back. I also had that prior. However, this is not "Occam's razor", it is a prior. In the time I have been around here I have not noticed this kind of behaviour happening, while I have definitely noticed other kinds of stuff getting repressed, meaning that it is less likely that such repression would go unnoticed all the time. Thus I adjusted my understanding accordingly by shifting the prior according to the data. If you find different examples I am willing to take them into account.
hello dang!