One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all in the crowd.
And I buy this stuff. Every time I need customer service and I'm getting stonewalled, I just go onto a marketplace, find an exec, buy their details for pennies, and call them up on their cellphone. (this is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans)
> find an exec and buy their details for pennies and call them up on their cellphone. (this is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans)
Honestly, kudos. The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider the current setup.
> As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
And people laughed at Red Reddington when he said he had no email.
There was a post from someone a long time ago who has an email address and name similar to Mark Cuban's, but not quite. He got quite a few cold emails meant for Cuban. A lot of them were quite sad (people asking for money for medical procedures and such).
> The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider the current setup.
Unless your problem is with the company doing the privacy violations, this doesn’t make any sense.
Where I live, which is not in the USA, I'm confident my doctor's office doesn't sell their contact list - or at least, not without statistical anonymisation and aggregation for research purposes.
They probably outsource processing the data and storing it to other entities, but that will be under contracts which govern how the data may be used and handled. I assume that's not what "sell the data" means in this conversation.
It would be such an egregious violation of local data protection law to sell patient personal details for unrestricted commercial use, including their contact info, and it would make the political news where I live if they were found out.
Also "not in the USA" i actually work on a medical ish application these days (not the in production version, mind but a fork with new features that's entirely separate at the moment).
I have access to ... zero patient data. Our entire test database is synthetic records.
Exactly this was tried by the likes of John Oliver and journalists/comedians of that caliber, running ads and gathering data on politicians in Washington.
>One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
>As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
Fortunately this is changing with iOS 18 with "limited contacts" sharing.
The interface also seems specifically designed to push people to allow only a subset of contacts, rather than blindly clicking "allow all".
The far bigger issue is the contact info you share with online retailers. Scraping contact info through apps is very visible, drawing flak from the media and consumers. Most of the time all you get is a name (could be a nickname), and maybe some combination of phone/email/address, depending on how diligent the person was in filling out all the fields. On the other hand, placing any sort of order online requires you to provide your full name, address, phone number, and email address. You can also be reasonably certain that they're all accurate, because they're plausibly required for delivery/billing purposes. Such data can also be surreptitiously fed to data brokers behind the scenes, without an obvious "tiktok would like access to your contacts" modal.
On android you can choose whether to grant access to contacts. And most apps work fine without.
GrapheneOS, which I use, also has contact scopes, so troublesome apps that refuse to work without access will think they have full access. You can allow them to see no contacts or a small subset.
There's also multiple user profiles, a "private space", and a work profile (shelter) that you can install an app into, which can be completely isolated from your main profile, so no contacts.
It surprises me how far behind iOS is with this stuff. Recently I wanted to install a second instance of an app on my wife's iPhone so she could use multiple logins simultaneously, there didn't really seem to be a way to do it.
The point is that it doesn't matter whether YOU grant access to your contacts. As long as anyone who has you in THEIR contacts decides to just press "share contacts" with any app, you are doxxed and SkyNet is able to identify you for all practical purposes.
Or because they were tricked, e.g. LinkedIn's "Connect with your contacts" onboarding step, which sounds like it'll check your contacts against existing LinkedIn users but actually spams invites to anyone on your contact list who doesn't have an account.
Wasn't this also how some services would connect e.g. your bank accounts? They'd ask for your credentials and log into your bank to scrape its contents.
And I kinda get it, some services external to your bank can help you manage your finances etc. But it's why banks should offer APIs where the user can set limited and timed access to these services. In Europe this is PSD2 (Revised Payment Services Directive).
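To make that concrete, here's a minimal sketch of the "limited and timed access" idea, assuming a purely hypothetical consent API (this is not any real bank's PSD2 interface; the names and fields are made up for illustration):

    # Hypothetical sketch: a user grants a third party scoped, time-boxed access
    # instead of handing over their banking credentials.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Consent:
        third_party: str
        scopes: frozenset          # e.g. {"read:transactions"}, but not "initiate:payment"
        expires_at: datetime

        def allows(self, scope: str) -> bool:
            return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

    def grant_consent(third_party: str, scopes: frozenset, days_valid: int) -> Consent:
        """User-approved grant: limited in scope AND in time."""
        return Consent(third_party, scopes,
                       datetime.now(timezone.utc) + timedelta(days=days_valid))

    def read_transactions(consent: Consent, account: dict) -> list:
        if not consent.allows("read:transactions"):
            raise PermissionError("consent missing, out of scope, or expired")
        return account["transactions"]

    # A budgeting app gets read-only access for 90 days and nothing else.
    consent = grant_consent("budget-app.example", frozenset({"read:transactions"}), 90)
    print(read_transactions(consent, {"transactions": [("2024-05-01", -12.50)]}))

The point of the design is that revocation and expiry live with the bank, not with a password the third party holds forever.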
This is how a load of emails were sent out from my Hotmail account to anyone I had ever contacted (including random websites) asking if I want to connect with them to Facebook. The onboarding seemed to imply it would just check to see if any of my contacts were already using facebook.
Useless without limiting the kind of data I want to share per contact. iOS asks for relationships for example. You can set up your spouse, your kids, have your address or any address associated with contacts. If I want to restrict app access to contacts, I also want to restrict app access to specific contact details.
I think it's not properly appreciated that Apple fully endorses all of this. For two reasons: (1) the provision of the output of billions of dollars of developer time to their users for no up front cost (made back via ads) is super valuable to their platform; and (2) they uniquely could stop this (at the price of devastating their app store), but choose not to.
In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
>I think it's not properly appreciated that Apple fully endorses all of this. [...] they uniquely could stop this (at the price of devastating their app store), but choose not to.
A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps. Just look at f-droid. Most (all?) of the apps there might be privacy respecting, but good luck getting any of the popular apps (eg. facebook, tiktok, google maps) on there.
>In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
What would make you think Apple's pro-privacy changes aren't "about stealing $10B a year or so from Facebook"? At least some people are willing to pay for more privacy, and pro-privacy changes hurt advertisers, so basically any pro-privacy change can be construed as "less about meaningful privacy and far more about stealing".
F-Droid will never have popular apps because it requires them to be open source. In fact F-Droid does the build for you, generating reproducible builds and avoiding the risk of adding trackers to the binary that aren't actually in the source code. With F-Droid the code you see is what you get.
> A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps.
40 years ago apps were sold on floppy disks. 30 years ago they were sold on CD-ROMs. 20 years ago, DVDs.
Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing. Apps being blocked or banned from stores for choosing to not respect your privacy is a good thing.
>Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing.
I'm not sure what you're trying to say. I specifically acknowledged the existence of f-droid as a "privacy respecting app store" in the quoted comment.
>Apps choosing to not respect your privacy, and being blocked or banned from stores, is a good thing.
"a good thing" doesn't mean much when most people haven't even heard of your app store, and are missing out on all the popular apps that people want. Idealism doesn't mean much when nobody is using it. Apple might not be the paragon of privacy, but they had a greater impact on user privacy than f-droid ever will. To reiterate OP's point: what's the point of having a perfectly private OS and app store, when there's no apps for it, and your normie friends/relatives are going to sell you out anyways by uploading their entire contact list and photos (both with you in it) to google and meta?
Until the app's devs get wise to this, and do not allow the app to function without the network access. It could be as simple as a full screen, non-closable screen that says the app requires network access with a button to the proper setting to correct the issue.
Such "go away" screens are in violation of Apple's AppStore rules. You cannot make a permission a condition of using the app, and stop the user from using it if they don't grant that permission. The app should gracefully do as much as it possibly can without the permission.
Try signing in to any Google app without allowing data sharing with Safari. It's not possible. They don't let you.
It's kind of weird that Apple introduced this big fat tracking consent popup, but they don't really do anything to actually prevent cross-app tracking...
This holds for every app and every permission? Because I'm quite sure I recently used an app that closed for not allowing a permission. Maybe I'm misremembering...
5.1.1 (iv) Access: Apps must respect the user’s permission settings and not attempt to manipulate, trick, or force people to consent to unnecessary data access. For example, apps that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos. Where possible, provide alternative solutions for users who don’t grant consent. For example, if a user declines to share Location, offer the ability to manually enter an address.
This wording is actually a lot weaker than I remember it back when I wrote iOS apps. The developer also was not allowed to exit the app or close it against the user’s intent, however I can’t find that rule anymore.
I agree with these guidelines (although they could be improved), although I think that some things could be done by the implementation in the system, too.
> For example, if a user declines to share Location, offer the ability to manually enter an address.
This is a reasonable ability, but I think that the operating system should handle it anyways. When it asks for permission for your location, in addition to "allow" and "deny", you can select "manually enter location" and "custom" (the "custom" option would allow the user to specify their own program for handling access to that specific permission (or to simulate error conditions such as no signal); possibly the setting menu can have an option for "show advanced options" before "custom" will be displayed, if you think it would otherwise make it too complicated).
> that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos
This is reasonable, that apps should not be allowed to require microphone access for such a thing.
However, sometimes it makes sense to show a warning but then allow the action anyway even if the permission is not granted; e.g. a video recording program might display a message like "Warning: microphone permission is not allowed for this app; if you proceed without enabling the microphone permission, the audio will not be recorded." Something similar would also apply if you denied camera permission but allowed microphone permission; in that case, only audio would be recorded. It might refuse to work if both permissions are denied, though.
Yeah, "unnecessary" is the word that may as well render the whole section moot unless it's actually properly enforced. If I can remember I'll test it today and see how it goes.
Yeah like the ChatGPT app that doesn't work without a Google account. I have Google play on my phone, just no account logged in. I do have Google play services like firebase push which many apps legitimately need. But ChatGPT just opens the login screen in the play store and exits itself.
I'm always wondering why these idiots force the creation of an account with their direct competitor. It's the only app I have that does this. But anyway I don't use their app for that reason, only use them a bit through API.
Grapheneos lets you pick this for apps before they even launch. You can revoke their network access, as well as define storage scopes for apps at a folder level, so if an app needs access to photos, you can define a folder, and that is the only folder it can scan for photos.
I used that when submitting parental leave at work. I didn't want to provide full access to all my photos and files for work, so all they got was a folder with a pic of a birth certificate.
iOS and Mac also let you do this, for photos, contacts and files.
Apple is also pushing developers toward using native picker components. That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
> That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
This is an interesting contrast with the earlier philosophy of phone OSes that the file system is confusing to users and they should never be allowed to see it.
From a user perspective, photos aren't files. Music isn't files. Contacts aren't files. Apps aren't files. App data isn't files.
The only things that "walk like a file and quack like a file" are documents, downloads, contents of external storage, network drives and cloud drives, and some Airdrop transfers.
Yes, it's technically possible to use the files app to store photos, music etc, but if you do that, "you're holding it wrong."
>It's not. Apple still owns your stuff. There is no difference between Apple and other 3p retailers.
That could be taken to mean anywhere between "Apple controls the software on your iPhone, therefore they control your contacts" and "Apple gives out your data like the data brokers mentioned in the OP". The former wouldn't be surprising at all, and most people would be fine with it; the latter would be scandalous if proven. What specifically are you arguing for?
The "vulnerability" part doesn't seem to be substantiated. From wikipedia:
>The images were initially believed to have been obtained via a breach of Apple's cloud services suite iCloud,[1][2] or a security issue in the iCloud API which allowed them to make unlimited attempts at guessing victims' passwords.[3][4] Apple claimed in a press release that access was gained via spear phishing attacks.[5][6]
Regardless of their security practices, it's a stretch to equate getting hacked with knowingly making data available. Moreover, you can opt out of iCloud backup, unlike with whatever is happening with the apps mentioned in the OP.
> (this is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans)
When I was at a medium-sized consumer-facing company whose name you'd recognize if you're in the tech space (intentionally vague) we had some customers try this. They'd find product managers or directors on LinkedIn, then start trying to contact them with phone numbers found on the internet, personal email addresses, or even doing things like finding photos their family members posted and complaining in the comments.
We had to start warning them not to do it again, then following up with more drastic actions on the second violation. I remember several cases where we had to get corporate counsel involved right away and there was talk of getting law enforcement involved because some people thought implied threats would get them what they wanted.
So I can see why companies are quick to lock out customers who try these games.
I wonder if it ever prompted a dive into exactly what happened to leave these customers thinking this was the most likely avenue for success? Hopefully in at least some cases their calls with CSRs were reviewed, and in the most optimistic of cases additional training or policies were put into place to avoid the hopelessness that provokes such drastic actions.
That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
I'm going to guess that the answer would be "nope, didn't care." That Cirrus isn't going to pay for itself, friend...and you can't retire at 40 without breaking a few eggs.
I remember when Google was locking accounts because people had the audacity to issue a chargeback after spending hours trying to resolve Google not delivering a working, undamaged phone they'd paid well over half a grand for. Nobody at Google cared, but when the money (that Google never fucking deserved in the first place) was forcibly and legally taken back, the corporation acted with narcissistic rage...
> So I can see why companies are quick to lock out customers who try these games.
Most of the companies who customers try these "games" against are places like Google and Meta that literally do not provide a way for the average customer to reach a human. None.
Those have got it coming. The megacorps' stance on this is despicable, and far worse than customers directly reaching execs who could instantly change this but don't, because it would cut into their $72 billion per year net profit.
This is a case where laws simply did not catch up to the digital era. In the brick and mortar era it was by definition possible to reach humans.
I get that your company was smaller and probably did allow for a way to reach a human but that's not generalizable.
Long ago, when Google tried to launch its very first phone somewhere in Europe, I can distinctly remember that it was initially not allowed to because of some regulation that mandated that a company selling telephones have customer service.
Can't remember if they eventually found a loophole or if the regulations were changed.
My only connection to Amazon support has been for AWS.
Perhaps though this should be an example of good customer service where talking to a human is easy, and not lumped in with the likes of Google where it's impossible.
Perhaps your experience with the online shop is different, but frankly they're in my "good" column, not my "bad" column.
Two companies that are so gigantic they combine to a great percentage of number of "company interactions" the average Westerner has on a daily basis.
Anyway, I don't think it contradicts my point? Your company exists, mom-and-pops exist, and there's a whole spectrum between them, so it's not generalizable.
What's funny is that the exec I got on the phone was super supportive and helpful and was genuinely amused to hear from me and hear what was happening. He put me in touch with their "Executive Support Team" and it was after this that I guess someone realized they didn't like the route I had taken.
I feel somewhat vindicated after this announcement (though it does nothing to bring my account back):
> Accessing any kind of customer service for Cash App was a challenge, too, according to the CFPB. Block included a customer service number on Cash App cards and in the app's Terms of Service, but calling it would ultimately lead users to "a pre-recorded message directing consumers to contact customer support through the app."
>> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies and call them up on their cellphone.
I find it funny how easy it is to find scammy websites which promise to remove your data (right...), but how hard it is to find the actual marketplaces where people trade this data. It also makes you think about what other systems have similar asymmetric interfaces for the public and the ones in the know (yes, I know there are plenty).
As a result of sales drones getting hold of my number, I have to put my phone on silent and never pick up unless I recognize the number. Very unfortunate. What if there is an emergency with my kids?
If you're using iOS you can set certain contacts to bypass silent mode so that you still hear their notifications/calls. I know it doesn't help with unknown numbers, but just saying in case you're not aware. I'd be surprised if you can't do the same on Android.
Yes, thanks, I've configured that for kids and other loved ones. But I can't pick up anything else, even sales people from India manage to use a number that appears local (in The Netherlands for me), so I might miss a call from the kid's school.
I've just added the numbers of my kids school onto the list and it's been fine for me. I've never had them contact me from anything other than the schools number, but I'm in the UK and I would be very surprised if a teacher tried to call me from their mobile phone or something.
Oh wow, I knew this was a rampant problem in the US, but I didn't realise we had that at that scale here in the Netherlands as well. Hoping I can dodge the bullet a little longer...
And the combination of contacts is also unique enough to identify you, even though it changes over time. Some fuzzy matching, plus another few bits of fingerprint like device type and country, and voilà, no advertiser ID required (see the sketch below).
PS: smart idea to use it for that purpose. If I failed to get proper service I'd just review-bomb the company everywhere and soon enough I'd get a call fixing my problem and asking me to remove them :)
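A rough sketch of that fingerprinting point, with entirely made-up broker-side matching logic (real systems will differ): even without an advertiser ID, the overlap between two uploaded contact lists plus device/country is enough to link them to one person.

    import hashlib

    def normalize(number: str) -> str:
        # Crude normalization: keep the last 10 digits as a matching key.
        return "".join(ch for ch in number if ch.isdigit())[-10:]

    def fingerprint(contacts: list, device: str, country: str) -> str:
        # Stable identifier derived from the contact set plus a few device bits.
        basis = sorted(normalize(c) for c in contacts) + [device, country]
        return hashlib.sha256("|".join(basis).encode()).hexdigest()

    def overlap(a: list, b: list) -> float:
        sa, sb = {normalize(x) for x in a}, {normalize(x) for x in b}
        return len(sa & sb) / len(sa | sb)

    # Two uploads a year apart: a few contacts changed, but the overlap plus
    # matching device type and country still links them to the same person.
    old = ["+1 555 0100", "+1 555 0101", "+1 555 0102", "+1 555 0103"]
    new = ["+1 555 0100", "+1 555 0101", "+1 555 0102", "+1 555 0199"]
    print(overlap(old, new))                  # 0.6 on this toy list; real address books match far more tightly
    print(fingerprint(new, "Pixel 8", "NL"))  # hash a broker could use as a pseudo-ID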
> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies
The article author claims that you can't get this stuff for under $10k. Where do you find it for pennies?
As a test I downloaded it and got my wife’s full email and cell phone number easily from their free trial. And the full price would be on the order of pennies per contact.
Assuming these marketplaces operate within the bounds of the law, would it break HN’s ToS to post them? I’d be interested in pursuing the same strategy.
I have done this as well. I once got a travel insurance claim rejected by some outsourced handler and found out who the CEO of the insurance startup was. I emailed him and magically it got resolved.
Yeah, I wonder if it might help to create a little newsletter for politicians and regulators. Send emails telling them exactly where they are, what apps they use, and so on. And send them the same information about their children.
Eh, California protects politicians from having their real estate holdings posted online by government, and afaik, most county recorders have decided it's easier to not let any of it be online than to figure out who is a politician and only restrict their information.
Of course, much of it is public information so businesses can go in person, get all the info and then list it.
To use a line sometimes attributed to Beria, “give me the man and I will give you the case against him”. By which I mean that I’m sure they will find some means of making you sorry.
I mostly connect through Signal. I do technically have a phone number that my close friends and family have, but its a random VoIP number that I usually change every year or so. Surprisingly no one has really cared, I send out a text that I got a new number and that's that.
How? Most of the services I use, from Walgreens to banks to retirement accounts, require a phone number either for 2FA or just to verify that you’re you when signing up. After changing my phone number this year and having to go through the rigamarole for each service, I decided never again.
I've had limited luck feigning ignorance with a bank recently. "I don't know why I'm not getting a code" "No, I don't have another phone number" "I still can't log in to the web portal". They dropped the phone number requirement in favor to sending the OTP to email in the end, but it took way more effort than is reasonable. I tend to include a request to the CS person to pass along a request for TOTP/authenticator apps but given the request for a phone number is likely intentional I doubt the feedback is getting too far. In my naive mind, if enough people do the same, maybe they'll get the message.
Yeah, companies are not dumb, and they know when you have a VoIP number vs a full account with an "accepted" company.
I can kind of see the case for not allowing 2FA to a number that could be easier to lose, but that's a weak argument. Of course they don't want someone from .ru to get a US number with all of the baggage that would entail.
There are flaws to their methodology. For half the companies, to change your number from A to B, you first must verify a NONCE with A, then verify a NONCE with B. This just means you have to possess two phone numbers for a period of time — weeks, or in reality, months — while you change the long list of services over to the new phone number.
There is a simpler/better way and that is to verify you have your email address before allowing you to do a NONCE with B.
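For illustration, a sketch of the two flows being compared; the OTP delivery here is a hypothetical stand-in for however a given service actually sends codes:

    import secrets

    def send_otp(destination: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        print(f"(pretend we texted/emailed {code} to {destination})")
        return code

    def verify(destination: str) -> bool:
        """Send a one-time code and check the user can echo it back."""
        code = send_otp(destination)
        return input(f"Enter the code sent to {destination}: ") == code

    def change_number_strict(old_number: str, new_number: str) -> bool:
        # Flow 1: prove control of the OLD number, then the new one.
        # Painful, since you must keep both numbers alive for weeks or months.
        return verify(old_number) and verify(new_number)

    def change_number_via_email(account_email: str, new_number: str) -> bool:
        # Flow 2 (the simpler one suggested above): prove control of the account
        # email first, then only the NEW number needs to be verified.
        return verify(account_email) and verify(new_number)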
If you're US based, there's tons of data broker sites, and you can glue together the information for free as various brokers leak various bits (E.g. Some leak the address, others leak emails, others leak phone numbers). And that's by design for SEO reasons, they want you to be able to google someone with the information you have, so they can sell you the information you don't have.
Some straight up list it all, and instead of selling people's information to other people, they sell removals to the information's owner. Presumably this is a loophole to whatever legislation made most sites have a "Do Not Sell My Info" opt-out.
What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
I can relatively easily skip-trace people, but where are you buying specific people's information? Do you mean you're skip-tracing, or buying directly from data brokers?
The thing is...contact details aren't really private information, basically by definition.
The distinction is that contact detail privacy is based on the desire not to be interrupted by people you didn't agree to be interrupted by - i.e. it's a spam problem - and realistically, solving this requires a total revamp of our communications systems (long overdue).
The basic level of this would be forcing businesses to positively identify themselves to contact people - i.e. we need TLS certificates on voice calls, tied to government issued business identifiers. That would have the highest immediate impact, because we could retrain people not to talk to anyone claiming to be a business if their phone doesn't show a certificate - we already teach this for email, so the skill is becoming more widespread.
A more advanced version of this might be to get rid of the notion of fixed phone numbers entirely: i.e. sharing contacts is now just a cryptographic key exchange where I sign their public certificate which the cellphone infrastructure validates to agree to route a call to my device from their device (with some provisioning for chain of trust so a corporate entity can sign legally recognized bodies, but not say, transfer details around).
This would solve a pile of problems, including just business decommissioning - i.e. once a company shuts down, even if you scraped their database you wouldn't be able to use any of the contact information unless you had the hardware call origination gear + the telecom company still recognized the key.
Add an escrow system on top of this so "phone numbers" can still work - i.e. you can get a random number to give to people that will do a "trust on first use" thing, or "trust till revoked" thing (i.e. no one needs to give a fake number anymore, convention would be they're all fake numbers, but blocking the number would also not actually block anyone you still want to talk to).
EDIT: I've sort of inverted the technical vs practical details here I realize - i.e. if I were implementing this, the public marketing campaign would be "you can have as many phone numbers as you want" but your friends don't have to update if you change it. The UI ideally would be "block this contact and revoke this number?" on a phone which would be nice and unambiguous - possibly with a "send a new number to your friends?" option (in fact this could be 150 new numbers, one per friend since under the hood it would all be public key cryptography). I think people would understand this.
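If it helps, here's a very rough sketch of the "sharing a contact is a key exchange" idea, using Ed25519 from the `cryptography` package; the routing check is purely hypothetical and glosses over everything a real phone network would need:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives import serialization
    from cryptography.exceptions import InvalidSignature

    def raw(public_key) -> bytes:
        return public_key.public_bytes(serialization.Encoding.Raw,
                                       serialization.PublicFormat.Raw)

    # I (the callee) hold a keypair; the carrier only ever sees the public half.
    my_key = ed25519.Ed25519PrivateKey.generate()

    def share_contact(caller_public_bytes: bytes) -> bytes:
        """'Saving a contact' = signing the caller's key: a grant to route their calls to me."""
        return my_key.sign(caller_public_bytes)

    def carrier_should_route(callee_public, caller_public_bytes: bytes, grant: bytes) -> bool:
        """The network routes a call only if the callee has signed this caller's key."""
        try:
            callee_public.verify(grant, caller_public_bytes)
            return True
        except InvalidSignature:
            return False   # no grant (or a revoked one): nothing to route, no number to leak

    # A friend's device presents its public key once; I sign it; the carrier can
    # verify that grant on every later call without any fixed phone number existing.
    friend = ed25519.Ed25519PrivateKey.generate()
    grant = share_contact(raw(friend.public_key()))
    print(carrier_should_route(my_key.public_key(), raw(friend.public_key()), grant))  # True

Revocation then becomes "the carrier stops honoring grants I've withdrawn," which is the "block this contact and revoke this number" UI described above.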
What definition of contact details makes them not private?
Contact details (your phone number, email or address) are definitively private information, you should be the one that decides who gets them and who doesn't.
But it's not meant to be shared widely, for most people it's meant to be shared with consideration and/or permission.
Also, it's not just about "a desire not be interrupted by people you didn't agree to be interrupted by", it's about not having the data in the first place, for any reason, including tracking of any sorts.
Pre-internet/cell phone nearly everyone had their name/phone number/address in phone books. Libraries had tons of phone books. And you could pay for the operator to find/connect you to people as well.
Contact info being private is a relatively recent concept.
I'm really happy to see this level of detail and research. So many privacy-related articles either wholly lack technical skill, or hysterically cannot differentiate between different levels of privacy concerns and risks.
People commonly point to Mozilla's research regarding vehicles' privacy policies. (https://foundation.mozilla.org/en/blog/privacy-nightmare-on-...) But that research only states what the car companies' lawyers felt they must include in their privacy policies. These policies imply (and I'm sure, correctly imply) that your conversations will be recorded when you're in the vehicle. But they never drill down into the real technical details. For instance: are car companies recording you the whole time and streaming ALL of your audio from ALL of your driving? Are they just recording random samples? Are they ONLY recording you when you're issuing voice commands, and the lawyers are simply hedging their bets regarding what sort of data _might_ come through accidentally during those instances? Once they record you, where is the data stored, and for how long? Is it sent to 3rd parties, etc? Which of these systems can be disabled, and via what means? Does disabling these systems disable any other functionality of the vehicle, or void its warranty? Lastly, does your insurance shoot up if you have a car without one of these systems? etc ...
The list of questions could go on almost indefinitely, and presumably would vary strongly across manufacturers. So much of the privacy news out there is nothing but scary and often poorly substantiated worst-case scenarios. Without the details and means to improve privacy, all these stories can do is spread cynicism. I'm really glad to see this level of discourse from the author.
Those aren't questions that have fixed answers. The data available is pretty far beyond what I'm personally comfortable with though.
One OEM I'm familiar with had such a policy. My org determined that we needed a statistical reference to compare against within a certain area. Some calls were made to the right people and shortly after we had a (mildly) anonymized map of high precision tracks for every vehicle of that brand within the area over some period.
I'll answer the, "Does disabling it void your warranty?" question. The answer is almost always "no". Unless the modification you make to something actually directly or indirectly caused damage to it, companies in the US cannot "void the warranty".
> Lastly, does your insurance shoot up if you have a car without one of these systems?
This question I can answer with a reasonable degree of certainty; no, it does not.
Insurance companies increase rates for automobile coverage for many reasons, real or illusory. But "does your insurance shoot up" strictly for not having a recording device in a vehicle is not one of them.
Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
> Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
> But that is a different question.
In what way? A discount for allowing surveillance is identical to an extra charge for disallowing it. They're identical, unless the "base" rate is set externally somehow.
$5 for lemonade, $3 off if you skip the lemon == $2 for sugar water, $3 extra to add lemon.
> A discount for allowing surveillance is identical to an extra charge for disallowing it.
I don't think this is necessarily true. You're right that there's an unknown base rate, but that means you can't say what you're saying as well. And if you have other companies that offer non-driving-pattern policies as well, and they're a similar price, you can see it's a discount not an added cost.
In fact, regardless, other companies are your best bet in combatting rising prices for any reason.
In theory you’re already paying the merchant fee in the “price”. So merchant found a way to improve margins and credit card companies found a new revenue source
Or, phrased in a less inflammatory manner: "Corporations can enter into contracts and engage in legal action just like people can". Even the much maligned Citizens United v. FEC basically boils down to "groups of people (corporations or labor unions) don't lose first amendment protections just because they decided to group up".
Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf, but the C-suite and board are able to speak on my behalf. So, the company's leadership has a right to free speech, I don't.
You still have that right; you simply entered into a voluntary agreement with your employer not to exercise it in exchange for money. Happens all the time.
Let's bring back indentured servitude, you have a right to not be a slave but you should still be able to enter into a voluntary agreement not to exercise that right.
>Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf,
Yeah, that's how organizations typically work? You might have "freedom of movement", but that doesn't mean you can work in your CEO's office. Organizations also limit who has access to its bank accounts, but that doesn't mean it's suddenly illegitimate for companies to engage in transactions.
It makes me wonder: if everyone 'owned' their own data, could it be used as a form of UBI? Everyone has data from using services, everyone owns it, everyone can sell it to make a living just doing whatever they do every day.
This is only just a shower thought I had the other day though, there are probably many pitfalls when it comes to such an idea.
Unlikely. I'd think the most valuable data is generally the type that can be used to extract money from you. Targeted ads and such. So, your data's value would increase in proportion with your spending power.
Connecting information to that kind of personal gains sounds dangerous. There is probably non-negligible abuse potential, like college kids legally printing money at weird scale.
IDK, I think almost all interesting data has no obvious single owner, because it gets created as a side effect of an interaction between two or more parties.
Take the transaction information from the example above. The record of you buying products X, Y, Z for total t=x+y+z at time T, with card C - both you and the store could argue they're entitled to it. It's about you and the money you spent and the products you received, but it's also about them and the money they received and the products that were taken off their inventory. Then the card issuer will interject saying, "hey, the customer uses a card we provide as a service, so we're at least entitled to know which card was used to pay, to whom, when, and what the total amount was!". Then both your bank and the store's bank will chime in, and behind them, also the POS terminal provider.
Truth is, they all have a point. We like to think that paying for groceries with our watch is like a medieval peasant paying for fruit with metal coins at a town market. It's not. Electronic payments always involve multiple steps handled automatically, in the background, by half a dozen service providers linked by their own contracts and with their own legal reporting requirements, and each of them really do need to know at least some details about the payment they're participating in.
A simpler example: this comment. It's obviously mine. It's also a response to you, and it only makes sense in the context of the whole subthread. Should anyone reply to it, they'll gain a stake in it, too - and then, arguably, everyone following this discussion has a right to read it, now and in the future. After I hit the "Reply" button, I can't in good conscience claim this comment is mine and only mine. This is why I'm personally against the practice of unilaterally mass-deleting comments on open discussion boards, like plenty of people do on Reddit, forever ruining useful discussions for the public.
(It's also why I like HN's approach to GDPR, which is, you can get your account disassociated from your comments, and you can request potentially identifying content be removed, but the site won't just mass-delete your comments automatically.)
Honestly the path to "UBI" is probably just socialized/subsidized basic needs.
Build masses of government housing, make a healthcare public option with sliding-scale costs, and you're 90% of the way there - food and decent low-end broadband are frankly already cheap enough for the government to cover with maybe some "Don't gouge Uncle Sam or else" clauses and that's about everything.
I don't support UBI, but that's a fascinating idea. Unfortunately the data is worth micro-pennies at the individual level, so it's only worth something in aggregate, like a class action settlement where you end up with a cheque for $0.34 in damages that isn't even worth your time. It'd only be good as the backdrop for a science fiction novel, or as an experiment in a YouTube video by a well-known creator to see how little money it would make. I would read the hell out of that book and watch that video tho!
This is fairly easily answered through legislation like the GDPR which classes this data as personal data if it’s associated with an identified or identifiable person.
A legislative body writing something down doesn’t mean society has agreed to it.
If someone journals and writes down everyone they met with locations and dates, they will laugh you out of the room if you tell them they are violating GDPR.
This also leads to stupid shit like people not being sure if they can point a camera at their driveway to catch vehicle break-ins.
Finally, classifying something as “personal data” because it’s about me still doesn’t make it “my data”.
Health data in the US is strictly regulated, very personal, but is definitely not mine. I cannot remove things from it or prevent it from being shared between healthcare institutions.
Is there any documentation on this to read further? I.e. what the different levels contain and how much on average is the cost reduction for the merchant.
The cost reduction is very small, it’s applied to interchange fees. I’ve been directly responsible for implementing this functionality on payment gateways for multiple processors because it helps reduce fraud holds as well.
Separate question, what are your ethics around the surveillance of Americans' economic activities by private actors? What "rights" are relevant in this space and which do you subscribe to?
I'm not going to debate you about anything, I just don't get the chance to ask insiders any of these questions.
My ethics are “this is unequivocally wrong without consent”.
Thankfully my work was on payment products that serviced businesses and government entities, so I did not really have to deal with that moral quandary.
However it gets muddier in other spaces as well. There are types of cards, like HSA/FSA that require something similar to level 3 data called IIAS that is used to determine what parts of your purchase are eligible. In the parts of the systems I have worked with, this is covered by HIPAA, but I have no idea if there are “clever” methods to sneak that data out of the chain elsewhere.
That just sounds like a standard cross-merchant loyalty program? I don't think there are many examples in the US, but once you realize it's a loyalty program you really shouldn't be surprised that they're tracking your purchase history. That's basically the entire premise.
In Germany, the major cross-merchant loyalty program Payback gives you one or two rounds of extra consent choices about the tracking, and the type we see here is absolutely not mandatory for participating. It does of course let them give you more personalized and useful coupons, but one can participate while declining that permission.
So-called loyalty programs should be illegal on multiple fronts:
- Privacy: There's obvious tracking of purchasing trends. This derails into selling user data to everyone that makes people increasingly easy to track.
- Customer-dependent pricing / price discrimination: This is awful for the economy. In econ 101 you learn that businesses want to charge each customer as much as they are willing to pay, but this differentiated pricing is just getting their hands into everyone's pockets. The free market principles rely on perfect knowledge, and every step taken to make pricing more opaque is an attack on the market's ability to self-regulate.
Price discrimination is not a priori bad. A fixed price with enough margin to support the business may be too high for price sensitive consumers. If you can charge more to less price sensitive consumers, you can, at the margin, make a little bit on these price sensitive consumers, and overall everyone is better off - more consumers are satisfied and their marginal willingness to consume a unit of the thing being sold is more equalized.
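A toy numeric version of that argument (the numbers are invented purely for illustration):

    cost = 4
    willingness_to_pay = {"less price-sensitive buyer": 10, "price-sensitive buyer": 5}

    def single_price_outcome(price):
        buyers_served = [w for w in willingness_to_pay.values() if w >= price]
        return len(buyers_served) * (price - cost), len(buyers_served)

    print(single_price_outcome(10))  # (6, 1): decent margin, but the price-sensitive buyer goes unserved
    print(single_price_outcome(5))   # (2, 2): both served, margins nearly gone
    # Perfect discrimination: charge each buyer their willingness to pay.
    print(sum(w - cost for w in willingness_to_pay.values()))  # 7: both served AND higher profit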
Yes, this is the reason why it's sort of illegal, but done anyways.
Honestly, beyond paying a lower fare on the bus as a kid, I'm pretty sure I'm being scammed every time I experience price discrimination.
I feel it's easier to make it illegal and give away reasonable credits to all consumers. I wouldn't discriminate in credits either, I'd rather have public transportation being free for all than claim to save money that society needs to spend anyway.
It doesn't help that lying about the price at any point just makes accounting harder, and creates space for wrong, uncompetitive pricing, or awful deals that would hurt business and society in the longer term anyway.
pricing is all made up to begin with though. you can't take the cost to make an item, add a reasonable amount of profit, and call that the "real" price. that's just not the reality of running a successful business. human psychology is far too complicated.
at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business. hopefully it's a big enough number that you get rich. but sometimes it's a fire sale and you just end up owing less money to your vendors.
> at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business.
The only requirement is to make up a single price for all your customers that are getting the same thing. It'll be made up and account for business factors like risks, profits, etc.
I don't think everyone is better off; at best, the "less price sensitive" are unaffected. But then you have to have some way of stopping arbitrage via the customers paying the lower price, through some sort of identity checks or restrictions. I think that's an unavoidable negative outcome, and it's not clear that it would always be outweighed by allowing more people to consume the product.
There are ways to adequately approximate that kind of price discrimination without detailed tracking though, like giving discounts to students, seniors, and people receiving various kinds of welfare benefit upon showing proof of status.
Yeah it isn’t as accurate as the privacy-invasive kind of tracking, since students and seniors can be wealthy and eligibility for welfare benefits doesn’t always consider assets or gifts from well-off family. But it’s accurate enough to give the economy most of the same benefit without the privacy downside.
I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
>I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
> Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
My understanding is that GDPR requires them to offer the option to decline the personalized targeting without being blocked from participation overall, and this is probably the same anywhere in the EU. But I don’t have personal experience with this in other EU countries and could be misunderstanding.
>As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
The "cake" in this case refers to the offers you had before GDPR came into effect and/or regulators started enforcing it. They might give opt-out people some token offers to appease regulators, but I doubt it'll be anywhere close to the offers they had before.
> They might give opt-out people some token offers to appease regulators
It’s not an opt-out situation. As per GDPR requirements, these programs have a specific opt-in prompt for personalized targeting, separate from the one which is for generally collecting and redeeming points as a member, and it’s not pre-chosen by default.
I think one can assume that many people will decline to opt in, especially in a culturally privacy-focused country like modern Germany and since not opting in is behaviorally far more common than explicitly opting out, but also that many others will knowingly consent in exchange for the benefits. So I think they would generally want to give decent offers to both categories of people, since the non-consent group is large enough to matter. Of course the personalized ones would be better, otherwise nobody would want to give that consent.
Myself, I’ve consented to some but not all of the personalized targeting and information sharing from the loyalty programs I participate in here, after reading the descriptions of the requested consents in detail and making a conscious choice. In at least one case I converted a no to a yes after thinking about it longer. It’s good to have that transparency and control, and not to have the legalese surreptitiously remove your right to sue the store should that become necessary as is common in the US (forced arbitration is generally illegal here in B2C agreements).
As for the rest of your most recent comment, I wouldn’t know; I didn’t ever live in Europe before the GDPR.
my grandmother collected green stamps from the grocery store, which she saved for food discounts.. I don't think that there was any customer ID involved at all..
honestly, describing pervasive tracking of purchases associated with govt ID as "normal" is... it's a sickness, and parts of it are illegal now. It is not required or "normal" at all, from this view
It's the normal term, in that it has been normalized as such. But it is otherwise not accurate except in the barest, most monetaristically self-fulfilling-prophecy way.
I believe that's opt-in. At least it seemed to be when my landlord switched to Bilt.
There's a section of your Bilt profile that shows your other credit cards and whether you want them linked. It's pretty freaky to see them listed in the first place.
I definitely keep them off.
Bilt is ultimately a big points/reward program though, so you might get points for having them connected.
I still haven't figured out exactly what Bilt's business plan is, but the main part seems to be trying to get as much financial data on people as possible, and partnering with landlords to do so, and since it's how to pay your rent you can't unenroll completely. (Unless you maybe mail your landlord a paper check?)
It was initially opt in for me, then they made it mandatory.
(Sure, I could pay by check, but consumer banking technology in the US already feels like it is lagging a decade behind other countries without voluntarily going further back. Paying by check every month would be quite inconvenient.)
I'd already decided to avoid bilt as much as possible, but reading this thread prompted me to try going a little further.
> Request to Know... The specific pieces of Personal Information we collected about you.
> You have the right to opt-out from having your Personal Information and Sensitive Personal Information sold to third parties. You also have the right to opt-out from having your Personal Information and Sensitive Personal Information shared with third parties for purposes of cross-contextual advertising
You might want to look into how sophisticated and pervasive the facial recognition technology used by major retailers is. Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
>Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
They can already track you through your phone and/or credit cards. Why bother setting up a massive facial recognition system for people paying with cash when they only account for 10% (or whatever) of overall shoppers, and have less disposable income than average?
Word of mouth: retailers in China have been using face recognition technologies to identify key customers so that they can be greeted by name and handed their favorite drink upon entering the premises.
The trouble with "word of mouth" is that you can't tell whether something is actually real, or vaporware that some account executive dreamed up to close a deal.
I agree, which is why I qualified it. I was working at a retailer, building its cloud systems at the time. It was told to me by a colleague who claimed to have been told that by a peer from China at a conference.
Facial recognition on a small corpus of known faces (what everyone experiences on Facebook, their phones, etc) is an easy problem.
Walmart picking up a face walking into a store and matching it against 30 million possibilities is going to return so many false positive matches it’s going to be completely useless.
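Back-of-the-envelope numbers for that claim (the false-match rate below is an assumed, fairly optimistic figure, not a measured one):

    gallery_size = 30_000_000        # enrolled faces to compare against
    false_match_rate = 1e-5          # assumed per-comparison false positive rate

    expected_false_matches = gallery_size * false_match_rate
    print(expected_false_matches)    # ~300 spurious "hits" for every single face that walks in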
I'm assuming you're using your Bilt card when this happens.
Your Bilt agreement stipulates how itemized transaction data gets shared (level 3 in payment terms, with level 2 being "enriched" with subtotals/tax and merchant information, which is what you typically see with your normal bank).
Card networks (Mastercard, VISA) have different fee structures that incentivize more detailed information like level 3 for lower processing fees for merchants - here's more details on levels https://na-gateway.mastercard.com/api/documentation/integrat...
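To illustrate roughly what the "levels" add (the field names here are invented for the sketch, not Mastercard's or VISA's actual schema):

    level_1 = {"amount": 42.16, "merchant": "WALGREENS #1234"}          # basic authorization data

    level_2 = {**level_1,                                               # adds summary "enrichment"
               "sales_tax": 3.17, "customer_code": "BILT-0001", "merchant_zip": "60601"}

    level_3 = {**level_2,                                               # adds full line-item detail
               "line_items": [
                   {"sku": "012345", "description": "Allergy relief 30ct", "qty": 1, "unit_price": 18.99},
                   {"sku": "067890", "description": "Greeting card",       "qty": 2, "unit_price": 10.00},
               ]}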
What's most interesting to me about that is that they are willing to disclose that data to your email provider. Amazon, for example, is pretty cagey about what you've bought when sending emails, probably because they don't want Google to be able to use that information to target ads to you. (Not because they are Good and care about your privacy, but because they think they're going to beat Google at advertising. How's that going?)
So yeah, I don't get why they would do this. It gives their advertising competitors valuable data for free, and it pisses off customers by telling them that they're being tracked when they shop at Walgreens. Strange stuff.
Oh, here I thought it was because every time I want to remember info about an order, it forces me back to their platform, rather than simply searching my email like I do for every other item I've ever purchased.
What's most strange to me is why this Bilt company would pay for that data feed and somehow think it provides some value to you. It's obviously just a creepy way of saying "we know too much about you".
Loyalty cards are one avenue for data brokers to get your purchase history. Credit cards can also sell your purchase data. Currently the only safe-ish way to be anonymous is with cash. That may disappear with pervasive face recognition and cell phone tracking.
Unfortunately the GDPR is largely toothless if a company without an EU presence chooses to ignore it.
I live in Ireland and my data is in the databases of several US data brokers. Those companies can't be forced to comply with the GDPR because they simply do not have an EU presence. You don't have to search far to find stories from people who made complaints to their local Data Protection office about such issues, only to be told there's nothing that can be done.
HN rants about it because it’s not a good solution. It identified a problem but caused an idiotic fallout (cookie banners) and failed to actually put in a framework to enforce that companies aren’t just lying.
> failed to actually put in a framework to enforce that companies aren’t just lying.
That's not true. I work in an European company and we were contacted by the agency to give a complete list of partners that we use, reasons for why it is justified, which routines we have for deleting old data etc.
I guess in theory we could have lied and made up data, but only an idiot would risk lying to the government. Everyone at my company took it seriously and tried to provide as accurate data as possible. There were also several follow-up questions that had to be answered.
The mindset of lying to the government to "protect" your employer seems so far fetched. Why should an employee lie to the government? If it turns out that the company was in violation of GDPR the worst case scenario for the company is a fine. If the government finds out you are lying, the employee faces jail time. The trade-off is simply not worth it.
Maybe it's easier to lie to the government in some countries, but not in my country. The government agencies actually checks and verifies your claims.
The lie doesn’t have to be intentional. All it takes is a really simple accidental debug logging flag to collect what amounts to a GDPR violation.
The point is that no effort was made to implement a technical solution to protect privacy. So it’s upsettingly trivial to violate the GDPR unknowingly and any company that is even a little unscrupulous (of which there are hundreds) can easily ignore the law.
> The point is that no effort was made to implement a technical solution to protect privacy.
And you want the government to do that?
Why haven't the companies that shout at every turn about how privacy-conscious they are done that themselves?
It's now been 8 years of GDPR. Why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser? Oh wait...
I’ve had to deal with Bilt [0]. In case you’re not aware, they have a “feature” called Instant Link that automatically pulls ALL of your personal and sensitive financial data from financial institutions, including your credit card accounts, balances, etc. They apparently do this via a partnership with a company called Method Financial [1].
It’s frankly the most intrusive thing I’ve ever encountered in any software I’ve ever used—I’m not sure how it’s even legal, but this is America where we have no real privacy rights.
Instead of giving you the option to opt in for them to get this level of access, they automatically enroll you into it when your account is created, pull your data, and then allow you to “opt out” afterward, which enables them to have access to your personal and sensitive financial data anyway. And since you literally must have an account with them if your building uses their services for rent payments, they’ve effectively rigged the system to force millions of folks to unknowingly give them access to their personal and sensitive financial data.
Anyway, in your Bilt privacy settings, there are some options you can disable (including Instant Link), and I recommend that you disable ALL of them, although given the dark practices of this company, I don’t even trust that those settings are actually honored.
Side note: Did you know about a company called Method Financial that somehow has real-time access to ALL of your personal and sensitive financial data? Did you know that this company you never heard of that has said access then sells that access to the highest bidder? Do you remember agreeing to any of that anywhere? Yeah, me neither (on all counts)…
Thanks for the heads up. Luckily I can go back to analog with certified funds to pay rent. I suspect, without evidence, this is due to the relatively strong tenant protections in Chicago.
This literally just happened to me last week. I emailed them to ask them how to stop this:
> I understand you want to opt out of all points and rewards and not be tracked.
>
> We're constantly working to make Bilt as rewarding as possible. Currently, we don't have an option to opt out of points or rewards. To prevent your transactions from being tracked, the most effective step is to unlink your card from your Bilt account.
>
> To unlink the card:
>
> Go to the Wallet tab > Scroll down to the Your Linked Cards section.
> Look for the card you would like to unlink and tap View all benefits.
> Click the ellipsis [:] on the top right, then tap Edit > Unlink.
Gah, I hate this service and will avoid renting in buildings that use it in the future.
Hopefully exclude? By whom? At some point, somebody has to decide it was sensitive, and by what standards? Does Bilt decide not to use it after they were already sold the data? Does the aggregator, after it has already been sold the data by the harvesting seller? Does the harvesting app reduce the appeal of its data by deliberately excluding it? Does the harvesting app care to spend the money on doing that?
That's what I do, but I assume some stores like Target also track you by Bluetooth, facial recognition, etc, and can correlate any past or future cash purchases if you use your credit card once for maybe a large innocuous purchase.
It would be amazing if you could build and send fake profiles of this information to create fake browser fingerprints and help track the trackers. Similarly, creating a lot of random noise here may help hide the true signal, or at least make their job a lot harder.
Unfortunately fingerprinting prevention/resistance tactics become a readily identifiable signal unto themselves. I.e., the 'random noise' becomes fingerprintable if not widely utilized.
Everyone would need to be generating the same 'random noise' for any such tactics to be truly effective.
A sufficient number of people would need to, not everyone. And if I were the only one then tracking companies wouldn't adjust for just me. Basically, if this were to catch on then ad trackers wouldn't adjust until there was enough traffic for it to work. Also, that doesn't negate the ability to use this to create fake credentials that aids in tracking ads back to their source.
Here's a real-life example:
You show up alone at the airport with a full-face mask and gray coveralls. You are perfectly hidden. But you are the only such hidden person, and there is still old cam footage of you in the airport parking lot, putting on the clothes. The surveillance team can let you act anonymous all you want. They still know who you are, because your disguise IS the unique fingerprint.
Now the scenario you're shooting for here is:
10 people are now walking around the airport in full-face masks and gray coveralls. You think, "well now they DO NOT know if it's ME, or some terrorist, or some random other guy from HN!"
But really, they still have this super-specific fingerprint (still fewer than 1 in a million people have this disguise), and all they need is ONE identifying characteristic (you're taller than the other masked people, maybe) to know who's who.
It's kind of how people used to make fun of the CIA types and "undercover" operatives.
Look for the guy wearing a conspicuously plain leather jacket and baseball cap. "Why hello there average looking stranger I've never met. Psss, 'tis a fair day, but it'll be lovelier this evening.'" "Oh ... it's Murphy the spy you want."
Also, I found out the CIA declassified a bunch of jokes several years back, while searching in order to respond. [1] Most are already dead links on CIA.gov, yet there are a few remaining. Another one is people commenting on the CIA. [2] "These types are swin- Ask in Langley if they work for the CIA. Every- Ask in Langley. They will tells one knows them." 'You, it's the big building behind.'
The garbage in the last sentence of this comment is due to the second link including incorrectly OCR'd text from an image of a newspaper using a two column layout. Both links are very amusing.
I think this is a slightly different case no? If the ad network is using a very high precision variable to soft-link anonymized accounts, then randomizing the values between apps should break that.
Your analogy applies more to things like trying to anonymize your traffic with Tor, where using such an anonymizer flags your IP as doing something weird vs other users. I’m not convinced simply fuzzing the values would be detectable, assuming you pick values that other real users could pick.
Swapping fingerprint details is different than your example since it happens immediately and out of view. You could change fingerprints very often/create a new set for every browser tab. Additionally, as I pointed out before, they won't adjust until there is enough usage and when there is enough usage then the random settings are hard to distinguish because it isn't 1 in 1m. I get that they will keep trying to track down things that make browsing specific, but that is what updates are for. We need to at least make it hard.
Unfortunately the fox is building the hen-house. They 'should' build products that improve my experience but they have very little incentive to do that when they get paid so much for the data they can extract. What would actually do it is regulations similar to financial regulations. OS/browser companies shouldn't be allowed to do business with data brokers. Then they would have one primary customer, the consumer, and competition would focus on the correct outcome. But 'regulation' is an evil word so we aren't likely to see anything like that actually happen.
Technically, information is the bits you DON'T know. Once you know the bits, it isn't "information" in the Shannon sense, in that it takes no energy to reset a message if you know all the bits, but takes N units of energy for N unknown bits of information. (See Feynman's Lectures on Computation.)
It's also useful for making ads more effective and for manipulation overall. As long as you can connect the data you track and buy, you can use Thompson sampling. In fact, why would we think knowing the name of a person is anything but bad business?
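For anyone unfamiliar, Thompson sampling here just means keeping a click-rate posterior per ad and showing whichever ad wins a random draw from those posteriors. A minimal, purely illustrative Python sketch (the ad names and hidden click rates below are made up; real systems would condition on all that purchased profile data):

    # Minimal Thompson sampling sketch for ad selection (illustrative only).
    # Each ad keeps a Beta(clicks+1, misses+1) posterior over its click rate;
    # we sample from each posterior and show the ad with the highest draw.
    import random

    class AdArm:
        def __init__(self, name):
            self.name = name
            self.clicks = 0       # successes observed so far
            self.impressions = 0  # total times shown

        def sample(self):
            # Beta posterior with a uniform Beta(1, 1) prior
            return random.betavariate(self.clicks + 1,
                                      self.impressions - self.clicks + 1)

    def choose_ad(arms):
        return max(arms, key=lambda a: a.sample())

    arms = [AdArm("powerbank"), AdArm("headphones"), AdArm("vpn")]
    hidden_rates = {"powerbank": 0.03, "headphones": 0.05, "vpn": 0.01}
    for _ in range(1000):
        ad = choose_ad(arms)
        ad.impressions += 1
        if random.random() < hidden_rates[ad.name]:  # simulated user click
            ad.clicks += 1

Over time the sampler converges on the ad with the highest observed click rate while still occasionally exploring the others.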
I believe some apps actually brighten your screen automatically when displaying a QR code for scanning, and then restore the brightness to its previous setting when you move away from the QR code.
I believe the Whole Foods app does this for its first screen.
Combine this with IP, timestamp, and some behavioral patterns, and you’ve got an extremely robust tracking mechanism that operates outside of explicit consent mechanisms.
Everything listed changes way too often to be useful for tracking. My guess is that it's for anti-fraud purposes. Someone setting up fake devices and/or device farms is likely to get similar values, which means they can be detected via ML or whatever.
> screen brightness, memory amount, current volume and if I'm wearing headphones
None of those are likely to change when you navigate from one website to another, with tracking/ads disabled, which is what they want to be able to track. Otherwise they'd just use their cookies.
One device visits a site where you sell ads. A minute later, an unknown device with identical battery, volume, headphone status, brightness, model number, browser version, and boot time (to the second) arrives on another site you run ads on. There's a pretty good chance they're related, because the odds of all those matching by coincidence, given the two sites and the timing involved, are rather low: https://coveryourtracks.eff.org/
Plus it doesn't have to be perfect. It just has to be good enough in bulk to sell.
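A toy sketch of the kind of probabilistic join being described, assuming each ad request carries a handful of soft signals; the field names, threshold, and time window below are hypothetical, not any vendor's actual scheme:

    # Sketch of the probabilistic join described above (field names are hypothetical).
    # Two ad requests from different sites are treated as the same device if enough
    # soft signals match and the timestamps are close enough.
    from datetime import datetime, timedelta

    MATCH_FIELDS = ["model", "os_version", "boot_time", "volume",
                    "brightness", "headphones", "battery_pct"]

    def likely_same_device(req_a, req_b, min_matches=6, max_gap=timedelta(minutes=5)):
        if abs(req_a["timestamp"] - req_b["timestamp"]) > max_gap:
            return False
        matches = sum(1 for f in MATCH_FIELDS if req_a.get(f) == req_b.get(f))
        return matches >= min_matches

    a = {"timestamp": datetime(2025, 1, 1, 12, 0), "model": "iPhone12,1",
         "os_version": "17.5", "boot_time": 1735725600, "volume": 0.62,
         "brightness": 0.43, "headphones": True, "battery_pct": 81}
    b = dict(a, timestamp=datetime(2025, 1, 1, 12, 1))
    print(likely_same_device(a, b))  # True

As the parent says, this only has to be right often enough in bulk to be sellable.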
> Advertising Tracking ID was actually set to 000000-0000... because I "Asked app not to track".
> I checked this by manually disabling and enabling tracking option for the Stack app and comparing requests in both cases.
> And that's the only difference between allowing and disallowing tracking
This is revealing! I'd wondered about Apple's curious wording; "Ask App Not to Track" leaves suspicious wiggle room - apps may not track by an ID, but could easily 'fingerprint' users (given how much other data is sent), so even without a unique ID, enough data would be provided for them to know who you are 99% of the time.
Amended Dead Privacy Theory:
The Dead Internet Theory says most activity on the internet is by bots [0]. The Dead Privacy Theory says approximately all private data is not private; but rather is accessible on whim by any data scientist, SWE, analyst, or db admin with access to the database, and third parties.
Apple sets Advertising Tracking ID to 00000-0000 because it's the only technical control they have. However, apps are also supposed to respect the signal with regards to other methods of cross-site/app tracking and disable fingerprinting mechanisms.
>If it was LTE, I bet the lat/lon would be much more precise.
False. Apps don't have access to cellid information unless they also have location permissions, in which case they can just request your location directly.
>the free apps you install and use collect your precise location with timestamp [...]
This is alarmist and contradictory given that the author admits a few paragraphs up that the "location shared was not very precise". It might be possible for the app to request precise location via location services, but the app doesn't request such permissions (at least on android, you can't check for requested permissions on iOS without installing the app and running it), so such apps are most definitely limited to "not very precise" locations.
>At the same time, there is so much data in the requests that I'd expect ad exchanges to find some loophole ID that would allow cross-app tracking without the need for IDFA.
At least in theory they're not supposed to do that, but it'd be hard to enforce.
"If a user resets the Advertising Identifier, then You agree not to combine, correlate, link or otherwise associate, either directly or indirectly, the prior Advertising Identifier and any derived information with the reset Advertising Identifier. "
Cell carriers will gladly sell that information to apps. You can make calls to them over the cellular network (even if Wi-Fi is active!) and they will hand it back to you. No location services permission is required to do this.
"Precise" has a specific meaning for iOS Location Services and this ain't it. Presumably it's just doing IP geolocation which could be the same post code, or it could be the wrong city entirely. I'd expect it to be much worse on LTE than WiFi.
>Eh. Zip code level location + timestamp is still pretty invasive, even if, pedantically, that’s not very precise.
That's basically sent to multiple parties (ISPs, transit providers, CDNs, analytics/advertising/diagnostics/security vendors) every time you visit a website. If this counts as "invasive" to you, you shouldn't be connected to the internet at all, much less buying a tracking device (a smartphone) and installing random ad-supported apps on it.
I find it fascinating reading Hacker News: it's full of IT folk who build the software that enables and profits from the advertising and personal-information-selling and tracking industry, yet who are also the people who complain the loudest about it. Unbelievable.
Probably because people like us have more visibility on the huge scope and consequences of this kind of privacy invasion. Most people don't actually see this with their own eyes. They probably know it's happening in the back of their heads but it's not 'real' to them. It's very real when you know you could technically run a report of all your users that also have grindr installed.
I'm sure most of us would prefer not to work somewhere that does it, but we need to eat too. And we have no input on this.
For example, recently I was given a presentation on a new IoT product at work. Immediately I asked why we're not supporting open-standards stuff like Matter as a protocol. And I was told that'll never fly with marketing, because they want all the customers to have eyes on their app for their 'metrics' and upselling. I told them fine, but I'm definitely not using this crap myself. But it was shrugged off. We are too few for them to care about. And it makes us very unpopular in the company too. So it's a risky thing to do that doesn't help anyway. The "don't fight them but join them and change from within" idea is a fallacy.
There's no code of conduct or rule book that anyone must follow, so ethics is determined at the individual level. That quickly turns into "either I build it for them or the next guy will". A resistance-is-futile kind of thing.
Most other types of engineering have published rules and standards and industry credentialing including ethics tied into it and loss of credentials for an ethics violation would be career ending in many cases.
(I can only think of straw-man examples. Does the private prison industry have problems getting architects, civil engineers, electrical engineers? Does the pharma industry have problems getting chemical engineers for manufacturing addictive painkillers?)
Yes, because everyone on Hackernews is identical and working on the exact same stuff. It's not like it's a few companies enabling this and each marketing department going like oooooh i want that.
We might not be the same. Every time someone asks for tracking anything I complain and question a lot. People hate me, but if there is no real use case for storing all information we can get I will veto as much as I can.
The IT folks working in the advertising industry are much more the "who cares, everyone has all our data already anyway".
This is one of the many good reasons to avoid not just the Google app store but most apps in general.
Let it be known: needing an app to do something that used to be doable on a website is, to me, a red flag. In any case, I refuse to install anything other than what I genuinely trust.
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary.
Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
Screen brightness, boot time, memory, and network operator could probably fingerprint any device all by themselves.
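Back-of-the-envelope arithmetic for that claim: if each signal were independent and roughly uniform, the identifying bits just add up. The value counts below are guesses, not measured distributions:

    # Rough entropy estimate for the claim above (value counts are assumptions).
    # With independent, roughly uniform signals, identifying bits simply add up.
    import math

    assumed_distinct_values = {
        "screen brightness (0-100)": 101,
        "boot time (nearest minute, within last 30 days)": 30 * 24 * 60,
        "total RAM bucket": 8,
        "network operator": 100,
    }

    bits = sum(math.log2(n) for n in assumed_distinct_values.values())
    print(f"~{bits:.1f} bits")  # ~32 bits; about 33 bits can single out one of ~8 billion devices

In practice the signals are neither uniform nor fully independent, so the real number is lower, but combined with IP and timing it doesn't need to be large.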
Automatic brightness probably helps honestly. It could help confirm whether someone is in fact in an area that has high levels of lighting around them (e.g., in a store, at a beach on a sunny day, etc.)
Every little piece of data that is gathered and used can help, even if it isn't immediately apparent how.
Now I could be wrong on this, but I feel like advertisers don't need to know something is true about a user, they just need to be confident something is true about a user and that's where data points like screen brightness can be of help to them.
Kind of a joke, but it could be useful for determining if they should serve light-mode or dark-mode ads. But I suppose they could just detect if dark/light mode are enabled.
How much money is tied back to, or generated from, wifi AP SSID databases for geolocation ?
Because wow that would be simple to spoof and chaff and spam.
It's dinnertime here but if I had a few minutes I could make (my own house) appear indistinguishable from (Chase Center) from the perspective of SSID landscape.
It would cost nothing and is trivially easy. Even if they pair MAC addresses that's not a big hurdle. I'll bet relative signal strengths are not measured.
Google's Geolocation Service used to charge $4 per 1,000 requests to its cellular/Wi-Fi geolocation API. Essentially you send a list of Wi-Fi MAC addresses and their associated RSSI values and get back a latitude and longitude with an accuracy metric. It was surprisingly good when GPS wasn't available (sub-50-meter accuracy).
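For reference, the request shape is roughly this (endpoint and field names as I recall them from Google's public Geolocation API docs; treat this as a sketch and check current docs and pricing before relying on it):

    # Rough sketch of a Wi-Fi-based lookup against Google's Geolocation API.
    # The API key is a placeholder; MAC addresses/RSSIs are made-up examples.
    import json
    import urllib.request

    API_KEY = "YOUR_API_KEY"
    body = {
        "considerIp": False,
        "wifiAccessPoints": [
            {"macAddress": "00:25:9c:cf:1c:ac", "signalStrength": -43},
            {"macAddress": "00:25:9c:cf:1c:ad", "signalStrength": -55},
        ],
    }
    req = urllib.request.Request(
        f"https://www.googleapis.com/geolocation/v1/geolocate?key={API_KEY}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # e.g. {"location": {"lat": ..., "lng": ...}, "accuracy": 30.0}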
A long time ago I had the idea to create an 'accountability server'. The high level idea was for it to generate unique credentials so that you could track to the source who sold your info. There are some ways to do that now, but I wonder if it is time to start exploring that idea again. If you exposed it as a VPN/proxy+app that ran on a server in your home, so that you could collect your own data and provide unique credentials on account creation, then I wonder how much that combination could figure out. Since it could act as a man in the middle it potentially could annotate credential source and see the ads and potentially track them to source. "This male enhancement pill ad is linked to your tire purchase." There is a lot of hand waving here, but I wonder if something like this could be built. The first step to stopping things like this is showing people who did it to them.
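The "unique credentials per signup" half of that idea is easy to sketch even without the VPN/proxy part, assuming a mail provider that supports plus-addressing (most of the big ones do); everything below (names, tagging scheme) is hypothetical:

    # Tiny sketch of the "unique credentials per signup" idea above: generate a
    # tagged address per service so a later spam run or leak identifies the seller.
    # Assumes your mail provider supports plus-addressing; registry should be
    # persisted somewhere real in practice.
    import secrets

    BASE_USER = "me"
    DOMAIN = "example.com"
    registry = {}  # tag -> service

    def credential_for(service: str) -> str:
        tag = secrets.token_hex(4)
        registry[tag] = service
        return f"{BASE_USER}+{service}.{tag}@{DOMAIN}"

    def who_leaked(address: str):
        tag = address.split("+", 1)[1].split("@")[0].split(".")[-1]
        return registry.get(tag)

    addr = credential_for("tire-shop")
    print(addr)                # me+tire-shop.3f9a1c2e@example.com
    print(who_leaked(addr))    # tire-shop

The harder part, as the comment says, is the proxy that watches which ads come back for which fabricated identity.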
Wouldn't this require access to bid side data? The OP mentions it's pretty easy to get, but any company using this to expose advertisers is going to get their access cut off pretty fast. As the saying goes, "snitches get stitches".
My thought here is that there is likely a lot of leaked data on ads themselves, that is one of the reasons why you would need the VPN/proxy. Additionally you could (potentially) create fake browser fingerprint credentials on the fly to feed sites and have the VPN/proxy track the ads that show up for those credentials. (other credentials like email and the like could also be created by the app for you) You don't see the bid data, but you may be able to control the tracking that spurs it and you can see the results of it so a setup like this could likely make some inferences.
I don't know this industry well, and the tech here has long since eclipsed me, so I really don't know what is possible, but I imagine there are possibilities with this setup.
Imo, the real takeaway here is that ad tech isn't just tracking people; it's becoming a decentralised surveillance network where no single entity takes any responsibility. Even with "Ask App Not to Track," your IP, geolocation, and device fingerprint still end up being leaked! It shows that tracking isn't a feature anymore; it's the business model.
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary.
Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
> I know the "right" answer - to help companies target their audience better!
For example, if you're promoting a mobile app that is 1 GB of size, and the user only has 500 MB of space left - don't show him the ad, right?
Author jumps to the incorrect conclusion here. The answer is fingerprinting.
I think they are pretty clear if you read the documentation. Accessing the exact value of these always needs some privacy-related privilege on iOS and Android.
Without those privileges, all you can get is an approximation.
The browser has less access to your system, and usually only if you give a specific website permission to use these features. Mobile operating systems are slowly changing that though.
What is checking the list of available web APIs supposed to imply? The comment is correct: the browser can't access your location without explicit confirmation from the user, and the same applies to other web APIs. At least mention a bunch of APIs which you know this doesn't apply to, instead of just linking MDN.
The more APIs available for JS to interact with, the more granular and detailed browser fingerprinting can be. For example, how your browser renders WebGL can differ depending on what graphics card (and drivers) you have. The resulting values can be read back and stored to create a detailed fingerprint of who you are -- this could potentially be done by Google Fonts or AdSense or any number of the countless ad and analytics frameworks loaded on basically all websites.
Browse the source in the following directory to see a plethora of examples of how web APIs are used to fingerprint users -- and this is just one publicly-accessible library we can easily review the source code of (proprietary, obfuscated ones likely use additional methods): https://github.com/fingerprintjs/fingerprintjs/tree/master/s...
One example used in multiple places in the above repo is "matchMedia"[0] which was a Web API method added a while ago (well, many years ago) to give a programmatic result of whether a given CSS media query matches or not. This can be used to detect, for example, user preferences like whether the display is HDR-capable[1], or the Accessibility setting "reduce motion" is enabled[2].
What is contained in the latest JS standard that lets you collect fine-grained information about your users without their consent? Web APIs that deal with sensitive data all require explicit user confirmation to be used.
At least on Android, the browser is limited by the Android permission system, i.e. if you don't give the browser GPS permission, it cannot pass it on to pages either. In addition, the browser will ask if you want to grant a site access to something like positioning data.
Furthermore, it is hard for a web page to run in background and receive user data.
The thing I grokked, and think is important from this article, is that private browsing doesn't end this information flow. It only marks the JSON data blob as "asked not to be identified or collated", and it's substantively an honour system. There are penalties (the lawsuit against Google for misleading people about the fact that data was still collected), but the walls to breach here are low, given that non-PII can be cross-matched to confirm "who you are" in some sense.
There is no such thing as "private" browsing inside the factory installed browser, with factory installed DNS, and any kind of location data, or other cross-collating information along with your IP. The loss of privacy may be contextual and somewhat statistical, but it would be wrong to assume you weren't identified.
What it does do is let you see how bidding mechanisms in services like flights and hotels change their bids when a request comes from the same location as you but without the prior search cookie state. That's useful, I guess.
"find things at a different pricepoint" cookie monster mode?
wow @apokryptein thanks for posting my article here... I'm shocked it's #1 rn.
if anyone has any questions regarding the post - I'm here to answer & talk!
I don’t know the answer to this, which is why I didn’t mention it in the post.
However, I could speculate that these data-broker companies scrape leaked [hacked and stolen] data from various panels and then match records on their end.
kinda OSINT for bad reasons.
Long ago there was the XPrivacy project for Android, which allowed you to granularly set permissions for each app and system service and ensure they wouldn't get the real private data. It's no longer alive these days, I guess.
Can someone share their experience with the alternatives for the modern latest Android?
I would like to bring attention to this project. It aims to function in an application-firewall-like manner and manages to block connections by category, classified by domain name. Android only, though, and the 'full' version is available only on F-Droid due to some anti-adblock-like Play Store policy. https://trackercontrol.org/
Very interesting and disturbing research, definitely a wake-up call for me. Does anyone know of, or can anyone recommend, software that can block these sorts of requests from going through? I know of Pi-hole, which blocks ads, but does it also filter out these sorts of things?
You need to have a wifi only android phone, rooted, no google apps, and uninstall anything that talks to the internet. That includes analyzing network traffic, open ports, and so on.
I did this with a Kali Nethunter distro back in the day for "reasons", privacy not being one of them. This makes the phone very hard to use for regular things.
It seems still possible to avoid being tracked (protections, filters, degoogle, etc), and the business is not very interested in the minority willing to trade off functionality, practicality and ease for privacy.
For how long?
Which does absolutely nothing if your device or the app in question is permitted or otherwise not prevented from making DNS-over-HTTPS (or, less commonly because of its discrete port, DNS-over-TLS) queries.
I'm referring to devices and apps that are 'hard-coded' to query specific DoH servers/providers, thereby bypassing any user-configured DNS server(s). And because DoH operates on outbound TCP/443, the lookups are indistinguishable from any other 'web' traffic.
Even some of the most popular desktop web browsers are configured to utilize DoH by default nowadays.
The most that a network administrator can do to prevent this is configure firewall IP blocklists of known DoH servers and NAT all outbound 53 (and 853) traffic to a desired resolver (like a local Pi-hole instance, for example).
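To illustrate why this is hard to stop: any app can resolve names itself over DoH with an ordinary HTTPS request, sidestepping whatever resolver the network hands out. A small Python sketch using Google's public DNS-over-HTTPS JSON endpoint (other providers work similarly):

    # Sketch of how an app can resolve names over DoH, ignoring the OS resolver
    # and any Pi-hole on the LAN; to the network it's just HTTPS on port 443.
    import json
    import urllib.request

    def doh_lookup(name: str):
        url = f"https://dns.google/resolve?name={name}&type=A"
        with urllib.request.urlopen(url) as resp:
            answer = json.load(resp).get("Answer", [])
        return [r["data"] for r in answer if r.get("type") == 1]  # type 1 = A record

    print(doh_lookup("graph.facebook.com"))

Which is exactly why blocking by IP blocklist and forcing port 53/853 through your own resolver only goes so far.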
Facebook hard-code IP addresses when their domains are blocked. I found this out while using NextDNS alongside that logging functionality that iPhones have. It’s insane the lengths that they go to.
It's not insane at all. It is the entirety of their business model, so it makes sense that they will do everything possible to keep that sweet surveillance cash flowing.
- It was a clean state of a somewhat old phone (iPhone 11, factory defaults + new apple id)
- A single (old) app was installed (Stack by KetchApp, 10-12 years old)
- Was sending out an update a second pretty much instantly (5 kB - ~300 KB every second)
- Within a minute: IP, Lat / Lon, country, phone model, carrier / network operator, vendor, OS version, connection type (wifi), headphone status (?), volume setting (?), screen brightness setting (?), battery status (?), CPU count, system RAM, free RAM allocation, free hard drive capacity, system boot time (?)
Might as well just screen-grab the Task Manager equivalent and hand it to them. They have better, quicker data about my current RAM allocation and free hard drive space than I do. It hands them when the system booted, for an ad? The headphone, volume, brightness, and battery were just a "what?" kind of headshake about invasiveness. Somebody would hand-wave that they need it (we want it, we want it). They obviously don't.
Edit: It's almost Remote Desktop, on an iPhone. Realtime (~1 Hz) RAM/ROM allocation. Not sure how many Apple users even know how to check their realtime RAM/ROM allocation. The free hard drive space especially is just asking for botnet downloads.
Edit: Right, and ... disabling tracking doesn't mean anything, because numerous updates blatantly ignore the setting ("uc": "1", // User consent for tracking = True;) and it's just a flag while they still send your vendor-specific customer identifier anyway.
Really interesting article, and great investigation, just disturbing how much on an effectively clean phone.
As a developer, I dislike that knowing something like headphone status could be genuinely useful for an app's functionality, but some other unscrupulous person will just exfiltrate it! This is part of the reason I agree with Apple's stand against apps with sub-apps / "desktop-like" apps, since the permission settings aren't fine-grained enough. There is a significant privacy downside to "superapps", and now Elon is pushing for the X everything app.
Yeah and if you ask for permission for every little thing then users are going to get bombarded even when it's needed for legit purposes. It's a difficult tradeoff to make, even if you want to do the right thing (and I'm not really sure that Apple and especially Google really do)
> The headphone, volume, brightness, and battery was just "what" kind of headshake about invasiveness. Somebody'd hand wave they need it (we want it, we want it). They obviously don't.
Well, why the ad industry wants it is clear: fingerprinting and segmentation. Someone consistently low on battery? Push them ads for powerbanks.
This is actually part of what I find so wrong about this entire idea.
With all this fine granularity, it seems like ads should be incredibly relevant: specifically targeted at what you need, with something that might actually result in a click-through to purchase a product. Especially if they get real-time updates on my hard-drive status and battery state.
I don't remember the last time I got an ad that was actually relevant. Pretty sure the last ad that was even clicked on was one of those little windmills that swirls crazily, cause it seemed like it might make a cool lawn ornament. Turned out it was tiny. Years of online purchases, and they don't even suggest stuff I want.
The Reddit app has no permissions on my phone, but the feed suggests communities based on my location nevertheless. I've been traveling for the last two months, and every city I've visited has been suggested.
Just check https://ipinfo.io/ to see how close your IP points to your location. For most targeted content the city is good enough. And honestly if I'm one of 1 million people it's ok.
This is why I am so against letting Web Browsers have access to so much device information. Every time, a web dev says they should be able to push notifications, or get battery information, or whatever, this is why they should be ignored.
I wonder: to which extent are purchased/brokered app real-time location data feeds used by various intelligence services to target missile strikes in war zones? In e.g. Ukraine/Russia.
The leaked tools that NSA used, like XKEYSCORE, used publicly available data collection methods, including purchasing advertiser lists, to cross correlate all the data and form a profile. So anybody could do this stuff.
Anyone understand why an apparently accurate latitude/longitude showed up in one of those traces despite location services not being enabled for the app in question?
Phones send out probe requests to get a list of open wifis. If you have a static access point, with a known geo location, software can be running on that point to remember a mac address of the phone from a probe and store it. Thus enabling real time tracking.
I'm like 60% sure this is how they figured out who the bomber was in Austin, TX.
Thanks for asking. Came here to ask since I was curious about this too. I don't find any of the replies here convincing:
- List of open wifis: AFAIK, and in my experience, apps need special permissions to do anything at the wifi level. And yes, iOS location services use wifi info but it's disabled, that's the point;
- IP back to geo: then why not send the IP itself directly?
- Mozilla location services: same as above, why not send the info you send to Mozilla directly to the data harvester which can call Mozilla itself?
The people who reported on the Gravy Analytics leak are 404 Media. They're an independent technical media group that has been reporting on stories I haven't seen elsewhere. They're pretty awesome. I've personally paid-subscribed. (I'm not affiliated with them, nor am I receiving compensation to say this.)
You connect to a special WiFi SSID and it compares your traffic to known tracking/ad domains (Pi-hole lists, mostly), and the "food" is the packets being sent to those servers.
It's crude and has some high false-positive rates, but it does have a chilling effect for me when exploring what data is going where.
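The comparison step itself is simple enough to sketch: load a hosts-format blocklist (Pi-hole style) and check each observed hostname against it, including parent domains. The filename and hostnames below are placeholders:

    # Sketch of checking observed hostnames against a hosts-format blocklist
    # (the filename is a placeholder for whatever Pi-hole-style list you use).
    def load_blocklist(path):
        domains = set()
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()   # drop comments
                if not line:
                    continue
                domains.add(line.split()[-1].lower())  # "0.0.0.0 ads.example.com" or bare domain
        return domains

    def is_tracker(hostname, blocklist):
        # match the exact name or any parent domain (a.b.example.com matches example.com)
        labels = hostname.lower().split(".")
        return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

    blocked = load_blocklist("hosts_blocklist.txt")
    for host in ["graph.facebook.com", "weather.weawow.com"]:
        print(host, "->", "tracker" if is_tracker(host, blocked) else "ok")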
Even just looking at the picture near the top (which is also repeated near the bottom): if you do not allow the app to track you, that only disables one of the items of information, not all of them. As is explained later in the document, this is not explained very well to end users, and I agree it could be explained better. Perhaps "Allow app to track your activity..." could have an option to display a more elaborate description, explaining that it only affects the Advertising Tracking ID (and what that means) and has no effect on other methods of tracking.
And, looking further in the document, we can see there is more.
Some of them, such as IP address and timestamp it is reasonable to use for programs that access the internet (although it should be possible for the user to set up a proxy and/or adjust the clock in order to change these things, the server would still use its own timestamp anyways).
Available memory also makes sense to be readable (although ideally, the user should be allowed to limit the amount of memory available to specific programs, in order that there is enough memory remaining for other programs; the reported total memory should then include only the memory available to this program and not to all programs), and the same should be true of the number of CPU cores and the amount of available disk space.
Others probably should not normally be known by most programs (though some are useful for some kinds of programs), and even when they are, the operating system ought to allow users to reprogram what information is available and what filters, logging, etc. will be used.
The presence of wired headphones probably should not be accessible by software, and the redirection should be handled by hardware. Perhaps an exception makes sense if the settings need to be different, e.g. mono vs stereo, although even then, programs should only see those settings (and only if they have audio output), and the user should be allowed to override them due to preferences (e.g. some users might want mono even if connected to external speakers or headphones; on my computer sometimes only one speaker works and sometimes both, so it is useful to me to be able to switch to mono).
Furthermore, there is the consideration that if the advertisers/spies are stealing your power, network bandwidth, and quota in order to do these things, then that is theft.
> This is the worst thing about these data trades that happen constantly around the world - each small part of it is (or seems) legit. It's the bigger picture that makes them look ugly.
NetGuard with DROP OUTBOUND policy once again proves helpful. The only app that shows ads that I have on my phone is a PDF scanner, and I don't allow it internet access.
I clicked the link at the beginning of your article, that led to the Google sheet with the list of apps. That list had 12,373 lines, not “over 2,000”, fyi. And while most of the apps looked like small time games that I have never downloaded and would probably not download, I saw included there “Microsoft Office 365”. Interesting.
Here's my messy list of all the apps I recognized in that Google Sheet:
meetup; tinder; crunchyroll; zynga/wordswithfriends; Microsoft Outlook (com.microsoft.office.outlook); Weather Channel; Microsoft 365 (Office) (com.microsoft.office.officehubrow); Opera Mini browser beta (com.opera.mini.native.beta); BuzzFeed - Quizzes & News (com.buzzfeed.android); Tetris® Block Puzzle (com.playstudios.tetris4); Sonic the Hedgehog™ Classic (com.sega.sonic1px); Grindr; Flipboard: The Social Magazine; Flightradar24 | Flight Tracker; Bejeweled Blitz (com.ea.BejeweledBlitz_row); FarmVille 3 – Farm Animals; Plants vs Zombies™ 2 (com.ea.game.pvz2_row); SimCity BuildIt (913292932); Tetris® (1491074310); Opera Mini: Fast Web Browser (com.opera.mini.native); TuneIn Radio: Music & Sports (418987775); Yahoo Mail – Organized Email (com.yahoo.mobile.client.android.mail); Angry Birds 2 (com.rovio.baba); Skip-Bo (1538912205); CamScanner - PDF Scanner App (com.intsig.camscanner); Rakuten Viber Messenger (com.viber.voip); Candy Crush Saga (553834731)
Agreed; however, there are duplicates in that list, plus the same app appearing for both iOS and Android.
If I'm not mistaken, I did a simple unique count on it (or took the 2,000 number from the 404 Media post).
I don't understand how this isn't considered an incredible national security issue, e.g., what stops an actor buying data for high value targets known to use certain apps, like the President or Prime Minister of a country?
This is a wonderful write up. The part that isn't clear to me is how they're getting the geolocation data if location services are turned off. Are they just going off geo-ip lookups? If you grant access to Bluetooth or finding devices on your local network, they can get more information to track your location. Absent that, how would they get better than geo-ip?
Android for sure, since version 8 I'm certain but probably even 5 or 4.x (so 10+ years ago)
Always annoys me when I want to use a WiFi scanner to determine the range of an access point in different locations for example and it needs me to turn on location access first before it can get WiFi data. The open source app doesn't have an Internet connection so there's no way for it to send back data to the mothership even if it had an SSID database baked into the apk. For me, and traditionally, the location switch is to turn on or off energy-hungry GPS hardware, not gatekeep when I trust apps to collect my location. I can set those to "only while in use", deny their Internet access, or just not install them if I don't trust them with the location permission
But all it takes is one app with that permission to tie you to all the others. And there are always apps that need your location at some point to provide useful data. At this point I’m not sure there’s any single app I trust.
I'm surprised people think they have any kind of privacy - especially when using free services. They are not free. You pay with whatever data can be extracted from your devices and behavior.
Also, there's a looong list of companies who know the location of your mobile device, starting from the cell phone tower operator to Apple/Google and many in between.
True. But even paid apps have access to this data and can collect it without our knowledge. A genius called Stallman proposed a solution decades ago: free software, aka open source software. But outside of the tech community, open source is not a known term. Maybe we should market it wherever possible, if we want true privacy and freedom.
I don't think there is a hope when it comes to our privacy and ads and our data being sold - none. Even if I'm somewhat off the grid or low in activities, the indirect way of targeting me still exists, by my family members, friends, people associated with me. I surrender.
There is hope. The upcoming US State privacy laws are resulting in IP addresses having the last octet blanked out, and IDFA's zeroed out, at least for SSPs and DSPs. Companies such as Apple and Google will still have this info since they control the OS.
Whilst I trust that the author did in fact look at the data of each request eventually, the screenshot they provided of Charles could not have been of the exact requests they intercepted given Charles is indicating that those are not yet SSL proxied (except for the 2 GET requests).
EDIT: please ignore, author did it differently to what I expected.
This technique doesn't work anymore on Android because you can no longer add certificates to the system store, and apps are free to choose whether or not to accept user-store CAs. That was changed in Android 7. For "security", they say. Security of Google's business model, I'm sure.
Posting here from an anonymized account about Meta. No one probably recalls that Meta stopped most of their background location services (remember Nearby Friends?) on the main application around 2021-2022 [1]. It was just not worth even a repeat NYT story, with this much money spent on infra to collect locations.
But this is basically because they had already figured out how to do "good enough" location targeting using IP and a bunch of the info this guy talked about. You don't actually need a lat/long; a 1-mile-radius/city-level area is good enough to run ads, and they have ALL of that.
This was why Meta's revenue dropped so much after Apple's move: they could not fall back to collecting precise location. This is the last game in town. If you shut this down, Meta's precise targeting will suffer gravely and ads will become flaky.
One last thing. You may ask: who are the businesses that need precise lat/longs? They are like this one [2]. These businesses are like whack-a-mole. They saturate the app market, steal data, get money, and shut down when someone yells, then in a few months come back again, rebranded as another app. They exist not just to collect data but to act as an arbiter of who gets eyeballs on IRL activities, to influence behavior at the top of the funnel (ToFu). In the Worst. Possible. Way.
I think one thing people are discussing a lot here is privacy around contacts and sharing. Limiting access to contacts, completely or partially, is the wrong way to design such systems. There are two problems with this approach:
1. Having permission to read contacts is NOT a capability. Running a function over them that by design cannot leak PII is infinitely more valuable, and is a capability.
2. Asking users to grant permission is broken by design: you are giving the user a very bad multiple choice: `(a) Creepy (b) Less creepy (c) Don't use the app`
If we instead granted only operation rights and hid the actual information, it would be so much better. We need a separation of data from function to empower apps to give better choices to users.
Would be interesting to know how much data leaks on a new iPhone with some of the iOS privacy settings enabled and a handful of popular apps installed (WhatsApp, Instagram, Google Maps, Uber, etc).
And then if you use a commercial VPN with DNS ad-blocking enabled, how much more does this help?
I read an interesting newspaper article about how the police confiscated a hired gun's iPhone and found that he had run a search on the city his victim lived in. It is these little digital breadcrumbs that make life easy for the prosecution.
Seriously if you are going to do illegal things never ever buy a smartphone.
It looks like these all come from the Reddit embed in the middle of the article. Default uBlock Origin settings blocks 13 urls (and more, due to Reddit's frequent pings), but disable 3rd-party frames brings it down to 1 url (since the original embed was blocked).
I realize this feels like a pipe dream, like a million miles away from our branch of reality in 2025, but I really think the entire online surveillance advertising industry needs to be burnt to the ground and (maybe, partially) rebuilt. Many of the problems we see nowadays are rooted in the fact that data is being collected and used to (supposedly) profitable ends.
Sure, there may be the occasional honest actor in the industry, but they're so marginal and outcompeted by dishonest and shady ones that it really doesn't matter. IMHO the right move is to simply ban any collection that's not strictly necessary. Kind of like GDPR but without the "if the user agrees" exceptions.
Reminds me of a regulation about artificial stone (?) being banned in Australia, not because it's impossible to use safely but because the regulator concluded that the entire supply chain is unwilling to and disincentivized from using the material safely, so the best move at this point was to ban it outright.
Related: has anyone else noticed the practice of using cheap commodity 'living room' appliances to get access to your data? A while ago I bought a ceiling light for my daughters' bedroom, from a brand unknown to me. It had a built-in speaker controlled via Bluetooth, and dozens of light patterns and colors it could emit via a ring of small LEDs. My daughter was ecstatic watching the YouTube promo vid. It turned out that to use any of these features, you needed to install their app. Fine, okay, installing. Then the app demanded access to contacts and camera, or it refused to connect to the ceiling light. Fine, okay, uninstalling the app and returning that crap.
Apple’s “privacy protections” are nothing more than marketing.
“Ask app not to track” is a wash and privacy theater at best. One of the reasons I still run ad blocking on _all_ websites and at the network layer. Sorry “content creators” but you need to get your revenue from elsewhere (ie, sponsored content).
Now I want a phone that scrambles all of this data on a per app (or phone) basis.
Malicious app wants this data? Sure you can have it. But you will get randomized values for every bit of information — resolution, lat/lon, brightness, battery level (user can set range of 90-100%), ….
I paid for PCAPdroid. It's a network monitoring app that uses Android's VPN facility to monitor every packet sent and record which app made the request, to whom, when, and so on.
Among its paid features, you can select apps to block from the internet, or block by country, IP, and host.
After browsing my internet logs, I was shocked to see how much spying some apps were doing that I had absolutely no idea about.
Xiaomi Home? Yeah, I knew a Xiaomi app would be spyware. But Spotify, for instance: how could I guess it sends data every few hours to remote servers, including Facebook ones?
Until I find a replacement for Spotify (most music streaming apps do spy on their users, and I don't mean just learning what music you like), I can at least block all the graph.facebook.com, tracking.eu.miui.com, Google ads.gdoubleclick.net and so on.
It's open source, but the firewall is a paid feature; I highly recommend it if you're on Android.
There is even the possibility to decrypt packets and analyze them, although it requires root. I did it on another phone and yeah, it's similar to what the author found: every single bit of data, IP address, how long the phone has been on, the WiFi connections, when I unlocked the phone, and so on.
Every data taken individually is not important to me but this stream of little data constantly going God knows where is creepy as fuck.
If you have the equipment (e.g. a spare Linux computer and WiFi router) and know-how, you can set up something like mitmproxy on your home network (its feature set looks very similar to the Android app's, but it likely requires more effort to set up). That's what I did some weeks ago, and then I did basically the same exercise you did (just for my whole network instead of just a phone), looking at what's going on. And yeah... it's not good.
Even if I trust some companies to be trustworthy, I can't possibly vet a gazillion entities getting telemetry requests, and not all of them can have their shit together, security, privacy or ethics-wise.
It made me ditch some Microsoft software, but overall escaping spying feels like a lost battle, unless you go do spartan Richard Stallman-like computing (IIRC he had a pretty hardcore stance on the software he'd use).
Anyway like most things it's a journey, not an on off switch. First you get aware then you make change and the situation gets better, it doesn't have to be perfect to be better.
On my Android phone, I had to make a clear cut on which apps I could keep after seeing the logs. The apps from Google, Microsoft, Amazon: they are all gone. Even Play Services and the Play Store were replaced with Aurora.
It cuts at least 2/3 of the network requests.
Then you have the case of individual apps that use the Facebook SDK or another advertiser's SDK. There are often alternatives in the open source community, and when that's not the case there are usually less privacy-invasive alternatives on the store.
For instance, my default Samsung weather app was sending lots and lots of data. The alternatives on F-Droid were not to my taste.
I eventually found out about Weawow. It's not open source, but it doesn't require any weird permissions, has no ads, isn't constantly sending data in the background, and my logs say it only connects to weather.weawow.com.
I mean it's fine.
After spending weeks with the firewall, I was able to identify the spying apps and replace most of them. My network log is now pretty empty when I'm not using the phone.
With GPS off, location can be triangulated from cell tower usage to within 3/4 of a square mile (with smaller uncertainty in urban areas, where cell towers are closer together). I'd heard before that some data brokers do this, but in this article the writer mentions reverse DNS lookup on IP addresses, which they note is less precise (ZIP-code level).
Only if you don't turn WiFi off. To my understanding even the "soft off" option present in iOS stops the phone from beaconing, and just listens in order to collect data for building augmented location services. I don't know what the Androids do. These days both of them also offer randomized MAC address to curtail such tracking.
Total BS. Do not give location permissions to untrusted apps. If the app insists on it, use the mock GPS feature on Android, which will spoof your location. Can we all please stop exaggerating the sloppiness of normies, with their pretentious acts of being shocked after not being cautious about their privacy? Privacy is not the default; you have to put some effort into it.
Starting earlier this year, I've been running mitmproxy on my entire home network, and often have it on for all traffic. I put up an old NAS and I'm abusing it as a mitmproxy box for my home.
There would be so much to write about what I've seen. I've thought of making a blog post. I use mitmproxy to check on sketchy apps and to learn in general.
The information sent out is fascinating. I knew extensive telemetry is pretty norm these days, but it's another thing to see it with your own eyes.
My exercise has also made the typical "yes, we collect data/telemetry, but it's anonymized/secured/etc. and deleted after X days, so no worries" sound very hollow; even if a company goes by its own rules in good faith, how am I supposed to trust the other 1000 companies that also do data collection? If someone hacked my mitmproxy itself and downloaded all the payloads it collected, they would probably know me better than I do.
Random examples off the top of my head from mitmproxy (when I say "chatty" I mean they talk a lot to a server somewhere):
I had the GitHub Copilot neovim plugin. I didn't realize how chatty it was until I did this (although I wasn't surprised either; obviously completions are sent out to a server, but it also has your usual telemetry + A/B-test experiment stuff). I had wanted to ditch that service for a long time, so I finally did it after seeing this, since the open stuff has mostly caught up with a local setup. Also, it's not actually open source, I think? I had no idea (I thought it would just be a simple wrapper that calls into some APIs, but: no PRs, no issues, the code has blobs of .wasm and .node: https://github.com/github/copilot.vim)
Firefox telemetry, if it's turned on, is a bit concerningly detailed to me. I think I might be completely identifiable from some of the payloads if someone decided to really take a go at analyzing them. Also, I find it funny that one of the JSON fields says "telemetry is off". Telemetry is actually on in the menu (I leave it on on purpose to see stuff like this); it's just that in the JSON, for some reason, it says off. I'm not sure whether that telemetry is even meant to be non-identifiable in the first place, though.
Unity-made software (also mentioned in the article) sends out a Unity payload at start-up that looks similar to the one in the article, although I didn't take a deeper look myself.
Author mentioned the battery: I also noticed that a lot of mobile apps are interested in the battery level. I didn't connect the pieces why but the article mentions Uber 4% battery surcharge, and now it makes a bit more sense.
One app that has at least once been on HN with high scores starts sending out analytics before you've consented to any terms and conditions. One of the fields is your computer's hostname (one of my computers had my real name in its hostname... it does not anymore). Usually web pages have "by downloading you accept the terms and conditions", but this one only presented that text after you launch the app, before you get to the main portion. I never clicked it (still haven't), but I let the app mellow in the background to snoop on its behavior.
Video games: the ones I've seen mostly don't do anything too interesting. But I haven't tried any crappy mobile games, for example. One Unity game on the laptop, Bloons TD 6, sends out analytics at every menu click, and a finished game sends a summary; it's the "chattiest" game so far, although it seems limited to what the game actually needs to do (it has an online aspect). The payloads had more detailed info on my game stats, though; they should add those to the game UI ;)
Apple updates don't work through mitmproxy (won't trust the certificates). Neither do many mobile apps (none of the banking ones did, now I know what a mitm attack would look like to my bank app).
Some requests have a boatload of HTTP headers. I've thought of writing a mitmproxy module to make a top-10 list (a rough sketch is below, after this comment). I think some Google services might be at the top of what I've seen. (I think Google has also developed new HTTP tech; is it so that they can more efficiently set even more cookies? ;)
I think anything Microsoft-tied may be chattiest programs overall on my laptop. But I haven't done stats or anything like that.
Aside from mitmproxy, I'm learning security/cryptography (I've managed to find real-world vulnerabilities, although frankly very boring ones so far...), Ghidra, some low-level seccomp() stuff, qemu user emulation, things of that nature, to get some skills in this space. Still need to learn: the legal side of things (ToSes like to say 'no reverse engineering') and how not to get into trouble if you reverse engineer something someone didn't like. I've not dared to report some things, or to poke some APIs or even mention them, because I don't know enough yet about how to cover my ass.
Modern computing privacy and security is a mess.
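Since the parent mentioned wanting a mitmproxy module for a header top-10 list, here is a rough sketch of what that addon could look like (run with `mitmproxy -s header_count.py`; counting total request headers per host is just one way to slice it):

    # header_count.py - minimal mitmproxy addon sketch for the "top 10" idea above.
    from collections import Counter
    from mitmproxy import http

    class HeaderCount:
        def __init__(self):
            self.per_host = Counter()

        def request(self, flow: http.HTTPFlow) -> None:
            # tally how many request headers each host receives across all flows
            self.per_host[flow.request.pretty_host] += len(flow.request.headers)

        def done(self):
            # printed when mitmproxy shuts down
            for host, n in self.per_host.most_common(10):
                print(f"{n:6d} request headers total  {host}")

    addons = [HeaderCount()]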
I've worked a good part of my career at a DSP company (it would be in the box that says "Criteo" on it in the author's article), so I have some idea what companies in that space have as data.
Basically, all these companies (ad networks, data brokers, big tech), in the absence of basic privacy laws (not to be confused with the 4th Amendment, which binds federal and state government only and does not restrain companies), act in wilful conspiracy with US government regulators, washing each other's hands like a monopoly. This data gets enriched and collated and is perpetually on a permanent record.
I think a big part of the ability of these shady companies pulling the brightest minds away from more clearly beneficial fields is that important flavors of ideology necessary to motivate people to take less lucrative work have been stripped from "business". There's a lot fewer appeals to art, history, cultural stewardship, or empire-building present in things like transportation, medicine, construction, etc. than there were in the past. Any flavor of "for the glory of God/the nation/the People/the Art" etc. is pretty effectively stripped out of American business, and I think that's the only kind of thing that would motivate someone who could make $250k in Adware to make $100k in something else.
People are now very well-trained to look out for their own bottom line, and take jobs accordingly.
Doing things because they increase some non-monetary value has fallen out of fashion for sure. A colleague of mine recently shared, in a group social setting, a sense of disappointment that his daughter was studying to be a doctor. I was, as far as I could tell, the only one to note that there is practical utility to having doctors.
But the "be part of our mission" was shown to be hollow over and over too. First and foremost, you as an enployee are making the investors and CEO rich. The mission is usually exploiting the employee, even when it's not exploiting the world. Employees have recognized the real social ethic (money over everything) and are just playing the same game. Which is sad.
Ideally the people who see these choices would make alternative choices that will leave their grandchildren better off in the world. It has taken only a generation for the "greed is good" mentality to drop us into this fetid soup.
I think the phrase you called out--"be a part of our mission"--that most corporations (and, mimicking them, government agencies) regurgitate is itself the approach to socialization that causes people to feel less inclined to work for any non-profit reason. "Part of OUR mission" redefines the company as the entity to be loyal to, rather than casting the company as part of society itself. You can't replace constructs that tend to inspire people to heroism and selflessness with a corporate avatar and expect the fabric of society to remain similarly motivated. It does make a set few people a hell of a lot of money in the short term, though.
Ugh. Eye roll on the whole “make the world better”. So few SV products remotely make the world better. The purpose of the vast majority is to make money at all costs. I also disagree with what most people claim is making the world better. All in all, I think social media is a net negative for the world, whatever good might be found in it. Every SV thing after that is just chasing the rocketship-to-the-moon dream.
I have the same question. It did not seem easy for me to find a job where we are at least not writing malware (according to my judgement). But it's far from making the world better :|
It’s funny how in recent headlines the NSA & FBI have been telling people to use secure messaging apps. Yet the FBI is infamous for claiming the need for back doors into these encrypted apps. What are we to think of the opposing views? Are they really being benevolent toward citizens, or do they no longer need the back door, or do they have a back door, whether intentional or not?
They're not homogenous organizations. Not sure about the FBI, but AFAIK the NSA has always been in an awkward spot of being split between defensive and offensive missions. It wouldn't be particularly surprising to have one arm going "you should all use encrypted messaging, it's the most secure" while the other arm is frantically trying to break or backdoor said encrypted messaging.
It seems that they have changed their minds about surveillance back doors, after some devastating attacks, where Chinese state actors (among others) used back doors created for compliance with warrants to get in.
But that was the pre-Trump NSA and FBI. Now the Chinese and the Russians just need to get some DOGE volunteer to give them whatever they want, since Elon now has root on all the government payment systems and is too undisciplined to do things in a secure way.
The world is changing fast and reasons for actions may be more complex and interesting than you assume.
Were they ever _not_ benevolent to US citizens as a whole, even if misguided? There may be last-ditch attempts to extend benevolence to US citizens as a takeover looms. If leaks from the Office of Personnel Management are to be believed, then right now the US government is in the process of a soft coup, being dismantled along lines of political loyalty. I expect those working in intelligence and law enforcement who support democracy see the writing on the wall and will act sooner or later.
Reliable end-to-end encryption is an important tool for citizens of a nation that may need to organise in a hurry. We might see new Edward Snowden type revelations of programmes, naming key people or giving clear advice not to trust certain US-based entities or services. Civil servants may act professionally and as non-politically as they can, but in the end, if only to protect their jobs, they're going to come down on the side of democracy.
Maybe Steve Jobs was right all along. We don't need a smartphone with an App Store. Just first-party apps, with everything else on the browser or in apps that use the browser engine.
A while ago a co-worker told me "why would you care about your privacy? all my data is already out there anyway and what can even be done with it anyway".
What would be the ideal response to such an absurd comment? At the time I found it hard to answer because she surprised me with that opinion.
Edit to note: the explanation should be compatible with a professional context. I don't want to scare my co-workers or appear crazy/paranoid.
Losing privacy makes you more vulnerable to economic exploitation (price discrimination, salary negotiation, insurance premiums, etc). Therefore protecting your privacy is a form of economic self-defense.
Just ask for their email password and see what they say. Usually though this comment is just them trying to change the subject because very few people know or care about any of this
There are a few examples I use when I hear such ignorant statements:
1. Not caring about privacy cuz you’ve got nothing to hide is like saying you don’t care about freedom of speech cuz you’ve got nothing to say.
2. If you don’t care about privacy, why don’t you poop with an open door, for everyone to observe?
The problem is, I could not formulate anything in this way in a professional setting. I want my co-workers to understand, because I feel a bit uneasy working with people who don't, but I also don't want to scare them.
Because I don't want the rest of the house to smell?
A different argument that appeals to some is that you might not have something to hide, but what about the people who do? For the greater good of society, whistleblowers are needed to expose malfeasance by the corrupt, and it's going to be much harder for any of them to come forward if their reward is literally exile to Russia. If you're in support of a slow slide into dystopia, go ahead and argue against all privacy. Whether a given situation rises to that level is a different but adjacent topic, but appealing to something people can believe in, such as not letting the rich and powerful get away with being utterly corrupt in their dealings, is a way to find common ground with some. Not everyone cares about that, but it's an additional argument for privacy.
Seriously, anyone who ever says they have nothing to hide, show them this story.
"A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband, resulting in the man's car being towed and impounded, the local outlet A News Cafe reported. Court records show the officer was fired, but he was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away."
One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies and call them up on their cellphone. (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
<< find an exec and buy their details for pennies and call them up on their cellphone. (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
Honestly, kudos. The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider current setup.
<< As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
And people laughed at Red Reddington when he said he had no email.
It's odd that of the two replies referencing people, both got their names obviously wrong. Is that a new phishing tactic?
Russian bot tactic? Guessing it’s an easy way to farm interaction as people comment back to correct the mistake.
New AI tactic.
salting the fields are we? total informational warfare, the digital equivalent of Sherburne's March to the sea during the American Civil War.
There was a post from someone a long time ago who has an email address and name similar to Make Cuban but not quite. He got quite a few cold call emails meant for Cuban. A lot of them were quite sad (people asking for money for medical procedures and such).
Where do you buy their details from?
Right now, my goto is signalhire
Thank you, though I have a feeling that they get their data from their own sign up form.
> The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider current setup.
Unless your problem is with the company doing the privacy violations, this doesn’t make any sense.
Pretty much all companies are doing the privacy violations. You think your doctor's office doesn't sell their contact list?
Where I live, which is not in the USA, I'm confident my doctor's office doesn't sell their contact list - or at least, not without statistical anonymisation and aggregation for research purposes.
They probably outsource processing the data and storing it to other entities, but that will be under contracts which govern how the data may be used and handled. I assume that's not what "sell the data" means in this conversation.
It would be such an egregious violation of local data protection law to sell patient personal details for unrestricted commercial use, including their contact info, and it would make the political news where I live if they were found out.
Also "not in the USA" i actually work on a medical ish application these days (not the in production version, mind but a fork with new features that's entirely separate at the moment).
I have access to ... zero patient data. Our entire test database is synthetic records.
In my country (and I suspect most Western Countries) my doctor would lose his medical licence for selling my contact information.
Exactly this was tried by the likes of James Oliver and journalists/comedians of that caliber running ads and gathering data from politicians in Washington.
It was some years ago and resulted in nothing
Do you mean John Oliver?
or Jamie Olive Oil?
>One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
>As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok your name, phone number, email etc are all in the crowd.
Fortunately this is changing with iOS 18 with "limited contacts" sharing.
https://mobiledevmemo.com/wp-content/uploads/2024/09/image.p...
The interface also seems specifically designed to push people to allow only a subset of contacts, rather than blindly clicking "allow all".
The far bigger issue is the contact info you share with online retailers. Scraping contact info through apps is very visible, drawing flak from the media and consumers. Most of the time all you get is a name (could be a nickname), and maybe some combination of phone/email/address, depending on how diligent the person in filling out all the fields. On the other hand placing any sort of order online requires you to provide your full name, address, phone number, and email address. You can also be reasonably certain that they're all accurate, because they're plausibly required for delivery/billing purposes. Such data can also be surreptitiously fed to data brokers behind the scenes, without an obvious "tiktok would like access to your contacts" modal.
On android you can choose whether to grant access to contacts. And most apps work fine without.
GrapheneOS, which I use, also has contact scopes, so troublesome apps that refuse to work without access will think they have full access. You can allow them to see no contacts or a small subset.
There's also multiple user profiles, a "private space", and a work profile (shelter) that you can install an app into, which can be completely isolated from your main profile, so no contacts.
It surprises me how far behind iOS is with this stuff. Recently I wanted to install a second instance of an app on my wife's iPhone so she could use multiple logins simultaneously, but there didn't really seem to be a way to do it.
The point is that it doesn't matter whether YOU grant access to your contacts. As long as anyone who has you in THEIR contacts decides to just press "share contacts" with any app, you are doxxed and SkyNet is able to identify you for all practical purposes.
People will share their whole list because it’s simpler
Or because they were tricked, e.g. LinkedIn’s “Connect with your contacts” onboarding step, which sounds like it'll check your contacts against existing LinkedIn users but actually spam-invites everyone on your contact list who doesn't have an account.
Linkedin is so terribly evil these days.
I also see the shenanigans of adding new 'privacy' settings and setting them open by default. Another typical Microsoft ploy by the way.
They were evil before.
Previously they’d take your LinkedIn password and try using that to log in to your email account to grab your contacts.
Wasn't this also how some services would connect e.g. your bank accounts? They'd ask for your credentials and log into your bank to scrape its contents.
And I kinda get it, some services external to your bank can help you manage your finances etc. But it's why banks should offer APIs where the user can set limited and timed access to these services. In Europe this is PSD2 (Revised Payment Services Directive).
This is a big thing, is there any evidence? Not implausible unfortunately…
https://en.wikipedia.org/wiki/LinkedIn#Use_of_e-mail_account...
This sounds absolutely insane.
This is how a load of emails were sent out from my Hotmail account to anyone I had ever contacted (including random websites) asking if I want to connect with them to Facebook. The onboarding seemed to imply it would just check to see if any of my contacts were already using facebook.
Doesn’t help against your cousin who shares your data.
Useless without limiting the kind of data I want to share per contact. iOS asks for relationships for example. You can set up your spouse, your kids, have your address or any address associated with contacts. If I want to restrict app access to contacts, I also want to restrict app access to specific contact details.
Interesting thing is that security practices mention that you should always grant the minimal set of permissions.
So given that Apple allowed “share all”, it means they did it by design and are changing it now only because of backlash.
I think it's not properly appreciated that Apple fully endorses all of this. For two reasons: (1) the provision of the output of billions of dollars of developer time to their users for no up front cost (made back via ads) is super valuable to their platform; and (2) they uniquely could stop this (at the price of devastating their app store), but choose not to.
In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
>I think it's not properly appreciated that Apple fully endorses all of this. [...] they uniquely could stop this (at the price of devastating their app store), but choose not to.
A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps. Just look at f-droid. Most (all?) of the apps there might be privacy respecting, but good luck getting any of the popular apps (eg. facebook, tiktok, google maps) on there.
>In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
What would make you think Apple's pro-privacy changes aren't "about stealing $10B a year or so from Facebook"? At least some people are willing to pay for more privacy, and pro-privacy changes hurt advertisers, so basically any pro-privacy change can be construed as "less about meaningful privacy and far more about stealing".
F-Droid will never have popular apps because it requires them to be open source. In fact F-Droid does the build for you, generating reproducible builds and avoiding the risk of adding trackers to the binary that aren't actually in the source code. With F-Droid the code you see is what you get.
> A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps.
40 years ago apps were sold on floppy disks. 30 years ago they were sold on CD-ROMs. 20 years ago, DVDs.
Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing. Apps being blocked or banned from stores for choosing to not respect your privacy is a good thing.
>Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing.
I'm not sure what you're trying to say. I specifically acknowledged the existence of f-droid as a "privacy respecting app store" in the quoted comment.
>Apps choosing to not respect your privacy, and being blocked or banned from stores, is a good thing.
"a good thing" doesn't mean much when most people haven't even heard of your app store, and are missing out on all the popular apps that people want. Idealism doesn't mean much when nobody is using it. Apple might not be the paragon of privacy, but they had a greater impact on user privacy than f-droid ever will. To reiterate OP's point: what's the point of having a perfectly private OS and app store, when there's no apps for it, and your normie friends/relatives are going to sell you out anyways by uploading their entire contact list and photos (both with you in it) to google and meta?
Then there wouldn't be any free ones.
> good luck getting any of the popular apps (eg. facebook, tiktok, google maps) on there
That makes sense, considering they’re not privacy respecting.
How about a no/limited internet setting? So many apps spy on you and they don’t need network at all to function.
Fully denying internet access for an app is actually in iOS and has been there for many years.
But it's only available in China.
https://tinyapps.org/blog/202209100700_ios_disable_wifi_per_...
Until the app's devs get wise to this and don't allow the app to function without network access. It could be as simple as a full-screen, non-closable message that says the app requires network access, with a button to the proper setting to correct the issue.
Such "go away" screens are in violation of Apple's AppStore rules. You cannot make a permission a condition of using the app, and stop the user from using it if they don't grant that permission. The app should gracefully do as much as it possibly can without the permission.
Try signing in in any Google app without allowing data sharing with Safari. It's not possible. They don't let you.
It's kind of weird that Apple introduced this big fat tracking consent popup, but they don't really do anything to actually prevent cross-app tracking...
This holds for every app and every permission? Because I'm quite sure I recently used an app that closed for not allowing a permission. May be misremembering..
5.1.1 (iv) Access: Apps must respect the user’s permission settings and not attempt to manipulate, trick, or force people to consent to unnecessary data access. For example, apps that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos. Where possible, provide alternative solutions for users who don’t grant consent. For example, if a user declines to share Location, offer the ability to manually enter an address.
https://developer.apple.com/app-store/review/guidelines/
This wording is actually a lot weaker than I remember it back when I wrote iOS apps. The developer also was not allowed to exit the app or close it against the user’s intent, however I can’t find that rule anymore.
I agree with these guidelines (though they could be improved), but I think some of this could be handled by the system implementation, too.
> For example, if a user declines to share Location, offer the ability to manually enter an address.
This is a reasonable ability, but I think that the operating system should handle it anyways. When it asks for permission for your location, in addition to "allow" and "deny", you can select "manually enter location" and "custom" (the "custom" option would allow the user to specify their own program for handling access to that specific permission (or to simulate error conditions such as no signal); possibly the setting menu can have an option for "show advanced options" before "custom" will be displayed, if you think it would otherwise make it too complicated).
> that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos
This is reasonable, that apps should not be allowed to require microphone access for such a thing.
However, sometimes a warning message makes sense but then to allow it anyways even if permission is not granted; e.g. for a video recording program, it might display a message about "Warning: microphone permission is not allowed for this app; if you proceed without enabling the microphone permission, the audio will not be recorded." Something similar would also apply if you denied camera permission but allowed microphone permission; in that case, only audio will be recorded. It might refuse to work if both permissions are denied, though.
Yeah, "unnecessary" is the word that may as well render the whole section moot unless it's actually properly enforced. If I can remember I'll test it today and see how it goes.
You can't do this, because some users are genuinely offline sometimes.
Yeah like the ChatGPT app that doesn't work without a Google account. I have Google play on my phone, just no account logged in. I do have Google play services like firebase push which many apps legitimately need. But ChatGPT just opens the login screen in the play store and exits itself.
I'm always wondering why these idiots force the creation of an account with their direct competitor. It's the only app I have that does this. But anyway I don't use their app for that reason, only use them a bit through API.
Grapheneos lets you pick this for apps before they even launch. You can revoke their network access, as well as define storage scopes for apps at a folder level, so if an app needs access to photos, you can define a folder, and that is the only folder it can scan for photos.
I used that when submitting parental leave at work. I didn't want to provide full access to all my photos and files for work, so all they got was a folder with a pic of a birth certificate.
iOS and Mac also let you do this, for photos, contacts and files.
Apple is also pushing developers toward using native picker components. That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
> That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
This is an interesting contrast with the earlier philosophy of phone OSes that the file system is confusing to users and they should never be allowed to see it.
They still (mostly) aren't.
From a user's perspective, photos aren't files. Music isn't files. Contacts aren't files. Apps aren't files. App data isn't files.
The only things that "walk like a file and quack like a file" are documents, downloads, contents of external storage, network drives and cloud drives, and some Airdrop transfers.
Yes, it's technically possible to use the files app to store photos, music etc, but if you do that, "you're holding it wrong."
I would love an iOS setting that blocks all network access for certain apps
GrapheneOS has that. It asks every time you install a new app whether it should have network permissions.
Android can do this
>Fortunately this is changing with iOS 18 with "limited contacts" sharing.
Its not. Apple still owns your stuff. There is no difference between Apple and other 3p retailers. Apple just wants more of your money.
>Its not. Apple still owns your stuff. There is no difference between Apple and other 3p retailers.
That could be taken to mean anywhere between "Apple controls the software on your iPhone, therefore they control your contacts" and "Apple gives out your data like the data brokers mentioned in the OP". The former wouldn't be surprising at all, and most people would be happy with, and the latter would be scandalous if proven. What specifically are you arguing for?
Why do you inherently trust Apple?
Remember, the big celebrity photo leak happened because of a vulnerability within Apple Software.
The "vulnerability" part doesn't seem to be substantiated. From wikipedia:
>The images were initially believed to have been obtained via a breach of Apple's cloud services suite iCloud,[1][2] or a security issue in the iCloud API which allowed them to make unlimited attempts at guessing victims' passwords.[3][4] Apple claimed in a press release that access was gained via spear phishing attacks.[5][6]
Regardless of their security practices, it's a stretch to equate getting hacked with knowingly making available data. Moreover you can opt out of icloud backup, unlike with whatever is happening with apps mentioned in the OP.
> (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
When I was at a medium-sized consumer-facing company whose name you’d recognize if you’re in the tech space (intentionally vague) we had some customers try this. They’d find product managers or directors on LinkedIn then start trying to contact them with phone numbers found on the internet, personal email addresses, or even doing things like finding photos their family members posted and complaining the comments.
We had to start warning them not to do it again, then following up with more drastic actions on the second violation. I remember several cases where we had to get corporate counsel involved right away and there was talk of getting law enforcement involved because some people thought implied threats would get them what they wanted.
So I can see why companies are quick to lock out customers who try these games.
I realize why this is bad. Full stop.
I wonder if it ever prompted a dive into exactly what happened to leave these customers thinking this was the most likely avenue for success? Hopefully in at least some cases their calls with CSRs were reviewed, and in the most optimistic cases additional training or policies were put into place to avoid the hopelessness that provokes such drastic actions.
That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
I'm going to guess that the answer would be "nope, didn't care." That Cirrus isn't going to pay for itself, friend...and you can't retire at 40 without breaking a few eggs.
I remember when Google was locking accounts because people had the audacity to issue a chargeback after spending hours trying to resolve Google not delivering a working, undamaged phone they'd paid well over half a grand for. Nobody at Google cared, but when the money (that Google never fucking deserved in the first place) was forcibly and legally taken back, the corporation acted with narcissistic rage...
> So I can see why companies are quick to lock out customers who try these games.
Most of the companies who customers try these "games" against are places like Google and Meta that literally do not provide a way for the average customer to reach a human. None.
Those have got it coming for them, the megacorps' stance on this is despicable and far worse than the customers directly reaching execs who could instantly change this but don't because it would cut into their $72 billion per year net profit.
This is a case where laws simply did not catch up to the digital era. In the brick and mortar era it was by definition possible to reach humans.
I get that your company was smaller and probably did allow for a way to reach a human but that's not generalizable.
Regarding the evolution of the law:
Long ago when Google tried to launch its very first phone somewhere in Europe I can distinctly remember that it was initially not allowed to because of some regulation that mandated a company selling telephones to have a customer service.
Can't remember if they eventually found a loophole or if the regulations were changed.
> but that's not generalizable.
You only referenced two companies...
EBay, Amazon, Walmart, CVS, etc.
Name a major company, then try to contact customer service and interact with an actual human.
Even if they do have a contact phone number, good luck navigating the mazes of voice prompts.
Amazon isn't actually so bad about this, but I couldn't tell you if their CSR chat bot is an actual person or mid-level AI by now.
My only connection to Amazon support has been for AWS.
Perhaps though this should be an example of good customer service where talking to a human is easy, and not lumped in with the likes of Google where its impossible.
Perhaps your experience with the online shop is different, but frankly they're in my "good" column, not my "bad" column.
AWS was, indeed, very easy to reach when i did an oopsie and spent over my desired limit.
Two companies so gigantic that, combined, they account for a great percentage of the "company interactions" the average Westerner has on a daily basis.
Anyway, I don't think it contradicts my point? Your company exist, mom and pops exist and there's a whole spectrum between them, so it's not generalizable.
> CashApp terminated my account for this shenanigans
Did you call to complain about the termination?
What's funny is that the exec I got on the phone was super supportive and helpful and was genuinely amused to hear from me and hear what was happening. He put me in touch with their "Executive Support Team" and it was after this that I guess someone realized they didn't like the route I had taken.
I feel somewhat vindicated after this announcement (though it does nothing to bring my account back):
https://www.engadget.com/cybersecurity/cfpb-fines-block-175m...
> Accessing any kind of customer service for Cash App was a challenge, too, according to the CFPB. Block included a customer service number on Cash App cards and in the app's Terms of Service, but calling it would it ultimately lead users to "a pre-recorded message directing consumers to contact customer support through the app."
Thanks for sharing this approach. I gotta say that I’d love to hear more about how/where you are buying this data, and the CashApp story too.
So your data is only as private as the least privacy-conscious person in your social circle.
>> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies and call them up on their cellphone.
I find it funny how easy it is to find scammy websites which promise to remove your data (right...), but how hard it is to find the actual marketplaces where people trade this data. It also makes you think about what other systems have similar asymmetric interfaces for the public and the ones in the know (yes, I know there are plenty).
As a result of sales drones getting hold of my number, I have to put my phone on silent and never pick up unless I recognize the number. Very unfortunate. What if there is an emergency with my kids?
If you're using iOS you can set certain contacts to bypass silent mode so that you still hear their notifications/calls. I know it doesn't help with unknown numbers, but just saying in case you're not aware. I'd be surprised if you can't do the same on Android.
Yes, thanks, I've configured that for kids and other loved ones. But I can't pick up anything else, even sales people from India manage to use a number that appears local (in The Netherlands for me), so I might miss a call from the kid's school.
I've just added the numbers of my kids school onto the list and it's been fine for me. I've never had them contact me from anything other than the schools number, but I'm in the UK and I would be very surprised if a teacher tried to call me from their mobile phone or something.
Oh wow, I knew this was a rampant problem in the US, but I didn't realise we had that at that scale here in the Netherlands as well. Hoping I can dodge the bullet a little longer...
You can. There's DND mode and Favorite contacts. I use auto-DND scheduling option too.
> find an exec and buy their details for pennies and call them up on their cellphone
There is a vendor for this very thing in relation to business and government position called “zoominfo”
And the combination of contacts is also unique enough to identify you, even though it changes over time. Add some fuzzy matching, take in another few bits of fingerprint like device type and country, and voila, no advertiser ID required (rough sketch below).
Ps smart idea to use it for that purpose. If I failed to get proper service I'd just review bomb the company everywhere and soon enough I'd get a call fixing my problem and asking to remove them :)
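A toy illustration of that contact-fingerprinting idea (the threshold, signals, and numbers are invented for the example, not how any particular broker actually does it):

```python
# Hypothetical sketch: a contact set plus a couple of coarse signals is often
# enough to link two records without any advertiser ID.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def same_user(profile_a: dict, profile_b: dict, threshold: float = 0.6) -> bool:
    # Contact lists drift over time, so require high-but-not-exact overlap,
    # then use cheap signals (device type, country) as tie-breakers.
    return (
        jaccard(profile_a["contacts"], profile_b["contacts"]) >= threshold
        and profile_a["device"] == profile_b["device"]
        and profile_a["country"] == profile_b["country"]
    )

old = {"contacts": {"+3161111", "+3162222", "+3163333"}, "device": "iPhone15,2", "country": "NL"}
new = {"contacts": {"+3161111", "+3162222", "+3164444"}, "device": "iPhone15,2", "country": "NL"}
print(same_user(old, new))  # True: enough overlap to link the two records
```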
Could you please tell me how to buy this stuff? This sounds like a great way to get customer service when normal channels fail.
> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies
The article author claims that you can't get this stuff for under $10k. Where do you find it for pennies?
Try cold outreach software like Apollo.io
As a test I downloaded it and got my wife’s full email and cell phone number easily from their free trial. And the full price would be on the order of pennies per contact.
Right now I use SignalHire most of the time
Assuming these marketplaces operate within the bounds of the law, would it break HN’s ToS to post them? I’d be interested in pursuing the same strategy.
Most online forums ban "doxing" even if it's theoretically legal.
Technically it's one step removed from doxxing, but I'll take your point.
More than one step unless you planned to post it publicly. "I need a phone number, where can I get it" is all good!
I stopped keeping a mobile number many years ago.
Phone is wifi only.
In particular, I do not use the contacts functionality built into the phone.
(This is /e/OS, which helps, but I'll be moving to Mobian as soon as it is viable.)
What do you do when the car repairman wants to call/text you when the car is finished? Or any other similar situations?
How do you do online banking?
I have done this as well. I once got a travel insurance claim rejected by some outsourced handler and found out who the CEO of the insurance startup was. I emailed him and magically it got resolved.
Actually this could prove very useful for a resistance movement. Take them down with their own medicine.
Yeah, I wonder if it might help to create a little newsletter for politicians and regulators. Send emails telling them exactly where they are, what apps they use, and so on. And send them the same information about their children.
They would make adjustments so that their details are protected, but the regular user is not.
Eh, California protects politicians from having their real estate holdings posted online by government, and afaik, most county recorders have decided it's easier to not let any of it be online than to figure out who is a politician and only restrict their information.
Of course, much of it is public information so businesses can go in person, get all the info and then list it.
I’m relatively sure they would bring the hammer down on the sender.
What hammer? It's legal, that's the problem.
To use a line sometimes attributed to Beria, “give me the man and I will give you the case against him”. By which I mean that I’m sure they will find some means of making you sorry.
Getting access to someone’s contact info legally doesn’t mean you can do whatever you want.
This would very easily be covered under harassment laws, which are a separate matter.
Doesn't mean you won't get hammered my friend.
I'm not familiar with these marketplaces. Could you name a few examples?
I'm using SignalHire most of the time recently
How do you do this? This is genius
It is possible to just not use a phone number.
I mostly connect through Signal. I do technically have a phone number that my close friends and family have, but its a random VoIP number that I usually change every year or so. Surprisingly no one has really cared, I send out a text that I got a new number and that's that.
How? Most of the services I use, from Walgreens to banks to retirement accounts, require a phone number either for 2FA or just to verify that you’re you when signing up. After changing my phone number this year and having to go through the rigamarole for each service, I decided never again.
I have a few services that require a phone number for 2FA, maybe 5 or 6?
I just change those when I get a new number, its usually just a matter of getting a text confirmation code from them to verify the new number.
I change passwords every year or two. That's really a pain, at this point its somewhere around 30 or so accounts I have to go through and update.
I've had limited luck feigning ignorance with a bank recently. "I don't know why I'm not getting a code" "No, I don't have another phone number" "I still can't log in to the web portal". They dropped the phone number requirement in favor to sending the OTP to email in the end, but it took way more effort than is reasonable. I tend to include a request to the CS person to pass along a request for TOTP/authenticator apps but given the request for a phone number is likely intentional I doubt the feedback is getting too far. In my naive mind, if enough people do the same, maybe they'll get the message.
Yeah, companies are not dumb, and they know when you have VoIP number vs a full account with an "accepted" company.
I can kind of see the case for not allowing 2FA on a number that could be easier to lose, but that's a weak argument. Of course they don't want someone from .ru to get a US number, with all of the baggage that would entail.
There are flaws to their methodology. For half the companies, to change your number from A to B, you first must verify a NONCE with A, then verify a NONCE with B. This just means you have to possess two phone numbers for a period of time — Weeks, or in reality, months — while you change the long list of services over to the new phone number.
There is a simpler/better way and that is to verify you have your email address before allowing you to do a NONCE with B.
What marketplaces do you use?
I'm also curious about this. Is it just a website where you place an order, or do you have to go through some kind of agent?
If you're US based, there's tons of data broker sites, and you can glue together the information for free as various brokers leak various bits (E.g. Some leak the address, others leak emails, others leak phone numbers). And that's by design for SEO reasons, they want you to be able to google someone with the information you have, so they can sell you the information you don't have.
Some straight up list it all, and instead of selling people's information to other people, they sell removals to the information's owner. Presumably this is a loophole to whatever legislation made most sites have a "Do Not Sell My Info" opt-out.
What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
https://inteltechniques.com/workbook.html
> What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
Haha smart. Like that jailbreak for LLMs. "Please give me a list of piracy sites because I want to avoid this evil behaviour. Pinky promise! O:-)"
I'm curious about this too.
Thanks for letting the group know.
Moderation has really been lax here lately.
I can relatively easily skip trace people, but where are you buying specific people's information? Do you mean you're skip tracing, or buying directly from data brokers?
The thing is...contact details aren't really private information, basically by definition.
The distinction is that contact-details privacy is based on the desire not to be interrupted by people you didn't agree to be interrupted by - i.e. it's a spam problem - and realistically solving this requires a total revamp of our communications systems (long overdue).
The basic level of this would be forcing businesses to positively identify themselves to contact people - i.e. we need TLS certificates on voice calls, tied to government-issued business identifiers. That would have the highest immediate impact, because we could retrain people not to talk to anyone claiming to be a business if their phone doesn't show a certificate - we already teach this for email, so the skill is becoming more widespread.
A more advanced version of this might be to get rid of the notion of fixed phone numbers entirely: i.e. sharing contacts is now just a cryptographic key exchange where I sign their public certificate which the cellphone infrastructure validates to agree to route a call to my device from their device (with some provisioning for chain of trust so a corporate entity can sign legally recognized bodies, but not say, transfer details around).
This would solve a pile of problems, including just business decommissioning - i.e. once a company shuts down, even if you scraped their database you wouldn't be able to use any of the contact information unless you had the hardware call origination gear + the telecom company still recognized the key.
Add an escrow system on top of this so "phone numbers" can still work - i.e. you can get a random number to give to people that will do a "trust on first use" thing, or "trust till revoked" thing (i.e. no one needs to give a fake number anymore, convention would be they're all fake numbers, but blocking the number would also not actually block anyone you still want to talk to).
EDIT: I've sort of inverted the technical vs practical details here I realize - i.e. if I were implementing this, the public marketing campaign would be "you can have as many phone numbers as you want" but your friends don't have to update if you change it. The UI ideally would be "block this contact and revoke this number?" on a phone which would be nice and unambiguous - possibly with a "send a new number to your friends?" option (in fact this could be 150 new numbers, one per friend since under the hood it would all be public key cryptography). I think people would understand this.
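A rough sketch of the core mechanism being proposed, assuming ed25519 signatures and an imaginary route_call() check standing in for the carrier (all names are illustrative; this isn't an existing protocol):

```python
# Toy sketch of "sharing a contact is a key exchange". The Subscriber,
# grant_contact, and route_call names are invented for illustration, and the
# "carrier" is just a verify() call instead of real telecom infrastructure.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519


def raw(public_key: ed25519.Ed25519PublicKey) -> bytes:
    return public_key.public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )


class Subscriber:
    def __init__(self, name: str):
        self.name = name
        self._key = ed25519.Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def grant_contact(self, other: "Subscriber") -> bytes:
        # "Giving out my number" = signing the other party's public key,
        # authorising the network to route their calls to me.
        return self._key.sign(raw(other.public_key))


def route_call(callee: Subscriber, caller: Subscriber, grant: bytes) -> bool:
    # The carrier completes the call only if the caller holds a grant signed
    # by the callee; blocking or decommissioning = stop honouring the key.
    try:
        callee.public_key.verify(grant, raw(caller.public_key))
        return True
    except InvalidSignature:
        return False


alice, bob, spammer = Subscriber("alice"), Subscriber("bob"), Subscriber("spammer")
grant_for_bob = alice.grant_contact(bob)
print(route_call(alice, bob, grant_for_bob))      # True: bob was granted access
print(route_call(alice, spammer, grant_for_bob))  # False: the grant names bob, not spammer
```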
Check out Simplex chat. I just read about it from this HN submission yesterday: https://news.ycombinator.com/item?id=42904966
What definition of contact details makes them not private?
Contact details (your phone number, email or address) are definitively private information, you should be the one that decides who gets them and who doesn't.
Literally explained in the second paragraph there.
You can't have private information which is meant to also be shared widely. It is the distinction between Access and Authorization.
But it's not meant to be shared widely, for most people it's meant to be shared with consideration and/or permission.
Also, it's not just about "a desire not be interrupted by people you didn't agree to be interrupted by", it's about not having the data in the first place, for any reason, including tracking of any sorts.
Pre-internet/cell phone nearly everyone had their name/phone number/address in phone books. Libraries had tons of phone books. And you could pay for the operator to find/connect you to people as well.
Contact info being private is a relatively recent concept.
Thread is discussing business contact details.
Give me your personal phone number
What kind of marketplace do you use?
I'm really happy to see this level of detail and research. So many privacy-related articles either wholly lack in technical skill, or hysterically cannot differentiate between different levels of privacy concerns and risks.
People commonly point to Mozilla's research regarding vehicle's privacy policies. (https://foundation.mozilla.org/en/blog/privacy-nightmare-on-...) But that research only states what the car company's lawyers felt they must include in their privacy policies. These policies imply (and I'm sure, correctly imply) that your conversations will be recorded when you're in the vehicle. But, they never drill down into the real technical details. For instance ..... are car companies recording you the whole time and streaming ALL of your audio from ALL of your driving? Are they just recording you at a random samples? Are they ONLY recording you when you're issuing voice commands, and the lawyers are simply hedging their bets regarding what sort of data _might_ come through accidentally during those instances? Once they record you, where is the data stored, and for how long? Is it sent to 3rd parties, etc? Which of these systems can be disabled, and via what means? Does disabling these systems disable any other functionality of the vehicle, or void its warranty? Lastly, does your insurance shoot up if you have a car without one of these systems? etc ...
The list of questions could go almost indefinitely, and presumably, would vary strongly across manufacturers. So much of the privacy news out there is nothing but scary and often not very substantiated worst case scenarios. Without the details and means to improve privacy, all these stories can do is spread cynicism. I'm really glad to see this level of discourse for the author.
Those aren't questions that have fixed answers. The data available is pretty far beyond what I'm personally comfortable with though.
One OEM I'm familiar with had such a policy. My org determined that we needed a statistical reference to compare against within a certain area. Some calls were made to the right people and shortly after we had a (mildly) anonymized map of high precision tracks for every vehicle of that brand within the area over some period.
That’s pretty interesting. What was the purpose of the statistical sample? What did your company want to know precisely?
I’m assuming insurance or commercial trucking? Or both?
I'll answer the, "Does disabling it void your warranty?" question. The answer is almost always "no". Unless the modification you make to something actually directly or indirectly caused damage to it, companies in the US cannot "void the warranty".
I'm sure the company will argue the warranty is voided akin to how trucks have "not liable for damage from rocks" or w/e (they are).
IIRC, this is under the Magnuson-Moss act but I didn't find it when skimming wikipedia.
https://en.wikipedia.org/wiki/Magnuson%E2%80%93Moss_Warranty...
This is usually side-stepped by being a violation of Terms of Service, which is a much lower legal hurdle.
The warranty is intact, but the device is bricked, because it can't bypass any of the authentication that is required to do... Pretty much everything.
> Lastly, does your insurance shoot up if you have a car without one of these systems?
This question I can answer with a reasonable degree of certainty; no, it does not.
Insurance companies increase rates for automobile coverage for many reasons, real or illusionary. But "does your insurance shoot up" strictly for not having a recording device in a vehicle is not one of them.
Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
But that is a different question.
> Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
> But that is a different question.
In what way? A discount for allowing surveillance is identical to an extra charge for disallowing it. They're identical, unless the "base" rate is set externally somehow.
$5 for lemonade, $3 off if you skip the lemon == $2 for sugar water, $3 extra to add lemon.
> A discount for allowing surveillance is identical to an extra charge for disallowing it.
I don't think this is necessarily true. You're right that there's an unknown base rate, but that means you can't say what you're saying as well. And if you have other companies that offer non-driving-pattern policies as well, and they're a similar price, you can see it's a discount not an added cost.
In fact, regardless, other companies are your best bet in combatting rising prices for any reason.
>> Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
>> But that is a different question.
> In what way? A discount for allowing surveillance is identical to an extra charge for disallowing it.
In this case, the discount is "opt-in."
> $5 for lemonade, $3 off if you skip the lemon == $2 for sugar water, $3 extra to add lemon.
I believe a better analogy is:
Defaults matter.
There are quite a few interesting tracking flows out there.
My rent is paid through a company called Bilt.
I discovered that when I shop at Walgreens now, Bilt sends me an email containing the full receipt of what I bought like so:
Ostensibly (hopefully) it would exclude sensitive items, plan B, condoms, etc...I'm curious how this data flows from Walgreens to my rent company, but maybe I'd rather not know and just use cash/certified check from now on.
This is called Level 3 data, and any merchant can choose to provide it for a reduction in the transaction fees they pay.
Here's a small comment thread from a few months back: https://news.ycombinator.com/item?id=41213632
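Roughly, the extra detail looks something like this (field names are made up for illustration; the real Level 2/3 specs vary by card network):

```python
# Hypothetical illustration of the difference between the summary a cardholder
# expects the network to see and the line-item ("Level 3") detail a merchant
# can pass along with a transaction.
summary_only = {
    "merchant": "Example Drugstore #1234",
    "amount": 23.47,
    "card_last4": "4242",
}

with_level_3_detail = {
    **summary_only,
    "line_items": [
        {"description": "TOOTHPASTE 120ML", "qty": 1, "unit_price": 3.99, "tax": 0.33},
        {"description": "ALLERGY TABLETS 30CT", "qty": 2, "unit_price": 8.49, "tax": 1.41},
        {"description": "GREETING CARD", "qty": 1, "unit_price": 0.85, "tax": 0.07},
    ],
}

for item in with_level_3_detail["line_items"]:
    print(f'{item["qty"]}x {item["description"]} @ {item["unit_price"]}')
```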
So in essence the merchant pays with my data?
In theory you’re already paying the merchant fee in the “price”. So merchant found a way to improve margins and credit card companies found a new revenue source
Yes, though people also welcome the extra cash back or other card benefits.
Apple Card does not sell this data, IIRC. But offers a lower cash back than many other cards.
True, while Google sees roughly 85% of all American cardholder swipes and doesn't need to sell it since they're making the ad market...
How on earth is this legal
Things that aren't explicitly made illegal are legal. Who would invest the resources necessary to get a law banning this passed?
Corporations are people, too.
Or, phrased in a less inflammatory manner: "Corporations can enter into contracts and engage in legal action just like people can". Even the much-maligned Citizens United v. FEC basically boils down to "groups of people (corporations or labor unions) don't lose First Amendment protections just because they decided to group up".
Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf, but the C-suite and board are able to speak on my behalf. So, the company's leadership has a right to free speech, I don't.
You still have that right; you simply entered into a voluntary agreement with your employer not to exercise it in exchange for money. Happens all the time.
Let's bring back indentured servitude, you have a right to not be a slave but you should still be able to enter into a voluntary agreement not to exercise that right.
>Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf,
Yeah, that's how organizations typically work? You might have "freedom of movement", but that doesn't mean you can work in your CEO's office. Organizations also limit who has access to its bank accounts, but that doesn't mean it's suddenly illegitimate for companies to engage in transactions.
This comment is quoting Mitt Romney
> while Google sees roughly 85% of all American cardholder swipe
I'm probably not reading this properly, can you say that a different way?
Google buys transaction data from credit card companies (Visa, Mastercard, etc). They almost certainly know what you spend money on
For every 20 Americans with a credit card, 17 have all their purchases sent to Google.
This is the real reason why they can afford to give you cash back.
It’s honestly crazy that we allow companies to sell our data — and even financially incentivize companies to share our data like this.
The problem is that to you it seems like your data but to Walgreens they see it as theirs. They generated it with their point of sale system.
The data is about a transaction that you made, but they generated all of it.
Until we have agreement as a society about what “my data” means, this kind of stuff is going to run rampant.
>what “my data” means
It makes me wonder, if everyone 'owned' their own data, I wonder if it could be used as a form of UBI. Everyone has data from using services, everyone owns it, everyone can sell it to make a living just doing whatever they are doing everyday.
This is only just a shower thought I had the other day though, there are probably many pitfalls when it comes to such an idea.
Like adverts in general the value of your data or your attention is tiny.
The average American spends $200 (via higher costs for products) for TV each year and receives how many hundreds of hours of adverts in return?
The superbowl for example gets $5 for every viewer, for about an hour of adverts. What’s the average hour of time worth?
Facebook might suck up your data and flog it for a few cents, you’ve probably got more cash down the back of the sofa.
Unlikely. I'd think the most valuable data is generally the type that can be used to extract money from you. Targeted ads and such. So, your data's value would increase in proportion with your spending power.
This idea is the subject of the 2013 book "Who Owns the Future?" By Jaron Lanier.
Connecting information to that kind of personal gains sounds dangerous. There is probably non-negligible abuse potential, like college kids legally printing money at weird scale.
IDK, I think almost all interesting data has no obvious single owner, because it gets created as a side effect of an interaction between two or more parties.
Take the transaction information from example above. The record of you buying products X, Y, Z for total t=x+y+z at time T, with card C - both you and the store could argue they're entitled to it. It's about you and money you spent and products you received, but it's also about them and the money they received and the products that were taken off their inventory. Then the card issuer will interject saying, "hey, the customer uses a card we provide as a service, so we're at least entitled to know which card was use to pay, to whom, when, an what the total amount was!". Then both yours and stores' banks will chime in, and behind them, also the POS terminal provider.
Truth is, they all have a point. We like to think that paying for groceries with our watch is like a medieval peasant paying for fruit with metal coins at a town market. It's not. Electronic payments always involve multiple steps handled automatically, in the background, by half a dozen service providers linked by their own contracts and with their own legal reporting requirements, and each of them really do need to know at least some details about the payment they're participating in.
A simpler example: this comment. It's obviously mine. It's also a response to you, and it only makes sense in context of the whole subthread. Should anyone reply to it, they'll gain a stake in it, too - and then, arguably, everyone following this discussion have a right to read it, now and in the future. After I hit the "Reply" button, I can't in good conscience claim this comment is mine and only mine. This is why I'm personally against the practice of unilaterally mass-deleting of comments on open discussion boards, like e.g. plenty of people do on Reddit, forever ruining useful discussions for the public.
(It's also why I like HN's approach to GDPR, which is, you can get your account disassociated from your comments, and you can request potentially identifying content be removed, but the site won't just mass-delete your comments automatically.)
Honestly the path to "UBI" is probably just socialized/subsidized basic needs.
Build masses of government housing, make a healthcare public option with sliding-scale costs, and you're 90% of the way there - food and decent low-end broadband are frankly already cheap enough for the government to cover with maybe some "Don't gouge Uncle Sam or else" clauses and that's about everything.
I don't support UBI, but that's a fascinating idea. Unfortunately the data is worth micropennies at the individual level, so it's only worth something in aggregate - like a class-action settlement where you end up with a cheque for $0.34 in damages, which makes it not even worth your time. It'd only be good as the backdrop for a science fiction novel, or as an experiment by a well-known YouTube creator to see how little money it would make. I would read the hell out of that book and watch that video tho!
>to you it seems like your data but to Walgreens they see it as theirs
the value of this data comes from what did I buy, what else do I buy, where am I, who I am, etc.
to your point, Walgreens does not sell to their competitor CVS data about what they sell, when, and where.
so if that really is their argument, it's refutable.
This is fairly easily answered through legislation like the GDPR which classes this data as personal data if it’s associated with an identified or identifiable person.
A legislative body writing something down doesn’t mean society has agreed to it.
If someone journals and writes down everyone they met with locations and dates, they will laugh you out of the room if you tell them they are violating GDPR.
This also leads to stupid shit like people not being sure if they can point a camera at their driveway to catch vehicle break-ins.
Finally, classifying something as “personal data” because it’s about me still doesn’t make it “my data”.
Health data in the US is strictly regulated, very personal, but is definitely not mine. I cannot remove things from it or prevent it from being shared between healthcare institutions.
You seem not to know much of anything about the laws regarding personal information in the US or Europe.
It’s amazing how little control we have over information that is the most personal essence of our lives.
Why do we have zero insight, no control. Nothing.
I hate it so much.
Thanks for the details.
> choose to provide it for a reduction in the transaction fees they pay.
That would explain why I can use my credit card for rent without a transaction fee! No free lunch!
Who is Level 3 data shared with, i.e. who is the aggregator? Is it the credit card bank that then aggregates and sells it?
Is there any documentation on this to read further? I.e. what the different levels contain and how much on average is the cost reduction for the merchant.
Here is implementation documentation from Mastercard about l3: https://na-gateway.mastercard.com/api/documentation/integrat...
The cost reduction is very small, it’s applied to interchange fees. I’ve been directly responsible for implementing this functionality on payment gateways for multiple processors because it helps reduce fraud holds as well.
Is this data requestable via a GDPR takeout?
searching for “mastercard level 3 data takeout” and such brings up the same 5 pages that are not relevant.
Separate question, what are your ethics around the surveillance of Americans' economic activities by private actors? What "rights" are relevant in this space and which do you subscribe to?
I'm not going to debate you about anything, I just don't get the chance to ask insiders any of these questions.
Do you think there are different ethical concerns when dealing with non Americans?
Also a great question.
My ethics are “this is unequivocally wrong without consent”.
Thankfully my work was on payment products that serviced businesses and government entities, so I did not really have to deal with that moral quandary.
However it gets muddier in other spaces as well. There are types of cards, like HSA/FSA that require something similar to level 3 data called IIAS that is used to determine what parts of your purchase are eligible. In the parts of the systems I have worked with, this is covered by HIPAA, but I have no idea if there are “clever” methods to sneak that data out of the chain elsewhere.
"Bilt Members can earn points on Walgreens purchases made using any card linked to their Bilt account."
https://support.biltrewards.com/hc/en-us/articles/2901187842...
There's that FSA/HSA benefit section at the bottom which explicitly states that Bilt receives item-level data:
https://www.biltrewards.com/terms/walgreens
That just sounds like a standard cross-merchant loyalty program? I don't think there are many examples in the US, but once you realize it's a loyalty program you really shouldn't be surprised that they're tracking your purchase history. That's basically the entire premise.
In Germany, the major cross-merchant loyalty program Payback gives you one or two rounds of extra consent choices about the tracking, and the type we see here is absolutely not mandatory for participating. It does of course let them give you more personalized and useful coupons, but one can participate while declining that permission.
> it's a loyalty program
calling something loyalty does not make it "loyalty" ..
So called loyalty programs should be illegal on multiple fronts,
- Privacy: There's obvious tracking of purchasing trends. This spirals into selling user data to everyone, which makes people increasingly easy to track.
- Customer-dependent pricing / price discrimination: This is awful for the economy. In Econ 101 you learn that businesses want to charge each customer as much as they are willing to pay, but this differentiated pricing is just getting their hands into everyone's pockets. Free-market principles rely on perfect knowledge, and every step taken to make pricing more opaque is an attack against the market's self-regulation.
Price discrimination is illegal even in Lobby-land, https://www.law.cornell.edu/uscode/text/15/13
Price discrimination is not a priori bad. A fixed price with enough margin to support the business may be too high for price-sensitive consumers. If you can charge more to less price-sensitive consumers, you can, at the margin, still make a little on the price-sensitive ones, and overall everyone is better off - more consumers are satisfied and their marginal willingness to consume a unit of the thing being sold is more equalized.
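To make that concrete with made-up numbers: say a unit costs $5 to provide, one customer values it at $10 and another at $6. A single price of $8 serves only the first customer: the seller makes $3, that customer gets $2 of surplus, the second customer gets nothing, $5 of surplus total. Prices of $9 and $6 respectively serve both: the seller makes $4 + $1 = $5, the first customer gets $1, $6 of surplus total, and one more person gets the thing.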
Yes, this is the reason why it's sort of illegal, but done anyways.
Honestly, beyond paying a reduced bus fare as a kid, I'm pretty sure I'm being scammed every time I experience price discrimination.
I feel it's easier to make it illegal and give reasonable credits to all consumers. I wouldn't discriminate in credits either; I'd rather have public transportation be free for all than claim to save money that society needs to spend anyway.
It doesn't help that lying about the price at any point just makes accounting harder, and creates space for wrong, uncompetitive pricing, or awful deals that would hurt business and society in the longer term anyway.
pricing is all made up to begin with though. you can't take the cost to make an item, add a reasonable amount of profit, and call that the "real" price. that's just not the reality of running a successful business. human psychology is far too complicated.
at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business. hopefully it's a big enough number that you get rich. but sometimes it's a fire sale and you just end up owing less money to your vendors.
> at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business.
The only requirement is to make up a single price for all the customers who are getting the same thing. It'll still be made up, and it'll account for business factors like risks, profits, etc.
I don't think everyone is better off; at best the "less price sensitive" are unaffected. But then you have to have some way of stopping arbitrage via the customers paying the lower price, through some sort of identity checks or restrictions. I think that's an unavoidable negative outcome, and it's not clear that it would always be outweighed by allowing more people to consume the product.
There are ways to adequately approximate that kind of price discrimination without detailed tracking though, like giving discounts to students, seniors, and people receiving various kinds of welfare benefit upon showing proof of status.
Yeah it isn’t as accurate as the privacy-invasive kind of tracking, since students and seniors can be wealthy and eligibility for welfare benefits doesn’t always consider assets or gifts from well-off family. But it’s accurate enough to give the economy most of the same benefit without the privacy downside.
I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
>I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
> Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
My understanding is that GDPR requires them to offer the option to decline the personalized targeting without being blocked from participation overall, and this is probably the same anywhere in the EU. But I don’t have personal experience with this in other EU countries and could be misunderstanding.
>As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
The "cake" in this case refers to the offers you had before GDPR came into effect and/or regulators started enforcing it. They might give opt-out people some token offers to appease regulators, but I doubt it'll be anywhere close to the offers they had before.
> They might give opt-out people some token offers to appease regulators
It’s not an opt-out situation. As per GDPR requirements, these programs have a specific opt-in prompt for personalized targeting, separate from the one which is for generally collecting and redeeming points as a member, and it’s not pre-chosen by default.
I think one can assume that many people will decline to opt in, especially in a culturally privacy-focused country like modern Germany and since not opting in is far more behaviorally common than explicitly opting out, but also that many others will knowingly consent in exchange for the benefits. So I think they would generally want to give decent offers to both categories of people, since the non-consent group is large enough to matter. Of course the personalized ones would be better, otherwise nobody would want to give that consent.
Myself, I’ve consented to some but not all of the personalized targeting and information sharing from the loyalty programs I participate in here, after reading the descriptions of the requested consents in detail and making a conscious choice. In at least one case I converted a no to a yes after thinking about it longer. It’s good to have that transparency and control, and not to have the legalese surreptitiously remove your right to sue the store should that become necessary as is common in the US (forced arbitration is generally illegal here in B2C agreements).
As for the rest of your most recent comment, I wouldn’t know; I didn’t ever live in Europe before the GDPR.
search term "green stamps" (edit)https://en.wikipedia.org/wiki/S&H_Green_Stamps
my grandmother collected green stamps from the grocery store, which she saved for food discounts.. I don't think that there was any customer ID involved at all..
honestly, describing pervasive tracking of purchases associated with government ID as "normal" is... it's a sickness, and parts of it are illegal now. It is not required or "normal" at all, from this view.
That's just the standard term for such programs https://en.wikipedia.org/wiki/Loyalty_program
It's the normal term, in that it has been normalized as such. But it is otherwise not accurate except in the barest, most monetaristically self-fulfilling-prophecy way.
I believe that's opt-in. At least it seemed to be when my landlord switched to Bilt.
There's a section of your Bilt profile that shows your other credit cards and whether you want them linked. It's pretty freaky to see them listed in the first place.
I definitely keep them off.
Bilt is ultimately a big points/reward program though, so you might get points for having them connected.
I still haven't figured out exactly what Bilt's business plan is, but the main part seems to be trying to get as much financial data on people as possible, and partnering with landlords to do so, and since it's how you pay your rent you can't unenroll completely. (Unless you maybe mail your landlord a paper check?)
It was opt-out for me. Or at least, I never gave informed consent for this data exchange to take place.
The landlord of course makes it _seem_ like you have no other modes of paying rent. The cashier’s check option is buried in the fine print.
Dark patterns all around IMO.
It was initially opt in for me, then they made it mandatory.
(Sure, I could pay by check, but consumer banking technology in the US already feels like it is lagging a decade behind other countries without voluntarily going further back. Paying by check every month would be quite inconvenient.)
I'd already decided to avoid bilt as much as possible, but reading this thread prompted me to try going a little further.
Looking through their privacy policy it talks about what California residents can do under CCPA: https://legal.biltrewards.com/policies
> Request to Know... The specific pieces of Personal Information we collected about you.
> You have the right to opt-out from having your Personal Information and Sensitive Personal Information sold to third parties. You also have the right to opt-out from having your Personal Information and Sensitive Personal Information shared with third parties for purposes of cross-contextual advertising
Might as well give this a go.
> just use cash/certified check from now on
You might want to look into how sophisticated and pervasive the facial recognition technology used by major retailers has become. Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
>Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
They can already track you through your phone and/or credit cards. Why bother setting up a massive facial recognition system for people paying with cash when they only account for 10% (or whatever) of overall shoppers, and have less disposable income than average?
I don’t know about the US but in the UK they did it ostensibly to catch shoplifters.
We have a major problem with “professional” thieves stealing because the big chains don’t want to pay cashiers anymore.
You see a screen with your face on it in places like Waitrose self service checkouts now. It’s their way of saying “we know who you are”.
Tracking cash purchases is just a side bonus for them.
idk why, but they do
Got a source on retailers actively doing this?
It's very well known that Target and Walgreens use facial recognition for shoplifting.
It's harder to prove any specific stores are using any specific surveillance product for marketing, but plenty of companies are offering it. Here's Samsung's take: https://web.archive.org/web/20230410052807/https://www.samsu...
That’s really interesting- thanks for the link!
Word of mouth: retailers in China have been using face recognition technologies to identify key customers so that they can be greeted by name and handed their favorite drink upon entering the premises.
The trouble with "word of mouth" is that you can't tell whether something is actually real, or vaporware that some account executive dreamed up to close a deal.
I agree, which is why I qualified it. I was working at a retailer, building its cloud systems, at the time. It was told to me by a colleague who claimed to have been told it by a peer from China at a conference.
https://www.facewatch.co.uk/
I meant more for marketing - definitely used lots for loss prevention.
Are you aware of cases where it is used for more that theft prevention/manual review of CCTV?
I'm not aware of any big retailers using facial data for targeting vouchers or anything similar.
Simple things like "did walk through the door with a child" would be pretty valuable data, yet as far as I know, nobody uses it.
Is there actual evidence of this, like anywhere?
Facial recognition on a small corpus of known faces (what everyone experiences on Facebook, their phones, etc) is an easy problem.
Walmart picking up a face walking into a store and matching it against 30 million possibilities is going to return so many false positive matches it’s going to be completely useless.
I'm assuming you're using your Bilt card when this happens. Your Bilt agreement stipulates how itemized transaction data gets shared (level 3 in payment terms, with level 2 being "enriched" with subtotals/tax and merchant information - which is what you typically see with your normal bank).
Card networks (Mastercard, VISA) have different fee structures that incentivize more detailed information like level 3 for lower processing fees for merchants - here's more details on levels https://na-gateway.mastercard.com/api/documentation/integrat...
https://support.biltrewards.com/hc/en-us/articles/5536526023...
Perhaps more interesting in your case is that if you had your card issued in or before 2022, it's likely with Evolve bank, which was breached - https://medium.com/@HackLaddy/when-your-bank-doxxes-you-9152...
What's most interesting to me about that is that they are willing to disclose that data to your email provider. Amazon, for example, is pretty cagey about what you've bought when sending emails, probably because they don't want Google to be able to use that information to target ads to you. (Not because they are Good and care about your privacy, but because they think they're going to beat Google at advertising. How's that going?)
So yeah, I don't get why they would do this. It gives their advertising competitors valuable data for free, and it pisses off customers by telling them that they're being tracked when they shop at Walgreens. Strange stuff.
Oh, here I thought it was because every time I want to remember info about an order, it forces me back to their platform, rather than simply searching my email like I do for every other item I've ever purchased.
(And no, I don't use gmail.)
What’s most strange to me is why this Bilt company would pay for that data feed and somehow think it provides some value to you. It’s obviously just a creepy way of saying "we know too much about you".
Loyalty cards are one avenue for data brokers to get your purchase history. Credit cards can also sell your purchase data. Currently the only safe-ish way to be anonymous is with cash. That may disappear with pervasive face recognition and cell phone tracking.
The best part of every post on Hacker News is the comments. I always learn a lot.
I think another big problem is pharmacies. The amount of data shared with health insurance companies must be huge.
Things like that are on my mind when HN rants about GDPR. Something like this would be wildly illegal where I live.
FWIW in Illinois, where I’ve experienced this, there is a bill https://www.ilga.gov/ftp/legislation/102/billstatus/HTML/102... that appears to be GDPR-esque or CCPA-esque. Seems to have little interest though.
Unfortunately the GDPR is largely toothless if a company without an EU presence chooses to ignore it.
I live in Ireland and my data is in the databases of several US data brokers. Those companies can't be forced to comply with the GDPR because they simply do not have an EU presence. You don't have to search far to find stories from people who made complaints to their local Data Protection office about such issues, only to be told there's nothing that can be done.
A common discussion these days is the threat of a foreign app (TikTok) being used by a hostile government to track and influence Americans.
From my non-American perspective, the same thing is happening here. I distrust non-EU software by default.
HN rants about it because it’s not a good solution. It identified a problem but caused an idiotic fallout (cookie banners) and failed to actually put in a framework to enforce that companies aren’t just lying.
I agree, but a small stick to beat them with is better than none.
I guess the best solution would be to use some proxy that intercepts these calls or feeds them fake data, as the OP in the article did.
> failed to actually put in a framework to enforce that companies aren’t just lying.
That's not true. I work at a European company and we were contacted by the agency to give a complete list of partners that we use, the reasons why each is justified, which routines we have for deleting old data, etc.
I guess in theory we could have lied and made up data, but only an idiot would risk lying to the government. Everyone at my company took it seriously and tried to provide data that was as accurate as possible. There were also several follow-up questions that had to be answered.
The mindset of lying to the government to "protect" your employer seems so far fetched. Why should an employee lie to the government? If it turns out that the company was in violation of GDPR the worst case scenario for the company is a fine. If the government finds out you are lying, the employee faces jail time. The trade-off is simply not worth it.
Maybe it's easier to lie to the government in some countries, but not in mine. The government agencies actually check and verify your claims.
The lie doesn’t have to be intentional. All it takes is a really simple accidental debug logging flag to collect what amounts to a GDPR violation.
The point is that no effort was made to implement a technical solution to protect privacy. So it’s upsettingly trivial to violate the GDPR unknowingly and any company that is even a little unscrupulous (of which there are hundreds) can easily ignore the law.
> The point is that no effort was made to implement a technical solution to protect privacy.
And you want the government to do that?
Why haven't the companies who at every turn shout about how privacy conscious they are done that already?
It's now been 8 years of GDPR. Why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser? Oh wait...
I’ve had to deal with Bilt [0]. In case you’re not aware, they have a “feature” called Instant Link that automatically pulls ALL of your personal and sensitive financial data from financial institutions, including your credit card accounts, balances, etc. They apparently do this via a partnership with a company called Method Financial [1].
It’s frankly the most intrusive thing I’ve ever encountered in any software I’ve ever used—I’m not sure how it’s even legal, but this is America where we have no real privacy rights.
Instead of giving you the option to opt in for them to get this level of access, they automatically enroll you into it when your account is created, pull your data, and then allow you to “opt out” afterward, which enables them to have access to your personal and sensitive financial data anyway. And since you literally must have an account with them if your building uses their services for rent payments, they’ve effectively rigged the system to force millions of folks to unknowingly give them access to their personal and sensitive financial data.
Anyway, in your Bilt privacy settings, there are some options you can disable (including Instant Link), and I recommend that you disable ALL of them, although given the dark practices of this company, I don’t even trust that those settings are actually honored.
Side note: Did you know about a company called Method Financial that somehow has real-time access to ALL of your personal and sensitive financial data? Did you know that this company you never heard of that has said access then sells that access to the highest bidder? Do you remember agreeing to any of that anywhere? Yeah, me neither (on all counts)…
[0]: https://www.biltrewards.com
[1]: https://methodfi.com
Thanks for the heads up. Luckily I can go back to analog with certified funds to pay rent. I suspect, without evidence, this is due to the relatively strong tenant protections in Chicago.
This literally just happened to me last week. I emailed them to ask them how to stop this:
Gah, I hate this service and will avoid renting in buildings that use it in the future.
Preparation H... they should practice some level of sensitivity with such information.
Hopefully exclude? By whom? At some point, somebody has to decide it was sensitive - by what standards? Does Bilt decide not to use it after they were already sold the data? Does the aggregator, after already being sold it by the harvesting seller? Does the harvesting app reduce the appeal of its data by deliberately excluding it? Does the harvesting app care to spend the money on doing that?
So paying by cash is the easiest way to generally avoid this?
Clearly you can decide not to use Bilt, but maybe you get caught out some other way (bank, ...) - too difficult to track the trackers.
That's what I do, but I assume some stores like Target also track you by Bluetooth, facial recognition, etc, and can correlate any past or future cash purchases if you use your credit card once for maybe a large innocuous purchase.
If you find the condoms overly sensitive you can try one of the "long lasting" versions.
Is that my personality or my looks? :-)
> Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
This is clearly adding entropy to de-anonymize users between apps, rather than to add specificity to ad bids.
It would be amazing if you could build and send fake profiles of this information to create fake browser fingerprints and help track the trackers. Similarly, creating a lot of random noise here may help hide the true signal, or at least make their job a lot harder.
Unfortunately fingerprinting prevention/resistance tactics become a readily identifiable signal unto themselves. I.e., the 'random noise' becomes fingerprintable if not widely utilized.
Everyone would need to be generating the same 'random noise' for any such tactics to be truly effective.
A sufficient number of people would need to, not everyone. And if I were the only one, then tracking companies wouldn't adjust for just me. Basically, if this were to catch on, ad trackers wouldn't adjust until there was enough traffic for it to work. Also, that doesn't negate the ability to use this to create fake credentials that aid in tracking ads back to their source.
They don't need to adjust.
Here's a real-life example: You show up alone at the airport with a full-face mask and gray coveralls. You are perfectly hidden. But you are the only such hidden person, and there is still old cam footage of you in the airport parking lot, putting on the clothes. The surveillance team can let you act anonymous all you want. They still know who you are, because your disguise IS the unique fingerprint.
Now the scenario you're shooting for here is:
10 people are now walking around the airport in full-face masks and gray coveralls. You think, "well now they DO NOT know if it's ME, or some terrorist, or some random other guy from HN!"
But really, they still have this super-specific fingerprint (there are still less than 1 person in a million with this disguise) and all they need is ONE identifying characteristic (you're taller than the other masked people, maybe) to know who's who.
They didn't need to adjust their system one bit.
It's kind of how people used to make fun of the CIA types and "undercover" operatives.
Look for the guy wearing a conspicuously plain leather jacket and baseball cap. "Why hello there average looking stranger I've never met. Psss, 'tis a fair day, but it'll be lovelier this evening.'" "Oh ... it's Murphy the spy you want."
Also, found out the CIA declassified a bunch of jokes several years back in searching to respond. [1] Most are already dead links on CIA.gov, yet there's a few remaining. Nother one on people commenting on the CIA. [2] "These types are swin- Ask in Langley if they work for the CIA. Every- Ask in Langley. They will tells one knows them." 'You, it's the big building behind.'
[1] https://nationalpost.com/news/the-cia-has-declassified-a-bun...
[2] https://www.cia.gov/readingroom/document/cia-rdp75-00149r000...
The garbage in the last sentence of this comment is due to the second link including incorrectly OCR'd text from an image of a newspaper using a two column layout. Both links are very amusing.
I think this is a slightly different case no? If the ad network is using a very high precision variable to soft-link anonymized accounts, then randomizing the values between apps should break that.
Your analogy applies more to things like trying to anonymize your traffic with Tor, where using such an anonymizer flags your IP as doing something weird vs other users. I’m not convinced simply fuzzing the values would be detectable, assuming you pick values that other real users could pick.
Swapping fingerprint details is different than your example since it happens immediately and out of view. You could change fingerprints very often/create a new set for every browser tab. Additionally, as I pointed out before, they won't adjust until there is enough usage and when there is enough usage then the random settings are hard to distinguish because it isn't 1 in 1m. I get that they will keep trying to track down things that make browsing specific, but that is what updates are for. We need to at least make it hard.
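To be concrete, here's a minimal sketch of the kind of per-context randomization I'm imagining (assuming a userscript/extension context where these navigator properties can be shadowed; the property names are real, but this is a simplified illustration, not a drop-in tool):

```typescript
// Illustrative sketch only: randomize a couple of commonly fingerprinted
// values per browsing context. Assumes a userscript/extension context where
// shadowing these navigator properties is allowed.
function randomPick<T>(options: T[]): T {
  return options[Math.floor(Math.random() * options.length)];
}

function spoofFingerprintSurface(): void {
  // Report a plausible but randomized device memory value (Chrome reports 0.25..8 GB).
  Object.defineProperty(navigator, "deviceMemory", {
    get: () => randomPick([2, 4, 8]),
  });
  // Report a plausible but randomized logical core count.
  Object.defineProperty(navigator, "hardwareConcurrency", {
    get: () => randomPick([4, 6, 8]),
  });
}

spoofFingerprintSurface();
```

Each value stays plausible on its own, which is the point: the goal is blending in with real devices, not standing out with obviously fake values.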
That's why it should be the browsers & OS's that enforce such privacy measures... it shouldn't be an option that my Grandma needs to enable...
Unfortunately the fox is building the hen-house. They 'should' build products that improve my experience but they have very little incentive to do that when they get paid so much for the data they can extract. What would actually do it is regulations similar to financial regulations. OS/browser companies shouldn't be allowed to do business with data brokers. Then they would have one primary customer, the consumer, and competition would focus on the correct outcome. But 'regulation' is an evil word so we aren't likely to see anything like that actually happen.
> adding entropy to de-anonymize users
_removing_ entropy, by adding more information bits
Technically, information is the bits you DON'T know. Once you know the bits, it isn't "information" in the Shannon sense, in that it takes no energy to reset a message if you know all the bits, but takes N units of energy for N unknown bits of information. (See: Feynman's Lectures on Computation)
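A back-of-the-envelope way to see it (my numbers, and assuming the tracked attributes are roughly independent):

```latex
% If attribute i contributes b_i bits, the combination contributes
b = \sum_i b_i \ \text{bits}, \qquad
\text{expected anonymity set} \approx \frac{N}{2^{b}}.
% With N = 10^7 users and b = 25 bits: 10^7 / 2^{25} \approx 0.3,
% i.e. the combination is typically unique.
```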
It's also useful for making ads more effective & manipulation overall. As long as you can connect the data you track & buy, you can use Thompson sampling. In fact, why would we think knowing the name of a person is anything but bad business?
Straight-up fingerprinting us without consent is pure insanity.
They’ve basically turned every phone into a tracking beacon
I'm sure there is a choir of "told you so"-singers somewhere.
I believe some apps actually have to automatically brighten your screen when displaying a QR code for scanning, and then reduce the brightness back to its previous setting when you move off the QR code. I believe the Whole Foods app does this for its first screen.
Surely that could be done without sending the brightness to some 3rd party.
Combine this with IP, timestamp, and some behavioral patterns, and you’ve got an extremely robust tracking mechanism that operates outside of explicit consent mechanisms.
Everything listed changes way too often to be useful for tracking. My guess is that it's for anti-fraud purposes. Someone setting up fake devices and/or device farms is likely to get similar values, which means they can be detected via ML or whatever.
> screen brightness, memory amount, current volume and if I'm wearing headphones
None of those are likely to change when you navigate from one website to another, with tracking/ads disabled, which is what they want to be able to track. Otherwise they'd just use their cookies.
One device visits a site where you sell ads. A minute later, an unknown device with identical battery, volume, headphone status, brightness, model number, browser version, and boot time (to the second) arrives on another site you run ads on. There's a pretty good chance they're related, because the odds of all those being the same, plus those two sites and the timing involved, are rather low: https://coveryourtracks.eff.org/
Plus it doesn't have to be perfect. It just has to be good enough in bulk to sell.
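A crude sketch of what that soft-linking could look like on the buy side (field names are mine, not taken from any real exchange payload):

```typescript
// Crude sketch: soft-link two bid requests by hashing the "incidental" device
// fields. Field names are illustrative, not from any real exchange.
import { createHash } from "crypto";

interface DeviceSnapshot {
  batteryLevel: number;  // e.g. 0.87
  volume: number;        // e.g. 0.35
  headphones: boolean;
  brightness: number;    // e.g. 0.60
  model: string;         // e.g. "iPhone14,5"
  osVersion: string;     // e.g. "17.4.1"
  bootTime: number;      // unix seconds
}

function softId(d: DeviceSnapshot): string {
  const key = [
    d.batteryLevel.toFixed(2), d.volume.toFixed(2), d.headphones,
    d.brightness.toFixed(2), d.model, d.osVersion, d.bootTime,
  ].join("|");
  return createHash("sha256").update(key).digest("hex");
}

// Two requests minutes apart with the same softId (plus a similar IP and
// timing) are very likely the same device, even with the ad ID zeroed out.
```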
> Advertising Tracking ID was actually set to 000000-0000... because I "Asked app not to track".
> I checked this by manually disabling and enabling tracking option for the Stack app and comparing requests in both cases.
> And that's the only difference between allowing and disallowing tracking
This is revealing! I'd wondered about Apple's curious wording; "Ask App Not to Track" leaves suspicious wriggle room: apps may not track by an ID, but they could easily 'fingerprint' users (given how much other data is sent), so even without a unique ID, enough data would be provided for them to know who you are 99% of the time.
Amended Dead Privacy Theory:
The Dead Internet Theory says most activity on the internet is by bots [0]. The Dead Privacy Theory says approximately all private data is not private; but rather is accessible on whim by any data scientist, SWE, analyst, or db admin with access to the database, and third parties.
[0] https://en.wikipedia.org/wiki/Dead_Internet_theory
Apple sets Advertising Tracking ID to 00000-0000 because it's the only technical control they have. However, apps are also supposed to respect the signal with regards to other methods of cross-site/app tracking and disable fingerprinting mechanisms.
See https://developer.apple.com/app-store/user-privacy-and-data-... for details
How do I dump a list of installed app identifiers to cross reference with the list of apps selling this data? [0]
0: https://docs.google.com/spreadsheets/d/1Ukgd0gIWd9gpV6bOx2pc...
>If it was LTE, I bet the lat/lon would be much more precise.
False. Apps don't have access to cellid information unless they also have location permissions, in which case they can just request your location directly.
>the free apps you install and use collect your precise location with timestamp [...]
This is alarmist and contradictory given that the author admits a few paragraphs up that the "location shared was not very precise". It might be possible for the app to request precise location via location services, but the app doesn't request such permissions (at least on android, you can't check for requested permissions on iOS without installing the app and running it), so such apps are most definitely limited to "not very precise" locations.
>At the same time, there is so much data in the requests that I'd expect ad exchanges to find some loophole ID that would allow cross-app tracking without the need for IDFA.
At least in theory they're not supposed to do that, but it'd be hard to enforce.
"If a user resets the Advertising Identifier, then You agree not to combine, correlate, link or otherwise associate, either directly or indirectly, the prior Advertising Identifier and any derived information with the reset Advertising Identifier. "
https://developer.apple.com/support/terms/apple-developer-pr...
Cell carriers will gladly sell that information to apps. You can make calls to them over the cellular network (even if Wi-Fi is active!) and they will hand it back to you. No location services permission is required to do this.
Eh. Zip code level location + timestamp is still pretty invasive, even if, pedantically, that’s not very precise.
We should compare whether there are differences in the data sent in countries with better data privacy laws.
"Precise" has a specific meaning for iOS Location Services and this ain't it. Presumably it's just doing IP geolocation which could be the same post code, or it could be the wrong city entirely. I'd expect it to be much worse on LTE than WiFi.
>Eh. Zip code level location + timestamp is still pretty invasive, even if, pedantically, that’s not very precise.
That's basically sent to multiple parties (ISPs, transit providers, CDNs, analytics/advertising/diagnostics/security vendors) every time you visit a website. If this counts as "invasive" to you, you shouldn't be connected to the internet at all, much less buying a tracking device (a smartphone) and installing random ad-supported apps on it.
I find it fascinating reading Hacker News: it's full of IT folk who build the software that enables and profits from the advertising and personal-information selling and tracking industry, and who are also the same people complaining the loudest about it. Unbelievable.
Probably because people like us have more visibility on the huge scope and consequences of this kind of privacy invasion. Most people don't actually see this with their own eyes. They probably know it's happening in the back of their heads but it's not 'real' to them. It's very real when you know you could technically run a report of all your users that also have grindr installed.
I'm sure most of us would prefer not to work somewhere that does it but we need to eat too.. And we have no input in this.
For example, recently I was given a presentation on a new IoT product at work. Immediately I asked why we're not supporting open-standards stuff like Matter as a protocol. And I was told that'll never fly with marketing because they want all the customers to have eyes on their app for their 'metrics' and upselling. I told them fine, but I'm definitely not using this crap myself. It was shrugged off; we are too few for them to care about. And it makes us very unpopular in the company too. So it's a risky thing to do that doesn't help anyway. The "don't fight them but join them and change from within" idea is a fallacy.
There’s no code of conduct or rule book that anyone should follow so ethics is determined at the individual level. That quickly turns to, either I build it for them or the next guy will. Resistance is futile type thing.
Most other types of engineering have published rules and standards and industry credentialing including ethics tied into it and loss of credentials for an ethics violation would be career ending in many cases.
Can you give a few examples where that helped?
(I can only think of straw-man examples. Does the private prison industry have problems getting architects, civil engineers, electrical engineers? Does the pharma industry have problems getting chemical engineers for manufacturing addictive painkillers?)
Yes, because everyone on Hackernews is identical and working on the exact same stuff. It's not like it's a few companies enabling this and each marketing department going like oooooh i want that.
We might not be the same. Every time someone asks for tracking anything I complain and question a lot. People hate me, but if there is no real use case for storing all information we can get I will veto as much as I can.
The IT folks working in the advertising industry are much more the "who cares, everyone has all our data already anyway".
That seems like trying to tie two separate groups together. Most people are not working on software remotely adjacent to this.
This is one of the many good reasons to avoid not just the Google app store but most apps in general.
Let it be known: having an app to do something which used to be doable by a website is, to me, a red flag. I refuse to install anything other than what I genuinely trust.
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary. Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
Screen brightness, boot time, memory, and network operator could probably fingerprint any device all by itself.
A lot of people have "autobrightness" on. I'd think brightness doesn't help much here
Automatic brightness probably helps honestly. It could help confirm whether someone is in fact in an area that has high levels of lighting around them (e.g., in a store, at a beach on a sunny day, etc.)
Every little piece of data that is gathered and used can help, even if it isn't immediately apparent.
Now I could be wrong on this, but I feel like advertisers don't need to know something is true about a user, they just need to be confident something is true about a user and that's where data points like screen brightness can be of help to them.
Kind of a joke, but it could be useful for determining if they should serve light-mode or dark-mode ads. But I suppose they could just detect if dark/light mode are enabled.
Why would you serve dark-mode ads? Clearly you would pick the brightest option all the time.
How much money is tied back to, or generated from, wifi AP SSID databases for geolocation ?
Because wow that would be simple to spoof and chaff and spam.
It's dinnertime here but if I had a few minutes I could make (my own house) appear indistinguishable from (Chase Center) from the perspective of SSID landscape.
It would cost nothing and is trivially easy. Even if they pair MAC addresses that's not a big hurdle. I'll bet relative signal strengths are not measured.
It might be a good flushing action[1].
[1] https://kozubik.com/items/FlushingAction/
Google's Geolocation Services used to charge $4 per 1000 requests to their Cellular/Wi-Fi geolocation service. Essentially you send a list of Wi-Fi MAC addresses and their associated RSSI values and get back a latitude and longitude with an accuracy metric. It was surprisingly good when GPS wasn't available (sub-50-meter accuracy).
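The request shape was roughly this, from memory (field names should be double-checked against the current Geolocation API docs):

```typescript
// Sketch from memory of the Geolocation API request/response shape;
// verify field names against current documentation before relying on it.
async function locateByWifi(apiKey: string): Promise<void> {
  const res = await fetch(
    `https://www.googleapis.com/geolocation/v1/geolocate?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        wifiAccessPoints: [
          { macAddress: "00:25:9c:cf:1c:ac", signalStrength: -43 },
          { macAddress: "00:25:9c:cf:1c:ad", signalStrength: -55 },
        ],
      }),
    },
  );
  const data = await res.json();
  console.log(data.location, data.accuracy); // { lat, lng } and accuracy in meters
}
```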
A long time ago I had the idea to create an 'accountability server'. The high level idea was for it to generate unique credentials so that you could track to the source who sold your info. There are some ways to do that now, but I wonder if it is time to start exploring that idea again. If you exposed it as a VPN/proxy+app that ran on a server in your home, so that you could collect your own data and provide unique credentials on account creation, then I wonder how much that combination could figure out. Since it could act as a man in the middle it potentially could annotate credential source and see the ads and potentially track them to source. "This male enhancement pill ad is linked to your tire purchase." There is a lot of hand waving here, but I wonder if something like this could be built. The first step to stopping things like this is showing people who did it to them.
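A minimal sketch of the credential-tagging half of that idea (everything here is hypothetical naming on my part, and it assumes you control a domain with catch-all addresses):

```typescript
// Hypothetical sketch: mint a unique email alias per service so that any
// later spam, breach dump, or resale can be traced back to whoever you
// gave that alias to. Assumes a catch-all (or plus-addressing) mail setup.
import { randomBytes } from "crypto";

interface AliasRecord {
  alias: string;
  service: string;
  createdAt: string;
}

const ledger: AliasRecord[] = [];

function mintAlias(service: string, domain: string): string {
  const tag = randomBytes(4).toString("hex");
  const alias = `${service}.${tag}@${domain}`;
  ledger.push({ alias, service, createdAt: new Date().toISOString() });
  return alias;
}

// mintAlias("walgreens", "mail.example.com")
//   -> e.g. "walgreens.9f3a1c2b@mail.example.com"
// If that alias later turns up elsewhere, you know who sold or leaked it.
```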
There's no question about who the players involved are.
https://developers.google.com/authorized-buyers/rtb/openrtb-...
Wouldn't this require access to bid side data? The OP mentions it's pretty easy to get, but any company using this to expose advertisers is going to get their access cut off pretty fast. As the saying goes, "snitches get stitches".
My thought here is that there is likely a lot of leaked data in the ads themselves; that is one of the reasons why you would need the VPN/proxy. Additionally you could (potentially) create fake browser-fingerprint credentials on the fly to feed sites and have the VPN/proxy track the ads that show up for those credentials. (Other credentials like email and the like could also be created by the app for you.) You don't see the bid data, but you may be able to control the tracking that spurs it, and you can see the results of it, so a setup like this could likely make some inferences.
I don't know this industry well and the tech here has long since eclipsed me, so I really don't know what is possible, but I imagine there are possibilities with this setup.
Imo, the real takeaway here is that ad-tech isn’t just tracking people — it’s that it's becoming a decentralised surveillance network where no single entity takes any responsibility. Even with "Ask App Not to Track," your IP, geolocation, and device fingerprint still end up being leaked! It shows that tracking isn’t a feature anymore — it’s the business model.
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary. Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
> I know the "right" answer - to help companies target their audience better! For example, if you're promoting a mobile app that is 1 GB of size, and the user only has 500 MB of space left - don't show him the ad, right?
Author jumps to the incorrect conclusion here. The answer is fingerprinting.
The mention of a "right" answer, quotes and all, makes it clear they understand this argument is a farce, doesn't it?
The “right” answer is not the one I end up with but rather the version sold by the vendor. this is what Apple and Google would say.
I think they are pretty clear if you read the documentation. Accessing the exact value of these always needs some privacy-related permission on iOS and Android.
Without those permissions, all you can get is an approximation.
Yeah my mind also immediately jumped to fingerprinting. Somewhat required for anti-fraud to some extent, but also obviously used for more than that.
Don't use mobile apps that could just be websites.
To extend that: don’t use websites without blockers either. If they are willing to track via app, why would we think they would not track via browser?
The browser has less access to your system, and usually only if you give a specific website permission to use these features. Mobile operating systems are slowly changing that though.
Have you looked at the latest JS standards?
(and if you haven't... check out the APIs available to the developers/owners of all the websites you browse: https://developer.mozilla.org/en-US/docs/Web/API )
What is checking the available web APIs supposed to show? The comment is correct: the browser can't access your location without explicit confirmation from the user, and the same applies to other web APIs. Or at least mention a bunch of them that you know don't require consent, instead of just linking MDN.
The more APIs available for JS to interact with, the more granular and detailed browser fingerprinting can be. For example, how your browser renders WebGL can differ depending on what graphics card (and drivers) you have. The resulting values can be read back and stored to create a detailed fingerprint of who you are -- this could potentially be done by Google Fonts or AdSense or any number of the countless ad and analytics frameworks loaded on basically all websites.
Good overview about how fingerprinting works: https://www.privacyaffairs.com/browser-fingerprinting/
Browse the source in the following directory to see a plethora of examples of how web APIs are used to fingerprint users -- and this is just one publicly-accessible library we can easily review the source code of (proprietary, obfuscated ones likely use additional methods): https://github.com/fingerprintjs/fingerprintjs/tree/master/s...
One example used in multiple places in the above repo is "matchMedia"[0] which was a Web API method added a while ago (well, many years ago) to give a programmatic result of whether a given CSS media query matches or not. This can be used to detect, for example, user preferences like whether the display is HDR-capable[1], or the Accessibility setting "reduce motion" is enabled[2].
[0] https://developer.mozilla.org/en-US/docs/Web/API/Window/matc...
[1] https://github.com/fingerprintjs/fingerprintjs/blob/master/s...
[2] https://github.com/fingerprintjs/fingerprintjs/blob/master/s...
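For a feel of it, a handful of matchMedia probes like these each leak a little (a simplified illustration in the spirit of the fingerprintjs sources above, not copied from them):

```typescript
// Simplified illustration of matchMedia-based fingerprint bits.
function mediaBits(): Record<string, boolean> {
  const probe = (q: string) => window.matchMedia(q).matches;
  return {
    darkMode: probe("(prefers-color-scheme: dark)"),
    reducedMotion: probe("(prefers-reduced-motion: reduce)"),
    hdr: probe("(dynamic-range: high)"),
    coarsePointer: probe("(pointer: coarse)"),
    invertedColors: probe("(inverted-colors: inverted)"),
  };
}
// Each boolean is roughly one bit; combined with canvas/WebGL rendering,
// fonts, and dozens of other signals, the result is close to unique.
```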
What is contained in the latest JS standard that lets you collect fine-grained information about your users without their consent? Web APIs that deal with sensitive data all require explicit user confirmation to be used.
The more access to your system a web page has, the better it can fingerprint you. All of those APIs aren't going to be opt-in.
At least on Android the browser is limited by the Android permission system, i.e. if you don't give the browser GPS permission it cannot pass that on to pages either. In addition, the browser will ask if you want to grant a site access to something like positioning data.
Furthermore, it is hard for a web page to run in background and receive user data.
How will that help? It’s even easier to track across websites than apps and websites can also get your ip address to track you.
The thing I grokked, and think is important from this article, is that private browsing doesn't end this information flow. It only marks the JSON data blob as "asked not to be identified or collated", and it's substantively an honour system. There are penalties (see the lawsuit against Google for misleading people about the fact that data was still collected), but the walls to breach here are low, given that non-PII can be cross-matched to confirm "who you are" in some sense.
There is no such thing as "private" browsing inside the factory installed browser, with factory installed DNS, and any kind of location data, or other cross-collating information along with your IP. The loss of privacy may be contextual and somewhat statistical, but it would be wrong to assume you weren't identified.
What it does do is let you see how bidding mechanisms in services like flights and hotels change their bids when a request comes from the same location as you but without the prior search cookie state. That's useful, I guess.
"find things at a different pricepoint" cookie monster mode?
"Private browsing" is just a polite request for companies to pretend they didn’t see you before
wow @apokryptein thanks for posting my article here... I'm shocked it's #1 rn. if anyone has any questions regarding the post - I'm here to answer & talk!
How does the MAID/IDFV get into the PII-ID databases?
It seems that part is completely missing. (Or I missed it.)
So for example can/do airlines sell it? Or telecom/utility companies, when you use their app?
I don’t know the answer to this, which is why I didn’t mention it in the post. However, I could speculate that these data-broker companies scrape leaked [hacked and stolen] data from various panels and then match records on their end. kinda OSINT for bad reasons.
Great article, thanks for taking the time to research and write it all up! Definitely learned some new things from it.
[emotions redacted]
Long ago there was the XPrivacy project for Android that allowed you to granularly set permissions for each app & system service and ensure they won't get the real private data. It's no longer alive these days, I guess. Can someone share their experience with the alternatives for the modern latest Android?
Punish Unity and Facebook even more.
I would like to bring attention to this project. They aim to function in an application firewall like manner and manage to block connections by category, classified by domain name. Android only though, and the 'full' version is available only on f-droid due to some anti-adblock-like Play store policy. https://trackercontrol.org/
There was a nice talk on the recent Chaos Communication Congress (from Chaos Computer Club):
https://media.ccc.de/v/38c3-databroker-files-wie-uns-apps-un...
(English audio available)
> was shocked by the amount and types of data sent with the bids to ad exchanges.
Can this data fall into the hands of "evil" state actors, and why is Congress OK with that and not with tiktok?
Are the police evil state actors? They purchase location data from private companies to target protesters[1]
[1]https://www.eff.org/deeplinks/2022/08/how-law-enforcement-ar...
Blaming China is easy and has bipartisan appeal while actual data privacy regulations are hard and do not.
Congress is pretty ok with whatever Trump says, we have always been at war/peace with TikTokia.
yes and China/make money.
Very interesting and disturbing research, definitely a wake-up call for me. Does anyone know of / can anyone recommend software that can block these sorts of requests from going through? I know of Pi-hole, which blocks ads, but does it also filter out these sorts of things?
You need to have a wifi only android phone, rooted, no google apps, and uninstall anything that talks to the internet. That includes analyzing network traffic, open ports, and so on.
I did this with a Kali Nethunter distro back in the day for "reasons", privacy not being one of them. This makes the phone very hard to use for regular things.
It seems still possible to avoid being tracked (protections, filters, degoogle, etc), and the business is not very interested in the minority willing to trade off functionality, practicality and ease for privacy. For how long?
I'm a very happy paying customer of NextDNS (https://nextdns.io) which blocks known adware and tracking hosts across all mobile and desktop platforms.
Which does absolutely nothing if your device or the app in question is permitted or otherwise not prevented from making DNS-over-HTTPS (or, less commonly because of its discrete port, DNS-over-TLS) queries.
Don't all the ad-blocking DNS providers also support DNS-over-HTTPS now as well? I use it with AdGuard Home, and I saw PiHole supports it as well.
I'm referring to devices and apps that are 'hard-coded' to query specific DoH servers/providers, therefore bypassing and regardless of any user-configured DNS server/s. And because DoH operates on outbound TCP/443, the lookups are indistinguishable from any other 'web' traffic.
Even some of the most popular desktop web browsers are configured to utilize DoH by default nowadays.
The most that a network administrator can do to prevent this is configure firewall IP blocklists of known DoH servers and NAT all outbound 53 (and 853) traffic to a desired resolver (like a local Pi-hole instance, for example).
> The most that a network administrator can do to prevent this is configure firewall IP blocklists of known DoH servers ...
A firewall (which must also host a resolver) can choose to block requests to IPs it hasn't resolved domain names for.
This is something I implemented for an Android firewall app I co-develop; it works nicely enough.
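Conceptually it's just a resolver-fed allowlist; a sketch of the idea (not how any particular firewall implements it):

```typescript
// Conceptual sketch: only allow outbound connections to IPs that our own
// resolver recently handed out, so hard-coded or DoH-resolved addresses
// get dropped by default.
const resolvedIps = new Map<string, number>(); // ip -> expiry (ms since epoch)

function onDnsAnswer(ip: string, ttlSeconds: number): void {
  resolvedIps.set(ip, Date.now() + ttlSeconds * 1000);
}

function allowOutbound(ip: string): boolean {
  const expiry = resolvedIps.get(ip);
  return expiry !== undefined && expiry > Date.now();
}
```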
what app?
https://github.com/celzero/rethink-app
Facebook hard-code IP addresses when their domains are blocked. I found this out while using NextDNS alongside that logging functionality that iPhones have. It’s insane the lengths that they go to.
It's not insane at all. It is the entirety of their business model, so it makes sense that they will do everything possible to keep that sweet surveillance cash flowing.
One more reason I don’t use Facebook and will never install their app on my phone.
> Facebook hard-code IP addresses when their domains are blocked
Sounds like an anti-censorship or a generic connectivity robustness feature [0]? WhatsApp and Instagram do this, too.
[0] https://news.ycombinator.com/item?id=41959945
Parts I found relevant:
Might as well just screen-grab the Task Manager equivalent and hand it to them. They have better, quicker data about my current RAM allocation and free storage space than I do. It hands them when the system booted, for an ad? The headphone, volume, brightness, and battery data was just a "what?" kind of headshake about invasiveness. Somebody would hand-wave that they need it (we want it, we want it). They obviously don't.
Edit: It's almost Remote Desktop, on an iPhone. Realtime (~1 Hz) RAM / ROM allocation. Not sure how many Apple users even know how to check their realtime RAM / ROM allocation. The free storage space especially is just asking for botnet downloads.
Edit: Right, and ... disabling tracking doesn't mean anything because numerous updates blatantly ignore the setting ("uc": "1", // User consent for tracking = True;) and it's just a flag while they still send your vendor specific customer identifier anyways.
Really interesting article and great investigation; just disturbing how much is sent from an effectively clean phone.
As a developer, I dislike that something like the headphone status could genuinely be useful for an app's functionality, while some other unscrupulous developer is just exfiltrating it! This is part of the reason I agree with Apple's stance against apps with sub-apps / "desktop-like" environments: the permission settings aren't fine-grained enough for them. There is a significant privacy downside to "superapps", and now Elon is pushing for the X everything app.
Yeah and if you ask for permission for every little thing then users are going to get bombarded even when it's needed for legit purposes. It's a difficult tradeoff to make, even if you want to do the right thing (and I'm not really sure that Apple and especially Google really do)
> The headphone, volume, brightness, and battery was just "what" kind of headshake about invasiveness. Somebody'd hand wave they need it (we want it, we want it). They obviously don't.
Well, why the ad industry wants it is clear: fingerprinting and segmentation. Someone consistently low on battery? Push them ads for power banks.
This is actually part of what I find so wrong about this entire idea.
With all this fine granularity, it seems like ads would be incredibly relevant. Specifically about what you need with something that might actually result in a click-through to purchase a product. Especially if they get real-time updates on my hard-drive status and battery state.
I don't remember the last time I got an ad that was actually relevant. Pretty sure the last ad that was even clicked on was one of those little windmills that swirls crazily, cause it seemed like it might make a cool lawn ornament. Turned out it was tiny. Years of online purchases, and they don't even suggest stuff I want.
It just seems like an excuse.
The Reddit app has no permissions on my phone, but the feed suggests communities based on my location nevertheless. I've been traveling for the last two months, and every city I've been to has been suggested.
Just check https://ipinfo.io/ to see how close your IP points to your location. For most targeted content the city is good enough. And honestly if I'm one of 1 million people it's ok.
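If you'd rather script that check than open the site, something like this works (assuming the requests package and ipinfo.io's JSON endpoint):

    # Quick check of how closely your public IP geolocates.
    import requests

    info = requests.get("https://ipinfo.io/json", timeout=10).json()
    print(info.get("city"), info.get("region"), info.get("loc"))  # loc is "lat,long"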
It’s just an IP look up. I travel a lot and you can tell where certain podcast episodes were downloaded based on dynamic ad insertion
This is why I am so against letting Web Browsers have access to so much device information. Every time, a web dev says they should be able to push notifications, or get battery information, or whatever, this is why they should be ignored.
The fact that your "do not track" preference makes almost no difference in practice? Depressing but not surprising...
I wonder: to what extent are purchased/brokered real-time app location data feeds used by various intelligence services to target missile strikes in war zones? In e.g. Ukraine/Russia.
https://taskandpurpose.com/news/russia-ukraine-cell-phones-t...
Not riding off ad geolocation but hijacking a UA app: https://www.crowdstrike.com/wp-content/brochures/FancyBearTr...
The leaked tools that NSA used, like XKEYSCORE, used publicly available data collection methods, including purchasing advertiser lists, to cross correlate all the data and form a profile. So anybody could do this stuff.
Anyone understand why an apparently accurate latitude/longitude showed up in one of those traces despite location services not being enabled for the app in question?
Phones send out probe requests to get a list of open wifis. If you have a static access point, with a known geo location, software can be running on that point to remember a mac address of the phone from a probe and store it. Thus enabling real time tracking.
I'm like 60% sure this is how they figured out who the bomber was in Austin, TX.
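For what it's worth, the passive probe-request logging described above is only a few lines - a rough sketch assuming the scapy package, a wireless card in monitor mode, and a made-up interface name (whether this explains the article's lat/long trace is a separate question):

    # Log WiFi probe requests (timestamp, sender MAC, probed SSID) seen by a
    # monitor-mode interface. An access point running this learns when a known
    # device is nearby, even if that device never connects.
    from datetime import datetime
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11ProbeReq, Dot11Elt

    def log_probe(pkt):
        if not pkt.haslayer(Dot11ProbeReq):
            return
        ssid_elt = pkt.getlayer(Dot11Elt, ID=0)           # element ID 0 = SSID
        ssid = ssid_elt.info.decode(errors="replace") if ssid_elt else ""
        print(datetime.now().isoformat(), pkt.addr2, ssid or "<broadcast probe>")

    # "wlan0mon" is a placeholder; this needs root and a card in monitor mode.
    sniff(iface="wlan0mon", prn=log_probe, store=False)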
Probably Mozilla location services (which I happily block) which does pretty accurate passive location tracking.
Thanks for asking. Came here to ask since I was curious about this too. I don't find any of the replies here convincing:
- List of open wifis: AFAIK, and in my experience, apps need special permissions to do anything at the wifi level. And yes, iOS location services use wifi info but it's disabled, that's the point;
- IP back to geo: then why not send the IP itself directly?
- Mozilla location services: same as above, why not send the info you send to Mozilla directly to the data harvester which can call Mozilla itself?
A previous request maybe mapped their IP back to the geo, and that data was used subsequently, maybe?
How many people only turn on GPS for maps? Aside from that all tracking methods are pretty inaccurate anyway.
> Moloco ads is a DSP network
A digital signal processing network? (Presumably demand-side platform, in this context.) A bit annoying to introduce an acronym without defining it. Great article otherwise.
The people who reported on the Gravy Analytics leak are 404 Media. They're an independent technical media group that has been reporting on stories I haven't seen elsewhere. They're pretty awesome. I've personally paid to subscribe. (I'm not affiliated with them, nor am I receiving comp to say this.)
Anyone working on this tech used for tracking people should feel bad.
I was just screwing around today with data like this. I was making it to have as an exhibit at an upcoming event at my university.
https://bsky.app/profile/balsa.info/post/3lh7z776lbk2w
You connect to a special WiFi SSID and it compares your traffic to known tracking/ad domains (Pi-hole lists mostly), and the "food" is the packets being sent to those servers.
It's crude and has a high false-positive rate, but it does have a chilling effect for me when exploring what data is going where.
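The matching step itself is simple - a minimal sketch, assuming a hosts-style blocklist like the Pi-hole ones and a stream of hostnames pulled from the captured DNS queries:

    # Flag contacted hostnames that appear on a hosts-style blocklist
    # (Pi-hole style lists are "0.0.0.0 some.tracker.example" per line).
    def load_blocklist(path: str) -> set[str]:
        domains = set()
        with open(path) as fh:
            for line in fh:
                line = line.split("#", 1)[0].strip()   # drop comments and blanks
                if not line:
                    continue
                domains.add(line.split()[-1].lower())  # last field is the domain
        return domains

    def is_tracker(hostname: str, blocklist: set[str]) -> bool:
        """Match the hostname and every parent domain against the list."""
        labels = hostname.lower().rstrip(".").split(".")
        return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

    blocked = load_blocklist("hosts.txt")              # hypothetical local copy of a list
    print(is_tracker("graph.facebook.com", blocked))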
Even just looking at the picture near the top (which is also repeated near the bottom): if you do not allow the app to track you, that only disables one of the items of information, not all of them. The document explains later that this is not made very clear to end users, and I agree it could be explained better. Perhaps "Allow app to track your activity..." could have an option to display a more elaborate description, explaining that it only affects the Advertising Tracking ID (and what that means) and has no effect on other methods of tracking.
And, looking further in the document, we can see there is more.
Some of them, such as the IP address and timestamp, are reasonable for programs that access the internet to see (although the user should be able to set up a proxy and/or adjust the clock in order to change these things; the server would still use its own timestamp anyways).
Available memory also makes sense to be readable (although ideally, the user should be allowed to limit the amount of memory available to specific programs, in order that there is enough memory remaining for other programs; the reported total memory should then include only the memory available to this program and not to all programs), and the same should be true of the number of CPU cores and the amount of available disk space.
Others probably should not normally be known by most programs (though some are useful for some kinds of programs), and even when they are, the operating system ought to allow users to reprogram what information is available and what filters, logging, etc. will be used.
The presence of wired headphones probably should not be accessible by software, and the redirection should be handled by hardware. Perhaps an exception makes sense if the settings need to be different, e.g. mono vs stereo, although even then, programs should only see those settings (and only if they have audio output), and the user should be allowed to override them due to preferences (e.g. some users might want mono even if connected to external speakers or headphones; on my computer sometimes only one speaker works and sometimes both, so it is useful to me to be able to switch to mono).
Furthermore, there is the consideration that if the advertisers/spies are stealing your power, network bandwidth, and quota in order to do these things, then that is theft.
There is more to it when a user disallows an app from tracking their activity. From Apple, and enforced in their review guidelines:
“The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”
> if the advertisers/spies are stealing your power and network bandwidth and quota in order to do these things
They're not. The app developer is.
From TFA:
> This is the worst thing about these data trades that happen constantly around the world - each small part of it is (or seems) legit. It's the bigger picture that makes them look ugly.
No it doesn't seem legit to me at all. Any of it.
NetGuard with DROP OUTBOUND policy once again proves helpful. The only app that shows ads that I have on my phone is a PDF scanner, and I don't allow it internet access.
GrapheneOS also allows you to toggle the Network permission of any app
I clicked the link at the beginning of your article, that led to the Google sheet with the list of apps. That list had 12,373 lines, not “over 2,000”, fyi. And while most of the apps looked like small time games that I have never downloaded and would probably not download, I saw included there “Microsoft Office 365”. Interesting.
Here's my list of all the apps I recognized in that Google Sheet: Meetup, Tinder, Crunchyroll, Zynga/Words With Friends, Microsoft Outlook (com.microsoft.office.outlook), The Weather Channel, Microsoft 365 Office (com.microsoft.office.officehubrow), Opera Mini browser beta (com.opera.mini.native.beta), BuzzFeed - Quizzes & News (com.buzzfeed.android), Tetris® Block Puzzle (com.playstudios.tetris4), Sonic the Hedgehog™ Classic (com.sega.sonic1px), Grindr, Flipboard: The Social Magazine, Flightradar24 | Flight Tracker, Bejeweled Blitz (com.ea.BejeweledBlitz_row), FarmVille 3 – Farm Animals, Plants vs Zombies™ 2 (com.ea.game.pvz2_row), SimCity BuildIt (913292932), Tetris® (1491074310), Opera Mini: Fast Web Browser (com.opera.mini.native), TuneIn Radio: Music & Sports (418987775), Yahoo Mail – Organized Email (com.yahoo.mobile.client.android.mail), Angry Birds 2 (com.rovio.baba), Skip-Bo (1538912205), CamScanner - PDF Scanner App (com.intsig.camscanner), Rakuten Viber Messenger (com.viber.voip), Candy Crush Saga (553834731).
Agreed, however there are duplicates in that list, plus the same app appears for both iOS and Android. If I'm not mistaken I did a simple unique count on it (or took the 2,000 number from the 404 Media post).
Minor nitpick, but aren't ads sold to the highest bidder?
I don't understand how this isn't considered an incredible national security issue, e.g., what stops an actor buying data for high value targets known to use certain apps, like the President or Prime Minister of a country?
This is a wonderful write up. The part that isn't clear to me is how they're getting the geolocation data if location services are turned off. Are they just going off geo-ip lookups? If you grant access to Bluetooth or finding devices on your local network, they can get more information to track your location. Absent that, how would they get better than geo-ip?
https://www.maxmind.com/en/home
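For reference, a server-side GeoIP lookup is roughly this - a sketch assuming MaxMind's geoip2 Python package and a local GeoLite2-City database file; accuracy is typically city/ZIP level, not street level:

    # Map a client IP to a rough location using a local GeoLite2 database.
    import geoip2.database
    from geoip2.errors import AddressNotFoundError

    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:   # path is a placeholder
        try:
            resp = reader.city("203.0.113.42")                     # placeholder client IP
            print(resp.country.iso_code, resp.city.name,
                  resp.location.latitude, resp.location.longitude)
        except AddressNotFoundError:
            print("IP not in database (private/reserved ranges won't resolve)")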
Is WiFi access point geolocation by SSID or MAC address? Do mobile OS's require additional permissions for apps to get either of these data points?
>Do mobile OS's require additional permissions for apps to get either of these data points?
Yes on both iOS and Android as far as I can remember.
Android for sure, since version 8 I'm certain but probably even 5 or 4.x (so 10+ years ago)
Always annoys me when I want to use a WiFi scanner to determine the range of an access point in different locations for example and it needs me to turn on location access first before it can get WiFi data. The open source app doesn't have an Internet connection so there's no way for it to send back data to the mothership even if it had an SSID database baked into the apk. For me, and traditionally, the location switch is to turn on or off energy-hungry GPS hardware, not gatekeep when I trust apps to collect my location. I can set those to "only while in use", deny their Internet access, or just not install them if I don't trust them with the location permission
On iOS at least you can set the MAC address to be different per access point or to rotate on the same access point or to use your normal MAC
iOS definitely requires Precise location permission allowed for the app to see your WiFi name / ssid and other WiFi around you
But all it takes is one app with that permission to tie you to all the others. And there are always apps that need your location at some point to provide useful data. At this point I’m not sure there’s any single app I trust.
> apps that need your location
And, triangulating client's public IP address will also give away location with decent precision: https://news.ycombinator.com/item?id=37507355
btw, we need a securephones.io [0] part 2 focusing on apps.
[0] cert has expired
I'm surprised people think they have any kind of privacy - especially when using free services. They are not free. You pay with whatever data can be extracted from your devices and behavior.
Also, there's a looong list of companies who know the location of your mobile device, starting from the cell phone tower operator to Apple/Google and many in between.
True. But even paid apps have access to these data and can collect them without our knowledge. A genius called Stallman proposed a solution decades ago: free software, a.k.a. open source software. But outside of the tech community, open source is not a known term. Maybe we should market it wherever possible, if we want true privacy and freedom.
The article does not explain how PII gets into the databases, nor the critical link, the IDFV... how do databases connect it to people's names/addresses?
It gets leaked if the app I use gets my name/address, like Uber, right?
I don't think there is any hope when it comes to our privacy and our data being sold for ads - none. Even if I'm somewhat off the grid or low on activity, the indirect way of targeting me still exists, through my family members, friends, and people associated with me. I surrender.
There is hope. The upcoming US State privacy laws are resulting in IP addresses having the last octet blanked out, and IDFA's zeroed out, at least for SSPs and DSPs. Companies such as Apple and Google will still have this info since they control the OS.
Whilst I trust that the author did in fact look at the data of each request eventually, the screenshot they provided of Charles could not have been of the exact requests they intercepted given Charles is indicating that those are not yet SSL proxied (except for the 2 GET requests).
EDIT: please ignore, author did it differently to what I expected.
I decrypted them all by installing Charles SSL cert on the iphone. This is why the requests seem not SSL proxied.
This technique doesn't work anymore on android because you can no longer add certificates to the system store and apps are free to choose to accept the user store CAs or not. That was changed in Android 7. For "security" they say. Security of Googles business model I'm sure.
My apologies, thank you for clarifying and thank you for the brilliant article. Have updated my comment.
Posting here from an anonymized account about Meta. Probably no one recalls that Meta stopped most of their background location services (remember Nearby Friends?) in the main application around 2021-2022[1]. It was just not worth a repeat NYT story with this much money spent on infra to collect locations.
But this is basically after they figured out how to do "good enough" location targeting using IP and a bunch of the info this guy talked about. You don't actually need a lat/long; a 1-mile-radius/city area is good enough to run ads, and they have ALL of that.
This was why Meta's revenue dropped so much after Apple's move: they could not fall back to collecting precise location. This is the last game in town. If you shut this down, Meta's precise targeting will suffer gravely and ads will become flaky.
One last thing. You may ask, who are the businesses that need precise lat/longs? They are like this one[2]. These businesses are like whack-a-mole. They saturate the app market, steal data, make money, shut down when someone yells, and a few months later come back again, rebranded as another app. They exist not just to collect data but to act as an arbiter of who gets eyeballs on IRL activities, to influence behavior at the top of the funnel (TOFU). In the Worst. Possible. Way.
[1] https://techcrunch.co/2022/05/09/facebook-to-shutter-its-nea... [2] https://www.joinpogo.com/
One thing people are discussing a lot here is privacy around contacts and sharing. Limiting access to contacts, completely or partially, is the wrong way to design such systems. There are two problems with this approach:
1. Having permission to read contacts is NOT a capability. Being able to run a function over them that by design cannot leak PII is infinitely more valuable, and is a capability.
2. Asking users to grant permission is broken by design: you are giving a very bad multiple choice to the user: `(a) Creepy? (b) Less creepy (c) Don't use the app`
Instead, if we only granted operation rights and hid the actual information, it would be so much better. We need a separation of data from function to empower apps to give better choices to users.
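A toy sketch of what that separation could look like - everything here is hypothetical, not a real OS API, and a real design would need more than salted hashes (e.g. private set intersection), since phone numbers are easy to brute-force:

    # Hypothetical OS-mediated "operation on contacts" instead of raw access:
    # the app supplies salted hashes of its own user base, and the OS answers
    # only "how many of your users are in this person's contacts" - it never
    # hands over names, numbers, or emails.
    import hashlib
    import hmac

    def _digest(value: str, salt: bytes) -> str:
        return hmac.new(salt, value.strip().lower().encode(), hashlib.sha256).hexdigest()

    def match_contacts(contacts: list[str], app_user_hashes: set[str], salt: bytes) -> int:
        """OS-side function: returns a count, never the contacts themselves."""
        return sum(1 for c in contacts if _digest(c, salt) in app_user_hashes)

    # The OS holds the address book; the app ships a salted-hash set of its users.
    address_book = ["+15550100", "+15550101", "friend@example.com"]
    salt = b"per-app-salt-issued-by-the-os"        # hypothetical, rotated per app
    app_users = {_digest("+15550101", salt)}
    print(match_contacts(address_book, app_users, salt))   # -> 1, and nothing else leaks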
Would be interesting to know how much data leaks on a new iPhone with some of the iOS privacy settings enabled and a handful of popular apps installed (WhatsApp, Instagram, Google Maps, Uber, etc).
And then if you use a commercial VPN with DNS ad-blocking enabled, how much more does this help?
Going by TFA, not much.
I read an interesting newspaper article about how the police confiscated a hired gun's iPhone and found that he ran a search on the city his victim lived in. It is these little digital breadcrumbs that makes life easy for the prosecution.
Seriously if you are going to do illegal things never ever buy a smartphone.
>the police confiscated a hired gun's iPhone and found that he ran a search on the city his victim lived in.
It's far more likely they just looked at his browser history rather than doing some skullduggery with databrokers or whatever.
It's a bit ironic that Privacy Badger extension finds 7 trackers on the article's page.
It looks like these all come from the Reddit embed in the middle of the article. Default uBlock Origin settings block 13 URLs (and more, due to Reddit's frequent pings), but disabling 3rd-party frames brings it down to 1 URL (since the original embed was blocked).
oops, here comes the price of a nice-looking Reddit embed. next time I’ll stick to a screenshot :) thanks!
I realize this feels like a pipe dream, like a million miles away from our branch of reality in 2025, but I really think the entire online surveillance advertising industry needs to be burnt to the ground and (maybe, partially) rebuilt. Many of the problems we see nowadays are rooted in the fact that data is being collected and used to (supposedly) profitable ends.
Sure, there may be the occasional honest actor in the industry, but they're so marginal and outcompeted by dishonest and shady ones that it really doesn't matter. IMHO the right move is to simply ban any collection that's not strictly necessary. Kind of like GDPR but without the "if the user agrees" exceptions.
Reminds me of a regulation about artificial stone (?) being banned in Australia, not because it's impossible to use safely but because the regulator concluded that the entire supply chain is unwilling to and disincentivized from using the material safely, so the best move at this point was to ban it outright.
Edit: found that article
https://news.ycombinator.com/item?id=38634213
By "location", the OP means country. So not exactly a GPS point to 6 decimals... This is the second time I've seen a post with this problem here.
There's also latitude, longitude, and IP.
But that latitude and longitude might still be the middle of some city or country and not an exact location. It's not really clear how close it is.
Another "funny" thing regarding Microsoft Xandr:
I sent them my UID2 and they still say they can't link it to an identity and don't have a match in their database.
Does Incogni work to erase your data and prevent these SSPs/DSPs from ever collecting it?
Related: has anyone else noticed the practice of using cheap commodity 'living room' appliances to get access to your data? A while ago I bought a ceiling light for my daughters' bedroom, from a brand unknown to me. It had a built-in speaker controlled via Bluetooth, and dozens of light patterns and colors it could emit via a ring of small LEDs. My daughter was ecstatic looking at the YouTube promo vid. It turned out that to use any of these features, you needed to install their app. Fine, okay, installing. Then the app demanded access to contacts and camera or it refused to connect to the ceiling light. Fine, okay, uninstalling the app and returning that crap.
Apple’s “privacy protections” are nothing more than marketing.
“Ask app not to track” is a wash and privacy theater at best. One of the reasons I still run ad blocking on _all_ websites and at the network layer. Sorry “content creators” but you need to get your revenue from elsewhere (ie, sponsored content).
Now I want a phone that scrambles all of this data on a per app (or phone) basis.
Malicious app wants this data? Sure you can have it. But you will get randomized values for every bit of information — resolution, lat/lon, brightness, battery level (user can set range of 90-100%), ….
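As a toy sketch of that scrambling idea (all hypothetical - a real implementation would have to live in the OS, not in an app):

    # Hypothetical per-app fingerprint scrambler: every "sensitive" read gets a
    # plausible random value, optionally constrained by user-set ranges.
    import random

    USER_RANGES = {"battery_pct": (90, 100)}        # user wants battery reported as 90-100%

    def scrambled_device_info(app_id: str) -> dict:
        # app_id could select per-app policies; unused in this toy example.
        rng = random.Random()                       # fresh randomness: no stable fingerprint
        lo, hi = USER_RANGES.get("battery_pct", (5, 100))
        return {
            "battery_pct": rng.randint(lo, hi),
            "brightness": round(rng.uniform(0.1, 1.0), 2),
            "free_disk_gb": rng.randint(1, 200),
            "resolution": rng.choice(["1170x2532", "1080x2340", "1290x2796"]),
            "lat": round(rng.uniform(-60.0, 70.0), 5),   # nonsense coordinates on purpose
            "lon": round(rng.uniform(-180.0, 180.0), 5),
        }

    print(scrambled_device_info("com.example.malicious"))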
That you can get an approximate address from an IP address is not very new; it follows from how IP address blocks are allocated and routed rather than from anything in the IP protocol itself.
I paid for PCAPdroid, a network monitoring app that uses Android's VPN service API to monitor every packet sent, and records which app made the request, to whom, when, and so on.
Among its paid features, you can block internet access per app, or block by country, IP, and host.
After browsing my network logs, it shocked me to see how much some apps were spying - apps I had absolutely no idea about.
Xiaomi Home? Yeah, I knew a Xiaomi app would be spyware. But Spotify, for instance - how could I have guessed it sends data every few hours to remote servers, including Facebook's?
Until I find a replacement for Spotify (most music streaming apps spy on their users, and I don't just mean learning what music you like), I can still block graph.facebook.com, tracking.eu.miui.com, Google's ads.gdoubleclick.net, and so on.
It's open source, but the firewall is a paid feature; I highly recommend it if you're on Android.
https://f-droid.org/fr/packages/com.emanuelef.remote_capture...
There is even the possibility to decrypt packets and analyze them, although it requires root. I did it on another phone and yeah, it's similar to what the author found. Every single bit of data: IP address, how long the phone has been on, the WiFi connections, when I unlocked the phone, and so on.
Every data taken individually is not important to me but this stream of little data constantly going God knows where is creepy as fuck.
If you have the equipment (e.g. a spare Linux computer and WiFi router) and the know-how, you can set up something like mitmproxy (which looks like it has a very similar feature set to the Android app, but likely requires more effort to set up) on your home network. That's what I did some weeks ago, and then did basically the same exercise you did (just for my whole network instead of just the phone), looking at what's going on. And yeah... it's not good.
Even if I trust some companies to be trustworthy, I can't possibly vet a gazillion entities getting telemetry requests, and not all of them can have their shit together, security, privacy or ethics-wise.
It made me ditch some Microsoft software, but overall escaping the spying feels like a lost battle, unless you go do spartan Richard Stallman-like computing (IIRC he has a pretty hardcore stance on the software he'll use).
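If anyone wants to try the same thing, a minimal mitmproxy addon that just logs which client talked to which host is enough to get started - a sketch assuming a recent mitmproxy version, run with something like "mitmdump -s host_log.py" in transparent mode:

    # host_log.py - minimal mitmproxy addon: log client -> host pairs once each.
    from mitmproxy import http

    class HostLog:
        def __init__(self):
            self.seen = set()

        def request(self, flow: http.HTTPFlow) -> None:
            client_ip = flow.client_conn.peername[0]
            pair = (client_ip, flow.request.pretty_host)
            if pair not in self.seen:
                self.seen.add(pair)
                print(f"{client_ip} -> {flow.request.pretty_host}")

    addons = [HostLog()]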
Well, I believe it's feasible.
Anyway, like most things it's a journey, not an on/off switch. First you become aware, then you make changes and the situation gets better; it doesn't have to be perfect to be better.
On my Android phone, I had to make a clear cut on which apps I could keep after seeing the logs. The apps from Google, Microsoft, and Amazon are all gone. Even Play Services and the Play Store, replaced with Aurora.
It cuts at least 2/3 of the network requests.
Then you have the case of individual apps that use the Facebook SDK or another advertiser's; there are often alternatives in the open source community, and when that's not the case there are always less privacy-invasive alternatives on the store.
For instance, my default Samsung weather app was sending lots and lots of data. The alternatives on F-Droid were not to my taste.
I eventually found out about Weawow; it's not open source but it doesn't require any weird permissions, has no ads, isn't constantly sending data in the background, and my logs say it only connects to weather.weawow.com.
I mean it's fine.
After spending weeks with the firewall, I was able to identify the spying apps and replace most of them. My network log is now pretty empty when I'm not using the phone.
This makes me wonder - is there not a ‘Little Snitch’ equivalent for iOS?
What does happen if I turn off location/gps? I guess that location has to be quite imprecise.
With GPS off, location can be triangulated from cell tower usage to within about 3/4 of a square mile (with smaller uncertainty in urban areas, where cell towers are closer together). I'd heard before that some data brokers do this, but in this article the writer mentions reverse DNS lookups on IP addresses, which they note are less precise (ZIP-code level).
It’s not that accurate. E911 location was a few miles off.
5G is far more precise as well
Stores have WiFi hotspots with very short range to track you that way while you're in the aisle.
Only if you don't turn WiFi off. To my understanding even the "soft off" option present in iOS stops the phone from beaconing, and just listens in order to collect data for building augmented location services. I don't know what the Androids do. These days both of them also offer randomized MAC address to curtail such tracking.
right, they were saying it's enough just to turn off location though.
tho tbh if i really cared, no phone/battery out/faraday cage is still the gold standard
Total BS. Do not give location permissions to untrusted apps. If an app insists on it, use the mock GPS feature on Android, which will spoof your location. Can we all please stop exaggerating the sloppiness of normies with these pretentious acts of being shocked after not being cautious about their privacy? Privacy is not the default; you have to put some effort into it.
Maybe privacy should be by default.
How is the MAID <> IDFA link constructed?
MAID = IDFA on ios + GAID on Android [https://www.start.io/glossary/mobile-ad-id-maid/]
I think he meant MAID <> PII
This write up is fantastic
Starting earlier this year I've been running mitmproxy a lot on my entire home network, and at times have it on for all traffic. I put up an old NAS and I'm abusing it as a mitmproxy box for my home.
There would be so much to write about what I've seen. I've thought of making a blog post. I use mitmproxy to check on sketchy apps and to learn in general.
The information sent out is fascinating. I knew extensive telemetry is pretty much the norm these days, but it's another thing to see it with your own eyes. The exercise has also made the typical "yes, we collect data/telemetry, but it's anonymized/secured/etc. and deleted after X days so no worries" sound very hollow; even if a company acts in good faith by its own rules, how am I supposed to trust the other 1000 companies who also do data collection? If someone hacked my mitmproxy itself and downloaded all the payloads it collected, they would probably know me better than I do.
Random examples off the top of my head from mitmproxy (when I say "chatty" I mean they talk a lot to a server somewhere):
I had the GitHub Copilot neovim plugin. I didn't realize how chatty it was until I did this (although I wasn't surprised either; obviously completions are sent out to a server, but it also has the usual telemetry + A/B test experiment stuff). I had wanted to ditch that service for a long time, so I finally did and went with a local setup, since open stuff has mostly caught up. Also, it's not actually open source, I think? I had no idea (I thought it would just be a simple wrapper calling some APIs, but: no PRs, no issues, and the code has blobs of .wasm and .node: https://github.com/github/copilot.vim)
Firefox telemetry, if it's turned on, is concerningly detailed to me. I think I might be completely identifiable from some of the payloads if someone decided to really take a go at analyzing what I send. Also, I find it funny that one of the JSON fields says "telemetry is off". Telemetry is actually on in the menu (I leave it on on purpose to see stuff like this); just in the JSON, for some reason, it says off. I'm not sure that telemetry is even meant to be non-identifiable in the first place, though.
Unity-made software (also mentioned in the article) sends out a Unity payload at start-up that looks similar to the one in the article, although I didn't take a deeper look myself.
The author mentioned the battery: I also noticed that a lot of mobile apps are interested in the battery level. I hadn't connected the pieces on why, but the article mentions Uber's 4% battery surcharge, and now it makes a bit more sense.
One app that has at least once been on HN at high scores starts sending out analytics before you've consented to any terms and conditions. One of the fields is your computer hostname (one of my computers had my real name in its hostname... it does not anymore). Usually web pages have "by downloading you accept terms and conditions", but this one only presented that text after you launch the app, before you get to the main portion. I never clicked it (still haven't), but I let the app mellow in the background so I could snoop on its behavior.
Video games: the ones I've seen mostly don't do anything too interesting. But I haven't tried any crappy mobile games, for example. One Unity game on the laptop, Bloons TD 6, sends out analytics at every menu click, and a finished game sends a summary; it's the "chattiest" game so far, although the data seems limited to what the game actually needs (it has an online aspect). The payloads had more detailed info on my game stats though; they should add those to the game UI ;)
Apple updates don't work through mitmproxy (won't trust the certificates). Neither do many mobile apps (none of the banking ones did, now I know what a mitm attack would look like to my bank app).
Some requests have a boatload of HTTP headers. I've thought of writing a mitmproxy module to make a top 10 list (a rough sketch is at the end of this comment). I think some Google services might be at the top of the ones I've seen. (I think Google has also developed new HTTP tech; is it so that they can more efficiently set even more cookies? ;)
I think anything Microsoft-tied may be chattiest programs overall on my laptop. But I haven't done stats or anything like that.
Aside from mitmproxy, I'm learning security/cryptography (managed to find real-world vulnerabilities, although frankly very boring ones so far...), Ghidra, some low-level seccomp() stuff, qemu user emulation - things of that nature, to get some skills in this space. Still need to learn: the legal side of things (ToSes like to say 'no reverse engineering') and how not to get into trouble if you reverse engineer something someone didn't like. I've not dared to report some things, or to poke some APIs or even mention them, because I don't know enough yet to cover my ass.
Modern computing privacy and security is a mess.
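That header top-10 module mentioned a few paragraphs up would be small, by the way - a rough sketch assuming mitmproxy's Python addon API, counting request header names across all flows and printing the winners on shutdown:

    # header_top10.py - mitmproxy addon sketch: tally request header names.
    from collections import Counter
    from mitmproxy import http

    class HeaderTally:
        def __init__(self):
            self.counts = Counter()

        def request(self, flow: http.HTTPFlow) -> None:
            self.counts.update(name.lower() for name in flow.request.headers.keys())

        def done(self) -> None:                     # runs when mitmproxy shuts down
            for name, n in self.counts.most_common(10):
                print(f"{n:8d}  {name}")

    addons = [HeaderTally()]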
I've worked a good part of my career at a DSP company (it would be in the box that says "Criteo" in the author's diagram). So I have some idea what companies in that space have as data.
This is wild.
What do you think a design would look like for the web and internet where users actually have privacy on it?
How were the apps able to collect geolocation even though location services were disabled?
Probably from IP address geolocation.
Indeed, the article notes that the lat/long wasn't "GPS exact", more like "postal code exact".
Doing that on the client side sounds pointless though, if you are already transmitting the ip.
I am using AdGuard DNS and its absolutely sickening how many ads and trackers it is blocking on a daily basis. Just on my smartphone alone.
most of the modern internet just seems to be the greatest inescapable surveillance system in all of human history.
They say the leaked list is 2000 apps, but the linked spreadsheet has over 12000...
And are these getting pulled from the store due to privacy violations? I'd assume Google would be aware at this point?
All these companies operate from within US jurisdiction; that should send the world a clear message.
Snowden in one of interviews talks about exactly this kind of tracking with Amazon example (ts 01:18:00) https://x.com/JohnStossel/status/1885382675810181612
Basically, all these companies - ad networks, data brokers, big tech - in the absence of basic privacy laws (not to be confused with the 4th Amendment, which binds federal and state government only and does not restrain companies) act in willful conspiracy with US government regulators, washing each other's hands like a monopoly. This data gets enriched and collated and sits perpetually on a permanent record.
https://en.wikipedia.org/wiki/Permanent_Record_(autobiograph...
So next time you talk about totalitarian regimes around the world, look in the mirror.
This is honestly a masterpiece.
great writeup. keep forensic faraday bags around.
No they don't, because:
I have location services disabled 99% of the time
And I often leave the house without my smartphone (when walking or bicycling), much to the dismay of my better half... I am sorry, I forget!
Shame on all of the people working on those systems. Legit companies trying to make the world better struggle to find competent people.
I think a big part of how these shady companies pull the brightest minds away from more clearly beneficial fields is that the flavors of ideology needed to motivate people to take less lucrative work have been stripped from "business". There are a lot fewer appeals to art, history, cultural stewardship, or empire-building in things like transportation, medicine, construction, etc. than there were in the past. Any flavor of "for the glory of God/the nation/the People/the Art" etc. is pretty effectively stripped out of American business, and I think that's the only kind of thing that would motivate someone who could make $250k in adware to make $100k in something else.
People are now very well-trained to look out for their own bottom line, and take jobs accordingly.
Doing things because they increase some non-monetary value has fallen out of fashion for sure. A colleague of mine recently shared, in a group social setting, a sense of disappointment that his daughter was studying to be a doctor. I was, as far as I could tell, the only one to note that there is practical utility to having doctors.
But the "be part of our mission" line was shown to be hollow over and over too. First and foremost, you as an employee are making the investors and the CEO rich. The mission is usually exploiting the employee, even when it's not exploiting the world. Employees have recognized the real social ethic (money over everything) and are just playing the same game. Which is sad.
Ideally the people who see these choices would make alternative choices that will leave their grandchildren better off in the world. It has taken only a generation for the "greed is good" mentality to drop us into this fetid soup.
I think the phrase you called out--"be a part of our mission"--that most corporations (and, mimicking them, government agencies) regurgitate is itself the approach to socialization that causes people to feel less inclined to work for any non-profit reason. "Part of OUR mission" redefines the company as the entity to be loyal to, rather than casting the company as part of society itself. You can't replace constructs that tend to inspire people to heroism and selflessness with a corporate avatar and expect the fabric of society to remain similarly motivated. It does make a set few people a hell of a lot of money in the short term, though.
Aren't doctors extraordinarily well-paid? Was this outside the US?
American business has never cared about any of that stuff. It was always lip service.
Also if you haven’t heard, the US government is currently a shit show right now run by Musk as far as employment
I believe there are somewhat fewer practicing freemasons in those places of late.
Ugh. Eye roll at the whole "make the world better". So few SV products remotely make the world better. The purpose of the vast majority is to make money at all costs. I also disagree with what most people claim is making the world better. All in all, I think social media is a net negative for the world, whatever good might be found in it. Every SV thing after that is just chasing the rocketship-to-the-moon dream.
I believe this was parodied multiple times in season 1 of Silicon Valley.
Direct link to the scene: https://www.youtube.com/watch?v=B8C5sjjhsso
That scene in the pilot with the way over enthusiastic founder on stage with Kid Rock was pretty much when I knew I would <3 that show
One of my favorite seasons of TV <3
> So few SV products remotely make the world better
Because devs have to eat in order to survive?
You don't need to work on adtech or work at Meta to survive. Don't pretend that it has nothing to do with the lavish salaries.
Right, because SV is epitomized for the struggling dev as much as Haight Ashbury is known for struggling artists.
Ah, yes, the profession that most struggles to make ends meet.
What companies?
I work in cancer research. Some companies are there just for greed as well but at least the end product could be helpful
I have the same question. It did not seem easy for me to find a job where we are at least not writing malware (according to my judgement). But it's far from making the world better :|
Related?
NSA Warns iPhone and Android Users–Disable Location Tracking https://news.ycombinator.com/item?id=42713536
Hackers Claim Breach of Location Data Giant, Threaten to Leak Data https://news.ycombinator.com/item?id=42627336
It's funny how in recent headlines the NSA and FBI have been saying to use secure messaging apps. Yet the FBI is infamous for claiming the need for back doors into these encrypted apps. What are we to make of the opposing views? Are they really being benevolent toward citizens, or do they no longer need the back door, or do they have a back door, whether intentional or not?
> What are we to think of the opposing views?
They're not homogenous organizations. Not sure about the FBI, but AFAIK the NSA has always been in an awkward spot of being split between defensive and offensive missions. It wouldn't be particularly surprising to have one arm going "you should all use encrypted messaging, it's the most secure" while the other arm is frantically trying to break or backdoor said encrypted messaging.
It seems that they have changed their minds about surveillance back doors, after some devastating attacks, where Chinese state actors (among others) used back doors created for compliance with warrants to get in.
But that was the pre-Trump NSA and FBI. Now the Chinese and the Russians just need to get some DOGE volunteer to give them whatever they want, since Elon now has root on all the government payment systems and is too undisciplined to do things in a secure way.
The world is changing fast and reasons for actions may be more complex and interesting than you assume.
Were they ever _not_ benevolent to US citizens as a whole, even if misguided? There may be last-ditch attempts to extend benevolence to US citizens as a takeover looms. If leaks from the Office of Personnel Management are to be believed, then right now US government is in the process of a soft coup, being dismantled along lines of political loyalty. I expect those working in intelligence and law enforcement who support democracy see the writing on the wall and will act sooner or later.
Reliable end to end encryption is an important tool for citizens of a nation that may need to organise in a hurry. We might see new Edward Snowden type revelations of programmes, naming key people or giving clear advice not to trust certain US based entities or services. Civil servants may act professionally as non-politically as they can, but in the end, if only to protect their jobs, they're going to come down on the side of democracy.
Logged in to upvote this
Maybe Steve Jobs was right all along. We don't need a smartphone with an App Store. Either first-party apps only, with everything else in the browser, or apps that use the browser engine.
Steve jobs embraced web apps as much as he could.[0]. But people weren't willing to give up desktop apps.
[0] https://web.archive.org/web/20070809174426/http://www.apple....
Websites tend to have more ads and tracking than native apps, and they're still getting your ip address.
What? [0]
[0] Steve Jobs introduces App Store - https://www.youtube.com/watch?v=eU3X6Fu5JiE
Probably referring to Jobs' initial stance that HTML5 is more than sufficient for "apps". Of course, he acquiesced and eventually added the App Store.
A while ago a co worker told me "why would you care about your privacy? all my data is already out there anyway and what can even be done with it anyway".
What would be the ideal response to such an absurd comment? At the time I found it hard to answer because she surprised me with that opinion.
Edit to note: the explanation should be compatible with a professional context. I don't want to scare my co workers or appear crazy/paranoid.
See https://www.reddit.com/r/linux/comments/1drrjtu/i_dont_have_....
My examples:
- You get an HIV diagnosis (or other terrible disease). Do you want everyone you meet to know?
- You feel depressed or burnt out. Should your employer know?
- You're financially in a bad place. Do you want your kids to know? Do you want your kids' friends to know?
- Do you share your salary with everyone?
- If someone's gay, should this be public information?
- Should your religion be public? Your political points of view?
Losing privacy makes you more vulnerable to economic exploitation (price discrimination, salary negotiation, insurance premiums, etc). Therefore protecting your privacy is a form of economic self-defense.
Just ask for their email password and see what they say. Usually though this comment is just them trying to change the subject because very few people know or care about any of this
There are a few examples I use when I hear such ignorant statements: 1. Not caring about privacy cuz you’ve got nothing to hide is like saying you don’t care about freedom of speech cuz you’ve got nothing to say. 2. If you don’t care about privacy, why don’t you poop with an open door, for everyone to observe?
The problem is, I could not formulate anything in this way in a professional setting. I want my co workers to understand because I feel a bit uneasy working with people who do not but I also don't want to scare them.
Because I don't want the rest of the house to smell?
A different argument that appeals to some is that you might not have anything to hide, but what about the people who do? For the greater good of society, whistleblowers are needed to expose malfeasance by the corrupt, and it's going to be much harder for any of them to come forward if their reward is literally exile to Russia. If you're in support of a slow slide into dystopia, go ahead and argue against all privacy. Whether a given situation rises to that level is a different but adjacent topic, but appealing to something people can believe in - such as not letting the rich and powerful get away with being utterly corrupt in their dealings - is a way to find common ground with some. Not everyone cares about that, but it's an additional argument for privacy.
Seriously, anyone who ever says they have nothing to hide, show them this story.
"A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband, resulting in the man's car being towed and impounded, the local outlet A News Cafe reported. Court records show the officer was fired, but he was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away."
California Law Enforcement Misused State Databases More Than 7,000 Times in 2023 https://www.eff.org/deeplinks/2025/01/california-police-misu...
neat details shared, might check out later.
Does this apply to Brave browser?