"You can only turn off this setting 3 times a year."
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
This is Microsoft. They have a proven record of turning these toggles back on automatically without your consent.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
Of course, the problem with having your data available for even a day or so, let's say because you didn't read your e-mails that day, is that your data will be trained on and used for M$'s purposes. They will have powerful server farms at the ready, holding your data at gunpoint, so that the moment they manage to fabricate fake consent they can process it before you even finish reading any late notification e-mail, if there is one.
Someone show me any case where big tech has successfully removed such data from an already trained model, or, being unable to do that with the black boxes they create, removed the whole black box because a few people complained about their data being in it. No one can, because it has not happened. Just as ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.
This is M$'s real intention here. Let's not fool ourselves.
A bug, or a dialog box that says "Windows has reviewed your photo settings and found possible issues. Press Accept now to reset settings to secure defaults"
This is how my parents get Binged a few times per year
This feels different though. Every time you turn it off and then on again, it has a substantial processing cost for MS. If MS "accidentally" turns it on and then doesn't allow you to turn it off, it raises the bar for them successfully defending these actions in court.
So to me it looks like MS is trying to prevent users from hammering MS's infrastructure with repeated, expensive full scans of their libraries. I would have worded it differently and said "you can only turn ON this setting 4 times a year". But maybe they do want to leave the door open to "accidentally" pushing a wrong setting to users.
As stated many times elsewhere here, if that were the case, it'd be an opt-in limit. Instead it's an opt-out limit from a company with a proven record of forcing users into an agreement against their will and requiring an opt-out (that often doesn't work) after the fact.
Nobody really believes the fiction about processing being heavy and that's why they limit opt outs.
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature like this that uses customers' personal data should be OFF at any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
On the other hand, your comment seems to be reaching really far to portray this as normal behavior.
Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
If it was that simple, there would be no practical reason to limit that scrub to three (and in such a confusion-inducing way). If I want to waste my time scrubbing, that should be up to me -- assuming it is indeed just scrubbing tagged data, because if anything should have been learned by now, it is that:
the worst possible reading of any given feature must be assumed, to the detriment of the user and the benefit of the company
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way, damn it.
No, but the scanning is happening on Microsoft servers, not locally, I am guessing.
So if you enable the feature, it sends your photos to MS to scan... If you turn it off, they delete that data, meaning if you turn it on again, they have to process the photos again. Every time you enable it, you are using server resources.
However, this should mean that they don't let you re-enable it after you turn it off 3 times, not that you can't turn it off if you have enabled it 3 times.
Where does it say turning it off deletes the data? It doesn't even say that turning it off stops them scanning your photos. The option is "do you want to see the AI tags". Google search history is the same: turning off or deleting history only affects your copy of the data.
Just because you can't personally think of a reason why the number shall be 3, and no more than 4, accepting that thou hast first counted 1 and 2, it doesn't mean that the reason is unthinkable.
I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.
This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.
Clearly, you personally can't think of a reason yourself based on that 'probably' alone.
<< I feel like you're way too emotionally invested
I think. You feel. I am not invested at all. I have... limited encounters with Windows these days. But it would be silly to simply dismiss it. Why? For the children, man. Think of the poor children who were not raised free from this silliness.
<< I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
I can respect that. What are those technical details? MS was a little light on the details.
"Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies. This helps you quickly and easily organize photos of friends and family. Only you can see your face groupings. If you share a photo or album with another individual, face groupings will not be shared.
Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall. Any data you provide is only used to help triage and improve the results of your account, no one else's.
While the feature is on, Microsoft uses this data to group faces in your photos. You can turn this feature off at any time through Settings. When you turn off this feature in your OneDrive settings, all facial grouping data will be permanently removed within 30 days. Microsoft will further protect you by deleting your data after a period of inactivity. See the Microsoft account activity policy for more information."
I turn all Co-Pilot things off and I've got all those AI/tagging settings off in OneDrive, but I'm not worried about the settings being disingenuous currently.
There's always a worry that some day, a company will change and then you're screwed, because they have all your data and they aren't who you thought they were anymore. That's always a risk. Just right now, I'm less worried about Microsoft in that way than I am with other companies.
In a way, being anti-government is GOOD, because overly relying on government is dangerous. The same applies to all these mega-platforms. At the same time, I know a lot of people who have lost a lot of data because they never had it backed up anywhere, and people who have the data but can't find anything because there's so much of it and none of it is organized. These are just actual real-world problems, and Microsoft legitimately sees that the technology is there now to solve them.
Then you would limit the number of times the feature can be turned on, not turned off. Turning it off uses fewer resources, while having it on potentially keeps using their resources. Also, I doubt they actually remove data that requires processing to obtain; I wouldn't expect them to delete it until they're actually required to do so, especially considering the metadata obtained is likely insignificant in size compared to the average image.
It's an illusion of choice. For over a decade now, companies have either spammed you with modals/notifications until you give up and agree to privacy-compromising settings, or "accidentally" turned these on and pretended the change happened by mistake or bug.
The language used is deceptive and comes with "not now" or "later" options and never a permanent "no". Any disagreement is followed by some form of "we'll ask you again later" message.
Companies are deliberately removing user's control over software by dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and then the setting goes permanently "on".
And considering all the AI push within Windows and other Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole exchange with an unnamed "Microsoft publicist", when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
other than as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.
It sounds like you have revoked their permission to tag (verb) the photos; why should this interfere with the tags (noun) the photos already have?
But really, I know nothing about the process. I was going to make an allegory about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse, but I realized that this is exactly the computing future these sorts of companies want, and my allegory is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
> Their disclaimer already suggests they don't train on your photos.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt it wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
> Why is Microsoft so eager to also be able to know this?
A database of pretty much all Western citizens' faces? That's a massive sales opportunity for all oppressive and wannabe-oppressive governments. Also, ads.
Actually, most users probably don't understand that this ridiculous policy is more effort to implement. They just blindly follow whatever MS prescribes and have long given up on making any sense of the digital world.
It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.
My guess is the number was arbitrary, and the limit exists because toggling triggers a mass scan of photos. Depending on whether they purge old data when it's turned off, toggling the switch could tell Microsoft's servers to re-scan every photo in your (possibly very large) library.
Odd choice and poor optics (just limit the number of times you can enable and add a warning screen) but I wouldn't assume this was intentionally evil bad faith.
I would be sceptical too, if I was still using Windows.
I’ve seen reports in the past that people found that syncing to the cloud was turned back on automatically after installing Windows updates.
I would not be surprised if Microsoft accidentally flipped the setting back on for people who opted out of AI photo scanning.
And so if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times in a year against your will and then you are stuck opted in to AI scanning for the rest of the year.
Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.
Yep. I have clients who operate under HIPAA rules who called me out of the blue wondering where their documents had gone. Microsoft left a cheery note on the desktop saying they had very helpfully uploaded ALL of their protected patient health data into an unauthorized cloud storage account, without prior warning, following a Windows 10 update.
When I used to work as a technician at a medical school circa 2008, updating OS versions was a huge deal that required months of preparations and lots of employee training to ensure things like this didn't happen.
Not trying to say that you could have prevented this; I would not be surprised if Windows 10 enterprise decided to "helpfully" turn on auto updates and updated itself with its fun new "features" on next computer restart.
3 is the smallest odd prime number. 3 is a HOLY number. It symbolizes divine perfection, completeness, and unity in many religions: the Holy Trinity in Christianity, the Trimurti in Hinduism, the Tao Te Ching in Taoism (and half a dozen others)
I'd rather guess that they picked 3 as a passive-aggressive attempt to provide a false pretense of choice, in "you can change it, but in the end it's gonna be our way" style, than think they're attributing some cultural significance to the number 3 behind this option. But that's still an interesting concept, though.
Honestly, I hated when they removed automatic photo tagging. It was handy as hell when uploading hundreds of pictures from a family event, which is about all I use it for.
Precisely. The logic could just as easily be "you can only turn this ON three times a year." You should be able to turn it off as many times as you want and no hidden counter should prevent you from doing so.
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
I agree with you but there's nothing astonishing about any of this unfortunately, it was bound to happen. Almost all of cautionary statements about AI abuse fall on deaf ears of HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.
The worst part about it was that all the people fretting about ridiculous threats, like the chatbot turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats.
Right. But then the AI firms did that deliberately, didn't they? Started the big philosophical argument to move the focus away from the things they were doing (epic misappropriation of intellectual property) and the very things their customers intended to do: fire huge numbers of staff on an international, multi-industry scale, replace them with AI, and replace already limited human accountability with simple disclaimers.
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
Yes, any non-E2EE cloud storage system has strict scanning for CSAM. And it's based on perceptual hashes, not AI (because AI systems can be tricked with normal-looking adversarial images pretty easily).
I built a similar photo ID system, not for this purpose or content, and the idea of platforms using perceptual hashes to potentially ruin people's lives is horrifying.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
I imagine you'd add more heuristics and various types of hashes? If the file is just sitting there, rarely accessed and unshared, or if the file only triggers on 2/10 hashes, it's probably a false alarm. If the file is on a public share, you can probably run an actual image comparison...
A lot of classic perceptual hash algorithms do "squinty" comparisons, where if an image kind of looks like one you've hashed against, you can get false positives.
I'd imagine outside of egregious abuse and truly unique images, you could squint at a legal image and say it looks very much like another illegal image, and get a false positive.
From what I'm reading about PhotoDNA, it's your standard phashing system from 15 years ago, which is terrifying.
But yes, you can add heuristics, but you will still get false positives.
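For anyone who hasn't implemented one: here is a minimal sketch of a classic difference hash (dHash) in Python, assuming Pillow is installed. The shrink-to-tiny-grayscale step is exactly where the "squinty" collisions come from; two unrelated images can easily land on nearby 64-bit hashes.

    from PIL import Image  # pip install Pillow

    def dhash(path, hash_size=8):
        # Shrink drastically and drop color: this is the lossy step that
        # makes merely "squinty-similar" images collide.
        img = Image.open(path).convert("L").resize(
            (hash_size + 1, hash_size), Image.LANCZOS)
        px = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = px[row * (hash_size + 1) + col]
                right = px[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Deployments flag anything under some Hamming-distance threshold;
    # with only 64 bits, unrelated photos can and do land that close.
    if hamming(dhash("a.jpg"), dhash("b.jpg")) <= 10:
        print("flagged as a match")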
I thought Apple’s approach was very promising. Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
> Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked
Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
> the conversation was dominated by uninformed outrage about things that weren’t happening.
The thing that wasn't happening yet was mission creep beyond the original targets. Because expanding beyond originally stated parameters is a thing that happens with far-reaching monitoring systems. Because it happens with the type of regularity that is typically limited to physics.
There were secondary concerns about how false positives would be handled. There were concerns about what the procedures were for any positive. Given governments' propensity to ruin lives now and ignore that harm (or craft a justification) later, the concerns seem valid.
That's what I recall the concerned voices were on about. To me, they didn't seem outraged.
> Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
Yes. Completely wrong. Not even close.
Why don’t you just go and read about it instead of guessing? Seriously, the point of my comment was that discussion with people who are just guessing is worthless.
> Why don't you just explain what you want people to know instead of making everyone else guess what you are thinking?
I’m not making people guess. I explained directly what I wanted people to know very, very plainly.
You are replying now as if the discussion we are having is whether it’s a good system or not. That is not the discussion we are having.
This is the point I was making:
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
The discussion is about the ignorance, not about the system itself. If you knew how it worked and disagreed with it, then I would completely support that. I’m not 100% convinced myself! But you don’t know how it works, you just assumed – and you got it very wrong. So did a lot of other people. And collectively, that drowned out any discussion of how it actually worked, because you were all mad about something imaginary.
You are perfectly capable of reading how it worked. You do not need me to waste a lot of time re-writing Apple’s materials on a complex system in this small text box on Hacker News so you can then post a one sentence shallow dismissal. There is no value in doing that at all, it just places an asymmetric burden on me to continue the conversation.
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage
I would not care if it worked 100% accurately. My outrage is informed by people like you who think it is OK in any form whatever.
They could have avoided the negative press by changing the requirement to be that you can’t re-enable the feature after switching it off 3 times per year.
It’s not hard to guess the problem: Steady state operation will only incur scanning costs for newly uploaded photos, but toggling the feature off and then on would trigger a rescan of every photo in the library. That’s a potentially very expensive operation.
If you’ve ever studied user behavior you’ve discovered situations where users toggle things on and off in attempts to fix some issue. Normally this doesn’t matter much, but when a toggle could potentially cost large amounts of compute you have to be more careful.
For the privacy sensitive user who only wants to opt out this shouldn’t matter. Turn the switch off, leave it off, and it’s not a problem. This is meant to address the users who try to turn it off and then back on every time they think it will fix something. It only takes one bad SEO spam advice article about “How to fix _____ problem with your photos” that suggests toggling the option to fix some problem to trigger a wave of people doing it for no reason.
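If the rescan cost really is the constraint, the natural server-side guard is a quota on the expensive direction (re-enabling), not on opting out. A minimal sketch of that logic; every name here is hypothetical, since Microsoft hasn't described their implementation:

    from datetime import datetime, timedelta

    MAX_ENABLES_PER_YEAR = 3  # hypothetical quota on the costly direction

    class FaceTaggingToggle:
        def __init__(self):
            self.enabled = False
            self.enable_times = []  # timestamps of (re-)enables

        def disable(self):
            # Opting out is cheap (stop scanning, schedule deletion),
            # so it is never rate-limited.
            self.enabled = False

        def enable(self):
            # Re-enabling is what triggers a full library rescan,
            # so the quota belongs here, not on disable().
            cutoff = datetime.utcnow() - timedelta(days=365)
            recent = [t for t in self.enable_times if t > cutoff]
            if len(recent) >= MAX_ENABLES_PER_YEAR:
                raise PermissionError("rescan quota exhausted for this year")
            self.enable_times = recent + [datetime.utcnow()]
            self.enabled = True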
I cancelled Facebook in part due to a tug-of-war over privacy defaults. They kept getting updated with some corporate pablum about how opting in benefited the user. It was just easier to permanently opt out via account deletion rather than keep toggling the options. I have no doubt Microsoft will do the same. I'm wiping my Windows partition and loading Steam OS or some variant and dual booting into some TBD Linux distro for development.
When I truly need Windows, I have an ARM VM in Parallels. Right now it gets used once a year at tax time.
Tell you what, Microsoft: turn it off, leave it off, remove it, fire the developers who made it, forget you ever had the idea. Bet that saved some processing power?
> It’s not hard to guess the problem: toggling the feature off and then on would trigger a rescan of every photo in the library.
That would be a wild way to implement this feature.
I mean it's Microsoft so I wouldn't be surprised if it was done in the dumbest way possible but god damn this would be such a dumb way to implement this feature.
This would be because of the legal requirement to purge (erase) all the previous scan data once a user opts out. So the only way to re-enable is to scan everything again — unless you have some clever way I’ve not thought of?
Encrypt the data and store the key on the user's device. If the user enables the feature, they transmit their key to you. If they disable the feature, you delete the key on your side.
In theory, you could store a private key on the device and cryptoshred the data on Microsoft’s servers when the setting is disabled (Microsoft deletes their copy of the key). Then, when the feature is re-enabled, upload the private key to Microsoft again.
As far as I know, most data protection laws accept cryptoshredding as long as the party with a deletion requirement actually destroys the key. For one thing, it’s hard to reconcile deletion requirements with immutable architectures and backups without a mechanism like this.
IANAL, but I think the key remaining in the user’s possession doesn’t matter as far as the company with a deletion requirement is concerned.
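For what it's worth, the scheme these two comments describe fits in a few lines. A toy sketch, assuming a Fernet symmetric key held by the client (the class and method names are illustrative, not anything Microsoft has documented):

    from cryptography.fernet import Fernet

    # Client side: the key lives on the user's device and is only
    # shared with the server while the feature is enabled.
    device_key = Fernet.generate_key()

    # Server side:
    class FaceIndexStore:
        def __init__(self):
            self.key = None         # server's copy of the key
            self.ciphertext = None  # encrypted face-grouping index

        def enable(self, key, index_bytes=None):
            # Opt in: the device uploads its key. If an encrypted index
            # already exists, no rescan is needed; otherwise store a
            # freshly built one.
            self.key = key
            if index_bytes is not None:
                self.ciphertext = Fernet(key).encrypt(index_bytes)

        def disable(self):
            # Opt out: forgetting the key cryptoshreds the index. The
            # ciphertext (including any backups of it) is unreadable
            # without the key, which only the device still holds.
            self.key = None

        def read_index(self):
            if self.key is None:
                raise PermissionError("feature off; index is shredded")
            return Fernet(self.key).decrypt(self.ciphertext)

    store = FaceIndexStore()
    store.enable(device_key, b'{"face_groups": []}')  # initial scan
    store.disable()            # opt out: server deletes its key copy
    store.enable(device_key)   # opt back in: no rescan required

Whether deleting only the key legally counts as deletion is the open question the comment above raises; the sketch only shows that the rescan-on-re-enable cost isn't technically forced.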
You do not have to, and should not, start deleting data immediately. We're not uncivilized here; we can schedule tasks.
If this were happening on device (lol), then you should do both the scanning and deleting operations at times of usually low activity. Just like how you schedule updates (though Microsoft seems to have forgotten how to do this). Otherwise, doing the operations at toggle time just slams the user's computer, which is a great way to get them to turn it off! We'd especially want the process to have high niceness and be able to pause itself so it doesn't hinder the user. Make sure they're connected to power, or at least above some battery threshold if on a laptop.
If you scan on device and upload, again, you should do this at times of low activity. But you also are not going to be deleting data right away, because it will be held across several servers. That migration takes time. There's a reason your Google Takeout can take a few hours, and why companies like Facebook say your data might still be recoverable for 90 days.
Deleting immediately also creates lots of problems. Let's say you enable, let it run for a while, then toggle back and forth like a madman. Does your toggling send a halt signal to the scanning operation? What does toggling on again do? Do you really think this is going to happen smoothly without things stepping on each other? You're setting yourself up for a situation where the program is both scanning and deleting at the same time. If this is implemented no better than most things I've seen from Microsoft, then this will certainly happen and you'll be in an infinite loop. All because you assumed there is no such thing as, or even the possibility of, an orphaned process. You just pray that these junior programmers with senior titles somehow know how to do parallelization...
In addition to the delay, you should be marking the images in a database to create a queue. Store the hash of the file as the ID and mark it appropriately. We are queuing our operations and we want fail-safes. You're scanning the entire fucking computer, so you don't want to do things haphazardly! Go ahead, take a "move fast and break things" approach, and watch your customers get a blue screen of death and wake up to borked hard drives.
> unless you have some clever way I’ve not thought of?
Seriously, just sit down and think about the problem before you start programming. The whiteboard or pen and paper are some of your most important weapons as a programmer. Your first solution will be shit, and that's okay. Your second and even third solutions might be shit too. But there's a reason you need depth. We haven't even gotten into any real depth here; our "solution" is just the surface level, and I'm certain the first go will be shit. But you'll figure more stuff out, find more problems, and fix them. I'm also certain others will present other ideas that can be used too. Yay, collaboration! It's all good unless you just pretend you're done and problems don't exist anymore. (Look ma! All the tests pass! We're bug free!) For Christ's sake, what are you getting a quarter-million-plus salary for?
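To make the queue-plus-hash idea concrete, here is a minimal sketch, assuming a content hash as the stable ID and a single worker, so a scan job and a delete job for the same file can never run at once (table and function names are made up for illustration):

    import hashlib
    import sqlite3

    db = sqlite3.connect("photo_jobs.db")
    db.execute("""CREATE TABLE IF NOT EXISTS jobs (
        file_hash TEXT PRIMARY KEY,   -- content hash as stable ID
        path      TEXT,
        action    TEXT,               -- 'scan' or 'delete_tags'
        state     TEXT DEFAULT 'queued')""")

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def request(path, action):
        # Toggling flips the queued action for the file instead of
        # spawning a second process, so scan and delete can never
        # step on each other, no matter how fast the user toggles.
        db.execute(
            "INSERT INTO jobs (file_hash, path, action) VALUES (?, ?, ?) "
            "ON CONFLICT(file_hash) DO UPDATE SET action=excluded.action, "
            "state='queued'",
            (file_hash(path), path, action))
        db.commit()

    def run_one():
        # Single worker: pull one queued job, do it, mark it done.
        row = db.execute(
            "SELECT file_hash, path, action FROM jobs "
            "WHERE state='queued' LIMIT 1").fetchone()
        if row is None:
            return False
        h, path, action = row
        # ... perform the scan or the tag deletion here, scheduled
        # for low-activity hours as argued above ...
        db.execute("UPDATE jobs SET state='done' WHERE file_hash=?", (h,))
        db.commit()
        return True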
This is such a norm in society now; PR tactics take priority over any notion of accountability, and most journalists and publishers act as stenographers, because challenging or even characterizing the PR line is treated as an unjustified attack and met with inflated claims of bias.
Just as linking to original documents, court filings, etc. should be a norm in news reporting, it should also be a norm to summarize PR responses (helpful, dismissive, evasive, or whatever) and link to the PR text itself, rather than treating it as valid body copy.
People need to treat PR like they do AIs. "You utterly failed to answer the question, try again and actually answer the question I asked this time." I'd love to see corporate representatives actually pressed to answer. "Did you actually do X, yes or no, if you dodge the question I'll present you as dodging the question and let people assume the worst."
It's worse, actually. These are repeated games, so the outcome of any current interaction affects the next one. Journalists can't be too hard on the people they cover or else they won't have the access to cover them in the future.
They take people for idiots. This can work a few times, but even someone who isn't the brightest will eventually put two and two together when they get screwed again and again and again.
The worst part of all this is that even respectable news organisations like the BBC publish so many articles that are just the company's PR response verbatim. Even worse when it's like:
- Victim says: hi, this thing is messed up and people need to know about it
- Company says: "bla bla bla" legal speak, we don't recognise an issue, "bla bla bla"
End of article. Instead, it should end with "this comment doesn't seem to reflect the situation", or otherwise point out what anybody with a brain can see: the two statements are not equal in evidence or in truth.
And not just advertising. If ICE asks Microsoft to identify accounts of people who have uploaded a photo of "Person X", do you think they're going to decline?
They'd probably do it happily even without a warrant.
I'd bet Microsoft is doing this more because of threats from USG than because of advertising revenue.
> They'd probably do it happily even without a warrant
I'm old enough to remember when companies were tripping over themselves after 9/11 trying to give the government anything they could to help them keep an eye on Americans. They eventually learned to monetize this, and now we have the surveillance economy.
> and follow Microsoft's compliance with General Data Protection Regulation
Not in a million years. See you in court. As so often, just because a press statement says something doesn't mean it's true; it may only be intended to defuse public perception.
Truly bizarre. I'm so glad I detached from Windows a few years back, and now when I have to use it or another MS product (eg an Xbox) it's such an unpleasant experience, like notification hell with access control checks to read the notifications.
The sad thing is that they've made it this way, as opposed to Windows being inherently deficient; it used to be a great blend of GUI convenience with ready access to advanced functionality for those who wanted it, whereas MacOS used to hide technical things from a user a bit too much and Linux desktop environments felt primitive. Nowadays MS seems to think of its users as if they were employees or livestock rather than customers.
Meta just lost a court case against Bits of Freedom in the Netherlands, because their Instagram setting to turn off the attention-grabbing feed would reset every month or so. The court ruled that this infringed on the user's freedom.
They are like a shitty Midas: everything they touch turns into a pile of crap. However, people still buy their products. They think the turd is tasty, because billions of flies can't be wrong...
Meanwhile Apple is applying a different set of toxic patterns: lack of interoperability with other OSes, their apps try to store data mainly on iCloud, the iPhone has no headphone jack, etc.
Insider here, in M365 though not OneDrive. It did change, but not because of Satya; because of rules and legislation and bad press. Privacy and security are taken very seriously (at least by people who care to follow internal rules), not because "we're nice", but because:
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable
- it's bad press when we get caught doing garbage like that. And bad press is bad for business
In my org, doing anything with customers' data that isn't directly bringing them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to Google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal of taking customers' money and exploiting their data is too big. That's the kind of shit that will get me to leave.
> Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to Google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads.
How long has MS been putting ads in the start menu?
Sure, but "there has been a good period of time where..." is a statement that the situation introduced by "where" continues into the present. And that doesn't seem to be compatible with the facts.
> EU governments keep auditing us, so we gotta stay on our toes, do things by the book
Erm, dude ....
IANAL, and I am sure most people do not need to be lawyers to figure out that not allowing people to permanently opt out of photo scanning is almost certainly going to be in contravention of every EU law in the book.
I hope the EU take Microsoft to the cleaners over this one.
Do we even think that was real? I think social media has been astroturfed for a long time now. If enough people make those claims, it starts to feel true even without evidence to support it.
Did they ever open source anything that really made you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
I don't understand how this is "losing their mind". Toggling this setting is expensive on the backend: opting in means "go and rescan all the photos"; opting out means "delete all the scanned information for this user". As a user, just make up your mind and set the setting. They let you opt in, they let you opt out, they just don't want to let you trigger tons of work every minute.
This week I have received numerous reminders from Microsoft to renew my Skype credit...
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The Skype acquisition was $8.5B in 2011 (that's $12.24B in today's money).
With each passing day since I switched from Windows to Linux at home, with decreasing friction, I am increasingly happy that I took the time to learn Linux and stuck with it. This is not a come-to-Linux call, because I know it is easier said than done for most non-technical folks. But it is testimony that if you do, the challenges will eventually be worth it. Because at this point, Microsoft is just openly insulting their captive users.
Growing up, Microsoft's dominance felt so strong. Three decades later, there's a really high chance my kids will never own or use a Windows machine (unless their job gives them one).
Microsoft hate was something else in the '90s and 2000s. Yet people stayed with it as if they had no choice while OS/2, AmigaOS, NextStep, BeOS and all those UNIXes died.
A lot of people didn't and still don't. Sometimes your job/business requires certain software that is only available on Windows. I'm not giving up my job for an OS. For the past 15 years or so I could do everything on Mac and Linux, but that might not always be the case. I certainly wouldn't pass up a lucrative consulting position because it was Windows-only.
An employer requires their workers to use Windows; the target audience for Windows is management, their HR and attorneys, and then the greater security services. MSFT sells investigative services.
Microsoft knows the vast majority of professionals are forced to use their products and services or else they can't put food on the table. That's why Microsoft can operate with near impunity.
I think the EU is flawed in more ways than just one. But every time I see "<AI feature> will be available starting now outside the EU", I am really grateful.
Microsoft gets a lot less difficult to reason about when we start to think of it as a statistical mean of human nature rather than the mind of one arbitrary evil bastard. They have 228k employees. The CEO has virtually zero direct influence on the end work product of any team.
Any organization this large is going to have approximately the same level of dysfunction overall. But, there are almost always parts of these organizations where specific leaders have managed to carve out a fiefdom and provide some degree of actual value to the customer. In the case of Microsoft, examples of these would be things like .NET, C#, Visual Studio [Code], MSSQL, Xbox.
Windows, Azure & AI are where most of the rot exists at Microsoft. Office is a wash - I am not a huge fan of what has happened to my Outlook install over the years, but Teams has dramatically stabilized since the covid days. Throwing away the rest of the apple because of a few blemishes is a really wasteful strategy.
Do you think the PR person responding here feels, underneath it all, the inhumanity of their responses? The fact that they're merely wasting everyone's time with their prevaricated non-answers? Knowing what they need to say to keep their job but hurting internally at the stupidity of it all.
Or do they end up so enmeshed with the corporate machine that they start to really believe it all makes sense?
I think, at least for the people who stick with a career in PR, that they enjoy playing the game of giving an answer that is sort of related to the question but doesn't actually give a single bit of useful information. That they enjoy seeing how far they can push it without the interviewer straight-up accusing them of not answering the question.
At least that's the only way I can imagine them keeping their sanity.
Does this mean that when you disable, all labels are deleted, and when you turn it back on it has to re-scan all of your photos? Could this be a cost-saving measure?
"It's not your data citizen, you should be happy we made this OS for you. You are not smart enough to do it your self, we know what is best."
I can never help hearing this voice inside, and am just incredibly thankful that we have Linux and FOSS in general. That really gives me hope for humanity at this point.
I type this in Firefox, on NixOS, with all my pics open in another tab, in Immich. Thank you, thank you, thank you.
I was quite happy for a couple years to just use windows and wsl. Fully switched to Linux at home and Linux VM's at work. The thirst and desperation to make AI work gives me the creeps more than usual.
So the message is: if you can, don't use OneDrive.
If you can't (work, etc.), try to avoid uploading sensitive documents to OneDrive.
I always wondered who uses OneDrive for cloud storage. Hell, I think even Google Drive is better.
Microsoft has really pivoted to AI for all things. I wonder how many customers they will get vs how many they will lose due to this very invasive way of doing things.
It almost feels like we are getting to class-action or antitrust territory when you connect the dots. Almost all PCs come with Windows. De facto, you need to create a M$ account to use Windows locally. They opt you into OneDrive by default. They sync your docs by default. They upload all your photos into AI by default.
Sounds like this advice will be expiring along with the next Windows update, so if you want a local account your window of opportunity may be closing. (What happens when you need to get a new PC?)
I wonder if you could write a program to make pictures with face tattoos the norm for Microsoft's AI to train on. Like, if enough people did this, would Microsoft's facial recognition start generating lots of face tats...
How is this not revenge porn or something? If I upload sensitive photos somewhere, it's a 5-year prison sentence! The CEO of Microsoft can do it a billion times!
It really seems as though Microsoft has total contempt for their retail/individual customers. They do a lot to inconvenience those users, and it often seems gratuitous and unnecessary. (As it does in this case.)
...I guess Microsoft believes that they're making up for it in AI and B2B/Cloud service sales? Or that customers are just so locked-in that there's genuinely no alternative? I don't believe that the latter is true, and it's hard to come back from a badly tarnished brand. Won't be long before the average consumer hates Microsoft as much as they hate HP (printers).
I don't really see the issue. If you don't want the face recognition feature, then you'll turn it off once, and that's that. Maybe if you're unsure, you might turn it off, and then back on, and then back off again. But what's the use case where you'd want to do this more than 3x per year?
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
If this is the true reason, then they have made some poor decisions throughout that still deserve criticism. Firstly by restricting the number of times you can turn it _off_ rather than _on_, secondly by not explaining the reason in the linked pages, and thirdly by having their publicist completely refuse to say a word on the matter.
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently, "You can only change this setting 3 times a year", dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_.
Well, sometimes Microsoft decides to change your settings back. This has happened to me very frequently after installing Windows updates. I remember finding myself turning the same settings off time and again.
The "fuck you, user!" behavior of software companies now means there's no more "No", only "Maybe later". Every time I update Google Photos, it shows me the screen that "Photos backups are not turned on! Turn on now?" (because they want to upsell their paid storage space option).
It also now bugs me to do face scanning every so often too
And unlike most things, both prompts require you to explicitly click some sort of "no", not just click away to dismiss. The backup one is particularly obnoxious because you have to flip a shitty little slider as the only button is "continue". Fuck. Off.
Silicon Valley companies are like a creepy guy in the nightclub going up to each woman and asking "Want to dance? [Yes] or [Ask Me Again]". The desperation is pathetic.
I mean, at this point I think it's really just utter incompetence over at Microsoft: they have never managed to design a system that can be updated without breaking it. They have never actually cared about solving that problem.
If they had taste, someone opinionated over there would knock heads before shipping another version of windows that requires restarts or mutates user settings.
A joke in the Windows 95 days was "You plugged in a mouse. Please restart your computer.". A few weeks ago I plugged in a Logitech wireless mouse receiver, Windows 10 installed the drivers automatically, and finished with "To complete the installation of the software, please restart your computer"...
> If you don't want the face recognition feature, then you'll turn it off once.
The issue is that this is a feature that 100% should, in any sane world, be opt-in, not opt-out.
Microsoft privacy settings are a case of "It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.'"
There's inherently nothing wrong with face recognition, I love being able to search my own photos on my iPhone. If you could keep it private, you totally would too.
Even KDE's Digikam can run "somewhat expensive" algorithms on your photos without melting your PC and making you wait a year to recognize and label faces.
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time and discriminate among 5 registered faces to do focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
> When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn *off* this setting 3 times a year" (emphasis mine).
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters, I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos; turning it off under privacy rules means they need to throw away that compute, and turning it back on creates an additional cost for MS, on top of what they've already spent for nothing. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
So I think it is an issue that could and should be quickly solved.
The point is it’s sucking your data into some amorphous big brother dataset without explicitly asking you if you want that to happen first. Opt out AI features are generally rude, trashy, low-class, money grubbing data grabs
Right, while I understand the potential compute cost, it would be like the iPhone restricting the number of times you could use “allow once“ for location permissions.
I wonder if it's possible to encrypt the index with a key that's copied to the user's device, and if the user wants to turn off this setting, delete the key on the server. When they want to turn it back on, the device uploads the key. Yes, the key might end up gone if there's a reinstall, etc.
If the user leaves it off for a year, then delete the encrypted index from the server...
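That grace period is easy to bolt onto the key-deletion scheme sketched earlier in the thread. A hypothetical retention sweep over the server's per-user records might look like this (field names invented for illustration):

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)  # assumed one-year grace period

    def sweep(records, now=None):
        # records: per-user dicts holding the encrypted index and the
        # time the server deleted its copy of the key (None while the
        # feature is still on).
        now = now or datetime.utcnow()
        for rec in records:
            off_since = rec.get("key_deleted_at")
            if off_since and now - off_since > RETENTION:
                rec["ciphertext"] = None  # hard-delete the stale index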
> Presumably, it's somewhat expensive to run face recognition on all of your photos.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and default the feature to off. They can put a (?) callout on the UI for design decisions that have external constraints.
How hard is it to turn it on? Does it show a confirmation message?
My wife has a phone with a button on the side that opens the microphone to ask questions to Google. I guess 90% of the audio they get is "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
I bought a new Motorola phone and there are no less than three ways to open Google assistant (side button, hold home button, swipe from corner). Took me about 10 seconds before I triggered it unintentionally and quickly figured out how to disable all of them...
"When this feature is disabled, facial recognition will be disabled immediately and existing recognition data will be purged within 60 days". Then you don't need a creepy message. Okay, so that's 6 times a year, but whatever.
This once again strongly suggests that Microsoft is thoroughly doomed if the money they've dumped into AI doesn't pan out. It seems to me that if your company is tied to Microsoft's cloud platform, you should probably consider moving away as quickly as you can. Paying the VMware tax and moving everything in-house is probably a better move at this point.
This doesn't feel like a problem at all. I only need to turn the setting off once, right? My immediate question to seeing that verbiage was, "how many times does the setting turn itself on in a year?"
Microsoft gets most of its money from big corporate customers. Some of those customers are obligated by law not to leak sensitive personal data to servers on US soil, because those customers have the misfortune of being in countries with strong privacy laws, functioning civil societies, and sometimes even left-wing governments. I know for a fact that the product in question, OneDrive, is sometimes mandated in those companies as a backup solution for the company's computers. All it takes is a whistle-blowing incident or a chat with a journalist for this to become a major blow-up for Microsoft, with companies forced by tribunals to back out of contracts with Microsoft.
Whenever I have to use Windows, I just create a new throwaway account on proton, connect it to the mother throwaway account connected to a yahoo email account created in the before times, install what I need, and then never access that account again.
Gitlab. Codeberg. Neocities. Nekoweb. Wasmer. Surge. Digital Ocean. Freehostia. Awardspace. 000webhost. Static.run. Kinsta. Cloudflare Pages. Render. Hostinger. Ionos. Bluehost. Firebase. Netlify. Orbiter. Heliohost. There's probably hundreds of services with a free tier these days (though many of them will have strict limitations on website size and traffic, and you may have to run the build step locally).
For private repos there is Forgejo, Gitea and Gitlab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on github" anymore. But it is a small price to pay for dignity.
> Slashdot: What's the reason OneDrive tells users this setting can only be turned off 3 times a year? (And are those any three times — or does that mean three specific days, like Christmas, New Year's Day, etc.)
> [Microsoft's publicist chose not to answer this question.]
Why would anyone use this crap at this point? Buy a (possibly used) mini PC or thin client, install Linux and Samba on it, and voila: your own private "cloud", completely free of corporate interference, spyware, and recurring fees. This works best with a static IP for remote access via WireGuard, but it can be made to work on a residential connection.
With a little more effort you can deploy Nextcloud, Home Assistant and a few other great FOSS projects and completely free yourself from Big Tech. The hardest part will probably be email on a residential connection, but it can be done with the help of a relay service for outgoing mail.
> Microsoft only lets you opt out of AI photo scanning
Their _UI_ says they let you opt out. I wouldn't bet on that actually being the case. At the very least, a copy of your photos goes to the US government, and they do whatever they want with it.
Isn't it cute when there's absolutely no rationale behind a new rule, and it's simply an incursion made in order to break down a boundary?
Look, scanning with AI is available!
Wow, scanning with AI is now free for everyone!
What? Scanning with AI is now opt-out?
Why would opting-out be made time-limited?
WTF, what's so special about 3x a year? Is it because it's the magic number?
Ah, the setting's gone again, I guess I can relax. I guess the market wanted this great feature, or else they wouldn't have gradually forced it on us. Anyway, you're a weird techie for noticing it. What do you have to hide?
There is a big rationale behind it. If their AI investments don't pan out, Microsoft will cease to exist. They've been out of ideas since the late 90s. They know that the subscription gravy train has already peaked. There is no more growth unless they fabricate new problems for which they will then force you to pay for the solution to the problem they created for you. Oh, your children were kidnapped because Microsoft sold their recognition and location data to kidnappers? Well you should have paid for Microsoft's identity protection E7 plus add-on subscription that prevents them from selling the data you did not authorize them to collect to entities that they should know better than to deal with.
I don't even get why they would need "ideas" or "growth", tbh. They have the most popular desktop operating system and one of the most popular office suites; surely they make plenty of profit from those. If they just focused on making their existing products not shit, they would remain a profitable company indefinitely. But instead they're enshittifying everything because they want more More MORE.
Because there are too many people chasing an ever-rising line on the valuation chart. It is simply not acceptable anymore to have a reasonable business that generates solid dividends and grows with opening markets and population. Blame Silicon Valley, the VCs, and the like...
"You can only turn off this setting 3 times a year."
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
This is Microsoft. They have a proven record of turning these toggles back on automatically without your consent.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
Of course, the problem with having your data available even for a day or so, lets say because that day you didn't read your e-mails, will mean, that your data will be trained on, used for M$ purposes. They will have powerful server farms at the ready holding your data at gun point, so that the moment they manage to fabricate fake consent, they are there to process your data, before you can even finish reading any late notification e-mail, if any.
Someone show me any cases, where big tech has successfully removed such data from already trained models, or in case of being unable to do that with the blackboxes they create, removed the whole blackbox, because a few people complain about their data being in those black boxes. No one can, because this has not happened. Just like ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.
This is M$ real intention here. Lets not fool ourselves.
A bug, or a dialog box that says ”Windows has reviewed your photo settings and found possible issues. Press Accept now to reset settings to secure defaults”
This is how my parents get Binged a few times per year
This feels different though. Every time you turn it off and then on again it has a substantial processing cost for MS. If MS "accidentally" turns it on and then doesn't allow you to turn it off it raises the bar for them successfully defending these actions in court.
So to me it looks like MS tries to avoid that users ram MS's infrastructure with repeated expensive full scans of their library. I would have worded it differently and said "you can only turn ON this setting 4 times a year". But maybe they do want to leave the door open to "accidentally" pushing a wrong setting to the users.
As stated many times elsewhere here, if that were the case, it'd be an opt in limit. Instead it's an opt out limit from a company that has a proven record of forcing users into an agreement against their will and requiring an opt out (that often doesn't work) after the fact.
Nobody really believes the fiction about processing being heavy and that's why they limit opt outs.
That they limit opt-outs instead of opt-ins, when the opt-in is the only plausibly costly step, speaks for itself.
> to prevent wasted processing.
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature like this that uses customers' personal data should be OFF for any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
On the other hand, your comment seems to be reaching really far to portray this as normal behavior.
Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
> Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.
Then you can guess Microsoft hopes to make even more money than it costs them running this feature.
If it was that simple, there would be no practical reason to limit that scrub to three (and in such a confusion-inducing way). If I want to waste my time scrubbing, that should be up to me -- assuming it is indeed just scrubbing tagged data, because if anything should have been learned by now, it is that:
worst possible reading of any given feature must be assumed to the detriment of the user and benefit of the company
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.
It takes processing power to scan the photos.
then it should say "this setting can only be turned back on three times a year"
Does it take processing power to NOT scan photos?
No, but the scanning is happening on Microsoft servers, not locally, I am guessing.
So if you enable the feature, it sends your photos to MS to scan... If you turn it off, they delete that data, meaning if you turn it on again, they have to process the photos again. Every time you enable it, you are using server resources.
However, this should mean that they don't let you re-enable it after you turn it off 3 times, not that you can't turn it off if you have enabled it 3 times.
where does it say turning it off deletes the data? it doesn't even say that turning it off stops them scanning your photos. the option is "do you want to see the AI tags" Google search history is the same. Turning off or deleting history only affects your copy of the data.
well said
Just because you can't personally think of a reason why the number shall be 3, and no more than 4, accepting that thou hast first counted 1 and 2, it doesn't mean that the reason is unthinkable.
I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.
This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.
<< It's probably opt-out
Clearly, you personally can't think of a reason yourself based on that 'probably' alone.
<< I feel like you're way too emotionally invested
I think. You feel. I am not invested at all. I have.. limited encounters with windows these days. But it would be silly to simply dismiss it. Why? For the children man. Think of the poor children who were not raised free from this silliness.
<< I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
I can respect that. What are those technical details? MS was a little light on the details.
https://support.microsoft.com/en-us/office/group-photos-by-p...
"Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies. This helps you quickly and easily organize photos of friends and family. Only you can see your face groupings. If you share a photo or album with another individual, face groupings will not be shared.
Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall. Any data you provide is only used to help triage and improve the results of your account, no one else's.
While the feature is on, Microsoft uses this data to group faces in your photos. You can turn this feature off at any time through Settings. When you turn off this feature in your OneDrive settings, all facial grouping data will be permanently removed within 30 days. Microsoft will further protect you by deleting your data after a period of inactivity. See the Microsoft account activity policy for more information."
You can also see here some of the ways they're trying to expose these features to users, who can use Co-Pilot etc. https://techcommunity.microsoft.com/blog/onedriveblog/copilo...
I turn all Co-Pilot things off and I've got all those AI/tagging settings off in OneDrive, but I'm not worried about the settings being disingenuous currently.
There's always a worry that some day, a company will change and then you're screwed, because they have all your data and they aren't who you thought they were anymore. That's always a risk. Just right now, I'm less worried about Microsoft in that way than I am with other companies.
In a way, being anti-government is GOOD, because overly relying on government is dangerous. The same applies to all these mega-platforms. At the same time, I know a lot of people who have lost a lot of data because they never had it backed up anywhere, and people who have the data but can't find anything, because there's so much of it and none of it is organized. These are just actual, real-world problems, and Microsoft legitimately sees that the technology is there now to solve them.
That's what I see.
> I feel like you're way too emotionally invested in whatever this is to assess it without bias
Did this line ever win an argument for you or you just use it to annoy who you're talking to?
Then you would limit the number of times the feature can be turned on, not turned off. Turned off uses less resources, while turned on potentially continues using their resources. Also I doubt if they actually remove data that requires processing to obtain, I wouldn't expect them to delete it until they're actually required to do so, especially considering the metadata obtained is likely insignificant in size compared to the average image.
It's an illusion of choice. For over a decade now, companies have either been spamming you with modals/notifications until you give up and agree to privacy-compromising settings, or "accidentally" turning these on and pretending the change happened through a mistake or bug.
The language used is deceptive and comes with "not now" or "later" options and never a permanent "no". Any disagreement is followed by some form of "we'll ask you again later" message.
Companies are deliberately removing users' control over software through dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and then the setting goes permanently "on".
And considering the whole AI push within Windows and other Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole talk with an unnamed "Microsoft's publicist" when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
but as a hostile behavior. Of course they won't admit they want your data but they want it and will have it.
Like iCloud on iOS and MacOS. It's not just Microsoft who insists on stealing your data, Apple does it too.
It sounds like you have revoked their permission to tag (verb) the photos; why should this interfere with what tags (noun) the photos already have?
But really, I know nothing about the process. I was going to make an allegory about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse. But I realized that this is exactly the computing future these sorts of companies want, and my allegory is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
> Their disclaimer already suggests they don't train on your photos.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt this wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
> Why is Microsoft so eager to also be able to know this?
A database of pretty much all Western citizen's faces? That's a massive sales opportunity for all oppressive and wanna-be oppressive governments. Also, ads.
Actually, most users probably don't understand that this ridiculous policy takes more effort to implement. They just blindly follow whatever MS prescribes and have long given up on making any sense of the digital world.
most people probably won't know MS is doing this at all until their data is leaked
It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.
My guess is the number was arbitrary, and the limit exists because re-enabling triggers a mass scan of photos. Depending on whether they purge old data when it's turned off, toggling the switch could tell Microsoft's servers to re-scan every photo in your (possibly very large) library.
Odd choice and poor optics (just limit the number of times you can enable and add a warning screen) but I wouldn't assume this was intentionally evil bad faith.
I would be sceptical too, if I was still using Windows.
I’ve seen reports in the past that people found that syncing to the cloud was turned back on automatically after installing Windows updates.
I would not be surprised if Microsoft accidentally flipped the setting back on for people who opted out of AI photo scanning.
And so if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times in a year against your will and then you are stuck opted in to AI scanning for the rest of the year.
Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.
Yep. I have clients who operate under HIPAA rules who called me out of the blue wondering where their documents had gone. Microsoft left a cheery note on the desktop saying they had very helpfully uploaded ALL of their protected patient health data into an unauthorized cloud storage account, without prior warning, following a Windows 10 update.
When I used to work as a technician at a medical school circa 2008, updating OS versions was a huge deal that required months of preparations and lots of employee training to ensure things like this didn't happen.
Not trying to say that you could have prevented this; I would not be surprised if Windows 10 enterprise decided to "helpfully" turn on auto updates and updated itself with its fun new "features" on next computer restart.
If they are worried about the cost of initial ingestion then a gate on enabling would make a whole lot more sense than a gate on disabling.
3 is the smallest odd prime number. 3 is a HOLY number. It symbolizes divine perfection, completeness, and unity in many religions: the Holy Trinity in Christianity, the Trimurti in Hinduism, the "three begets all things" of the Tao Te Ching in Taoism (and half a dozen others)
I'd rather guess that they've picked 3 as a passive-aggressive attempt to provide a false pretense of choice, in "you can change it but in the end it's gonna be our way" style, than think they're attributing some cultural significance to the number 3 behind this option. But that's still an interesting concept tho
Manager: "Three is the number thou shall permit, and the number of the permitting shall be -- three."
Facebook introducing photo tagging was when I exited Facebook.
This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised now. Moreover, you are their product. It strikes me as great insecurity.
Honestly, I hated when they removed automatic photo tagging. It was handy as hell when uploading hundreds of pictures from a family event, which is about all I use it for.
> I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.
Sure but if it was for your benefit, not theirs, they wouldn't force it on you.
Precisely. The logic could just as easily be "you can only turn this ON three times a year." You should be able to turn it off as many times as you want and no hidden counter should prevent you from doing so.
Tip:
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
It's rather annoying that high-entropy files (i.e. encrypted files, or anything with an unknown magic header) in OneDrive trigger ransomware protection.
I agree with you, but there's nothing astonishing about any of this unfortunately; it was bound to happen. Almost all of the cautionary statements about AI abuse fall on the deaf ears of HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.
The worst part about it was that all the people fretting on about ridiculous threats, like the chatbot turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats
Right. But then the AI firms did that deliberately, didn't they? Started the big philosophical argument to move the focus away from the things they were doing (epic misappropriation of intellectual property) and the very things their customers intended to do: fire huge numbers of staff on an international, multi-industry scale, replace them with AI, and replace already limited human accountability with simple disclaimers.
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
It's fun that the working class bears the brunt of the mistakes of management.
Manager: hey let's go all in on this fancy new toy! We'll all be billionaires!
Employee: oh yeah I will work nights and weekends with no pay for this! I wanna be a billionaire!
Manager: actually it failed, we ran out of money, you no longer have a job... But at least we didn't build skynet, right?
My initial thought was that this is so they could scan for CSAM while pretending users have a choice to not have their privacy violated.
From my understanding, CSAM scanning is always considered a separate, always on and mandatory subsystem in any cloud storage system.
Yes, any non E2EE cloud storage system has strict scanning for CSAM. And it's based on perceptual hashes, not AI (because AI systems can be tricked with normal-looking adversarial images pretty easily)
I built a similar photo ID system, not for this purpose or content, and the idea of platforms using perceptual hashes to potentially ruin people's lives is horrifying.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
I imagine you'd add more heuristics and various types of hashes? If the file is just sitting there, rarely accessed and unshared, or if the file only triggers on 2/10 hashes, it's probably a false alarm. If the file is on a public share, you can probably run an actual image comparison...
A lot of classic perceptual hash algorithms do "squinty" comparisons, where if an image kind of looks like one you've hashed against, you can get false positives.
I'd imagine outside of egregious abuse and truly unique images, you could squint at a legal image and say it looks very much like another illegal image, and get a false positive.
From what I'm reading about PhotoDNA, it's your standard phashing system from 15 years ago, which is terrifying.
But yes, you can add heuristics, but you will still get false positives.
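To make the "squinty comparison" point concrete, here is a minimal sketch of a difference hash (dHash), one of the classic perceptual-hash families being described. It assumes Pillow is installed; the 8x8 grid and the example threshold are illustrative, not what PhotoDNA actually uses.
```python
# Minimal dHash sketch: hash the left-vs-right brightness gradient of a
# heavily shrunken grayscale image. The shrink step discards nearly all
# detail, which is exactly where the false positives come from.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    # (size+1) x size grayscale thumbnail
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits  # 64-bit fingerprint for size=8

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# A match is typically declared below some Hamming-distance threshold,
# e.g. hamming(h1, h2) <= 10 of 64 bits; two images that merely *look*
# alike can easily land inside that margin.
```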
I thought Apple’s approach was very promising. Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
> Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked
Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
> the conversation was dominated by uninformed outrage about things that weren’t happening.
The thing that wasn't happening yet was mission creep beyond the original targets. Because expanding beyond originally stated parameters is a thing that happens with far-reaching monitoring systems. Because it happens with the type of regularity that is typically limited to physics.
There were secondary concerns about how false positives would be handled. There were concerns about what the procedures were for any positive. Given governments' propensity to ruin lives now and ignore that harm (or craft a justification) later, the concerns seem valid.
That's what I recall the concerned voices were on about. To me, they didn't seem outraged.
> Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
Yes. Completely wrong. Not even close.
Why don’t you just go and read about it instead of guessing? Seriously, the point of my comment was that discussion with people who are just guessing is worthless.
Why don't you just explain what you want people to know instead of making everyone else guess what you are thinking?
> Why don't you just explain what you want people to know instead of making everyone else guess what you are thinking?
I’m not making people guess. I explained directly what I wanted people to know very, very plainly.
You are replying now as if the discussion we are having is whether it’s a good system or not. That is not the discussion we are having.
This is the point I was making:
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
The discussion is about the ignorance, not about the system itself. If you knew how it worked and disagreed with it, then I would completely support that. I’m not 100% convinced myself! But you don’t know how it works, you just assumed – and you got it very wrong. So did a lot of other people. And collectively, that drowned out any discussion of how it actually worked, because you were all mad about something imaginary.
You are perfectly capable of reading how it worked. You do not need me to waste a lot of time re-writing Apple’s materials on a complex system in this small text box on Hacker News so you can then post a one sentence shallow dismissal. There is no value in doing that at all, it just places an asymmetric burden on me to continue the conversation.
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage
I would not care if it worked 100% accurately. My outrage is informed by people like you who think it is OK in any form whatever.
Perceptual hashes? An embedding in a vector space by a learned encoder.
Phew, not AI then… ?
I assume this would be a ... call it feature for now, so a feature not available in the EU due to GDPR violations.
They could have avoided the negative press by changing the requirement to be that you can’t re-enable the feature after switching it off 3 times per year.
It’s not hard to guess the problem: Steady state operation will only incur scanning costs for newly uploaded photos, but toggling the feature off and then on would trigger a rescan of every photo in the library. That’s a potentially very expensive operation.
If you’ve ever studied user behavior you’ve discovered situations where users toggle things on and off in attempts to fix some issue. Normally this doesn’t matter much, but when a toggle could potentially cost large amounts of compute you have to be more careful.
For the privacy sensitive user who only wants to opt out this shouldn’t matter. Turn the switch off, leave it off, and it’s not a problem. This is meant to address the users who try to turn it off and then back on every time they think it will fix something. It only takes one bad SEO spam advice article about “How to fix _____ problem with your photos” that suggests toggling the option to fix some problem to trigger a wave of people doing it for no reason.
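For a sense of scale, a hedged back-of-envelope on that rescan cost; every number below is an assumption (per-photo GPU time, GPU price, library size), not anything Microsoft has published.
```python
# All figures are guesses for illustration only.
PHOTOS = 100_000           # a large personal library, per the comments above
SEC_PER_PHOTO = 0.2        # assumed detection + embedding time per photo
GPU_COST_PER_HOUR = 1.00   # assumed cloud GPU price, in dollars

gpu_hours = PHOTOS * SEC_PER_PHOTO / 3600
print(f"{gpu_hours:.1f} GPU-hours, ~${gpu_hours * GPU_COST_PER_HOUR:.2f} per full rescan")
# -> 5.6 GPU-hours, ~$5.56 per full rescan
```
Under these assumptions a single rescan is cheap, but multiplied across millions of users toggling freely it stops being a rounding error, while steady-state scanning of only new uploads stays comparatively trivial.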
> Turn the switch off, leave it off, and it’s not a problem.
Assuming that it doesn't mysteriously (due to some error or update, no doubt) move back to the on position by itself.
I cancelled Facebook in part due to a tug-of-war over privacy defaults. They kept getting updated with some corporate pablum about how opting in benefited the user. It was just easier to permanently opt out via account deletion rather than keep toggling the options. I have no doubt Microsoft will do the same. I'm wiping my Windows partition and loading Steam OS or some variant and dual booting into some TBD Linux distro for development.
When I truly need Windows, I have an ARM VM in Parallels. Right now it gets used once a year at tax time.
Tell you what, Microsoft: turn it off, leave it off, remove it, fire the developers who made it, forget you ever had the idea. Bet that saved some processing power?
I agree this is a concern, but it frustrates me that tech companies won't give us reasonable options.
- "Scan photos I upload" yes/no. No batch processing needed, only affects photos from now on.
- "Delete all scans (15,101)" if you are privacy conscious
- "Scan all missing photos (1,226)" can only be done 3x per year
"But users are dummies who cannot understand anything!" Not with that attitude they can't.
I mean it's Microsoft so I wouldn't be surprised if it was done in the dumbest way possible but god damn this would be such a dumb way to implement this feature.
This would be because of the legal requirement to purge (erase) all the previous scan data once a user opts out. So the only way to re-enable is to scan everything again — unless you have some clever way I’ve not thought of?
Encrypt the data and store the key on the user's device. If the user enables the feature, they transmit their key to you. If they disable the feature, you delete the key on your side.
In theory, you could store a private key on the device and cryptoshred the data on Microsoft’s servers when the setting is disabled (Microsoft deletes their copy of the key). Then, when the feature is re-enabled, upload the private key to Microsoft again.
Does that meet the legal requirement to delete data when requested? I am not sure it does.
As far as I know, most data protection laws accept cryptoshredding as long as the party with a deletion requirement actually destroys the key. For one thing, it’s hard to reconcile deletion requirements with immutable architectures and backups without a mechanism like this.
IANAL, but I think the key remaining in the user’s possession doesn’t matter as far as the company with a deletion requirement is concerned.
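A minimal sketch of that cryptoshredding flow, assuming Python's cryptography package; the flow and names are hypothetical, not how OneDrive actually works.
```python
from cryptography.fernet import Fernet

# First opt-in: the device generates the key; the server encrypts the index
# and holds a key copy only while the feature is enabled.
device_key = Fernet.generate_key()   # durable copy stays on the device
server_key_copy = device_key

index = b'{"face_groups": "..."}'    # stand-in for the scan results
ciphertext = Fernet(server_key_copy).encrypt(index)

# Opt-out: the server deletes its key copy. The ciphertext it still stores
# (including in backups) is now unreadable junk: "cryptoshredded".
server_key_copy = None

# Opt-in again: the device re-uploads the key, so no full rescan is needed.
server_key_copy = device_key
assert Fernet(server_key_copy).decrypt(ciphertext) == index
```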
You do not have to, and should not, start deleting data immediately. We're not uncivilized here; we can schedule tasks.
If this were happening on device (lol) then you should do both the scanning and deleting operations at times of usually low activity. Just like how you schedule updates (though Microsoft seems to have forgotten how to do this). Otherwise, doing the operations at toggle time just slams the user's computer, which is a great way to get them to turn it off! We'd especially want the process to have high niceness and be able to pause itself so as not to hinder the user. Make sure they're connected to power, or at least above some battery threshold if on a laptop.
If you scan on device and upload, again, you should do this at times of low activity. But you are also not going to be deleting data right away, because it is going to be held across several servers. That migration takes time. There's a reason your Google Takeout can take a few hours, and why companies like Facebook say your data might still be recoverable for 90 days.
Doing so immediately also creates lots of problems. Let's say you enable it, let it go for a while, then toggle back and forth like a madman. Does your toggling send a halt signal to the scanning operation? What does toggling on do? Do you really think this is going to happen smoothly without things stepping on each other? You're setting yourself up for a situation where the program is both scanning and deleting at the same time. If this is implemented like most things I've seen from Microsoft, then this will certainly happen and you'll be in an infinite loop. All because you assume that there is no such thing as, or even the possibility of, an orphaned process. You just pray, hoping these junior programmers with a senior title actually know how to do parallelization...
In addition to the delay, you should be marking the images in a database to create a queue. Store the hash of the file as the ID and mark its state appropriately. We are queuing our operations and we want fail-safes. You're scanning the entire fucking computer, so you don't want to do things haphazardly! Go ahead, take a "move fast and break things" approach, and watch your customers get a blue screen of death and wake up to borked hard drives.
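A rough sketch of that hash-keyed queue, assuming SQLite; the table, states, and upsert policy are illustrative only, not anything from OneDrive.
```python
import hashlib, sqlite3

db = sqlite3.connect("scan_queue.db")
db.execute("""CREATE TABLE IF NOT EXISTS queue (
    file_hash TEXT PRIMARY KEY,  -- content hash as the stable ID
    path      TEXT NOT NULL,
    state     TEXT NOT NULL      -- pending_scan | scanned | pending_delete
)""")

def enqueue_scan(path: str) -> None:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    # Upsert: re-toggling flips the row's state instead of spawning a
    # second job, so scan and delete can never race on the same file.
    db.execute("""INSERT INTO queue (file_hash, path, state)
                  VALUES (?, ?, 'pending_scan')
                  ON CONFLICT(file_hash) DO UPDATE SET state='pending_scan'""",
               (digest, path))
    db.commit()

def opt_out() -> None:
    # Deferred deletion: mark everything and let a scheduled worker drain
    # the queue at low-activity times instead of at toggle time.
    db.execute("UPDATE queue SET state='pending_delete'")
    db.commit()
```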
Seriously, just sit down and think about the problem before you start programming. The whiteboard or pen and paper are some of your most important weapons as a programmer. Your first solution will be shit and that's okay. Your second and even third solutions might be shit too. But there's a reason you need depth. We haven't even gotten into any real depth here either. Our "solution" here has no depth, it's just the surface level, and I'm certain the first go will be shit. But you'll figure more stuff out, find more problems, and fix them. I'm also certain others will present other ideas that can be used too. Yay, collaboration! It's all good unless you just pretend you're done and problems don't exist anymore. (Look ma! All the tests pass! We're bug free!) For Christ's sake, what are you getting a quarter-million+ salary for?
Did anyone notice that Microsoft never replied to any of the questions asked, but deflected them?
They are exactly where I left them 20 years ago.
It's very sad that I can't stop using them again for doing this.
This is such a norm in society now; PR tactics take priority over any notion of accountability, and most journalists and publishers act as stenographers, because challenging or even characterizing the PR line is treated as an unjustified attack and invites inflated claims of bias.
Just as linking to original documents, court filings etc. should be a norm in news reporting, it should also be a norm to summarize PR responses (helpful, dismissive, evasive or whatever) and link to a summary of the PR text, rather than treating it as valid body copy.
People need to treat PR like they do AIs. "You utterly failed to answer the question, try again and actually answer the question I asked this time." I'd love to see corporate representatives actually pressed to answer. "Did you actually do X, yes or no, if you dodge the question I'll present you as dodging the question and let people assume the worst."
I'd guess that unlike AI a PR person would just simply stay silent or demand to continue with a different question or end the interview/talk and leave
This is why I stopped watching American presidential "debates." If I wanted that kind of entertainment, I'd listen to a rap battle.
Challenging or even characterizing the PR line is usually treated as an unjustified attack, used to justify inflated claims of bias.
It's worse actually. These are repeated games, so the outcome of any current interaction affects the next one. Journalists can't be too hard on the people they cover or else they won't have the access to cover them in the future.
They take people for idiots. This can work a few times, but even someone who isn't the brightest will eventually put two and two together when they get screwed again and again and again.
link to a summary of the PR text
Should have just said 'link to a screenshot of the PR text', apologies for the confusion
It's not just PR tactics for the sake of dodging accountability. It's because there's a glut of lawyers that'll sue over the tiniest admission of anything.
The worst part of all this is that even respectable news organisations like the BBC publish so many articles that are just the company's PR response verbatim. Even worse when it's like:
- Victim says: hi, this thing is messed up and people need to know about this
- Company says: "bla bla bla" legal speak, we don't recognise an issue, "bla bla bla"
End of article. Instead of saying "this comment doesn't seem to reflect the situation", or otherwise pointing out what anybody with a brain can see: the two statements are not equal in evidence nor truth.
They prevaricated in all of their answers, and that itself is far more telling.
You can really tell that Microsoft has adopted advertising as a major line of business.
The privacy violations they are racking up are very reminiscent of prior behavior we've seen from Facebook and Google.
And not just advertising. If ICE asks Microsoft to identify accounts of people who have uploaded a photo of "Person X", do you think they're going to decline?
They'd probably do it happily even without a warrant.
I'd bet Microsoft is doing this more because of threats from USG than because of advertising revenue.
> They'd probably do it happily even without a warrant
I'm old enough to remember when companies were tripping over themselves after 9/11 trying to give the government anything they could to help them keep an eye on Americans. They eventually learned to monetize this, and now we have the surveillance economy.
ICE don't have to ask for anything, the USG gets a copy of all data Microsoft collects from you, anyway. Remember:
https://www.pcmag.com/news/the-10-most-disturbing-snowden-re...
> and follow Microsoft's compliance with General Data Protection Regulation
Not in a million years. See you in court. As is often the case, just because a press statement says something, that doesn't make it true; it may only be there to defuse public perception.
Truly bizarre. I'm so glad I detached from Windows a few years back, and now when I have to use it or another MS product (eg an Xbox) it's such an unpleasant experience, like notification hell with access control checks to read the notifications.
The sad thing is that they've made it this way, as opposed to Windows being inherently deficient; it used to be a great blend of GUI convenience with ready access to advanced functionality for those who wanted it, whereas MacOS used to hide technical things from a user a bit too much and Linux desktop environments felt primitive. Nowadays MS seems to think of its users as if they were employees or livestock rather than customers.
Meta just lost a court case against Bits of Freedom in the Netherlands, because their Instagram setting to turn off the attention-grabbing feed would reset every month or so. The court ruled that this infringed on the user's freedom.
Source: https://www.dutchnews.nl/2025/10/court-tells-meta-to-give-du...
Microsoft in the past few years has totally lost its mind; it's ruining nearly everything it touches and I can't understand why
They are like a shitty Midas: everything they touch turns into a pile of crap. However, people still buy their products. They think the turd is tasty, because billions of flies can't be wrong...
Meanwhile Apple is applying a different set of toxic patterns. Lack of interoperability with other OSes, apps that try to store data mainly on iCloud, the iPhone having no headphone jack, etc.
They never changed. For some reason Satya became CEO and nerds fawned over the “new Microsoft” for whatever reason.
They are a hard nosed company focused with precision on dominance for themselves.
Insider here, in M365 though not OneDrive. It did change, but not because of Satya; because of rules and legislation and bad press. Privacy and security are taken very seriously (at least by people who care to follow internal rules), not because "we're nice", but because:
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable
- it's bad press when we get caught doing garbage like that, and bad press is bad for business
In my org, doing anything with customers' data that isn't directly bringing them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal of taking customers' money and exploiting their data too is too big. That's the kind of shit that will get me to leave.
> Our respect for privacy was one of the main reasons I'm still there. There has been a good period of time where the actual sentiment was "we're the good guys", especially when comparing to google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads.
How long has MS been putting ads in the start menu?
As he said, very heterogeneous company.
Sure, but "there has been a good period of time where..." is a statement that the situation introduced by where continues into the present. And that doesn't seem to be compatible with the facts.
10 years
https://news.ycombinator.com/item?id=10393812
> EU governments keep auditing us, so we gotta stay on our toes, do things by the book
Erm, dude ....
IANAL, and I am sure most people do not need to be lawyers to figure out that not allowing people to permanently opt out of photo scanning is almost certainly in contravention of every EU law in the book.
I hope the EU take Microsoft to the cleaners over this one.
Do we even think that was real? I think social media has been astroturfed for a long time now. If enough people make those claims, it starts to feel true even without evidence to support it.
Did they ever open source anything that really make you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
> Microsoft in the past few years has totally lost it's mind
I don't know what this Microsoft thing is that you speak of. I only know a company called Copilot Prime.
I don't understand how this is losing their mind. Toggling this setting is expensive on the backend: opting in means "go and rescan all the photos"; opting out means "delete all the scanned information for this user". As a user, just make up your mind and set the setting. They let you opt in, they let you opt out; they just don't want to let you trigger tons of work every minute.
Microsoft wants money. Microsoft does not care about you.
Money and power. Who was the first BigTech co on the Prism slides? Who muscled out competitors in the 90s?
This week I have received numerous reminders from Microsoft to renew my Skype credit..
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The skype acquisition was $8.5B in 2011 (That's $12.24B in today's money.)
There was a time with a strong sentiment of Satya Nadella making MS great again.
Oh what time does to things!
With each passing day since I switched from Windows to Linux at home, the friction decreases and I am increasingly happy that I took the time to learn Linux and stuck with it. This is not a come-to-Linux call, because I know it is easier said than done for most non-technical folks. But it is a testimony that if you do, the challenges will eventually be worth it. Because at this point, Microsoft is just openly insulting their captive users.
You know, in 90s in Russia in IT circles Windows was known as "маздай" which is a transliteration of "must die".
Looks like nothing has changed.
Growing up, Microsoft dominance felt so strong. 3 decades later, there’s a really high chance my kids will never own or use a windows machine (unless their jobs gives them one).
Do you remember this: http://toastytech.com/evil/index.html ?
Microsoft hate was something else in the '90s and 2000s. Yet people stayed with it as if they had no choice while OS/2, AmigaOS, NextStep, BeOS and all those UNIXes died.
A lot of people didn't and still don't. Sometimes your job/business requires certain software that is only available on windows. I'm not giving up my job for an OS. for the past 15 years or so I could do everything on Mac and Linux, but that might not always be the case. I certainly wouldn't pass up a lucrative consulting position because it was windows only.
an employer requires their workers to use Windows; the target audience for Windows is management, their HR and attorneys, and then greater security services. MSFT sells investigative services.
>unless their jobs gives them one
Microsoft knows the vast majority of professionals are forced to use their products and services or else they can't put food on the table. That's why Microsoft can operate with near impunity.
I was afraid for the EU economy, but after this declaration I'm reassured that Microsoft will pay for my grandkids' education in 30 years.
I think the EU is flawed in more ways than just one. But every time I see „<AI feature> will be available starting now outside EU“ I am really grateful
Microsoft gets a lot less difficult to reason about when we start to think of it as a statistical mean of human nature rather than the mind of one arbitrary evil bastard. They have 228k employees. The CEO has virtually zero direct influence on the end work product of any team.
Any organization this large is going to have approximately the same level of dysfunction overall. But, there are almost always parts of these organizations where specific leaders have managed to carve out a fiefdom and provide some degree of actual value to the customer. In the case of Microsoft, examples of these would be things like .NET, C#, Visual Studio [Code], MSSQL, Xbox.
Windows, Azure & AI are where most of the rot exists at Microsoft. Office is a wash - I am not a huge fan of what has happened to my Outlook install over the years, but Teams has dramatically stabilized since the covid days. Throwing away the rest of the apple because of a few blemishes is a really wasteful strategy.
Do you think the PR person responding here feels, underneath it all, the inhumanity of their responses? The fact that they're merely wasting everyone's time with their prevaricated non-answers? Knowing what they need to say to keep their job but hurting internally at the stupidity of it all.
Or do they end up so enmeshed with the corporate machine that they start to really believe it all makes sense?
It's in their job description, they're most likely very proud of how their words can swindle the majority. They're greasy and they love it.
I think - at least for the people who stick with a career in PR - that they enjoy playing the game of giving an answer that is sort of related to the question but doesn't actually give a single bit of useful information. That they enjoy seeing how far they can push it without the interviewer straight up accuse them of not answering the question.
At least that's the only way I can imagine them keeping their sanity.
Does this mean that when you disable, all labels are deleted, and when you turn it back on it has to re-scan all of your photos? Could this be a cost-saving measure?
In that case, they should make it the other way around — you can enable this only three times a year.
They should do it the other direction, then: if you turn it off more than three times you can’t turn it back on.
But that's less good for profit. Why would they give up money for morals?
Esp. since you can just eat money to survive when you relocate to Mars, no?
No, it's a profit-seeking measure.
>Does this mean that when you disable, all labels are deleted
AHHAHAHAHAHAHAHAHA.
Ha.
Nice one.
"It's not your data citizen, you should be happy we made this OS for you. You are not smart enough to do it your self, we know what is best."
I can never help myself from hearing this inside, and am just incredibly thankful that we have Linux and FOSS in general. That really gives me hope for humanity at this point.
I type this in FireFox, on NixOS, with all my pics open in another tab, in Immich. Thank you, thank you, thank you.
I was quite happy for a couple years to just use windows and wsl. Fully switched to Linux at home and Linux VM's at work. The thirst and desperation to make AI work gives me the creeps more than usual.
Microsoft's understanding of consent is about on-par with that of a rapist.
So the message is: if you can, don't use OneDrive.
If you can't (work, etc.) try to avoid uploading sensitive documents in onedrive.
I always wondered who uses OneDrive for cloud storage. Hell, I think even Google Drive is better.
Microsoft has really pivoted to AI for all things. I wonder how many customers they will get vs how many they will lose due to this very invasive way of doing things.
Microsoft: forces OneDrive on users via dark pattern dialogs that many users just accept
Users: save files "on their PC" (they think)
Microsoft: Rolls out AI photo-scanning feature to unknowing users, hoping to learn something from their photos.
Users: WTF? And there are rules on turning it on and off?
Microsoft: We have nothing more to share at this time.
Favorite quote from the article:
> [Microsoft's publicist chose not to answer this question.]
Almost feels like we are getting to class action or antitrust territory when you connect the dots. Almost all PCs come with Windows. De facto you need to create a M$ account to use Windows locally. They opt you into OneDrive by default. They sync your docs by default. They upload all your photos into AI by default.
You can use Windows without a Microsoft account, but the dark pattern to do this is very difficult to navigate.
Sounds like this advice will be expiring along with the next Windows update, so if you want a local account your window of opportunity may be closing. (What happens when you need to get a new PC?)
Tell them "you may only refuse to answer this question 3 times a year".
It's totally worth self hosting files, it's gotten much better.
This made me look up if you can disable iOS photo scanning and you can’t. Hmm.
I wonder if you can write a program to make pictures with face tattoos the norm for Microsoft's AI to train on. Like, if enough people did this, would Microsoft's facial recognition start generating lots of face tats...
> You can only turn off this setting 3 times a year.
Who's making the t-shirts? Don't forget the Microsoft logo. They're proud of this!
In my head it's sounding like that Christmas jingle. It's the most wonderful time of the year!
How is this not revenge porn or something? If I upload sensitive photos somewhere, it is a 5-year prison sentence! The CEO of Microsoft can do that a billion times!
EU please whack them and whack them good
This sounds like the next level of the nauseating “maybe later”.
i.e. You’ll do what we tell you eventually.
Seems obvious they actually mean to limit the number of times you can opt in. Very poor choice of words.
The difference is whether you get locked into having it on or having it off at the end.
It really seems as though Microsoft has total contempt for their retail/individual customers. They do a lot to inconvenience those users, and it often seems gratuitous and unnecessary. (As it does in this case.)
...I guess Microsoft believes that they're making up for it in AI and B2B/Cloud service sales? Or that customers are just so locked-in that there's genuinely no alternative? I don't believe that the latter is true, and it's hard to come back from a badly tarnished brand. Won't be long before the average consumer hates Microsoft as much as they hate HP (printers).
I don't really see the issue. If you don't want the face recognition feature, then you'll turn it off once, and that's that. Maybe if you're unsure, you might turn it off, and then back on, and then back off again. But what's the use case where you'd want to do this more than 3x per year?
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
If this is the true reason, then they have made some poor decisions throughout that still deserve criticism. Firstly by restricting the number of times you can turn it _off_ rather than _on_, secondly by not explaining the reason in the linked pages, and thirdly by having their publicist completely refuse to say a word on the matter.
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently, "You can only change this setting 3 times a year" dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_
Well, sometimes Microsoft decides to change your settings back. This has happened to me very frequently after installing Windows updates. I remember finding myself turning the same settings off time and again.
The "fuck you, user!" behavior of software companies now means there's no more "No", only "Maybe later". Every time I update Google Photos, it shows me the screen that "Photos backups are not turned on! Turn on now?" (because they want to upsell their paid storage space option).
It also now bugs me to do face scanning every so often too
And unlike most things, both prompts require you to explicitly click some sort of "no", not just click away to dismiss. The backup one is particularly obnoxious because you have to flip a shitty little slider as the only button is "continue". Fuck. Off.
The lack of a true “no” option and only “maybe later” infuriates me.
Silicon Valley companies are like a creepy guy in the nightclub going up to each woman and asking "Want to dance? [Yes] or [Ask Me Again]". The desperation is pathetic.
I mean at this point I think it's really just utter incompetence over at Microsoft at designing a system that can be updated without breaking it. They have never actually cared about solving that problem.
If they had taste, someone opinionated over there would knock heads before shipping another version of windows that requires restarts or mutates user settings.
A joke in the Windows 95 days was "You plugged in a mouse. Please restart your computer.". A few weeks ago I plugged in a Logitech wireless mouse receiver, Windows 10 installed the drivers automatically, and finished with "To complete the installation of the software, please restart your computer"...
> If you don't want the face recognition feature, then you'll turn it off once.
The issue is that is a feature that 100% should in any sane world be opt in - not opt out.
Microsoft privacy settings are a case of - “It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.”
There's inherently nothing wrong with face recognition, I love being able to search my own photos on my iPhone. If you could keep it private, you totally would too.
Even KDE's Digikam can run "somewhat expensive" algorithms on your photos without melting your PC and making you wait a year to recognize and label faces.
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time, and discriminate it from 5 registered faces to do focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
> When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn OFF this setting 3 times a year" (emphasis mine).
Agreed, in practice for me there's no real issue.
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos, turning it off under privacy rules means you need to throw away that compute. And turning it back on creates an additional cost for MS, that they've already spent for nothing. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
So I think it is an issue that could and should be quickly solved.
The point is it’s sucking your data into some amorphous big brother dataset without explicitly asking you if you want that to happen first. Opt out AI features are generally rude, trashy, low-class, money grubbing data grabs
To prevent you from having the option to temporarily disable it, so you have to choose between privacy and the supposed utility
Right, while I understand the potential compute cost, it would be like the iPhone restricting the number of times you could use “allow once“ for location permissions.
> what's the use case where you'd want to do this more than 3x per year?
That means that all Microsoft has to do to get your consent to scan photos is turn the setting on every quarter.
I wonder if it's possible to encrypt the index with a key that's copied to the user's device, and if the user wants to turn off this setting, delete the key on the server. When they want to turn it back on, the device uploads the key. Yes, the key might end up gone if there's a reinstall, etc.
If the user leaves it off for a year, then delete the encrypted index from the server...
> Presumably, it's somewhat expensive to run face recognition on all of your photos.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and make it opt-out by default. They can put a (?) callout on the UI for design decisions that have external constraints.
How hard it to turn it on? Does it show a confirmation message?
My wife has a phone with a button on the side that opens the microphone to ask Google questions. I guess 90% of the audio they get is "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
I bought a new Motorola phone and there are no less than three ways to open Google assistant (side button, hold home button, swipe from corner). Took me about 10 seconds before I triggered it unintentionally and quickly figured out how to disable all of them...
"When this feature is disabled, facial recognition will be disabled immediately and existing recognition data will be purged within 60 days". Then you don't need a creepy message. Okay, so that's 6 times a year, but whatever.
So why not limit how many times you can turn it on, instead of off?
We all know why.
Assuming this reasoning is accurate, why not just silently throw a rate limit error and simply not reenable it if it's repeatedly switched on and off?
This is once again strongly suggesting that Microsoft is thoroughly doomed if the money they've dumped into AI doesn't pan out. It seems to me that if your company is tied to Microsoft's cloud platform, you should probably consider moving away as quickly as you can. Paying the VMware tax and moving everything in-house is probably a better move at this point.
This doesn't feel like a problem at all. I only need to turn the setting off once, right? My immediate question to seeing that verbiage was, "how many times does the setting turn itself on in a year?"
Fedora with vanilla Gnome is excellent for anyone looking for an alternative.
Microsoft gets most of its money from big corporate customers. Some of those customers are obligated by law not to leak sensitive personal data to servers on US soil, because those customers have the misfortune of being in countries with strong privacy laws, functioning civil societies and sometimes even left-wing governments. I know for a fact that the product in question, OneDrive, is sometimes mandated in those companies as a backup solution for the company's computers. All it takes is a whistleblowing incident or a chat with a journalist for this to become a major blow-up for Microsoft, with companies forced by tribunals to back out of contracts with Microsoft.
Year of the Linux desktop edges ever closer.
There's a great solution to this.
Just stop using Microsoft shit. It's a lot easier than untangling yourself from Google.
Yeah, it is legitimately hard to avoid Google; if nothing else, some of your emails will probably be leaked to Gmail.
But Microsoft is pretty easy to avoid after their decade of floundering.
Whenever I have to use Windows, I just create a new throwaway account on Proton, connect it to the mother throwaway account connected to a Yahoo email account created in the before times, install what I need, and then never access that account again.
It is fucked that you almost need mob-level burner-phone precautions to have privacy and use Excel.
How can I play starcraft 2 without it?
Starcraft 2 with Battle.net has been working on Linux for over a decade. You don't even need Proton; it runs beautifully under WINE.
Apparently it runs in Proton (I haven’t tried it though).
Yes. Just use Immich for photos. AI scanning, but local and only opt-in.
Is there a free platform for blogging that works like GitHub Pages?
Gitlab. Codeberg. Neocities. Nekoweb. Wasmer. Surge. Digital Ocean. Freehostia. Awardspace. 000webhost. Static.run. Kinsta. Cloudflare Pages. Render. Hostinger. Ionos. Bluehost. Firebase. Netlify. Orbiter. Heliohost. There's probably hundreds of services with a free tier these days (though many of them will have strict limitations on website size and traffic, and you may have to run the build step locally).
you mean like stop using GitHub?
Yes.
For private repos there is Forgejo, Gitea and Gitlab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on github" anymore. But it is a small price to pay for dignity.
Why not put open-source projects on Gitlab?
You can't create a new account on Gitlab without a credit card (outside of the EU and USA).
Yes, that too.
> Slashdot: What's the reason OneDrive tells users this setting can only be turned off 3 times a year? (And are those any three times — or does that mean three specific days, like Christmas, New Year's Day, etc.)
> [Microsoft's publicist chose not to answer this question.]
Why would anyone use this crap at this point? Buy a (possibly used) mini PC or thin client, install Linux and Samba on it, and voilà: your own private "cloud", completely free of corporate interference, spyware and recurring fees. This works best with a static IP for remote access via WireGuard, but it can be made to work on a residential connection.
With a little more effort you can deploy Nextcloud, Home Assistant and a few other great FOSS projects and completely free yourself from Big Tech. The hardest part will probably be email on a residential connection, but it can be done with the help of a relay service for outgoing mail.
Presumably you just need to turn it off once, right?
Crossposting slashdot?
Heaven forfend!
They are the ones who did this interview
That's not opt-out. Opting out means the ability to say no. If you're not allowed to say no, there's no consent and you're being forced.
If you opt out and then never turn it back on, you have opted out.
Microsoft is such a scummy company. They always were but they've become even worse since they've gone all in on AI.
I wonder if this is also a thing for their EU users. I can think of a few laws this violates.
Makes me want to download and install windows, and store a picture of my hairy brown nutsack with googly eyes on it.
I think a call to Australia’s privacy commissioner might be in order.
What are they gonna do? Hard to have a convo with your master when you're on your knees...
Reminder: Microsoft owns Github and NPM.
I've never seen a better case for uploading endless AI slop photos.
This is your daily reminder not to use Microsoft.
> I uploaded a photo on my phone to Microsoft's
That's your problem right there.
> Microsoft only lets you opt out of AI photo scanning
Their _UI_ says they let you opt out. I wouldn't bet on that actually being the case. At the very least, a copy of your photos goes to the US government, and they do whatever they want with it.
fuck microsoft
Isn't it cute when there's absolutely no rationale behind a new rule, and it's simply an incursion made in order to break down a boundary?
Look, scanning with AI is available!
Wow, scanning with AI is now free for everyone!
What? Scanning with AI is now opt-out?
Why would opting-out be made time-limited?
WTF, what's so special about 3x a year? Is it because it's the magic number?
Ah, the setting's gone again, I guess I can relax. I guess the market wanted this great feature, or else they wouldn't have gradually forced it on us. Anyway, you're a weird techie for noticing it. What do you have to hide?
There is a big rationale behind it. If their AI investments don't pan out, Microsoft will cease to exist. They've been out of ideas since the late 90s. They know the subscription gravy train has already peaked. There is no more growth unless they fabricate new problems and then force you to pay for the solution to the problem they created for you. Oh, your children were kidnapped because Microsoft sold their recognition and location data to kidnappers? Well, you should have paid for Microsoft's identity protection E7 Plus add-on subscription, which prevents them from selling the data you did not authorize them to collect to entities they should know better than to deal with.
I don't even get why they would need "ideas" or "growth", tbh. They have the most popular desktop operating system and one of the most popular office suites; surely they make plenty of profit from those. If they just focused on making their existing products not shit, they would remain a profitable company indefinitely. But instead they're enshittifying everything because they want more More MORE.
Because there are too many people chasing an ever-rising line on a valuation chart. It is simply not acceptable anymore to have a reasonable business that generates solid dividends and grows with opening markets and population. Blame Silicon Valley, VCs and the like...