Microsoft documentation is a nightmare; it doesn't surprise me that there are vulnerabilities.
I recently built an SSO login using Entra ID (which was thankfully single-tenant), and I basically had to keep stabbing in the dark until I got it to work with the correct scopes and the extra fields returned with the access token.
Trying to search for any kind of getting-started guide just took me to child pages several levels deep, full of incomprehensible Microsoft jargon and hyperlinks to helpful-sounding but ultimately similarly useless articles.
I find this consistent across the Microsoft ecosystem. I thought maybe Copilot would have an edge, but it's just as lost as us (which I guess makes sense...)
I'm pretty sure what you're describing is the fact that Microsoft returns Graph scopes by default when you request a token. I agree it is very annoying, and only really documented if you read between the lines...
ohhhh the gifts multi-tenant app authorization keeps giving!
(Laid off) Microsoft PM here who worked on the patch described, as a result of the research from Wiz.
One correction I'd like to suggest to the article: it says the guidance is to check either the "iss" or "tid" claim when authorizing multi-tenant apps.
The actual guidance we provided is slightly more involved. If you validate only the tenant, there is a chance that any service principal from that tenant could be granted access.
You should always validate the subject in addition to the tenant for the token being authorized. One method is to validate the token using a combined key (for example, tid+oid), or to perform checks on both the tenant and the subject before authorizing access. More info can be found here: https://learn.microsoft.com/en-us/entra/identity-platform/cl...
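For illustration, a minimal sketch of that combined tid+oid check, in Python with the PyJWT library. The JWKS URL is Entra's common v2.0 discovery keys endpoint; the allow-listed tenant/object IDs are placeholders:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Entra's signing keys (the jwks_uri from the v2.0 OpenID configuration).
jwks = PyJWKClient("https://login.microsoftonline.com/common/discovery/v2.0/keys")

# Hypothetical allow-list of trusted (tenant id, object id) pairs.
ALLOWED = {
    ("11111111-aaaa-bbbb-cccc-222222222222",   # tid: the customer's tenant
     "33333333-dddd-eeee-ffff-444444444444"),  # oid: the expected principal
}

def authorize(token: str, audience: str) -> dict:
    signing_key = jwks.get_signing_key_from_jwt(token).key
    # Checks signature, expiry, and audience...
    claims = jwt.decode(token, signing_key, algorithms=["RS256"], audience=audience)
    # ...but tenant and subject must be validated *together*, not tenant alone.
    if (claims.get("tid"), claims.get("oid")) not in ALLOWED:
        raise PermissionError("untrusted tenant/principal combination")
    return claims
```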
Assume every token is forged. Secure by default. Even if it wastes CPU, validate each and every field. Signatures only work if verified. While you're at it, validate it against your identity database as well. Double-check, triple-check if you must. This is what I taught my devs.
Tenant, User, Group, Resource - validate it all before allowing it through.
Also, knowing the difference between authentication and authorization is crucial and should not be forgotten.
Usage of the slang "auth" is my current favorite indicator of complete cryptographic snake oil.
Also assume that valid credentials have been stolen and are being used by a hacker.
Make sure anything done in a session can be undone as part of sanitizing the user.
You are 100% correct, but really these engineers should go read the guidance - it's pretty clear what is required: https://learn.microsoft.com/en-us/entra/identity-platform/cl...
How is this just "guidance" on what to check? Shouldn't it be a yes/no type thing? I've never worked on a system that had some checkbox for permissions labelled something like "maybe users in this group should be able to read everyone's personal notes".
Not surprising at all. The configuration and docs for OAuth2 on Entra are an absolute cluster-f. Evidently, it's so confusing that not even Microsoft themselves can get it right.
Their solution to this will be to add even more documentation, as if anyone had the stomach to read through the spaghetti that exists today.
Ran into this just a few weeks ago. According to the documentation, it should be impossible to perform the authorization code flow with a scope that targets multiple resource servers. But if I request "openid $clientid/.default", it works. Kinda. At the end of the flow I get back an ID token and an access token. The ID token indicates that Azure has acknowledged the OIDC scope. But when I check the access token, I can see that the scope has been adjusted to not include "openid". And indeed, I'm unable to call Microsoft Graph, which serves as the UserInfo endpoint. I was unable to find any good explanation for this behavior.
You are confusing the purpose of the openid scope. That scope is used to "enable" OIDC in an otherwise pure-OAuth server. By itself, the openid scope never gives you access to anything, so it should not impact the access token at all - which should not include that scope (it would be useless anyway). The UserInfo endpoint should only return claims that were requested in the authorization request via scopes like `profile` and `email`. The ID token is only returned if your response_type includes `id_token`, and usually means you want the claims returned directly as a JWT ID token and won't be making UserInfo requests.
For me, the "openid" scope gives me access to the UserInfo endpoint (which is provided by the Microsoft Graph API). So probably this is something where the implementation in Azure differs from the general protocol spec?
You can see it that way, but if what you want from the UserInfo endpoint is to obtain claims about the subject, then you need to request scopes that map to claims (the openid scope maps to no claims) or explicitly request the claims directly. An authorization request that only asks for the `openid` scope should result in a UserInfo response containing only the user's `sub` (because that's a mandatory claim to return), but the OIDC server may choose to just fail the request.
> According to the documentation it should be impossible to perform the authorization code flow with a scope that targets multiple resource servers.
(I work on Entra) Can you point me to the documentation for this? This statement is not correct. The WithExtraScopesToConsent method (https://learn.microsoft.com/en-us/dotnet/api/microsoft.ident...) exists for this purpose. An Entra client can call the interactive endpoint (/authorize) with scope=openid $clientid/.default $client2/.default $client3/.default - multiple resource servers - as long as it specifies exactly one of those resource servers on the non-interactive endpoint (/token), i.e. scope=openid $clientid/.default. In the language of Microsoft.Identity.Client (MSAL), that's .WithScopes("$clientid/.default").WithExtraScopesToConsent("$client2/.default $client3/.default"). This pattern is useful when your app needs to access multiple resources and you want the user to resolve all relevant permission or MFA prompts up front.
It is true that an access token can only target a single resource server - but it should be possible to go through the first leg of the authorization code flow for many resources, and then the second leg of the authorization code flow for a single resource, followed by refresh token flows for the remaining resources.
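A rough sketch of that two-leg sequence as raw HTTP against the v2.0 endpoints, in Python with requests. The client ID and the api://res1 and api://res2 resource identifiers are placeholders:

```python
from urllib.parse import urlencode
import requests

TENANT = "common"  # or a specific tenant ID
BASE = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
REDIRECT = "http://localhost/callback"

# Leg 1: the interactive /authorize request names BOTH resources, so the
# user resolves all consent/MFA prompts up front.
authorize_url = f"{BASE}/authorize?" + urlencode({
    "client_id": CLIENT_ID,
    "response_type": "code",
    "redirect_uri": REDIRECT,
    "scope": "openid offline_access api://res1/.default api://res2/.default",
})
# (open authorize_url in a browser; the code arrives on the redirect URI)

# Leg 2: redeem the code for exactly ONE resource...
tokens = requests.post(f"{BASE}/token", data={
    "client_id": CLIENT_ID,
    "grant_type": "authorization_code",
    "code": "<code-from-redirect>",
    "redirect_uri": REDIRECT,
    "scope": "api://res1/.default",
}).json()

# ...then use the refresh token for the second resource, with no new prompt.
tokens2 = requests.post(f"{BASE}/token", data={
    "client_id": CLIENT_ID,
    "grant_type": "refresh_token",
    "refresh_token": tokens["refresh_token"],
    "scope": "api://res2/.default",
}).json()
```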
(I work on Entra) The OpenID Connect standard says that when you make a request using the OpenID Connect scopes (openid, profile, email, address, phone, offline_access), you get an access token that can be used to call the UserInfo endpoint. The OpenID Connect standard *does not say* what happens when you combine OpenID Connect scopes with OAuth scopes (like $clientid/.default).
Entra treats such requests as an OpenID Connect/OAuth hybrid. The ID token is as specified under OpenID Connect, but the access token is as expected from OAuth. In practice, these are the tokens most people want. The UserInfo endpoint is stupid - you can get all that information in the ID token without an extra round trip.
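On that last point: the claims can be read straight out of the ID token. A quick inspection sketch with PyJWT (signature verification is skipped here only because this is local inspection, not authorization):

```python
import jwt  # PyJWT

def peek(id_token: str) -> None:
    # Local inspection only: in a real app, verify the signature first.
    claims = jwt.decode(id_token, options={"verify_signature": False})
    print(claims.get("sub"), claims.get("name"), claims.get("preferred_username"))
```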
Ignoring the ridiculous complexity of Entra and how easy it is to not realize you're making a mistake with it (especially internally at Microsoft, where there's no delineation between all the internal tenants you need to support and 3P customer tenants), it's really scary how people think an auth token is the only layer of security you need. These sites should never have been exposed to the public internet (they're not now). Network security is such an afterthought but it's the best layer of defense you can have!
> Network security is such an afterthought but it's the best layer of defense you can have!
I think the opposite problem can be the case: people think that something inside a VPN is now secure and we don't have to worry too much about it.
> Network security is such an afterthought but it's the best layer of defense you can have!
I mean, it's an additional layer. Defense-in-depth is about having multiple.
Move to the cloud they said. It will be more secure than your intranet they said. Only fools pay for their own Ops team they said.
I'm so old and dumb that I don't even understand why an app for internal Microsoft use is even accessible from outside its network.
The last decade has seen an increased push toward what Google started calling "Zero Trust"[0] and dropping VPNs entirely. The issue being that once someone got into a VPN, it was much, much harder to prevent them from accessing important data.
So everything "internal" is now also external and required to have its own layer of permissions and the like, making it much harder for an attacker (as in the article) to use one exploit to access another service.
[0] https://cloud.google.com/learn/what-is-zero-trust
I don’t see that really as an argument for this. You still should use VPN as an additional layer of security, assuming that you use some proper protocol. Then zero trust applies to internal network.
In the bad old days, if your company-internal tools were full of XSS bugs, fixing them wasn't a priority, because the tools could only be accessed with a login and VPN connection.
So outside attackers have already been foiled, and insider threats have a million attack options anyway, what's one more? Go work on features that increase revenue instead.
In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
In practice zero trust at most companies means buying something extremely similar to a VPN.
> In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
But why stop there? If these apps don't need to be reachable from the public internet, then by setting up a VPN an attacker has to exploit both the VPN and the service to have an impact. Denial of a specific service is harder, and exploiting known CVEs is harder.
Because the protection that the VPN provides decreases the risk of having bugs to the point where they won't get prioritized, ever.
That is just bad management, to be fair. Companies need to intentionally increase risks before they can fix them?
Rule #1 of business, government, or education: Nobody, ever, ever, does what they "should."
Even here: Hacker News "should" support 2 factor authentication, being an online forum literally owned by a VC firm with tons of cash, but they don't.
Should they? From a threat modeling perspective, what are the consequences for HN of a user having their password compromised? Are those consequences serious enough to warrant the expense and added complexity of adding MFA?
I don't really understand this reasoning.
HN allows for creating a user. HN requires every post and comment to be created by a user. HN displays the user for each post and comment. HN allows for browsing users' post and comment history. HN allows for flagging posts and comments, but only by users. HN allows for voting on posts and comments, but only by users. HN also has some baseline guardrails for fresh accounts. Very clearly, the concept of user accounts is central to the overall architecture of the site.
And you ask if it is in HN's interest to ensure people's user accounts remain in their control? As far as I can tell, literally all mutative actions you can take on HN are bound to a user, and that covers all content submission actions. They even turn on captchas from time to time to combat bots. [0] How could it not be in their interest to ensure people can properly secure their user accounts?
And if I further extend this thinking, why even perform proper password practices at all (hashing and salting)? Heck, why even check passwords, or even have user accounts at all?
So in my thinking, this is not a reasonable question to ponder. What is, is that maybe the added friction of more elaborate security practices would deter users, or at least that's what [0] suggests to me. But then the importance of user account security and the benefit of 2FA really aren't even a question; 2FA is accepted to be more secure, and it's more a choice of giving up on it in favor of some perceived other rationale.
[0] https://news.ycombinator.com/item?id=34312937
TBF I didn't ask if it was in their interests, I asked if the consequences of a password related attack were serious enough to warrant the expense of implementing MFA.
Let's look at some common attacks:
- A single user has their password compromised (e.g. by a keylogger). Here the impact to HN is minimal; the user may lose their account if they can't get through some kind of reset process to regain access to it. MFA may protect against this, depending on the MFA type and the attacker.
- An attacker compromises the HN service to get the password database. MFA's not really helping HN here at all, and assuming they're using good password storage practices, the attacker probably isn't retrieving the passwords anyway.
- An attacker uses a supply chain attack to get MITM access to user data via code execution on HN's server(s). Here MFA isn't helping at all.
It's important to recognize that secure is not a binary state, it's a set of mitigations that can be applied to various risks. Not every site will want to use all of them.
Implementing mechanisms has a direct cost (development and maintenance of the mechanism) and also an indirect cost (friction for users), each service will decide whether a specific mitigation is worth it for them to implement on that basis.
Whether they are "serious enough" is a perceived attribute, so it is on them to evaluate, not on any one of us. Depending on that evaluation, it could mean a blank check or a perpetual zero. The way HN is architected (as described prior), and it being a community space, it makes no sense to me not to do it in general. Even considering costs, I'm not aware of e.g. TOTP 2FA being particularly expensive to implement at all (see the sketch at the end of this comment).
Certainly, not doing anything will always be the more frugal option, and people are not trading on here, so financial losses of people are not a concern. The platform isn't monetized either. Considering finances is important, but reversing the arrow and using it as a definitive reason to not do something is not necessarily a good idea.
Regarding the threat scenarios, MFA would indeed help the most against credential reuse based attacks, or in cases of improper credential storage and leakage, but it would also help prevent account takeovers in cases of device compromise. Consider token theft leading to compromised HN user account and email for example - MFA involving an independent other factor would allow for recovery and prevent a complete hijack.
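For scale: a TOTP second factor is a few lines of code. A minimal sketch with the pyotp library (enrollment UI, secret storage, and rate limiting omitted; the account names are placeholders):

```python
import pyotp

# At enrollment: generate and store a per-user secret; show it as a QR code.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="user@example.com", issuer_name="news.ycombinator.com")

# At login, after the password checks out: verify the submitted 6-digit code.
def second_factor_ok(stored_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(stored_secret).verify(submitted_code, valid_window=1)
```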
Yes, it would help against some attack scenarios, no argument there. The question is, does HN regard it as sufficiently important? Changing the codebase to implement MFA would at the least require some development effort/additional code, which has a cost. Whilst I'm not privy to HN's development budget, given that the site doesn't seem to change much, my guess is they're not spending a lot at the moment...
MFA can also add a support cost where a user loses their MFA token. If you allow e-mail-only reset, you lose some security benefits; if you use backup tokens, you run the risk that people don't store those securely or can't remember where they put them after a longer period.
As there's no major direct impact to HN that MFA would mitigate, the other question is, is there a reputational impact to consider?
I'd say the answer to that is no, in that all the users here seem fine with using the site in its current form :)
Other forum sites (e.g. reddit) do offer MFA, but I've never seen someone comment that they use reddit and not HN due to the relative availability of that feature, providing at least some indication that it's not a huge factor in people's decision to use a specific site.
> what are the consequences for HN of a user having their password compromised
HN does not enforce anonymity, so some users' accounts (many startup founders, btw) are tied to their real identities.
A compromised password could allow a bad actor to impersonate those users. That could be used to scam others or to kickstart some social engineering that could be used to compromise other systems.
Indeed a consequence for the individual user could be spammed posts, but for scams, I'd guess that HN would fall back on their standard moderation process.
The question was though, what are the consequences for HN, rather than individual users, as it's HN that would take the cost of implementation.
Now if a lot of prominent HN users start getting their passwords compromised and that leads to a hit on HN's reputation, you could easily see that tipping the balance in favour of implementing MFA, but (AFAIK at least) that hasn't happened.
Now ofc you might expect orgs to be pro-active about these things, but having seen companies that had actual financial data and transactions on the line drag their feet on MFA implementations in the past, I kind of don't expect that :)
I think this conversation would benefit from introducing scale and audience into the equation.
Individual breaches don't really scale (e.g. device compromise, phishing, credential reuse, etc.), but at scale everything scales. At scale then, you get problems like hijacked accounts being used for spam and scams (e.g. you can spam in comment sections, or replace a user's contact info with something malicious), and sentiment manipulation (including vote manipulation, flagging manipulation, propaganda, etc.).
HN, compared to something like Reddit, is a fairly small-scale operation. Its users are also more on the technically involved side. It makes sense then that, due to the lower velocity and unconventional userbase, they might still have this under control via other means, or can dynamically adjust to the challenge. But on its own, this is not a technical trait. There's no hard and fast rule to tell when they cross the boundary into the territory where adding moderation manpower is a worse deal than just spending the days or weeks to implement better account controls.
I guess if I really needed to put this into some framework, I'd weigh the amount of time spent on chasing the aforementioned abuse vectors compared to the estimated time required to implement MFA. The forum has been operating for more than 18 years. I think they can find an argument there for spending even a whole 2 week sprint on implementing MFA, though obviously, I have no way of knowing.
And this is really turning the bean counting up to the maximum. I'm really surprised that one has to argue tooth and nail about the rationality of implementing basic account controls, like MFA, in the big 2025. Along with session management (the ability to review all past and current sessions, to retrieve an immutable activity log for them, and a way to clear all other active sessions), it should be the bare minimum these days. But then, even deleting users is not possible on here. And yes, I did read the FAQ entry about this [0]; it misses the point hard. Deleting a user doesn't necessarily have to mean the deletion of their submissions, and no, not deleting submissions doesn't render the action useless, because, as described, user hijacking can and I'm sure does happen. A disabled user account, however, couldn't be hijacked. I guess one could reasonably take issue with calling this "user deletion", though.
[0] https://news.ycombinator.com/newsfaq.html
It's interesting you suggest a two-week sprint for this. How large do you think HN's development team is? Do you know if they even have a single full-time developer?
I don't, but the lack of changes in the basic functionality of the site over the years I've used it makes me feel that they may not have any/many full-time devs working on it...
I really don't think the site is like this because they lack capacity. It's pretty clearly an intentional design choice in my view, like with Craigslist.
But no, I do not have any information on their staffing situation. I presume you don't either though, do you?
Indeed I don't. However, if we examine the pace of new features over the last several years (I can't think of a single way this site has changed over that time period), it's reasonable to surmise that there isn't a lot of development of the user-accessible/visible portions of the site, and that leads me to guess that they don't have much in the way of dev resources.
Oh boy, this should be good. Mark my words, this will be followed by a "proof" of nonexistence, in the following form:
"Well, let's build a list of attacks that I can think of off-the-cuff. And then let's iterate through that list of attacks: For each attack, let's build a list of 'useful' things that attackers could possibly want.
Since I'm the smartest and most creative person on the planet, and can also tell the future, my lists of ideas here will actually be complete. There's no way that any hacker could possibly be smart enough or weird enough to think of something different! And again, since I'm the smartest and most creative --and also, magically able to tell the future-- and since I can't think of anything that would be 'worth the cost', then this must be a complete proof as to why your security measure should be skipped!"
I'm firmly in the pro-2FA camp, but merely as a point of discussion: the Arc codebase is already so underwater on actual features that would benefit a forum, and if I changed my password to hunter2 right now, the only thing that would happen is my account would shortly be banned when spammers start to hate-bomb or crypto-scam-bomb discussion threads. Dan would be busy, I would be sad, nothing else would happen.
For accounts that actually mean something (Microsoft, Azure, banking, etc), yes, the more factors the better. For a lot of other apps, the extra security is occupying precious roadmap space[1]
1: I'm intentionally side-stepping the "but AI does everything autonomously" debate for the purpose of this discussion
I am currently having this debate at $DAYJOB, having come from a zero trust implementation to one using fucking Cloudflare Warp. The cost of your "just use a VPN" approach - or, if I'm understanding your point correctly, "use a VPN and zero trust"(?!) - is that VPNs were designed for on-premises software. In modern times, the number of cases where one needs to perform a fully authenticated, perfectly valid action from a previously unknown network on previously unconfigured compute is bigger than in the "old days".
GitHub Actions are a prime example. Azure's network, their compute, but I can cryptographically prove it's my repo (and my commit) OIDC-ing into my AWS account. But configuring a Warp client on those machines is some damn nonsense
If you're going to say "self hosted runners exist," yes, so does self-hosted GitHub and yet people get out of the self-hosted game because it eats into other valuable time that could be spent on product features
> is that VPNs were designed for on-premises software.
The way I see it, a VPN is just a network extender. Nothing to do with being designed for on-premise software.
With a VPN as an additional layer, most vulnerability scanners can't reach your services anymore. It reduces the likelihood that you are impacted immediately by some publicly known CVE. That is the only purpose of the VPN here.
The VPN may also have vulnerabilities, but to make an impact, vulnerabilities in both the VPN and the service are required at the same time. The more different services/protocols you have behind the VPN, the more useful it is. It might not make sense if all you need is SSH, for example: then you have a 1:1 protocol ratio, and SSH could be the more secure protocol.
It doesn't, but from my perspective the thinking behind zero trust is partly to stop treating the network as a layer of security. Which makes sense to me - the larger the network grows, the harder it is to know all its entry points and the transitive reach of those.
A VPN? Yes, by definition. Zero trust requires that every connection is authenticated and users are only granted access to the app they request. They never “connect to the network” - something brokers that connection to the app in question.
A VPN puts a user on the network and allows a bad actor to move laterally through it.
It doesn't have to. There's nothing to stop you using a VPN as an initial filter to reduce the number of people who have access to a network and then properly authenticating and authorizing all access to services after that.
In fact, I'd say it's a good defence-in-depth approach, which comes at the cost of increased complexity.
It also prevents the whole world from scanning your outdated public interfaces. Before they can do that, they need to bypass the VPN.
If there are tens of different services, is it more likely that one of them has a vulnerability than that both the VPN and a service do? And a vulnerability in the VPN alone does not matter if your internal network is built as if it were facing the public internet. You might be able to patch it before a vulnerability in the other services is found.
I’m not saying you can’t have your own definition.
But I am saying that a VPN isn’t zero trust, by the agreed upon industry definition. There’s no way to make a VPN zero trust, and zero trust was created specifically to replace legacy VPNs.
The big problem with the ZT approach is that smaller shops don't have a lot of developers and testers (some maybe with a security inclination) to be certain, to a somewhat high degree, that their app is written in a secure manner. Or be able to continuously keep abreast of every new security update Microsoft or another IdP makes to their stack.
It is easy for Google/Microsoft and any other FAANG-like company to preach about Zero Trust when they have unlimited (for whatever value of unlimited you want to consider) resources. And even then they get it wrong sometimes.
The simpler alternative is to publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
> publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
Or just use Cognito. It can wrap up all the ugly Microsoft authentication into its basic OAuth, and API Gateway can use and verify Cognito tokens for you transparently. It's as close to the zero trust model as we could get in a small developer shop.
The zero trust architecture implies (read: requires) that authentication occurs at every layer. Token reuse constitutes a replay attack that mandatory authentication is supposed to thwart. Bypass it and the system's security profile reverts to perimeter security, with the added disadvantage of that perimeter being outside your org's control.
Zero trust is a good concept turned into a dumb practice. Basically people buying Google's koolaid for this forgot about "defense in depth". Yeah, authenticating every connection is great, throwing a big effing moat around it too is better.
The other thing is most companies are not Google. If you're a global company with hundreds of thousands of people who need internal access, moats may be non-ideal. For a business located in one place, local-only on-premise systems that block access from any country it doesn't actively do business with are leaps and bounds better.
> Move to the cloud they said. It will be more secure than your intranet they said. Only fools pay for their own Ops team they said.
It seems that the fundamental issue surfaced in the blog post is that developers who work on authorization in resource servers are failing to check basic claims in tokens, such as the issuer, the audience, and the subject.
If your developers are behind this gross oversight, do you honestly expect an intranet to make a difference?
Listen, the underlying issue is not cloud vs self-hosted. The underlying issue is that security is hard, and in general there is no feedback loop except security incidents. Placing your apps on an intranet, or behind a VPN, does nothing to mitigate this issue.
But of course it does provide an additional layer of security that indeed could have reduced the likelihood of this issue being exploited.
For me, the core of the discovered issue was that applications intended purely for use by internal MS staff were discoverable and attackable by anyone on the Internet, and some of those applications had a misconfiguration that allowed them to be attacked.
If all those applications had been behind a decently configured VPN service which required MFA, any attacker who wanted to exploit them would first need access to that VPN, which is another hurdle to cross and would reduce the chance of exploitation.
With a target like MS (and indeed most targets of any value) you shouldn't rely solely on the security provided by a VPN, but it can provide another layer of defence.
For me the question should be, "is the additional security provided by the VPN layer justified against the costs of managing it, and potentially the additional attack surface introduced with the VPN".
I work at a corporation that uses Fortinet, not just for VPN but for AV and web filtering. It aggregates traffic together, increases the attack surface, and makes us vulnerable to zero-day attacks. All to protect sensitive data that is almost entirely composed of connections from Microsoft software to Microsoft servers, using all the normal SSO/authorisation stuff. It probably is required from a compliance perspective, but it just seems like a massive tradeoff for security.
Everything in security is a tradeoff, and unfortunately compliance risks are real risks :D
That said yep corps over-complicate things and given the number of 0-days in enterprise VPN providers, it could easily be argued that they add more risk than they mitigate.
That's not to say a good VPN setup (or even allow-listing source IP address ranges) doesn't reduce exposure of otherwise Internet visible systems, reducing the likelihood of a mis-configuration or vulnerability being exploited...
Yeah, agreed. And some of these products can be configured to be more specific in whitelisting users to particular services. But only if they are actually configured to do that.
"The underlying issue is that security is hard and in general there is no feedback loop except security incidents."
This is true, tbh. Computer architecture is already hard enough, and cybersecurity is like a whole different field on top, especially if the system/program is complex.
For me, the real problem isn't that the application is publicly exposed (i.e. not on an intranet).
I think the real problem is that these applications (via Entra ID) are multi-tenant, rather than dedicated single-tenant instances.
Here, we have critical identity information that is being stored and shared in the same database with other tenants (including malicious actors). This makes multi-tenancy violations common.
Even if Entra ID had a robust mechanism to perform tenancy checks (i.e. that an object belongs to a given tenant), there would still be vulnerabilities.
For example, as you saw in the blog post, multi-tenant requests (requests that span two or more tenants) are fundamentally difficult to authorize. A single mistake can lead to complete compromise.
Compare this to a single-tenant app. There, the attacker would first need to be authenticated as a user within your tenant. This makes pre-auth attacks more difficult.
That is probably still good advice for most companies. Joe's roof-fixing business may be the best roof-fixing business in 3 states, but would you want them to run their own server for their website, email, and booking?
Anyone who is on this forum is capable of building their own stuff, and running their own server, but that is not most people.
$0 in rewards for RCE on the Windows build servers is crazy. I understand he didn’t find an actual zero-day, only a configuration issue, but still. Imagine the global havoc you can cause if you can pollute the build environment with backdoored DLLs…
I was a Windows build engineer at Microsoft. I am unfamiliar with this specific UI for managing build tools (I think it may have been added after I left); however, I would be surprised if it was actually RCE-capable.
I notice that it requires the tool to be pulled from NuGet. While it looks like you could enter any package and NuGet source, I would be very surprised if there wasn't a locked-down whitelist of allowed sources (limited to internal Microsoft NuGet feeds).
Locking down NuGet packages was one of the primary things we (the Windows Engineering System team) were heavily focusing on when I left years ago. We were explicitly prevented from using public NuGet packages at all. We had to repackage them and upload them to the internal source to be used.
It's Microsoft. I'm sure there are wonderful people there, but haven't we recently witnessed their master key leak, their engineers begging GPT in PRs to do stuff, their CEO boasting that "backend" engineers are going away... I wouldn't rely on that company for anything, but I acknowledge a ton of people are not in that position. If they do stay, however, it's malpractice.
It's all very simple: Entra* (Azure AD, or however you'd call it) should not be used for AuthZ. Entra AuthN is okayish, but forget about Entra AuthZ; do it all yourself. It's all very simple to avoid once you do AuthZ yourself.
* No idea why the rename happened. Does some manager at Microsoft have the plaque "Renomino, ergo sum." ("I rename, therefore I am")?
It's ok to use Entra for AuthZ. Just click the box that says "Require users to be assigned to this application" and assign them [1]. However - that's really the only AuthZ feature Entra has. If you don't enable AuthZ, you should not expect Entra to just magically do AuthZ for you.
Edit: I would add - simple allow/deny authz is only relevant for the very simplest of apps (where all users have the same permissions). For any complex application, users will have different levels of access, which usually requires the application to do AuthZ.
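If you do need per-permission AuthZ in the app, the check against Entra-issued claims stays small; app role values assigned to the caller arrive in the token's "roles" claim. A hypothetical sketch, assuming `claims` is the already-validated dict of token claims:

```python
# Assumes `claims` holds already-validated token claims (signature,
# audience, and tenant/subject checks done elsewhere).
def require_role(claims: dict, needed: str) -> None:
    # Entra places the caller's assigned app role values in "roles".
    if needed not in claims.get("roles", []):
        raise PermissionError(f"missing app role: {needed}")

# require_role(claims, "Notes.Read.All")  # hypothetical role value
```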
>For any complex application, users will have different levels of access, which usually requires the application to do AuthZ.
Any application where AuthZ isn't simply yes/no, which rather quickly means just about all of them (even simple blogs have an admin tier), except maybe a very heavily microservice-based architecture - and even there one would still want a much more convenient interface than Entra to see/manage the access permissions centrally... Entra AuthZ is at best a temporary development aid, and it's so easy to roll AuthZ yourself that one might as well do it.
That’s what you get. Entra ID doesn’t allow you to blacklist or whitelist specific tenants for multi tenant apps, which causes problems like this.
Add the fact that MSAL doesn’t work for stuff like browser extensions, so people have to implement their own security solutions to interact with Entra ID and it’s not surprising there are so many issues.
> Entra ID doesn’t allow you to blacklist or whitelist specific tenants for multi tenant apps.
This is one very annoying missing "feature": I'd like to say this app is available to the following tenants. No; it's only "my tenant" or "all tenants in Azure".
One workaround I use is to set up apps with "only this tenant" and invite users from other tenants into my tenant. The other approach is to say "all tenants" and then use a group to enforce who can actually use the app.
I don't know if there are any reasons behind this limitation, or whether it's just an oversight, or no client big enough has asked for this feature.
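For reference, the guest-invite workaround can be automated with one Microsoft Graph call. A sketch in Python with requests, assuming the app token carries the User.Invite.All Graph permission; the redirect URL is a placeholder:

```python
import requests

def invite_guest(graph_token: str, email: str) -> dict:
    # Invites an external user into our tenant as a B2B guest.
    # graph_token must carry the User.Invite.All Graph permission.
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/invitations",
        headers={"Authorization": f"Bearer {graph_token}"},
        json={
            "invitedUserEmailAddress": email,
            "inviteRedirectUrl": "https://myapp.example/welcome",  # placeholder
            "sendInvitationMessage": True,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```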
Inviting individual users is a good pattern. If you want to allow an entire tenant into your tenant (e.g. if your parent company has a subdivision that has their own tenant), Entra has cross tenant access [1] for that use case.
Generally, you should say "only this tenant" unless you're a SaaS provider. And if you're a SaaS provider, you should really already understand the need to keep your various customers' data separate.
I am aware of the cross-tenant functionality, but it does not come free - you need at least a P1 subscription in all tenants involved. And you can't do this per user, just per tenant.
I know it's a joke, but it's funny because it's (somewhat) true. To add to the confusion, sometimes one of them gets abbreviated "authn". That is so unhelpful.
Okta (the other elephant in the room) has its own issues, but at least it has decent documentation, and even though it's more expensive, I think it's worth paying that price just to keep security in a separate domain rather than co-mingle it with other Azure services.
This is still very effective against other organizations, not just Microsoft. Of course they won't pay a bounty, but any org that uses Microsoft 365/Office 365 or Entra ID (formerly Azure Active Directory) can be probed and abused.
This wasn't new news either; AFAIK from the article, they just patched the Microsoft tools, but they won't be pushing a fix tenant-wide for all orgs.
Ohh, that's probably why our integration suddenly stopped working for single-tenant app registrations right before release. We were using the /common endpoint for everyone. That is disallowed now.
Did he really get no bounties out of this? The guy found a way into the build boxes retail Windows is built on, potentially found the private key used to generate license keys, and likely could have dug in a little more after getting RCE on the build box to exfil the latest Windows 11 source code. He even found a way to issue rewards. And they still gave him nothing?
I will forever remain radicalized by how every tech company decided to just say "fuck it" in 2023. (Ex-Google; I left in 2023 over similar shenanigans.)
There should be a major public reckoning over this. But there can't be; they hold the cards, and the only real view of this you'd have is day-to-day on Blind and some occasional posts that stir honest discussion here.
I guess we just get to grin and bear it while they give gold statues and millions to the right politicians.
It'll come. Can't say in what form, but the reckoning will come. Probably antitrust, or anti-tech regulations, as the public hatred of the tooligarchs grows. The problem with being out of touch is you can't see the ground shifting beneath your feet.
Have they already gotten so drunk on "zero trust" that they don't think it should matter if attackers see their source code? Then again, they are open-sourcing a ton of stuff these days...
Their SECURITY.md mentions bug bounties, yet if your submission has anything to do with GitHub it's immediately disqualified. They refuse to remove that (in my opinion) misleading language.
Fwiw, the way it works is that Microsoft doesn't really have a bug bounty program. Individual Microsoft teams have bug bounty programs (or not). Platform teams like Entra, Windows, and Azure have robust programs. However, when teams that operate on top of platforms misconfigure those platforms (as happened here), those bugs are owned by the teams that operate on top of the platform, not by the platform.
That's some exceptionally shallow thinking on their part. I think many people would agree that part of the vulnerability is that the authentication configuration options do not map well onto real-world use cases, the documentation surrounding this is absent or confusing, and even internal teams that should know better are creating insecure services an alarming percentage of the time.
This is what I like about actual safety culture, like you would find in aviation: _all causes_ are to be investigated, all the way back to the shape, size, and position of the switches on the flight deck.
It's difficult to take Microsoft's stance seriously. It makes the prices for their "service" seem completely unjustifiable.
My own, small experience with MSRC is that indeed their bug bounty program is not good; they take any possible opportunity to avoid payouts.
This means that a lot of genuine bug bounty hunters just won't look at MS stuff, and MS avoids getting things fixed. Instead, other attackers will be the ones finding things, and they likely won't report them to MS...
If Azure's horrific security track record (tens of exploits, often cross-tenant, often trivial) over the past few years doesn't give you pause, their joke of a bug bounty definitely should.
Obviously nobody with power cares about security in Microsoft's Azure branch. Why does anyone continue trusting them? (I mean, I know that Azure is not something you buy by choice; you do it because you got a good deal on it or were a Microsoft shop before, but still.)
Now remember, these dimwits are bragging that 30% of their code is now written by AI; they have mandated Microsoft Accounts, set up OneDrive backup by default, and are providing infrastructure to OpenAI, which is currently required to preserve even deleted chats. They also own LinkedIn.
This totally has no foreseeable potential consequences. It would be a real shame if some foreign hostile government with nuclear weapons managed to connect MS Account, LinkedIn Profile, and OpenAI accounts together by shared emails and phone numbers. Is it really worth starting a war for the crime of depantsing the nation?
Having done two rounds at different Fortune 10s (one being Microsoft), I can tell you: this isn't AI, this is the result of years of "make it work" and duct tape.
This is "it'll be safe if we leave it on the intranet", and then someone says "zero trust!", and all of a sudden things that had authentication on the inside are also going through a new and different layer of authentication. A stack of totally reasonable expectations piles tolerance on tolerance, and just like the SIG Sauer P320, it has a habit of shooting you in the foot when you least expect it.
Microsoft documentation is a nightmare, it doesn't surprise me there are vulnerabilities.
I recently built an SSO login using Entra ID (which was thankfully single-tenant) and I basically had to keep randomly stabbing in the dark until I got it to work with the correct scopes and extra fields returned with the access token.
Trying to search for any kind of Getting started guide just took me to child pages several levels deep full of incomprehensible Microsoft jargon and hyperlinks to helpful-sounding but ultimately similarly useless articles.
I find this consistent across the Microsoft ecosystem. I thought maybe Copilot would have an edge, but it’s just as lost as us (which i guess makes sense..)
I'm pretty sure what you're describing is the fact that Microsoft return Graph scopes by default when you request a token, I agree it is very annoying and only really documented if you read between the lines...
ohhhh the gifts multi-tenant app authorization keeps giving!
(laid off) Microsoft PM here that worked on the patch described as a result of the research from Wiz.
One correction I’d like to suggest to the article: the guidance given is to check either the “iss” or “tid” claim when authorizing multi-tenant apps.
The actual recommended guidance we provided is slightly more involved. There is a chance that when only validating the tenant, any service principal could be granted authorized access.
You should always validate the subject in addition to validating the tenant for the token being authorized. One method for this would be to validate the token using a combined key (for example, tid+oid) or perform checks on both the tenant and subject before authorizing access. More info can be found here:
https://learn.microsoft.com/en-us/entra/identity-platform/cl...
Assume every token is forged. Secure by default. Even if it wastes cpu, validate each and every field. Signatures only work if verified. While you're at it, validate it against your identity database as well. Double check, triple check if you must. This is what I taught my devs.
Tenant, User, Group, Resource - validate it all before allowing it through.
Also knowing the difference between authentication and authorization is crucial and should not be forgotten.
Usage of the slang "auth" is my current favorite indicator of complete cryptographic snakeoil.
also assume that the valid credentials have been stolen and are being used by a hacker.
make sure anything done in a session can be undone as part of sanitizing the user
You are 100% correct but really these engineers should go read the guidance - it’s pretty clear what is required: https://learn.microsoft.com/en-us/entra/identity-platform/cl...
How is their "guidance" on what to check? Shouldn't it be a yes / no type thing? I've never worked on a system that had some checkbox for permissions that was labelled something like "maybe users in this group should be able to read everyone's personal notes".
Not surprising at all. The configuration and docs for Oauth2 on Entra is an absolute cluster-f. Evidently, it’s so confusing that not even Microsoft themselves can get it right.
Their solution to this will be to add even more documentation, as if anyone had the stomach to read through the spaghetti that exist today.
Ran into this just a few weeks ago. According to the documentation it should be impossible to perform the authorization code flow with a scope that targets multiple resource servers. But if I request "openid $clientid/.default" it works. Kinda. At the end of the flow I get back an ID token and and access token. The ID token indicates that Azure has acknowledged the OIDC scope. But when I check the access token I can see that the scope has been adjusted to not include "openid". And indeed I'm unable to call Microsoft Graph which serves as the UserInfo endpoint. I was unable to find any good explanation for this behavior.
You are confusing the purpose of the openid scope. That scope is used to "enable" OIDC in an otherwise pure-OAuth server. By itself, the openid scope never gives you access to anything itself, so it should not impact the Access Token at all - which should not include that scope (as it would be useless anyway). The UserInfo endpoint should only return claims that were requested in the authorization request via scopes like `profile` and `email`. The ID token is only returned if your response_type includes `id_token` and usually means you want the claims directly returned as a JWT ID Token, and won't be making Userinfo requests.
For me, the "openid" scope gives me access to the UserInfo endpoint (which is provided by the Microsoft Graph API). So probably this is something where the implementation in Azure differs from the general protocol spec?
You can see it that way, but you need to understand that if what you want from the Userinfo endpoint is to obtain claims about the subject... and to do that, you need to require scopes that map to claims (the openid scope does not map any claim) or you need to explicitly request the claims directly. An authorization request that only requests the `openid` scope should result in a Userinfo response containing only the user's `sub` (because that's a mandatory claim to return) but the OIDC server may chose to just fail the request.
> According to the documentation it should be impossible to perform the authorization code flow with a scope that targets multiple resource servers.
(I work on Entra) Can you point me to the documentation for this? This statement is not correct. The WithExtraScopesToConsent method (https://learn.microsoft.com/en-us/dotnet/api/microsoft.ident...) exists for this purpose. An Entra client can call the interactive endpoint (/authorize) with scope=openid $clientid/.default $client2/.default $client3/.default - multiple resource servers as long as it specifies exactly one of those resource servers on the non-interactive endpoint (/token) - i.e. scope=openid $clientid/.default. In the language of Microsoft.Identity.Client (MSAL), that's .WithScopes("$clientid/.default").WithExtraScopesToConsent("$client2/.default $client3.default"). This pattern is useful when your app needs to access multiple resources and you want the user to resolve all relevant permission or MFA prompts up front.
It is true that an access token can only target a single resource server - but it should be possible to go through the first leg of the authorization code flow for many resources, and then the second leg of the authorization code flow for a single resource, followed by refresh token flows for the remaining resources.
(I work on Entra) The OpenID Connect standard says that when you make a request using the OpenID Connect scopes (openid, profile, email, address, phone, offline_access), you get an access token that can be used to call the UserInfo endpoint. The OpenID Connect standard *does not say* what happens when you combine OpenID Connect scopes with OAuth scopes (like $clientid/.default).
Entra treats such requests as an OpenID Connect OAuth hybrid. The ID token is as specified under OpenID Connect, but the access token is as expected from OAuth. In practice, these are the tokens most people want. The UserInfo endpoint is stupid - you can get all that information in the ID token without an extra round trip.
Ignoring the ridiculous complexity of Entra and how easy it is to not realize you’re making a mistake with it (especially internal at Microsoft where there’s no delineation between all the internal tenants you need to support and 3P customer tenants), it’s really scary how people think an auth token is the only layer of security you need. These sites shouldn’t have ever been exposed to public internet (they’re not now). Network security is such an afterthought but it’s the best layer of defense you can have!
> Network security is such an afterthought but it’s the best layer of defense you can have!
I think the opposite problem can be the case: people think that something inside a VPN is now secure and we don't have to worry too much about it.
> Network security is such an afterthought but it’s the best layer of defense you can have!
I mean, it's an additional layer.
Defense-in-depth is about having multiple.
Move to the cloud they said. It will be more secure then your intranet they said. Only fools pay for their own Ops team they said.
I’m so old and dumb that I don’t even understand why an app for internal Microsoft use is even accesible from outside its network.
The last decade has seen an increase push in what Google started calling "Zero Trust"[0] and dropping VPNs entirely. The issue being that once someone got into a VPN it was much, much harder to prevent them from accessing important data.
So everything "internal" is now also external and required to have its own layer of permissions and the like, making it much harder for, e.g. the article, to use one exploit to access another service.
[0] https://cloud.google.com/learn/what-is-zero-trust
I don’t see that really as an argument for this. You still should use VPN as an additional layer of security, assuming that you use some proper protocol. Then zero trust applies to internal network.
In the bad old days, if your company-internal tools were full of XSS bugs, fixing them wasn't a priority, because the tools could only be accessed with a login and VPN connection.
So outside attackers have already been foiled, and insider threats have a million attack options anyway, what's one more? Go work on features that increase revenue instead.
In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
In practice zero trust at most companies means buying something extremely similar to a VPN.
> In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
But why stop there? If these apps are not required to be accessed from public world, by setting up VPN you need to exploit both VPN and and the service to have an impact. Denial of specific service is harder and exploiting known CVEs is harder.
Because the protection that the VPN provides decreases the risk of having bugs to the point where they won't get prioritized, ever.
That is just bad management, to be fair. Companies need to intentionally increase risks before they can fix them?
Rule #1 of business, government, or education: Nobody, ever, ever, does what they “should.”
Even here: Hacker News “should” support 2 factor authentication, being an online forum literally owned by a VC firm with tons of cash, but they don’t.
Should they? From a threat modeling perspective, what's the consequences for HN of a user having their password compromised? Are those consequences serious enough to warrant the expense and added complexity of adding MFA?
I don't really understand this reasoning.
HN allows for creating a user. HN requires every post and comment to be created by a user. HN displays the user for each post and comment. HN allows for browsing users' post and comment history. HN allows for flagging posts and comments, but only by users. HN allows for voting on posts and comments, but only by users. HN also has some baseline guardrails for fresh accounts. Very clearly, the concept of user accounts is central to the overall architecture of the site.
And you ask if it is in HN's interest to ensure people's user accounts remain in their control? Literally all mutative actions you can take on HN are bound to a user that I can tell, with that covering all content submission actions. They even turn on captchas from time to time for combating bots. [0] How could it not be in their interest to ensure people can properly secure their user accounts?
And if I further extend this thinking, why even perform proper password practices at all (hashing and salting)? Heck, why even check passwords, or even have user accounts at all?
So in my thinking, this is not a reasonable question to ponder. What is, is that maybe the added friction of more elaborate security practices would deter users, or at least that's what [0] suggests to me. But then the importance of user account security or the benefit of 2FA really isn't even a question, it's accepted to be more secure, it's more a choice of giving up on it in favor of some perceived other rationale.
[0] https://news.ycombinator.com/item?id=34312937
TBF I didn't ask if it was in their interests, I asked if the consequences of a password related attack were serious enough to warrant the expense of implementing MFA.
Let's look at some common attacks :-
- Single user has their password compromised (e.g. by a keylogger). Here the impact to HN is minimal, the user may lose their account if they can't get through some kind of reset process to get access to it. MFA may protect against this, depending on the MFA type and the attacker.
- Attacker compromises HN service to get the password database. MFA's not really helping HN here at all and assuming that they're using good password storage processes the attacker probably isn't retrieving the passwords anyway.
- Attacker uses a supply chain attack to get MITM access to user data via code execution on HNs server(s). Here MFA isn't helping at all.
It's important to recognize that secure is not a binary state, it's a set of mitigations that can be applied to various risks. Not every site will want to use all of them.
Implementing mechanisms has a direct cost (development and maintenance of the mechanism) and also an indirect cost (friction for users), each service will decide whether a specific mitigation is worth it for them to implement on that basis.
Whether they are "serious enough" is a perceived attribute, so it is on them to evaluate, not on any one of us. Depending, it could mean a blank check, or a perpetual zero. The way HN is architected (as described prior), and it being a community space, it makes no sense to me not to do it in general, and even considering costs, I'm not aware of e.g. TOTP 2FA being particularly expensive to implement at all.
Certainly, not doing anything will always be the more frugal option, and people are not trading on here, so financial losses of people are not a concern. The platform isn't monetized either. Considering finances is important, but reversing the arrow and using it as a definitive reason to not do something is not necessarily a good idea.
Regarding the threat scenarios, MFA would indeed help the most against credential reuse based attacks, or in cases of improper credential storage and leakage, but it would also help prevent account takeovers in cases of device compromise. Consider token theft leading to compromised HN user account and email for example - MFA involving an independent other factor would allow for recovery and prevent a complete hijack.
yes it would help against some attack scenarios, no argument there. The question is, do HN regard it as sufficiently important. Changing the codebase to implement MFA would at the least require some development effort/additional code, which has a cost. Whilst I'm not privy to HNs development budget, given that it doesn't seem to change much, my guess is they're not spending a lot at the moment...
MFA can also add a support cost, where a user loses their MFA token. If you allow e-mail only reset, you lose some security benefits, if you use backup tokens, you run the risk that people don't store those securely/can't remember where they put them after a longer period.
As there's no major direct impact to HN that MFA would mitigate, the other question is, is there a reputational impact to consider?
I'd say the answer to that is no, in that all the users here seem fine with using the site in its current form :)
Other forum sites (e.g. reddit) do offer MFA, but I've never seen someone comment that they use reddit and not HN due to the relative availability of that feature, providing at least some indication that it's not a huge factor in people's decision to use a specific site.
> what's the consequences for HN of a user having their password compromised
HN does not enforce anonymity, so the identity of some users (many startup owners btw) is tied to their real identities.
A compromised password could allow a bad actor to impersonate those users. That could be used to scam others or to kickstart some social engineering that could be used to compromise other systems.
Indeed a consequence for the individual user could be spammed posts, but for scams, I'd guess that HN would fall back on their standard moderation process.
The question was though, what are the consequences for HN, rather than individual users, as it's HN that would take the cost of implementation.
Now if a lot of prominent HN users start getting their passwords compromised and that leads to a hit on HN's reputation, you could easily see that tipping the balance in favour of implementing MFA, but (AFAIK at least) that hasn't happened.
Now ofc you might expect orgs to be pro-active about these things, but having seen companies that had actual financial data and transactions on the line drag their feet on MFA implementations in the past, I kind of don't expect that :)
I think this conversation would benefit from introducing scale and audience into the equation.
Individual breaches don't really scale (e.g. device compromise, phishing, credential reuse, etc.), but at scale everything scales. At scale then, you get problems like hijacked accounts being used for spam and scams (e.g. you can spam in comment sections, or replace a user's contact info with something malicious), and sentiment manipulation (including vote manipulation, flagging manipulation, propaganda, etc.).
HN, compared to something like Reddit, is a fairly small-scale operation. Its users are also more on the technically involved side. It makes sense then that due to the lower velocity and unconventional userbase, they might still have this under control via other means, or can dynamically adjust to the challenge. But on its own, this is not a technical trait. There's no hard and fast rule to tell when they cross the boundary into territory where adding manpower is less effective than just spending the days or weeks to implement better account controls.
I guess if I really needed to put this into some framework, I'd weigh the amount of time spent chasing the aforementioned abuse vectors against the estimated time required to implement MFA. The forum has been operating for more than 18 years; I think they can find an argument there for spending even a whole 2-week sprint on implementing MFA, though obviously I have no way of knowing.
And this is really turning the bean counting to the maximum. I'm really surprised that one has to argue tooth and nail about the rationality of implementing basic account controls, like MFA, in the big 2025. Along with session management (the ability to review all past and current sessions, to retrieve an immutable activity log for them, and a way to clear all other active sessions), it should be the bare minimum these days. But then, even deleting users is not possible on here. And yes, I did read the FAQ entry about this [0]; it misses the point hard. Deleting a user doesn't necessarily have to mean the deletion of their submissions, and no, not deleting submissions doesn't render the action useless, because as described, user hijacking can and I'm sure does happen. A disabled user account "wouldn't be possible" to hijack, however. I guess one could reasonably take issue with calling this user deletion, though.
[0] https://news.ycombinator.com/newsfaq.html
It's interesting you suggest a two-week sprint for this. How large do you think HN's development team is? Do you know if they even have a single full-time developer?
I don't, but the lack of changes to the basic functionality of the site in the years I've used it makes me feel that they may not have any/many full-time devs working on it...
I really don't think the site is like this because they lack capacity. It's pretty clearly an intentional design choice in my view, like with Craigslist.
But no, I do not have any information on their staffing situation. I presume you don't either though, do you?
Indeed I don't. However, if we examine the pace of new features over the last several years (I can't think of a single way this site has changed in that time), it's reasonable to surmise that there isn't a lot of development of the user-accessible/visible portions of the site, which leads me to guess that they don't have much in the way of dev resources.
Oh boy, this should be good. Mark my words, this will be followed by a "proof" of nonexistence, in the following form:
"Well, let's build a list of attacks that I can think of off-the-cuff. And then let's iterate through that list of attacks: For each attack, let's build a list of 'useful' things that attackers could possibly want.
Since I'm the smartest and most creative person on the planet, and can also tell the future, my lists of ideas here will actually be complete. There's no way that any hacker could possibly be smart enough or weird enough to think of something different! And again, since I'm the smartest and most creative --and also, magically able to tell the future-- and since I can't think of anything that would be 'worth the cost', then this must be a complete proof as to why your security measure should be skipped!"
I'm firmly in the pro-2FA camp, but merely as a point of discussion: the Arc codebase is already so far underwater on actual features that would benefit a forum, and if I changed my password to hunter2 right now, the only thing that would happen is my account would shortly be banned when spammers start to hate-bomb or crypto-scam-bomb discussion threads. Dan would be busy, I would be sad, nothing else would happen.
For accounts that actually mean something (Microsoft, Azure, banking, etc), yes, the more factors the better. For a lot of other apps, the extra security is occupying precious roadmap space[1]
1: I'm intentionally side-stepping the "but AI does everything autonomously" debate for the purpose of this discussion
Everyone else: I need unique 128-character passwords for every site I ever visit with unphishable FIDO keys for MFA.
Me: I didn't give the store website permission to save my credit card. If someone logs in, they'll know I ordered pants there.
I am currently having this debate at $DAYJOB, having come from a zero trust implementation to one using fucking Cloudflare Warp. The cost of your "just use a VPN" approach or, if I'm understanding your point correctly, using a VPN and zero trust(?!), is that VPNs were designed for on-premises software. In modern times, the number of cases where one needs to perform a fully authenticated, perfectly valid action from a previously unknown network on previously unconfigured compute is bigger than in the "old days".
GitHub Actions are a prime example. Azure's network, their compute, but I can cryptographically prove it's my repo (and my commit) OIDC-ing into my AWS account. But configuring a Warp client on those machines is some damn nonsense
If you're going to say "self hosted runners exist," yes, so does self-hosted GitHub and yet people get out of the self-hosted game because it eats into other valuable time that could be spent on product features
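For reference, the exchange itself is a single unsigned STS call. A sketch with boto3, assuming an IAM role whose trust policy already trusts GitHub's OIDC issuer (token.actions.githubusercontent.com); the role ARN is a placeholder, and the workflow fetches the JWT from the runner's OIDC endpoint beforehand:

    import boto3

    def aws_creds_from_github_oidc(oidc_jwt: str) -> dict:
        # Unsigned call: the GitHub-issued OIDC token is the credential.
        # The role's trust policy (not shown) can condition on the token's
        # repo/ref claims - the "cryptographically prove it's my repo" part.
        sts = boto3.client("sts")
        resp = sts.assume_role_with_web_identity(
            RoleArn="arn:aws:iam::123456789012:role/ci-deploy",  # placeholder
            RoleSessionName="github-actions",
            WebIdentityToken=oidc_jwt,
        )
        return resp["Credentials"]  # temporary AccessKeyId/SecretAccessKey/SessionToken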
> is that VPNs were designed for on-premises software.
The way I see it, a VPN is just a network extender. Nothing to do with being designed for on-premise software. By using a VPN as an additional layer, most vulnerability scanners can't reach your services anymore. It reduces the likelihood that you are immediately impacted by some publicly known CVE. That is the only purpose of the VPN here.
The VPN may also have vulnerabilities, but to have an impact, vulnerabilities in both the VPN and the service are required at the same time. The more different services/protocols you have behind the VPN, the more useful it is. It might not make sense if you only need ssh, for example: then you have a 1:1 protocol ratio, and ssh may well be the more secure protocol.
In theory, for automated traffic like that you should probably be using a plain Access application with a service token rather than WARP
Does having a VPN/intranet preclude zero trust? It seems you could do both with the private network just being an added layer of security.
It doesn't, but from my perspective the thinking behind zero trust is partly to stop treating networking as a layer of security. Which makes sense to me - the larger the network grows, the harder to know all its entry-points and the transitive reach of those.
A VPN? Yes, by definition. Zero trust requires that every connection is authenticated and users are only granted access to the app they request. They never “connect to the network” - something brokers that connection to the app in question.
VPN puts a user on the network and allows a bad actor to move laterally through the network.
It doesn't have to. There's nothing to stop you using a VPN as an initial filter to reduce the number of people who have access to a network and then properly authenticating and authorizing all access to services after that.
In fact, I'd say it's a good defence-in-depth approach, which comes at the cost of increased complexity.
It also prevents the whole world from scanning your outdated public interfaces. Before they can do that, they need to bypass the VPN.
If there are tens of different services, is it more likely that one of them has a vulnerability than that both the VPN and a service do? And a vulnerability in the VPN alone does not matter if your internal network is built as if it were facing the public world. You might be able to patch it before a vulnerability in the other services is found.
I’m not saying you can’t have your own definition.
But I am saying that a VPN isn’t zero trust, by the agreed upon industry definition. There’s no way to make a VPN zero trust, and zero trust was created specifically to replace legacy VPNs.
The big problem with the ZT approach is that smaller shops don't have a lot of developers and testers (some maybe with a security inclination) to be certain, to a somewhat high degree, that their app is written in a secure manner. Or to be able to continuously keep abreast of every new security update Microsoft or other IdPs make to their stack.
It is easy for Google/Microsoft and any other FAANG like company to preach about Zero Trust when they have unlimited (for whatever value of unlimited you want to consider) resources. And even then they get it wrong sometimes.
The simpler alternative is to publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
> publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
Or just use Cognito. It can wrap up all the ugly Microsoft authentication into its basic OAuth, and API Gateway can use and verify Cognito tokens for you transparently. It's as close to the Zero Trust model as we could get in a Small Developer Shop.
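If you ever need the same check outside API Gateway, here's a minimal sketch of verifying a Cognito ID token with PyJWT (region, pool, and client IDs are placeholders; the JWKS URL follows Cognito's documented format):

    import jwt
    from jwt import PyJWKClient  # pip install pyjwt[crypto]

    REGION, POOL_ID = "us-east-1", "us-east-1_EXAMPLE"  # placeholders
    APP_CLIENT_ID = "placeholder-app-client-id"
    ISSUER = f"https://cognito-idp.{REGION}.amazonaws.com/{POOL_ID}"
    jwks = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

    def verify_cognito_id_token(token: str) -> dict:
        key = jwks.get_signing_key_from_jwt(token).key
        # Signature, expiry, audience, and issuer are all checked here.
        return jwt.decode(token, key, algorithms=["RS256"],
                          audience=APP_CLIENT_ID, issuer=ISSUER)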
The zero trust architecture implies (read: requires) that authentication occurs at every layer. Token reuse constitutes a replay attack that mandatory authentication is supposed to thwart. Bypass it and the system's security profile reverts to perimeter security, with the added disadvantage of that perimeter being outside your org's control.
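To make that concrete, a toy sketch of the kind of replay check this implies, assuming tokens carry unique "jti" and "exp" claims (in-process only; a real deployment would use a shared store like Redis):

    import time

    seen_jtis: dict[str, float] = {}  # jti -> token expiry (unix seconds)

    def accept_once(claims: dict) -> bool:
        now = time.time()
        # Forget tokens that have expired anyway; they can't validate again.
        for jti, exp in list(seen_jtis.items()):
            if exp < now:
                del seen_jtis[jti]
        jti = claims.get("jti")
        if jti is None or jti in seen_jtis:
            return False  # missing token ID, or a replay
        seen_jtis[jti] = claims["exp"]
        return True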
Zero trust is a good concept turned into a dumb practice. Basically people buying Google's koolaid for this forgot about "defense in depth". Yeah, authenticating every connection is great, throwing a big effing moat around it too is better.
The other thing is most companies are not Google. If you're a global company with hundreds of thousands of people who need internal access, moats may be non-ideal. For a business located in one place, local-only on-premise systems which block access to any country which they don't actively do business with is leaps and bounds better.
> Move to the cloud they said. It will be more secure than your intranet they said. Only fools pay for their own Ops team they said.
It seems that the fundamental issue surfaced in the blog post is that developers who work on authorization in resource servers are failing to check basic claims in tokens, such as the issuer, the audience, and the subject.
If your developers are behind this gross oversight, do you honestly expect an intranet to make a difference?
Listen, the underlying issue is not cloud vs self-hosted. The underlying issue is that security is hard, and in general there is no feedback loop except security incidents. Placing your apps in an intranet, or behind a VPN, does nothing to mitigate this issue.
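And the fix is mundane, which makes the oversight more striking. A minimal sketch with PyJWT against Entra's v2.0 endpoints (tenant/client GUIDs and the allowed-subject set are placeholders):

    import jwt
    from jwt import PyJWKClient  # pip install pyjwt[crypto]

    TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder
    ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"
    jwks = PyJWKClient(
        f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys")

    def authorize(token: str, allowed_subjects: set[str]) -> dict:
        key = jwks.get_signing_key_from_jwt(token).key
        claims = jwt.decode(
            token, key, algorithms=["RS256"],
            audience=CLIENT_ID, issuer=ISSUER,  # wrong aud/iss is rejected here
            options={"require": ["exp", "iss", "aud", "sub"]},
        )
        if claims["sub"] not in allowed_subjects:  # subject check on top
            raise PermissionError("unknown subject")
        return claims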
But of course it does provide an additional layer of security that indeed could have reduced the likelihood of this issue being exploited.
For me, the core of the discovered issue was that applications intended purely for use by internal MS staff were discoverable and attackable by anyone on the Internet, and some of those applications had a mis-configuration that allowed them to be attacked.
If all those applications had been behind a decently configured VPN service which required MFA, any attacker who wanted to exploit them would first need access to that VPN, which is another hurdle to cross and would reduce the chance of exploitation.
With a target like MS (and indeed most targets of any value) you shouldn't rely solely on the security provided by a VPN, but it can provide another layer of defence.
For me the question should be, "is the additional security provided by the VPN layer justified against the costs of managing it, and potentially the additional attack surface introduced with the VPN".
I work at a corporation that uses Fortinet, not just for VPN but for AV and web filtering. It aggregates traffic together, increases the attack surface, and makes us vulnerable to zero-day attacks. All to protect sensitive data that is almost entirely composed of connections from Microsoft software to Microsoft servers, using all the normal SSO/authorisation stuff. It is probably required from a compliance perspective, but it just seems like a massive tradeoff for security.
Everything in security is a tradeoff, and unfortunately compliance risks are real risks :D
That said, yep, corps over-complicate things, and given the number of 0-days in enterprise VPN products, it could easily be argued that they add more risk than they mitigate.
That's not to say a good VPN setup (or even allow-listing source IP address ranges) doesn't reduce exposure of otherwise Internet visible systems, reducing the likelihood of a mis-configuration or vulnerability being exploited...
Yeah, agreed. And some of these products can be configured to be more specific in whitelisting users to particular services. But only if they are actually configured to do that.
"The underlying issue is that security is hard and in general there is no feedback loop except security incidents."
This is true tbh. Computer architecture is already hard enough, and cyber security is like a whole different field, especially if the system/program is complex.
For me, I don't think that the application being publicly exposed (i.e. not on an intranet) is really the problem.
I think the real problem is that these applications (Entra ID) are multi-tenant, rather than a dedicated single-tenant instance.
Here, we have critical identity information that is being stored and shared in the same database with other tenants (malicious attackers). This makes multi-tenancy violations common. Even if Entra ID had a robust mechanism to perform tenancy checks, i.e. that an object belongs to some tenant, there would still be vulnerabilities. For example, as you saw in the blog post, multi-tenant requests (requests that span >= 2 tenants) are fundamentally difficult to authorize. A single mistake can lead to complete compromise.
Compare this to a single-tenant app. First, the attacker would need to be authenticated as a user within your tenant. This makes pre-auth attacks more difficult.
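Concretely (a sketch; GUIDs are placeholders and signing-key retrieval is as in the validation snippet upthread): a single-tenant registration lets the resource server pin the issuer outright, so a token minted for any other tenant fails before application logic even runs.

    import jwt

    MY_TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    MY_CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder

    def validate_single_tenant(token: str, signing_key) -> dict:
        # A multi-tenant app cannot pin 'iss' like this, since it varies per
        # tenant - every cross-tenant decision moves into application code.
        return jwt.decode(
            token, signing_key, algorithms=["RS256"],
            audience=MY_CLIENT_ID,
            issuer=f"https://login.microsoftonline.com/{MY_TENANT_ID}/v2.0",
        )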
I guess the term "defense in depth" has fallen out of fashion?
That is probably still good advice for most companies. Joe's roof fixing business may be the best roof fixing business in 3 states, but would you want them to run their own server for their website, email, and booking?
Anyone who is on this forum is capable of building their own stuff, and running their own server, but that is not most people.
$0 in rewards for RCE on the Windows build servers is crazy. I understand he didn’t find an actual zero-day, only a configuration issue, but still. Imagine the global havoc you can cause if you can pollute the build environment with backdoored DLLs…
I was a windows build engineer at Microsoft. I am unfamiliar with this specific UI for managing build tools (I think it may have been added after I left), however I would be surprised if it was actually RCE-capable.
I notice that it requires the tool to be pulled from NuGet. While it looks like you could enter any package and NuGet source, I would be very surprised if there wasn’t a locked down whitelist of allowed sources (limited to internal Microsoft NuGet feeds).
Locking down NuGet packages was one of the primary things we (the Windows Engineering System team) were heavily focusing on when I left years ago. We were explicitly prevented from using public NuGet packages at all. We had to repackage them and upload them to the internal source to be used.
It's Microsoft. I'm sure there are wonderful people there, but haven't we recently witnessed their master key leak, their engineers begging GPT in PRs to do stuff, their CEO boasting that 'backend' engineers are going away... I wouldn't rely on that company for anything, but I acknowledge a ton of people are not in that position. If they do stay, however, it's malpractice.
> engineers begging GPT in PRs to do stuff
This about dotnet/runtime?
> This about dotnet/runtime?
Absolutely. It'd be hilarious if it weren't sad.
Well, we will have monthly "goto fail" exploits.
It's all very simple: Entra* (Azure AD, or however you'd call it) should not be used for AuthZ. Entra AuthN is okayish, but forget about Entra AuthZ; do it all yourself. All of this is very simple to avoid once you handle AuthZ yourself.
* No idea why the rename happened. Does some manager in Microsoft have the plaque: "Renomino, ergo sum."?
IMO “Azure AD” implies it is literally just AD hosted in Azure, when it’s become much more than that
It's ok to use Entra for AuthZ. Just click the box that says "Require users to be assigned to this application" and assign them [1]. However - that's really the only AuthZ feature Entra has. If you don't enable AuthZ, you should not expect Entra to just magically do AuthZ for you.
Edit: I would add - simple allow/deny authz is only relevant for the very simplest of apps (where all users have the same permissions). For any complex application, users will have different levels of access, which usually requires the application to do AuthZ.
[1] https://learn.microsoft.com/en-us/entra/identity/enterprise-...
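As a sketch of what that application-side AuthZ looks like (the role name is hypothetical): app roles assigned in Entra surface in the token's "roles" claim, but it's the app that has to act on them.

    def require_role(claims: dict, role: str) -> None:
        # 'roles' holds the Entra app roles assigned to this user/principal.
        if role not in claims.get("roles", []):
            raise PermissionError(f"missing app role: {role}")

    # e.g. require_role(claims, "Notes.Read.All")  # hypothetical role name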
>For any complex application, users will have different levels of access, which usually requires the application to do AuthZ.
Any application where AuthZ isn't simply yes/no, which rather quickly becomes just about all of them (even simple blogs have an admin tier), except for a very heavily microservice-based architecture; and even there one would still want a much more convenient interface than Entra to see/manage the access permissions centrally... Entra AuthZ is at best a temporary development aid, and it's so easy to roll AuthZ yourself that one might as well do it.
That’s what you get. Entra ID doesn’t allow you to blacklist or whitelist specific tenants for multi tenant apps, which causes problems like this.
Add the fact that MSAL doesn’t work for stuff like browser extensions, so people have to implement their own security solutions to interact with Entra ID and it’s not surprising there are so many issues.
> Entra ID doesn’t allow you to blacklist or whitelist specific tenants for multi tenant apps.
This is one very annoying "feature": there's no way to say this app is available to the following tenants. No, it's only "my tenant" or "all tenants in Azure".
One workaround I use is to set up apps with "only this tenant" and invite users from other tenants into my tenant. The other approach is to say "all tenants" and then use a group to enforce who can actually use the app.
I don't know if there are reasons behind this limitation, or if it's just an oversight, or whether no client big enough ever asked for this feature.
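For what it's worth, the app-side version of the missing feature is a few lines once you're validating tokens anyway; a sketch with placeholder GUIDs:

    ALLOWED_TENANTS = {
        "00000000-0000-0000-0000-000000000000",  # your own tenant (placeholder)
        "11111111-1111-1111-1111-111111111111",  # a partner tenant (placeholder)
    }

    def check_tenant(claims: dict) -> None:
        # 'tid' identifies the tenant the token was issued for.
        if claims.get("tid") not in ALLOWED_TENANTS:
            raise PermissionError("tenant not on the allowlist")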
Inviting individual users is a good pattern. If you want to allow an entire tenant into your tenant (e.g. if your parent company has a subdivision that has their own tenant), Entra has cross tenant access [1] for that use case.
Generally, you should say "only this tenant" unless you're a SaaS provider. And if you're a SaaS provider, you should really already understand the need to keep your various customers data separate.
[1] https://learn.microsoft.com/en-us/entra/external-id/cross-te...
I am aware of the Cross tenant functionality, but it does not come free - you need at least a P1 subscription in all tenants involved. And you can't do this per user, just per tenant.
Yeah, I mean - if you're a big enough company where you have lots of cross tenant collaboration going on, you should pay for P1.
OAuth is frequently marketed as "more secure". But implementations often confuse authentication with authorization, resulting in problems like this.
I just say auth. You decide which one I mean.
I know it's a joke, but it's funny because it's (somewhat) true. To add to the confusion, sometimes one of them gets abbreviated "authn". That is so unhelpful.
Azure is a true cluster F.
Okta (the other elephant in the room) has its own issues, but at least it has decent documentation, and even though it's more expensive, I think it's worth paying that price just to keep security in a separate domain rather than co-mingling it with other Azure services.
$0 for all this? Microsoft security is a joke.
Automattic HTTP Headers:
Popular CDNs require SNI but do not offer a solution for plaintext domain names on the wire. (ECH exists but is not enabled everywhere SNI is required.)
Meanwhile WordPress hosts multiple HTTPS sites on the same IP and does not require SNI.
(No plaintext domain names on the wire.)
This is still very effective against other organizations, not just Microsoft. Of course they won't pay a bounty, but any org that uses Microsoft 365/Office 365 or Entra ID (formerly Azure Active Directory) can be polled and abused.
This was not new news; AFAIK from the article they just patched the Microsoft tools, but they won't be pushing it tenant-wide for all orgs.
This dumb stuff is why even Microsoft should use a common, secured and vetted pipeline for service principals so this does not happen.
Does this have a CVE or something? I have the weird feeling the cloud initiative here won't notice this ever otherwise...
Ohh, that's probably why our integration suddenly stopped working for single-tenant app registrations right before release. We were using the /common endpoint for everyone. That is disallowed now.
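For anyone hitting the same thing, the change amounts to putting your own tenant in the path instead of /common (IDs and redirect URI here are placeholders):

    from urllib.parse import urlencode

    TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    params = urlencode({
        "client_id": "11111111-1111-1111-1111-111111111111",  # placeholder
        "response_type": "code",
        "scope": "openid profile",
        "redirect_uri": "https://app.example/callback",       # placeholder
    })
    # /common only works for multi-tenant registrations; single-tenant
    # apps must use their own tenant (GUID or domain) in the path:
    authorize_url = (
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/authorize?{params}"
    )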
Did he really get no bounty out of this? The guy found a way into the build boxes retail Windows is built on, potentially found the private key that would be used to generate license keys, and likely could have dug a little deeper after getting RCE on the build box to exfil the latest Windows 11 source code. He even found a way to issue rewards. They still gave him nothing?
If their rules say this doesn't deserve a bounty their bounty program is a sham.
Microsoft's bug bounty program is a shell of its former self. They quietly disqualified a lot of high-impact findings in 2023.
In my own experience:
- Leaked service principal credentials granting access to their tenant? $0 bounty.
- Leaked employee credentials granting access to generate privileged tokens? $0 bounty.
- Access to private source code? $0 bounty.
Etc.
I will forever remain radicalized by how every tech company decided to just say "fuck it" in 2023. (ex-Google; I left in 2023 over similar shenanigans)
There should be a major public reckoning over this. But there can't be; they hold the cards. The only real view of this you'd have is day-to-day on Blind and the occasional post that stirs honest discussion here.
I guess we just get to grin and bear it while they give gold statues and millions to the right politicians.
It’ll come. Can’t say in what form, but the reckoning will come. Probably anti trust, or anti tech regulations as the public hatred of the tooligarchs grows. The problem with being out of touch is you can’t see the ground shifting beneath your feet.
Corporations getting regulated out of existence is unlikely.
> Access to private source code?
Have they already gotten so drunk on "zero trust" that they don't think it should matter if attackers see their source code? Then again, they are open-sourcing a ton of stuff these days...
I think they just don't care.
Their SECURITY.md mentions bug bounties, yet if your submission has anything to do with GitHub it's immediately disqualified. They refuse to remove that (in my opinion) misleading language.
https://github.com/microsoft/.github/blob/main/SECURITY.md
They need the money for AI data centers
Fwiw, the way it works is that Microsoft doesn't really have a bug bounty program. Individual Microsoft teams have bug bounty programs (or not). Platform teams like Entra, Windows, and Azure have robust programs. However, when teams that operate on top of platforms misconfigure those platforms (as happened here), those bugs are owned by the teams that operate on top of the platform, not by the platform.
That's some exceptionally shallow thinking on their part. I think many people would agree that part of the vulnerability is that the authentication configuration options do not map well onto real-world use cases, the documentation surrounding this is absent or confusing, and even internal teams that should know better are creating insecure services an alarming percentage of the time.
This is what I like about actual safety culture, like you would find in aviation, _all causes_ are to be investigated, all the way back to the shape, size and position of the switches on the flight deck.
It's difficult to take Microsoft's stance seriously. It makes the prices for their "service" seem completely unjustifiable.
My own, small experience with MSRC is indeed that their bug bounty program is not good; they take any possible opportunity to avoid payouts.
This means that a lot of genuine bug bounty hunters just won't look at MS stuff, so MS avoids getting things fixed; instead, other attackers will be the ones finding things, and they likely won't report them to MS...
If Azure's horrific security track record (tens of exploits, often cross-tenant, often trivial) over the past few years doesn't give you pause, their joke of a bug bounty definitely should.
Obviously nobody with power at Microsoft's Azure branch cares about security. Why does anyone continue trusting them? (I mean, I know that Azure is not something you buy by choice; you do it because you got a good deal on it or were a Microsoft shop before, but still.)
Now remember these dimwits are bragging that 30% of their code is now written by AI; they have mandated Microsoft Accounts, set up OneDrive backup by default, and are providing infrastructure to OpenAI, which is currently required to preserve even deleted chats. They also own LinkedIn.
This totally has no foreseeable potential consequences. It would be a real shame if some foreign hostile government with nuclear weapons managed to connect MS Account, LinkedIn Profile, and OpenAI accounts together by shared emails and phone numbers. Is it really worth starting a war for the crime of depantsing the nation?
Having done two rounds at different Fortune 10s (one being Microsoft) I can tell you: This isn't AI, this is the result of years of "make it work" and duct tape.
This is "It'll be safe if we leave it on the intranet" and then someone says "Zero trust!" and then all the sudden things that had authentication on the inside are also going through a new and different layer of authentication. A stack of totally reasonable expectations stack tolerance on tolerance, and just like the Sig Sauer P320, it has a habit of shooting you in the foot when you least expect it.
To be fair, I’m pretty sure the code here was written before modern AI was a thing, back when dinosaurs roamed the earth.
Then this is the code the AI was trained on, and my confidence is still not increasing.
And they don’t use AI to at least check older code?
As the adage goes: now you have two problems
Yes, but Microsoft hasn't put together that AI making mistakes is perfect plausible deniability for intentional "mistakes."