It’s notable that several ShinyHunters members were arrested by the FBI a few years ago. I was in prison with Sebastian Raoult, one of them. We talked quite a bit.
The persistence these guys put into phishing at scale is astounding; that is how they gained most of their access. Otherwise they’d look up API endpoints on GitHub and see if there were any leaked keys (he wasn’t fond of GitHub's automated scanner).
https://www.justice.gov/usao-wdwa/pr/member-notorious-intern...
Generally speaking, humans are more often than not the weakest link in the chain when it comes to cybersecurity, so the fact that most of their access came from social engineering isn't the least bit surprising.
They themselves are likely, to some extent, the victims of social engineering as well. After all, who benefits from creating exploits for online games and getting children to become script kiddies? It's easier (and probably safer) to make money off of cybercrime if your role isn't committing the crimes yourself. It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
> It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
Who is making money selling premium software that isn't marketed for cybercrime to non-governmental attackers? Wouldn't the attackers just pirate it?
Feels like IDA Pro counts.
> (he wasn’t fond of GitHub's automated scanner)
Do you mean they thought the scanner was effective and weren't fond of it because it disrupted their business? Or do you mean they had a low opinion of the scanner because it was ineffective?
He would complain that it disrupted their business, and also that it doesn't catch every key; it does catch the big ones, which were exactly the ones he found most valuable.
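To make the "it catches the big ones" point concrete, here is a minimal sketch of how pattern-based secret scanning works. It is my own illustration in Python, not GitHub's actual implementation, and the regexes are rough approximations of well-known key formats:

    import re

    # Rough, illustrative patterns for well-known credential formats.
    # Real scanners (GitHub's, gitleaks, trufflehog) use far larger rule sets.
    KNOWN_PATTERNS = {
        "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
        "stripe_live_key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    }

    def scan(text: str) -> list[tuple[str, str]]:
        """Return (rule_name, match) pairs for recognizable key formats."""
        hits = []
        for name, pattern in KNOWN_PATTERNS.items():
            hits.extend((name, match) for match in pattern.findall(text))
        return hits

    sample = '''
    aws_key = "AKIAIOSFODNN7EXAMPLE"             # caught: matches a known prefix/format
    internal_token = "9f8e7d6c5b4a392817a6c3e1"  # missed: just a random-looking string
    '''
    print(scan(sample))  # only the AWS-style key is flagged

A key with a recognizable, vendor-specific format is easy to flag; a home-grown token with no distinctive structure usually is not, which matches the complaint above.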
> The persistence these guys put into phishing at scale is astounding; that is how they gained most of their access.
explain
Don't talk to a human being like they're an AI ever again.
That’s standard practice on HN, and has been since before AI was a broken condom on the drug store shelf.
Unpleasant, but it comes with the territory (I don’t like it when it’s done to me).
That said, I’m not sure that kind of scolding is particularly effective, either.
I think saying just "explain" is a bit of a meme, meant to come across as an almost humorous request for an explanation.
Not every culture has the same standards of politeness. I didn't think it was rude; it can even be respectful of someone's time and intelligence to be concise, plain, and direct, as long as you are not literally attacking them.
I mean, the comments under the GPT-5.1 announcement just today were full of people wishing that AI actually responded to them like this.
https://news.ycombinator.com/item?id=45904551
> We are sorry. We regret that this incident has caused worry for our partners and people. We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators. We are fully committed to maintaining your trust.
I know there will be a bunch of cynics who say that an LLM or a PR crisis team wrote this post... but if they did, hats off. It is powerful and moving. This guy really falls on his sword / takes it on the chin.
I'll never not think of that South Park scene where they mocked BP's "We're so sorry" statement whenever I see one of those. I don't care if you're sorry or if you realize how much you betrayed your customers. Tell me how you investigated the root causes of the incident and how the results will prevent this scenario from ever happening again. Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack? Who declined to allocate the necessary budget to keep systems updated? That's the only way I will even consider giving some trust back. If you really want to apologise, start handing out cash or whatever to the people you betrayed. But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
I wouldn't be so quick. Everybody gets hacked, sooner or later. Whether they'll own up to it or not is what makes the difference, and I've seen far, far worse than this response by Checkout.com; it seems to be one of the better responses to such an event that I've seen to date.
> Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack?
The problem with that is that you'll never know, because you'd have to audit each and every service provider, and I think only eBay does that. And they're not exactly a paragon of virtue either.
> Who declined to allocate the necessary budget to keep systems updated?
See: prevention paradox. Until this sinks in it will happen over and over again.
> But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
Again, yes, but: they are at least attempting to use the right words. Now they need to follow them up with the right actions.
The prevention paradox only really applies when the bad event has significant costs. It seems to me that getting hacked has at worst mild consequences. Cisco for example is still doing well despite numerous embarrassing backdoors.
> Everybody gets hacked, sooner or later.
Right! But wouldn't a more appropriate approach be to mitigate the damage from being hacked as much as possible in the first place? Perhaps this starts with simplifying bloated systems, reducing data collection to only what is legally necessary for KYC and financial transactions in whatever country(ies) the service operates in, hammer-testing databases for old tricks that seem to have been forgotten in a landscape of hacks of ever-increasing complexity, etc.
Maybe it's the dad in me, after years of telling my son not to apologize but to avoid the behavior that causes the problem in the first place. Bad things happen, and we all screw up from time to time, that is a fact of life, but a little forethought and consideration about the best or safest way to do a thing is a great way to shrink the blast area of any surprise bombs that go off.
I don’t see how any of what you’re suggesting would have prevented this hack, though (which involved the hacking of an old storage account that hadn’t been used since 2020).
You don't see how preventative maintenance, such as implementing a policy to remove old accounts after N days, could have prevented this? Preventative maintenance is part of the forethought that should take place about the best or safest way to do a thing. This is something that could easily be learned by looking at problems others have had in the past.
As a controls tech, I provide a lot of documentation and teach our customers how to deploy, operate, and maintain a machine for the best possible results with the lowest risk to production or human safety. Some clients follow my instruction, some do not. Guess which ones end up getting billed the most for my time after they've implemented a product we make.
Too often, we want to just do without thinking. This often causes us to overlook critical points of failure.
> Some clients follow my instruction, some do not.
So you’re telling me you design a non-foolproof system?!? Why isn’t it fully automated to prevent any potential pitfalls?
For the app I maintain, we have a policy of deleting inactive accounts after a year. We delete approved signups that have not been “consummated” after thirty days.
Even so, we still need to keep an eye out. A couple of days ago, an old account (not quite a year old) started spewing connection requests to all the app users. It had been a legit account, so I have to assume it was pwned. We deleted it quickly.
A lot of our monitoring is done manually, and carefully. We have extremely strict privacy rules, and that actually makes security monitoring a bit more difficult.
These are excellent practices.
Such data is a liability, not an asset, and if you dispose of it as soon as you reasonably can, that's good. If this is a communications service, consider saving a hash of the ID and refusing new signups with that same ID, because if the data gets deleted then someone could re-sign up with someone else's old account. If you keep a copy of the hash around, you can check whether an account has ever existed and refuse registration if that's the case.
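A minimal sketch of how those two ideas can be combined, i.e. purging stale accounts but keeping only a hash of the ID so it can never be re-registered. The names, salt handling, and thresholds are my own assumptions, not anyone's actual system:

    import hashlib
    from datetime import datetime, timedelta, timezone

    SITE_SALT = b"per-deployment-secret"   # hypothetical fixed salt for this deployment
    tombstones: set[str] = set()           # hashes of IDs that have ever existed

    def id_hash(user_id: str) -> str:
        # Store only a salted hash, never the ID itself.
        return hashlib.sha256(SITE_SALT + user_id.strip().lower().encode()).hexdigest()

    def purge_inactive(accounts: dict[str, datetime], max_age_days: int = 365) -> None:
        # accounts maps user_id -> last_seen (timezone-aware datetimes assumed).
        cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
        for user_id, last_seen in list(accounts.items()):
            if last_seen < cutoff:
                tombstones.add(id_hash(user_id))  # remember it existed, keep no PII
                del accounts[user_id]             # the actual data is gone

    def can_register(user_id: str) -> bool:
        # Refuse signups that would reuse a previously deleted identity.
        return id_hash(user_id) not in tombstones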
> Maybe it's the dad in me, after years of telling my son not to apologize but to avoid the behavior that causes the problem in the first place.
What an odd thing to teach a child. If you've wronged someone, avoiding the behavior in future is something that'll help you, but does sweet fuck all for the person you just wronged. They still deserve an apology.
I think this approach is people overcompensating for over-apologizing (or, similarly, over-thanking; both in excess are off-putting). I have a child who just says "sorry" and doesn't actually care about changing the underlying behavior.
But yes, even if you try to strike a healthy balance, there are still plenty of times when an apology is appropriate and will go a long way, for the giver and the receiver, in my opinion anyway.
Sorry, I should have worded that as "stop apologizing so much, especially when you keep making the same mistake/error/disruption/etc."
I did not mean to come off as teaching my kid to never apologize.
"Sorry - this is my fault" is such an effective response, if followed up with "how do we make this right?" or "how do we stop this from happening again?"
Not a weird thing to teach a child.
It’s 5-whys-style root cause analysis, which will build a person who causes less harm to others.
I am willing to believe that the same parent also teaches when and why it is sometimes right to apologize.
Thanks, this is where I was coming from. I suppose I could have made that more clear in my original comment. The idea behind my style of parenting is self-reflection and the ability to analyze the impact of our choices before we make them.
But of course, apologizing when you have definitely wronged a person is important, too. I didn't mean to come off as teaching my kid to never apologize, just to think before you act. But you get the idea.
Well said; ideally action comes first and then those actions can be communicated.
But in the real world, you have words, i.e. commitment, before actions and a conclusion.
Best of luck to them.
Not everyone gets hacked. Companies not hacked include e.g.
- Google
- Amazon
- Meta
Facebook was hacked in 2013. An attacker used a Java browser exploit to take over employees' computers:
https://www.reuters.com/article/technology/exclusive-apple-m...
Facebook was also hacked in 2018. A vulnerability in the website allowed attackers to steal the API keys for 50 million accounts:
https://news.ycombinator.com/item?id=18094823
Amazonian here. My views are my own; I do not represent my company/corporate.
That said...
We do our very best. But I don't know anyone here who would say "it can never happen". Security is never an absolute. The best processes and technology will lower the likelihood and impact towards 0, but never to 0. Viewed from that angle, it's not if Amazon will be hacked, it's when and to what extent. It is my sincere hope that if we have an incident, we rise up to the moment with transparency and humility. I believe that's what most of us are looking for during and after an incident has occurred.
To our customers: Do your best, but have a plan for what you're going to do when it happens. Incidents like this one here from checkout.com can show examples of some positive actions that can be taken.
> But I don't know anyone here who would say "it can never happen". Security is never an absolute.
Exactly. I think it is great for people like you to inject some more realistic expectations into discussions like these.
An entity like Amazon is not - in the longer term - going to escape fate, but they have more budget and (usually) much better internal practices, which rule out the kind of thing that would bring down a lesser org. But in the end it is all about the budget; as long as Amazon's budget is significantly larger than the attackers', they will probably manage to stay ahead. But if they ever get complacent or start economizing on security, then the odds change very rapidly. Your very realistic stance is one of the reasons it hasn't happened yet: you are acutely aware that, in spite of all of your efforts, you are still at risk.
Blast radius reduction by removing data you no longer need (and that includes the marketing department, who more often than not are the real culprit) is a good first step towards more realistic expectations for any org.
Google just got hacked in June:
https://cloud.google.com/blog/topics/threat-intelligence/voi...
https://www.forbes.com/sites/daveywinder/2025/08/09/google-c...
That was a Salesforce instance with largely public data, rather than something owned and operated by Google itself. It's a bit like saying you stole from me, but instead of my apartment you broke into my off-site U-Haul storage unit. Technically correct, but with different implications for the integrity of my apartment security.
It was a social engineering attack that leveraged the device OAuth flow, where the device gaining access to the resource server (in this case the Salesforce API) is separate from the device that grants the authorization.
The hackers called employees/contractors at Google (& lots of other large companies) with user access to the company's Salesforce instance and tricked them into authorizing API access for the hackers' machine.
It's the same as loading Apple TV on your Roku despite not having a subscription and then calling your neighbor who does have an account and tricking them into entering the 5 digit code at link.apple.com
Continuing with your analogy, they didn't break into the off-site storage unit so much as they tricked someone into giving them a key.
There's no security vulnerability in Google/Salesforce or your apartment/storage per se, but a lapse in security training for employees/contractors can be the functional equivalent to a zero-day vulnerability.
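For readers unfamiliar with the flow being described, below is a rough sketch of the standard OAuth 2.0 device authorization grant (RFC 8628). The endpoint URLs and client_id are placeholders, not Salesforce's real configuration; the point is only that the machine receiving the token and the person approving the access never have to be in the same place:

    import time
    import requests

    DEVICE_ENDPOINT = "https://auth.example.com/oauth2/device/code"  # placeholder
    TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"         # placeholder
    CLIENT_ID = "example-client-id"                                  # placeholder

    # Step 1: this machine asks the authorization server for a device code.
    dev = requests.post(DEVICE_ENDPOINT, data={"client_id": CLIENT_ID}).json()

    # Step 2: a *different* person, on a *different*, already-logged-in device,
    # opens the verification URL and types in the short code. In the incident
    # described above, that person was an employee tricked over the phone.
    print("Visit", dev["verification_uri"], "and enter code", dev["user_code"])

    # Step 3: this machine simply polls until the authorization is granted.
    token = None
    while token is None:
        time.sleep(dev.get("interval", 5))
        resp = requests.post(TOKEN_ENDPOINT, data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": dev["device_code"],
            "client_id": CLIENT_ID,
        }).json()
        token = resp.get("access_token")  # present once the human has approved

    print("API access granted to this machine:", token[:8] + "...")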
There's no vulnerability per se, but I think the Salesforce UI is pretty confusing in this case. It looks like a login page, but actually if you fill it in, you're granting an attacker access.
Disclosure: I work at Google, but don't have much knowledge about this case.
The relevant difference here is that these companies have actual security standards, on the level that you would only find in the FAA or similar organisations where lives are in danger. For every incident in Google Cloud, for example, they don't just apologise; they state exactly what happened and how they responded (down to the minute), and you can read up on exactly how they plan to prevent it from happening again: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
This is what incident handling by a trustworthy provider looks like.
Fair or not, if their customers get hacked it's still on them to mitigate and reduce the damage. For example: cloud providers that provide billing alerts but not hard cut-offs are not doing a good job.
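A toy sketch of the difference being pointed at, alerts versus hard cut-offs. The threshold values and the disable/alert actions are hypothetical placeholders:

    def billing_action(current_spend: float, soft_limit: float, hard_limit: float) -> str:
        if current_spend >= hard_limit:
            return "disable"   # hard cut-off: stop serving billable requests entirely
        if current_spend >= soft_limit:
            return "alert"     # billing alert: customer is notified, spend keeps accruing
        return "ok"

    assert billing_action(80.0, soft_limit=100.0, hard_limit=150.0) == "ok"
    assert billing_action(120.0, soft_limit=100.0, hard_limit=150.0) == "alert"
    assert billing_action(200.0, soft_limit=100.0, hard_limit=150.0) == "disable"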
There are millions of companies, even century- or decade-old ones, without a hacking incident involving data extraction. The whole "everyone gets hacked" line is copium for a lack of security standards, or in this case the lack of deprecation, leaving unmaintained systems online with legacy client data. Announcing it proudly would be concerning if I had business with them. It's not even a lack of competence... it's a lack of hygiene.
The pedantic answer is to point to a bunch of shell companies without any electronic presence. However, in terms of actual businesses, there are decent odds that the closest dry cleaners, independent restaurant, car wash, etc. has not had its data extracted by a hacking incident.
Having a minimal attack surface and not being actively targeted is a meaningful advantage here.
Take the OP. What defenses were breached? An old abandoned system running unmaintained in the background with old user data still attached. There is no excuse.
We also have to remember that we have collectively decided to use Windows and AD, QA-tested software, etc. (to give some examples) over correct software, hardened-by-default settings, and the like.
The intent of the South Park sketch was to lampoon the fact that BP were (/are) willingly doing awful things and then giving corpo apology statements when caught.
Here, Checkout has been the victim of a crime, just as much as their impacted customers. It’s a loss for everyone involved except the perpetrators. Using words like “betrayed”, as if Checkout wilfully misled its customers, is a heavy accusation to level.
At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
> At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
I totally agree – You've covered the 3 most important things to do here: Apologize; make it right; sufficiently explain in detail to customers how you'll prevent recurrences.
After reading the post, I see the 1st of 3. To their credit, most companies don't get that far, so thanks, Checkout.com. Now keep going, 2 tasks left to do and be totally transparent about.
They are donating the entire ransom amount to two universities for security research. I don't care about the words themselves, but assuming they're not outright lying about this, that meant a lot to me. They are putting their (corporate!) money where their mouth is.
No trolling on my side, I think having people who think just like you is a triumph for humanity. As we approach times far darker and manipulation takes smarter shapes, a cynical mind is worth many trophies.
> prevent this scenario from ever happening again.
Every additional nine of not getting hacked takes effort. Getting to 100% takes infinite effort i.e. is impossible. Trying to achieve the impossible will make you spin on the spot chasing ever more obscure solutions.
As soon as you understand a potential solution enough to implement it you also understand that it cannot achieve the impossible. If you keep insisting on achieving the impossible you have to abandon this potential solution and pin your hope on something you don't understand yet. And so the cycle repeats.
It is good to hold people accountable, but only demand the impossible from those you want to drive crazy.
What you request is for them to divulge internal details of their architecture that could lead to additional compromise as well as admission of fault that could make it easier for them to be sued. All for some intangible moral notion. No business leader would ever do those things.
In attacks on software systems specifically though, I always find this aggressive stance toward the victimized business odd, especially when otherwise reasonable security standards have been met. You simply cannot plug all holes.
As AI tools accelerate hacking capabilities, at what point do we seriously start going after the attackers across borders and stop blaming the victimized businesses?
We solved this in the past. Let’s say you ran a brick-and-mortar business, and even though you secured your sensitive customer paperwork in a locked safe (which most probably didn’t), someone broke into the building and cracked the safe with industrial-grade drilling equipment.
You would rightly focus your ire and efforts on the perpetrators, and not say ”gahhh what an evil, dumb business, you didn’t think to install a safe made of at least 1-meter-thick titanium to protect against industrial-grade drilling!????”
If we want to have nice things going forward, the solution is going to have to involve much more aggressive cybercrime enforcement globally. If 100,000 North Koreans landed on the shores of Los Angeles and began looting en masse, the solution would not be to have everybody build medieval stone fortresses around their homes.
Right. Transparency doesn't mean telling about the attack that already happened. It means telling us about their issues and ways this could happen again. And they didn't even mention the investment amount for the security labs.
Words are cheap, but "We are sorry." is a surprisingly rare thing for a company to say (they will usually sugarcoat it, shift blame, add qualifiers, use weasel words, etc.), so it's refreshing to hear that.
This is a classic example of a fake apology: "We regret that this incident has caused worry for our partners and people". They are not really "sorry" that data was stolen, only "regret" that their partners are worried. No word on how they will prevent this in the future and how it even happened. Instead it gets downplayed ("legacy third-party", "less than 25% were affected" (which is a huge number), and no word on what data exactly).
How would the apology need to be worded so that it does not get interpreted as a fake apology?
In terms of "downplaying" it seems like they are pretty concrete in sharing the blast radius. If less than 25% of users were affected, how else should they phrase this? They do say that this was data used for onboarding merchants that was on a system that was used in the past and is no longer used.
I am as annoyed as anyone by companies sugar-coating responses, but here the response sounds refreshingly concrete and more genuine than most.
We are truly sorry for the impact this has no doubt caused on our customers and partners businesses. This clearly should never have happened, and we take full responsibility.
Whilst we can never put into words how deeply sorry we are, we will work tirelessly to make this right with each and every one of you, starting with a full account of what transpired, and the steps we are going to be taking immediately to ensure nothing like this can ever happen again.
We want to work directly with you to help minimise the impact on you, and will be reaching out to every customer directly to understand their immediate needs. If that means helping you migrate away to another platform, then so be it - we will assist in any way we can. Trust should be earned, and we completely understand that in this instance your trust in us has been shaken.
An effective apology establishes accountability, demonstrates reflection on what caused the problem, and commits to concrete changes to prevent it from recurring.
I always presume the "We are sorry" opens up to financial compensation, whereas the "we regret that you are worried" does not.
In my country, this debate is being held WRT the atrocities my country committed in its (former) colonies, and towards enslaved humans¹. Our king and prime minister never truly "apologized". Because, I kid you not, the government fears that this opens up possibilities for financial reparation or compensation and the government doesn't want to pay this. They basically searched for the words that sound as close to apologies as possible, but aren't words that require one to act on the apologies.
¹ I'm talking about The Netherlands. Where such atrocities were committed as close as one and a half generations ago still (1949) (https://www.maastrichtuniversity.nl/blog/2022/10/how-do-dutc...) but mostly during what is still called "The Golden Age".
> No word on how they will prevent this in the future and how it even happened.
Because these things take time, while you need to disclose that something happened as fast as possible to your customers (in the EU, you are mandated by the GDPR, for instance).
Agreed. It's just a classic way to manipulate the viewers. They just wanted to sound edgy for not paying a ransom, which is definitely a good thing (never pay these crooks), but you left a legacy system online without any protections? That's serious.
I like you like this. For me it’s close but fails in the word selection in the last sentence: “maintaining” trust is not what I would say their job is at this point, it’s “restoring” it.
One places the company at the center as the important point of reference, avoiding some responsibility. The other places the customer at the center, taking responsibility.
If I was a customer I'd be pissed off, but this is as good a response as you can have to an incident like this.
- timely response
- initial disclosure by company and not third party
- actual expression of shame and remorse
- a decent explanation of target/scope
I could imagine being cynical about the statement, but look at other companies who have gotten breached in the past. Very few of them do well on all points.
Timely in what way? Seems they didn't discover the hack themselves; they didn't learn of it until the hackers reached out last week, and only today are we seeing them acknowledge it. I'm not sure anything here could be described as "timely".
I have been doing a self Have I Been Pwned audit and reading many company blog posts, and it wasn't uncommon to see disclosure months after incidents.
Yeah, that sucks, and I wouldn't call those "timely" either. Is your point that "timely" is relative and depends on what others are doing? Personally, "slow" is slow regardless of how slow others are, but clearly some would feel differently, that's OK too.
If we just let companies get away with "we are sorry" and say that is as good as it gets, then this industry is headed for far more catastrophic situations in the future. Criminal liability, refunds to customers, and requirements from regulators might move things in the right direction, but letting companies keep shitty practices, hoarding data they don't need and putting customers at risk, is definitely something that should be looked at with more scrutiny.
It depends on the crime though, right? This was all legacy data, and from the description the worst thing they got was contact information that's five years old or more ("internal operational documents and merchant onboarding materials at that time.").
For that level of breach their response seems about right to me, especially waving the money in ShinyHunters' face before giving it away to their enemies.
I agree, it depends, but this wouldn't be the first time a company underplayed (or simply lied about) the extent of a breach. I am sure that even if it were current data or a more serious breach, the messaging from their side would be similar.
> as good a response as you can have to an incident like this.
From customer perspective “in an effort to reduce the likelihood of this data becoming widely available, we’ve paid the ransom” is probably better, even if some people will not like it.
Also to really be transparent it’d be good to post a detailed postmortem along with audit results detailing other problems they (most likely) discovered.
No, that would not help me as a customer. Because I would never believe that that party would keep their word, besides, it can't be verified. You'll have that shadow hanging around for ever. The good thing is that those assholes now have less budget to go after the next party. The herd is safe from wolves by standing together, not by trying to see which of their number should be sacrificed next.
There’s a very real difference between the data possibly still being saved in some huge storage dump of a ransomware group and being available for everybody to exploit on a leak site.
It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
Also, the uncomfortable truth is that ransomware payments are very common. Not paying will make essentially no difference; the business would probably still be incredibly lucrative even if payment rates dropped to 5% of what they are now.
If there was global co-operation to outlaw ransom payments, that’d be great. Until then, individual companies refusing to pay is largely pointless.
Yes, but your concerns are less rooted in reality and more in the fact that you find the idea of paying ransomware groups repulsive. That’s fine, but there’s rational analysis to be done here, and it often leads to paying being the best option.
If your company gets hit by one of these groups and you want to protect your customers, paying is almost always the most effective way to do that. Someone who isn’t particularly interested in protecting their customers probably wouldn’t pay if the damage from not paying would be lower than the cost of paying.
A third possibility is that you simply feel uncomfortable about paying, which is fine, but it isn’t a particularly rational basis for the decision.
I think we can also fairly assume that the vast majority of people have no strong feelings about ransomware, so there’s likely going to be no meaningful reputational damage caused by paying.
If this is actually frequently happening, your claim should be pretty easy to prove. Most stolen databases are sold fairly publicly.
The ransom payments tend to be so big anyway that selling the data and associated reputational damage is most likely not worth the hassle.
Basic game theory shows that the best course of action for any ransomware group with multiple victims is to act honestly. You can never be sure, but the incentives are there and they’re pretty obvious.
The big groups are making in the neighbourhood of billions of dollars; earning extra millions by sabotaging their main source of revenue seems ridiculous.
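To put rough numbers on that incentive argument, here is a toy expected-value comparison; every figure below is a made-up illustration, not a real statistic:

    avg_ransom = 5_000_000        # assumed revenue per paying victim
    victims_per_year = 50         # assumed victims per year who consider paying
    pay_rate_trusted = 0.40       # assumed payment rate for a group known to keep its word
    pay_rate_burned = 0.10        # assumed payment rate after a group publicly reneges
    one_time_data_sale = 300_000  # assumed one-off price for selling a stolen database

    keep_word = victims_per_year * pay_rate_trusted * avg_ransom
    renege = one_time_data_sale + victims_per_year * pay_rate_burned * avg_ransom

    print(f"keep your word: ${keep_word:,.0f} per year")  # $100,000,000 per year
    print(f"renege once:    ${renege:,.0f} per year")     # $25,300,000 per year

Under any vaguely similar assumptions, torching the reputation that makes victims pay costs far more than a one-off data sale brings in, which is the point being made above.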
Probably? They have pretty professional customer service pages.
However they don’t really need to because there are plenty of documented cases, and the incident response company you hire will almost certainly have prior knowledge of the group you’re forced to deal with.
If they had a history of fucking over their “customers”, the IR team you hired would know and presumably advise against paying.
We’re talking about criminal organisations that depend on a certain level of trust to make any money at all.
Yes, the data might still leak. It’s absurd to suggest that it’s not less likely to leak if you pay.
There’s a reason why businesses very frequently arrive at the conclusion that it’s better to pay, and it’s not because they’re stupid or malicious. They actually have money on the line too, unlike almost everyone who would criticise them for paying.
> Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
Paying the ransom is not exactly legal, is it? Surely the attackers don't provide you with a legitimate invoice for your accounting. As a company you cannot just buy a large amount of crypto and randomly send it to someone.
They hire a third party, sometimes their cyber insurance provider, to "cleanup" the ransomware. That third party then pays another third party who is often located in a region of the world with lax laws to perform the negotiations.
At the end of the day nobody breaks any laws and the criminals get paid.
Probably not that significantly; these are primarily crimes of opportunity. An attacker isn’t likely to do much research on the company until they already have access, and at that point they might as well proceed (especially since getting hit a second time would be doubly awkward for the company, presumably dramatically increasing the chances of payment).
And selling the data from companies like Checkout.com is generally still worth a decent amount, even if nowhere close to the bigger ransom payments.
You mean as a customer you'd feel better if the company victim of ransom would help fund the very group that put the business and your data in jeopardy?
What makes you think they won't get the money and sell the database in the dark web?
This is like falling victim to a scam and paying more on top of it because the scammers promised to return the money if you pay a bit more.
I see no likelihood game to be played there, because you can't trust criminals by default. Thinking otherwise is just naive and wishful. Your data is out in the wild, and there's nothing you can do about that. The sooner you accept that, the better your chances of doing damage reduction.
Their incentives are well known. You don’t have to trust them to assume that they will act rationally.
Picking up hundreds of thousands at best (very few databases would be worth so much) when your main business pays millions or tens of millions per victim simply isn’t worth it, selling the data would jeopardise their main business which is orders of magnitude more profitable.
Absolutely no IR company will advise their clients to pay if the particular ransomware group is known to renege on their promises.
Did some research and indeed there is a sort of "honor among thieves" vibe when it comes to ransom attacks.
Still, it's illegal or quite bureaucratic in some places to pay up.
And idk... it still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard, and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with collecting the ransom money and then selling the DB later.
It's also not granted that even with the decrypt tools you'd be able to easily recover data at scale given how janky these tools are.
I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only for it to strike others, and also to risk legal litigation.
Completely useless take in the real world where these payments are common, it makes no difference whatsoever if an individual or even vast majority of victims stop paying. Ransomware will remain incredibly lucrative until payments are outlawed.
The cost of an attack like this is in the thousands of dollars at most, the ransom payments tend to be in the millions. The economics of not paying just don’t add up in the current situation.
> An investigation by the NCA is very unlikely to be commenced into a ransomware victim, or those involved in the facilitation of the victim’s payment, who have proactively engaged with the relevant bodies as set out in the mitigating factors above
i.e. you’re not even going to be investigated unless you try to cover things up.
This is a solved problem, big companies with big legal departments make large ransomware payments every day. Big incident response companies have teams of negotiators to work through the process of paying, and to get the best possible price.
The donation is more or less virtue signaling rather than actual insight.
The problem cannot be helped by research against cybercrime. Proper practices for protection are well established and known; they just need to be implemented.
The amount donated should rather have been invested into better protections / hiring someone responsible for this in the company.
(Context: The hack happened on a not properly decommissioned legacy system.)
> The donation is more or less virtue signalling rather than actual insight.
I see it more as a middle finger to the perps: “look, we can afford to pay, here, see us pay that amount elsewhere, but you aren't getting it”. It isn't signalling virtue as much as it is signalling “fuck you and your ransom demands” in the hope that this will mark them as not an easy target for that sort of thing in future.
It also serves as a proxy for a punishment. They are, from one perspective, paying a voluntary fine based on their own assessment of their security failings.
For customers it signals sincerity and may help dampen outrage in their follow up dealings.
Yes but I think it's a good virtue to signal considering the circumstances. If they paid the ransom that would signal that ransoming this company works, incentivizing more ransoms. If they refuse to pay the ransom it might signal that they care more about money than they do integrity. Taking the financial hit of the ransom, but paying it to something that signals their values, is about the best move I can imagine.
It is virtue signaling, especially considering that doing the hard-to-swallow thing of paying the ransom would probably be the best outcome from a customer perspective.
Yes, there are negative externalities in funding ransomware operations; not paying is still much more likely to hurt your customers than paying.
Paying a ransomware demand is never the smart move unless you happen to trust what cybercriminals tell you.
You send them the payment, they tell you they deleted the data, but they also sell the data to 10 other customers over the dark web.
Why would you ever trust people who are inherently untrustworthy and who are trying to screw you? While also encouraging further ransomware crimes in the future.
What is the problem with virtue signaling? By all means signal virtue! Perhaps you are concerned by cheap virtue signals, which have little significance.
The point here is that this is an expensive virtue signal. Although, it would be more effective if we knew how expensive it was.
> If companies want security, they should pay for security.
Or just properly follow best practice, and their own procedures, internally.⁰
That was the failing here, which in an unusual act of honesty they are taking responsibility for in this matter.
--------
[0] That might be considered paying for security, indirectly, as it means having the resources available to make sure these things are done, and tracked so it can be proven they are done, making slips difficult to happen and easy to track & hopefully rectify when they inevitably still do.
The whole codebase and toolset at every company I’ve ever worked at was 99% legacy stuff. It's wild...
Oftentimes it would have been easier to rebuild the whole project than to try to upgrade 5-6 year old dependencies.
Ultimately the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then it's business as usual, with that one thing fixed and everything else still rolling on legacy.
Interesting spin for a core infrastructure provider that deals with the most sensitive part of most businesses: trying to bury the lede of getting hacked under a tale of their virtuous refusal to pay a ransom. Is this supposed to make them attractive, or just have people skip the motivating events? Swing and a miss in my books.
While a nice gesture, I'm not so certain that if I were one of their "less than 25%" of customers impacted that I'd be so pleased. Why not compensate them instead?
"The system was used for internal operational documents and merchant onboarding materials at that time"
To me it seems most likely that this is data collected during the KYC process during onboarding, meaning company documents, director passport or ID card scans, those kind of things. So the risk here for at least a few more years until all identity documents have expired is identity theft possibilities (e.g. fraudsters registering their company with another PSP using the stolen documents and then processing fraudulent payments until they get shut down, or signing up for bank accounts using their info and tax id).
Passport or ID card scans would never be stored alongside general KYB information, e.g. the standard forms PSPs use.
If you read between the lines of the verbiage here, it looks like a general archived dropbox of stuff like PDF documents which the onboarding team used.
Since GDPR etc., items like passports, driving license data, etc. have been kept in far more secure areas that low-level staff (e.g. people doing merchant onboarding) won't have easy access to.
I could be wrong but I would be fairly surprised if JPGs of passports were kept alongside docx files of merchant onboarding questionnaires.
> Passport or ID card scans would never be stored alongside general KYB information
How do you qualify this statement? Did you mean “should never”? Even then, you’re likely overstating things. Nothing prevents co-locating KYC/KYB information. On the contrary, most businesses conducting KYB are required to conduct UBO and they’re trained to combine them both. Register as a director/officer with any FSI in North America and you’ll see.
Fair point! Yeah, it could be. Although Europe tends to be stricter about those things, i.e. where PII is stored. I was trained way back in like 2018 about ensuring I never have any PII stored on my PC and around the requirements of the GDPR in terms of access to information and right to delete etc.
Why would merchants fill out docx files? They would submit an online form with their business, director and UBO details, that data would be stored in the Checkout.com merchants database, and any supporting documents like passport scans would be stored in a cloud storage system, just like the one that got hacked.
If it was just some internal PDFs used by the onboarding team, probably they wouldn't make such a big announcement.
Another person wrote a good response to this but yeah, I would say, as someone that has worked in fintech, you will almost always have some integrations with systems which require Microsoft word format, as well as obviously PDFs, CSVs, etc.
Every country you operate in has different rules and regulations and you have to integrate with many third party systems as well as governmental entities etc, and sometimes you have to do really really technically backwards things.
Some integrations I remember were stuff like cron jobs sending CSV files via FTP which were automatically picked up.
If you are dealing with financial services (and a payment provider most certainly is), you will be forced to interface with infuriating vendor vetting and onboarding questionnaire processes. The kinds that would make Franz Kafka blush, and the CIA take notice for their enhanced interrogation techniques.
The sheer amount of effectively useless bingo sheets with highly detailed business (and process) information boggles the mind.
I don't understand some of the cynicism in this thread. This is a bold move and I support it. It is impossible to never have incidents like this, and until there's a proper post mortem we won't really know how much of it can be attributed to carelessness. They could have just kept it hush-hush, but I appreciate that they came forward with it and also donated money to academia. The research will be open and everybody benefits.
"Cyber Security Oxford is a community of researchers and experts working under the umbrella of the University of Oxford’s Academic Centre of Excellence in Cyber Security Research (ACE-CSR)."
So, I used to work in the fintech world and it looks to me like what was hacked was merchant KYB documents. I.e. when a merchant signs up for a PSP they have to provide various documentation about the business so the PSP can underwrite the risk of taking on this business. I.e. some PSPs won't deal with porn companies or travel companies or companies from certain regions etc.
This sort of data is generally treated very differently to the actual PANs and payment information (which are highly encrypted using HSMs).
So it's obviously shitty to get hacked, but if it was just KYB (or KYC) type information, it's not harming any individuals. A lot of KYB information is public (depending on country).
It's not just business data though - usually it will include ultimate beneficial owner and directors' passports, tax ID, etc. So there is a risk of identity theft there of potentially some very wealthy individuals.
When they say "The episode occurred when threat actors gained access to this third party legacy system which was not decommissioned properly. " for me it sounds like a not properly wiped disk that got into the the bad guys hands. It would be interesting to know more to be prepared for proper decommissioning of hardware.
They're "sorry", they want to be "transparent" and "accountable", they want your "trust", but not enough to publicly explain what happened or what kind of data got taken (is a full CRM backup from 6 years ago considered "legacy" "internal operational documents"?). There's not even a promise to produce more information about their mistake.
> Jimmy, where did the cookies go?
> Something that was on the counter is gone! I don't know how! It might not even be my fault! But I'm sorry!
What kind of an apology is that? It's not. It's marketing for the public while they contact the "less than 25% of [their] current merchant base" whose (presumably sensitive) information was somehow in "internal operational documents".
Oh, but they also took some of what they charge their customers and gave that (undisclosed?) sum away to a university. They must be really sorry.
Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Instead, you would pay (exorbitant) consulting fees to a foreign-based "offensive security" entity, and most of the time get some sort of security report that says if you'd simply plug this and that hole, your systems would now be reasonably safe.
> Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Yes, that's why cryptocurrencies are a gift from heaven for these hacker groups.
Therefore, even if paying ransom money must (somehow) remain legal, maybe it should be illegal to use crypto for it. You don't want to make it too easy to run this type of criminal business.
I was guessing it's a OneDrive, Google Drive, Dropbox or something like that.
Probably someone was phished and they still had access to an old shared drive which still had this data. Total guess but reading between the lines it could be something like this.
They are downplaying the severity of the data theft, which most likely includes user identification documents, the most dangerous type of breach, since it directly enables identity theft.
Reading between the lines reveals the severity they're obfuscating, with contradictions:
> This incident has not impacted our payment processing platform. The threat actors do not have, and never had, access to merchant funds or card numbers.
> The system was used for internal operational documents and merchant onboarding materials at that time.
> We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators
They stress that "merchant funds or card numbers" weren't accessed, yet acknowledge contacting "impacted" users, this begs the question: how can users be meaningfully "impacted" by mere onboarding paperwork?
> We will be donating the ransom amount to Carnegie Mellon University and the University of Oxford Cyber Security Center (OXCIS) to support their research in the fight against cybercrime.
Can this be tax deducted? Because it sounds like gaslighting to change the narrative.
Like others said, this one doesn't change that much, but it is still burning money. Universities and their projects waste a lot of money - from buying hardware via complicated processes to projects wasting millions of USD (in the cases I know, it is EUR). Sponsored by companies like Samsung or Siemens, not releasing anything useful for years and still extending projects for "further research" :(
It's their money in this case, so they can burn it any way they want, and it's great to see they didn't support script kiddies here (assuming it was some leftover files on a forgotten object storage bucket, sadly unencrypted or with keys available nearby).
Lots of companies waste money too. I'd rather see universities spend it on research and studies than companies developing useless products and shutting down after a year.
To me this looks like getting hacked, donating to some public non-profit, deduct it via taxes (essentially spending nothing) and spin it online as a positive.
I've met a few people who genuinely believe that 'tax deductible' equates to 'essentially spending nothing', or who somehow think that the amount you donate is an amount you would otherwise have given to the Government in taxes, so from your perspective it doesn't change anything.
This is definitely not the case. If you make $100 profit and you would have had to pay 20% corporate tax, then you pay $20 in taxes, you'd be left with $80 to buy chocolate or whatever you want.
If you donate $20 and deduct it from your profit, then your profit is now calculated at $80. So you pay $16 in taxes. So you saved $4 but spent $20, meaning you're $16 down and now you only have $64 for chocolate, so not 'essentially nothing'.
> deduct it via taxes (essentially spending nothing)
Unless you're positing some very specific, unusual situation, this isn't how tax deductibility works. The dollar amount of a tax deductible donation is subtracted from your taxable income, not from your tax bill. So you're getting a discount on the donation equal to your marginal tax rate.
That’s not how tax deductions work, because a tax deduction doesn’t give you the full amount of your donation back; it only reduces your taxable income, not your tax bill dollar-for-dollar.
Example:
You earn $100,000.
You donate $10,000 to a qualifying charity.
You can now deduct that $10,000, i.e. you’ll be taxed as if you earned $90,000, not $100,000.
If your marginal tax rate is 30%, you’ll save 30% of $10,000 = $3,000 in taxes.
So you’re still out $7,000 in real money.
It changes nothing. If you get taxed 20% up to 90k and 30% above that, then donating 10k still saves you 3k in taxes; you're still out 7k and you're still paying 18k in taxes on the 90k.
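The same arithmetic as a tiny script, using the thread's own numbers; the donor's real cost is the donation minus the tax saved at the marginal rate:

    def net_cost(donation: float, marginal_rate: float) -> float:
        tax_saved = donation * marginal_rate
        return donation - tax_saved

    # The $100k income / $10k donation / 30% marginal-rate example above:
    assert net_cost(10_000, 0.30) == 7_000

    # The corporate example: $100 profit, 20% tax, $20 donation.
    assert net_cost(20, 0.20) == 16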
"Firefighter arson is a persistent phenomenon involving a very small minority of firefighters who are also active arsonists ... It has been reported that roughly 100 U.S. firefighters are convicted of arson each year."
It wouldn't require a conspiracy for these companies to 'invest' in security companies they have ties to. Throw in tax incentives and loopholes and whatnot and it turns out not to hurt the original company at all.
It’s notable that there were ShinyHunters members arrested by the FBI a few years ago. I was in prison with Sebastian Raoult, one of them. We talked quite a bit.
The level of persistence these guys went through to phish at scale is astounding—which is how they gained most of their access. They’d otherwise look up API endpoints on GitHub and see if there were any leaked keys (he wasn’t fond of GitHub's automated scanner).
https://www.justice.gov/usao-wdwa/pr/member-notorious-intern...
Generally speaking, humans are more often than not the weakest link the chain when it comes to cyber security, so the fact that most of their access comes from social engineering isn't the least bit surprising.
They themselves are likely to some extent the victims of social engineering as well. After all who benefits from creating exploits for online games and getting children to become script kiddies? Its easier (and probably safer) to make money off of cyber crime if your role isn't committing the crimes yourself. It isn't illegal to create premium software that could in theory be use for crime if you don't market it that way.
>It isn't illegal to create premium software that could in theory be use for crime if you don't market it that way.
Who is making money off of selling premium software, that's not marketed as for cybercrime, to non-governmental attackers? Wouldn't the attackers just pirate it?
Feel like IDA Pro counts.
> (he wasn’t fond of GitHub's automated scanner
Do you mean they thought the scanner was effective and weren't fond of it because it disrupted their business? Or do you mean they had a low opinion of the scanner because it was ineffective?
He would complain that it disrupted their business, and that it doesn't catch all keys—it catches the big ones that he certainly found to be very valuable.
> The level of persistence these guys went through to phish at scale is astounding—which is how they gained most of their access.
explain
Don't talk to a human being like they're an AI ever again.
That’s standard practice, on HN, and has been, before AI was a broken condom on the drug store shelf.
Unpleasant, but comes with the territory (I don’t like it, when it’s done to me).
That said, I’m not sure that kind of scolding is particularly effective, either.
I think saying just "explain" is a bit of a meme and meant to come across as almost humorously asking for an explanation.
Not every culture has the same standards of politeness. I didn't think it was rude, I think it can be even respectful of their time and intelligence to be concise, plain and direct, as long as you are not literally attacking them.
I mean, the comments under the GPT-5.1 announcement just today were full of people wishing that AI actually responded to them like this.
https://news.ycombinator.com/item?id=45904551
I love this part (no trolling from me):
I know there will by a bunch of cynics who say that an LLM or a PR crisis team wrote this post... but if they did, hats off. It is powerful and moving. This guys really falls on his sword / takes it on the chin.I'll never not think of that South Park scene where they mocked BP's "We're so sorry" statement whenever I see one of those. I don't care if you're sorry or if you realize how much you betrayed your customers. Tell me how you investigated the root causes of the incident and how the results will prevent this scenario from ever happening again. Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack? Who declined to allocate the necessary budget to keep systems updated? That's the only way I will even consider giving some trust back. If you really want to apologise, start handing out cash or whatever to the people you betrayed. But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
I wouldn't be so quick. Everybody gets hacked, sooner or later. Whether they'll own up to it or not is what makes the difference and I've seen far, far worse than this response by Checkout.com, it seems to be one of the better responses to such an event that I've seen to date.
> Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack?
The problem with that is that you'll never know. Because you'd have to audit each and every service provider and I think only Ebay does that. And they're not exactly a paragon of virtue either.
> Who declined to allocate the necessary budget to keep systems updated?
See: prevention paradox. Until this sinks in it will happen over and over again.
> But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
Again, yes, but: they are at least attempting to use the right words. Now they need to follow them up with the right actions.
The prevention paradox only really applies when the bad event has significant costs. It seems to me that getting hacked has at worst mild consequences. Cisco for example is still doing well despite numerous embarrassing backdoors.
> Everybody gets hacked, sooner or later.
Right! But, wouldn't a more appropriate approach be to mitigate the damage from being hacked as much as possible in the first place? Perhaps this starts by simplifying bloated systems, reducing data collection to data that which is only absolutely legally necessary for KYC and financial transactions in whatever respective country(ies) the service operates in, hammer-testing databases for old tricks that seem to have been forgotten about in a landscape of hacks with ever-increasingly complexity, etc.
Maybe it's the dad in me, years of telling me son to not apologize, but to avoid the behavior that causes the problem in the first place. Bad things happen, and we all screw up from time to time, that is a fact of life, but a little forethought and consideration about the best or safest way to do a thing is a great way to shrink the blast area of any surprise bombs that go off.
I don’t see how any of what you’re suggesting would have prevented this hack though (which involved an old storage account that hadn’t been used since 2020 getting hacked).
You don't see how preventative maintenance such as implementing a policy to remove old accounts after N days could have prevented this? Preventative maintenance is part of the forethought that should take place about the best or safest way to do a thing. This is something that could be easily learned by looking an problems others have had in the past.
As a controls tech, I provide a lot of documentation and teach to our customers about how to deploy, operate and maintain a machine for best possible results with lowest risk to production or human safety. Some clients follow my instruction, some do not. Guess which ones end up getting billed most for my time after they've implemented a product we make.
Too often, we want to just do without thinking. This often causes us to overlook critical points of failure.
> Some clients follow my instruction, some do not.
So you’re telling me you design a non-foolproof system?!? Why isn’t it fully automated to prevent any potential pitfalls?
For the app I maintain, we have a policy of deleting inactive accounts, after a year. We delete approved signups that have not been “consummated,” after thirty days.
Even so, we still need to keep an eye out. A couple of days ago, an old account (not quite a year), started spewing connection requests to all the app users. It had been a legit account, so I have to assume it was pwned. We deleted it quickly.
A lot of our monitoring is done manually, and carefully. We have extremely strict privacy rules, and that actually makes security monitoring a bit more difficult.
These are excellent practices.
Such data is a liability, not an asset and if you dispose of it as soon as you reasonably can that's good. If this is a communications service consider saving a hash of the ID and refusing new sign ups with that same ID because if the data gets deleted then someone could re-sign up with someone else's old account. But if you keep a copy of the hash around you can check if an account has ever existed and refuse registration if that's the case.
> Maybe it's the dad in me, years of telling me son to not apologize, but to avoid the behavior that causes the problem in the first place.
What an odd thing to teach a child. If you've wronged someone, avoiding the behavior in future is something that'll help you, but does sweet fuck all for the person you just wronged. They still deserve an apology.
I think people this approach is overcompensating for over-apologizing (or, similarly, over thanking, both in excess are off-putting). I have a child who just says "sorry" and doesn't actually care about changing the underlying behavior.
But yes, even if you try to make a healthy balance, there are still plenty of times when an apology are appropriate and will go a long way, for the giver and receiver, in my opinion anyway.
Sorry, I should have worded that as "stop apologizing so much, especially when you keep making the same mistake/error/disruption/etc."
I did not mean to come off as teaching my kid to never apologize.
"Sorry - this is my fault" is such an effective response, if followed up with "how do we make this right?" or "stop this from happening again?"
Not a weird thing to teach a child.
It's 5-whys-style root cause analysis, which will build a person who causes less harm to others.
I am willing to believe that the same parent also teaches when and why it is sometimes right to apologize.
Thanks, this is where I was coming from. I suppose I could have made that more clear in my original comment. The idea behind my style of parenting is self-reflection and our ability to analyze the impact of our choices before we make them.
But of course, apologizing when you have definitely wronged a person is important, too. I didn't mean to come off as teaching my kid to never apologize, just think before you act. But you get the idea.
Well said. Ideally action comes first, and then those actions can be communicated.
But in the real world, you have words, i.e. commitment, before actions and a conclusion.
Best of luck to them.
Not everyone gets hacked. Companies not hacked include e.g.
- Google
- Amazon
- Meta
Facebook was hacked in 2013. Attacker used a Java browser exploit to take over employees' computers:
https://www.reuters.com/article/technology/exclusive-apple-m...
Facebook was also hacked in 2018. A vulnerability in the website allowed attackers to steal the API keys for 50 million accounts:
https://news.ycombinator.com/item?id=18094823
Amazonian here. My views are my own; I do not represent my company/corporate.
That said...
We do our very best. But I don't know anyone here who would say "it can never happen". Security is never an absolute. The best processes and technology will lower the likelihood and impact towards 0, but never to 0. Viewed from that angle, it's not if Amazon will be hacked, it's when and to what extent. It is my sincere hope that if we have an incident, we rise up to the moment with transparency and humility. I believe that's what most of us are looking for during and after an incident has occurred.
To our customers: Do your best, but have a plan for what you're going to do when it happens. Incidents like this one here from checkout.com can show examples of some positive actions that can be taken.
> But I don't know anyone here who would say "it can never happen". Security is never an absolute.
Exactly. I think it is great for people like you to inject some more realistic expectations into discussions like these.
An entity like Amazon is not - in the longer term - going to escape fate, but they have more budget and (usually) much better internal practices, which rule out the kind of thing that would bring down a lesser org. In the end, though, it is all about the budget: as long as Amazon's budget is significantly larger than the attackers', they will probably manage to stay ahead. But if they ever get complacent or start economizing on security, the odds change very rapidly. Your very realistic stance is one of the reasons it hasn't happened yet: you are acutely aware that, in spite of all your efforts, you are still at risk.
Blast radius reduction by removing data you no longer need (and that includes the marketing department, who more often than not are the real culprit) is a good first step towards more realistic expectations for any org.
Google just got hacked in June:
https://cloud.google.com/blog/topics/threat-intelligence/voi...
https://www.forbes.com/sites/daveywinder/2025/08/09/google-c...
That was a Salesforce instance with largely public data, rather than something owned and operated by Google itself. It's a bit like saying you stole from me, but instead of my apartment you broke into my off-site storage with Uhaul. Technically correct, but different implications on the integrity of my apartment security.
It was a social engineering attack that leveraged the device OAuth flow, where the device gaining access to the resource server (in this case the Salesforce API) is separate from the device that grants the authorization.
The hackers called employees/contractors at Google (& lots of other large companies) with user access to the company's Salesforce instance and tricked them into authorizing API access for the hackers' machine.
It's the same as loading Apple TV on your Roku despite not having a subscription and then calling your neighbor who does have an account and tricking them into entering the 5 digit code at link.apple.com
Continuing with your analogy, they didn't break into the off-site storage unit so much as they tricked someone into giving them a key.
There's no security vulnerability in Google/Salesforce or your apartment/storage per se, but a lapse in security training for employees/contractors can be the functional equivalent to a zero-day vulnerability.
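For anyone unfamiliar with the device authorization grant being described, here is a rough sketch of the flow (RFC 8628). The endpoints and client ID are placeholders, not Salesforce's or Google's actual configuration, and this is only an illustration of why the requesting device and the approving user are separate:

    import time
    import requests

    AUTH_SERVER = "https://auth.example.com"   # placeholder authorization server
    CLIENT_ID = "example-client-id"            # placeholder client

    # 1. The requesting device (in the attack, the hackers' machine) asks for codes.
    dev = requests.post(f"{AUTH_SERVER}/device/code",
                        data={"client_id": CLIENT_ID, "scope": "api"}).json()
    print("Enter", dev["user_code"], "at", dev["verification_uri"])

    # 2. The user approves on a *different* device by visiting the verification URI
    #    and entering the code. This separation is what the social engineering abuses:
    #    the victim is talked into approving a code that belongs to the attacker.

    # 3. The requesting device polls the token endpoint until the grant is approved.
    while True:
        token = requests.post(f"{AUTH_SERVER}/token", data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": dev["device_code"],
            "client_id": CLIENT_ID,
        }).json()
        if "access_token" in token:
            break          # the requesting device now holds a valid API token
        time.sleep(dev.get("interval", 5))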
There's no vulnerability per se, but I think the Salesforce UI is pretty confusing in this case. It looks like a login page, but actually if you fill it in, you're granting an attacker access.
Disclosure: I work at Google, but don't have much knowledge about this case.
Nah.
The Chinese got into gmail (Google) essentially on a whim to get David Petraeus' emails to his mistress. Ended his career, basically.
I'd bet my hat that all 3 are definitely penetrated and have been off and on for a while -- they just don't disclose it.
source: in security at big orgs
Do you have a source that the Google hack was related to David Petraeus? This page doesn't mention it:
https://en.wikipedia.org/wiki/Petraeus_scandal
Everybody includes Google, Amazon and Meta.
They too will get hacked, if it hasn't happened already.
Google got hacked back in 2010; look up Operation Aurora. It wasn't a full own, but it shows that even the big guys can get hacked.
The relevant difference here is that these companies have actual security standards on a level that you would otherwise only find in the FAA or similar organisations where lives are in danger. For every incident in Google Cloud, for example, they don't just apologise; they state exactly what happened and how they responded (down to the minute), and you can read up on exactly how they plan to prevent it from happening again: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
This is what incident handling by a trustworthy provider looks like.
Didn't Edward Snowden release documents showing that the NSA had fully compromised Google's internal systems?
Fair or not, if their customers get hacked it's still on them to mitigate and reduce the damage. Ex: cloud providers that provide billing alerts but not hard cut-offs are not doing a good job.
They also have plenty of domestic and foreign intelligence agents literally working with sensitive systems at the company.
... that we know of. Perhaps some of those "outages" were compromised systems.
"shit it's compromised. pull the plug ASAP"
Meta once misconfigured the web servers and exposed the source. https://techcrunch.com/2007/08/11/facebook-source-code-leake...
There are millions of companies, even century- or decade-old ones, without a hacking incident involving data extraction. The whole "everyone gets hacked" line is copium for a lack of security standards, or, here, for the lack of deprecation and for keeping unmaintained systems online with legacy client data. Announcing it proudly would be concerning if I had business with them. It's not even a lack of competence... it's a lack of hygiene.
>There are millions of companies even century or decade old ones without a hacking incident with data extraction.
Name five.
The pedantic answer is to point to a bunch of shell companies without any electronic presence. However in terms of actual businesses there’s decent odds the closest dry cleaners, independent restaurant, car wash, etc has not had its data extracted by a hacking incident.
Having a minimal attack surface and not being actively targeted is a meaningful advantage here.
There are definitely companies who have never been breached and it's not that hard. Defense in depth is all you need
Isn't defense in depth's whole point that some of your defenses will get breached?
Take the OP. What defenses were breached? An old abandoned system running unmaintained in the background with old user data still attached. There is no excuse.
I like your stance.
We also have to remember that we have collectively decided to use Windows and AD, QA-tested software, etc. (to give some examples) over correct software, hardened-by-default settings, etc.
The intent of the South Park sketch was to lampoon that BP were (/are) willingly doing awful things and then giving corpo apology statements when caught.
Here, Checkout has been the victim of a crime, just as much as their impacted customers. It's a loss for everyone involved except the perpetrators. Using words like "betrayed", as if Checkout wilfully misled its customers, is a heavy accusation to level.
At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
> At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
I totally agree – You've covered the 3 most important things to do here: Apologize; make it right; sufficiently explain in detail to customers how you'll prevent recurrences.
After reading the post, I see the 1st of 3. To their credit, most companies don't get that far, so thanks, Checkout.com. Now keep going, 2 tasks left to do and be totally transparent about.
They are donating the entire ransom amount to two universities for security research. I don't care about the words themselves, but assuming they're not outright lying about this, that meant a lot to me. They are putting their (corporate!) money where their mouth is.
No trolling on my side, I think having people who think just like you is a triumph for humanity. As we approach times far darker and manipulation takes smarter shapes, a cynical mind is worth many trophies.
> prevent this scenario from ever happening again.
Every additional nine of not getting hacked takes effort. Getting to 100% takes infinite effort i.e. is impossible. Trying to achieve the impossible will make you spin on the spot chasing ever more obscure solutions.
As soon as you understand a potential solution enough to implement it you also understand that it cannot achieve the impossible. If you keep insisting on achieving the impossible you have to abandon this potential solution and pin your hope on something you don't understand yet. And so the cycle repeats.
It is good to hold people accountable, but demand the impossible only from those you want to drive crazy.
What you request is for them to divulge internal details of their architecture that could lead to additional compromise as well as admission of fault that could make it easier for them to be sued. All for some intangible moral notion. No business leader would ever do those things.
In attacks on software systems specifically though, I always find this aggressive stance toward the victimized business odd, especially when otherwise reasonable security standards have been met. You simply cannot plug all holes.
As AI tools accelerate hacking capabilities, at what point do we seriously start going after the attackers across borders and stop blaming the victimized businesses?
We solved this in the past. Let’s say you ran a brick-and-mortar business, and even though you secured your sensitive customer paperwork in a locked safe (which most probably didn’t), someone broke into the building and cracked the safe with industrial-grade drilling equipment.
You would rightly focus your ire and efforts on the perpetrators, and not say "gahhh, what an evil, dumb business, you didn't think to install a safe made of at least 1-meter-thick titanium to protect against industrial-grade drilling!????"
If we want to have nice things going forward, the solution is going to have to involve much more aggressive cybercrime enforcement globally. If 100,000 North Koreans landed on the shores of Los Angeles and began looting en masse, the solution would not be to have everybody build medieval stone fortresses around their homes.
Right. Transparency doesn't mean telling about the attack that already happened. It means telling us about their issues and ways this could happen again. And they didn't even mention the investment amount for the security labs.
Words are cheap, but "We are sorry." is a surprisingly rare thing for a company to say (they will usually sugarcoat it, shift blame, add qualifiers, use weasel words, etc.), so it's refreshing to hear that.
This is a classic example of a fake apology: "We regret that this incident has caused worry for our partners and people". They are not really "sorry" that data was stolen, but only "regret" that their partners are worried. No word on how they will prevent this in the future or how it even happened. Instead it gets downplayed ("legacy third-party", "less than 25% were affected" (which is a huge number), no word on what data exactly).
How would the apology need to be worded so that it does not get interpreted as a fake apology?
In terms of "downplaying" it seems like they are pretty concrete in sharing the blast radius. If less than 25% of users were affected, how else should they phrase this? They do say that this was data used for onboarding merchants that was on a system that was used in the past and is no longer used.
I am as annoyed by companies sugar coating responses, but here the response sounds refreshingly concrete and more genuine than most.
IMO something like:
We are truly sorry for the impact this has no doubt caused on our customers and partners businesses. This clearly should never have happened, and we take full responsibility.
Whilst we can never put into words how deeply sorry we are, we will work tirelessly to make this right with each and every one of you, starting with a full account of what transpired, and the steps we are going to be taking immediately to ensure nothing like this can ever happen again.
We want to work directly with you to help minimise the impact on you, and will be reaching out to every customer directly to help understand their immediate needs. If that means helping you migrate away to another platform, then so be it - we will assist in any way we can. Trust should be earned, and we completely understand that in this instance your trust in us has understandably been shaken.
"Up to 25% of users were affected." "As many as 25% of users were affected."
"A quarter of user accounts were affected. We have calculated that to be 7% of our customers."
> How would the apology need to be worded so that it does not get interpreted as a fake apology?
"We regret that we neglected our security to such degree that it has caused this incident."
It's very simple. Don't be sorry I feel bad, be sorry you did bad.
They stated clearly in the article:
> This was our mistake, and we take full responsibility.
I wonder how much of the negative sentiment about this is from a knee jerk reaction and careless reading vs. thoughtful commentary.
An effective apology establishes accountability, demonstrates reflection on what caused the problem, and commits to concrete changes to prevent it from recurring.
I always presume the "We are sorry" opens up to financial compensation, whereas the "we regret that you are worried" does not.
In my country, this debate is being held WRT the atrocities my country committed in its (former) colonies, and towards enslaved humans¹. Our king and prime minister never truly "apologized". Because, I kid you not, the government fears that this opens up possibilities for financial reparation or compensation and the government doesn't want to pay this. They basically searched for the words that sound as close to apologies as possible, but aren't words that require one to act on the apologies.
¹ I'm talking about The Netherlands, where such atrocities were committed as recently as one and a half generations ago (1949) (https://www.maastrichtuniversity.nl/blog/2022/10/how-do-dutc...), but mostly during what is still called "The Golden Age".
> This was our mistake, and we take full responsibility.
That preceding line makes it, to me, a real apology. They admit fault.
Seems a bit harsh to leave out the rest of the apology and only focus on the part that is not much of an apology.
> No word on how they will prevent this in the future and how it even happened.
Because these things take time, while you need to disclose that something happened as fast as possible to your customers (in the EU, you are mandated by the GDPR, for instance).
Agreed. It's just a classic way to manipulate the viewers. They just wanted to sound edgy for not paying a ransom, which is definitely a good thing. Never pay these crooks. But you left a legacy system online without any protections? That's serious.
> We are fully committed to maintaining your trust.
We are fully committed to rebuilding your trust.
Refreshing to not see "due to an abundance of caution". Kudos to the response in general, they pretty much ticked all boxes.
Since when did owning up to a data breach become such a noteworthy event? "Less than 25%" sounds more like exactly 25% of customers were impacted.
I like you like this. For me it's close, but it fails in the word selection of the last sentence: "maintaining" trust is not what I would say their job is at this point; it's "restoring" it.
One places the company at the center as the important point of reference, avoiding some responsibility. The other places the customer at the center, taking responsibility.
If I was a customer I'd be pissed off, but this is as good a response as you can have to an incident like this.
- timely response
- initial disclosure by company and not third party
- actual expression of shame and remorse
- a decent explanation of target/scope
I could imagine being cynical about the statement, but look at other companies who have gotten breached in the past. Very few of them do well on all points.
> - timely response
Timely in what way? Seems they didn't discover the hack themselves; they didn't know until the hackers reached out last week, and only today are we seeing them acknowledge it. I'm not sure anything here could be described as "timely".
I have been doing a self Have I Been Pwned audit and reading many company blog posts, and it wasn't uncommon to see disclosure months after the incident.
Yeah, that sucks, and I wouldn't call those "timely" either. Is your point that "timely" is relative and depends on what others are doing? Personally, "slow" is slow regardless of how slow others are, but clearly some would feel differently, that's OK too.
If we just let companies get away with "we are sorry" and say that is as good as it gets, then this industry is in for far more catastrophic situations in the future. Criminal liability, refunds to customers, and requirements from regulators might move things in the right direction, but letting companies get away with shitty practices, hoarding data they don't need and putting customers at risk, is definitely something that should be looked at with more scrutiny.
It depends on the crime though, right? This was all legacy data, and from the description the worst thing they got was contact information that's five years old or more ("internal operational documents and merchant onboarding materials at that time.").
For that level of breach their response seems about right to me, especially waving the money in ShinyHunters' face before giving it away to their enemies.
I agree, it depends, but this wouldn't be the first time a company underplayed (or simply lied about) the extent of a breach. I am sure even if it was current data or a more serious breach, the messaging from their side would be similar.
> as good as a response you can have to an incident like this.
From customer perspective “in an effort to reduce the likelihood of this data becoming widely available, we’ve paid the ransom” is probably better, even if some people will not like it.
Also to really be transparent it’d be good to post a detailed postmortem along with audit results detailing other problems they (most likely) discovered.
No, that would not help me as a customer. Because I would never believe that that party would keep their word, besides, it can't be verified. You'll have that shadow hanging around for ever. The good thing is that those assholes now have less budget to go after the next party. The herd is safe from wolves by standing together, not by trying to see which of their number should be sacrificed next.
There’s a very real difference between the data possibly still being saved in some huge storage dump of a ransomware group and being available for everybody to exploit on a leak site.
It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
Also, the uncomfortable truth is that ransomware payments are very common. Not paying will make essentially no difference, the business would probably still be incredibly lucrative even if payment rates dropped to 5% of what they are now.
If there was global co-operation to outlaw ransom payments, that’d be great. Until then, individual companies refusing to pay is largely pointless.
> It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
No, it pushes you in a more comfortable direction, and I'm not you.
Yes, but your concerns are less rooted in reality and more in the fact that you find the idea of paying ransomware groups repulsive. That’s fine, but there’s rational analysis to be done here, and it often leads to paying being the best option.
If your company gets hit by one of these groups and you want to protect your customers, paying is almost always the most effective way to do that. Someone who isn’t particularly interested in protecting their customers probably wouldn’t pay if the damage from not paying would be lower than the cost of paying.
A third possibility is that you simply feel uncomfortable about paying, which is fine, but it isn’t a particularly rational basis for the decision.
I think we can also fairly assume that the vast majority of people have no strong feelings about ransomware, so there’s likely going to be no meaningful reputational damage caused by paying.
Never pay the ransom.
The extortionist knows they cannot prove they destroyed the data, so they will eventually sell it anyway.
They will maybe hold off for a bit to prove their "reputation" or "legitimacy". Just don't pay.
If this is actually frequently happening, your claim should be pretty easy to prove. Most stolen databases are sold fairly publicly.
The ransom payments tend to be so big anyway that selling the data and associated reputational damage is most likely not worth the hassle.
Basic game theory shows that the best course of action for any ransomware group with multiple victims is to act honestly. You can never be sure, but the incentives are there and they’re pretty obvious.
The big groups are making in the neighbourhood of $billions, earning extra millions by sabotaging their main source of revenue seems ridiculous.
Do you think ransomware groups do referrals to their satisfied customers who paid and didn't have their data leaked?
Probably? They have pretty professional customer service pages.
However they don’t really need to because there are plenty of documented cases, and the incident response company you hire will almost certainly have prior knowledge of the group you’re forced to deal with.
If they had a history of fucking over their “customers”, the IR team you hired would know and presumably advise against paying.
> reputational damage
Whoa. You're a crime organization. The data may as well "leak" the same way it leaked out of your victim's "reputable" system.
We’re talking about criminal organisations that depend on a certain level of trust to make any money at all.
Yes, the data might still leak. It’s absurd to suggest that it’s not less likely to leak if you pay.
There’s a reason why businesses very frequently arrive at the conclusion that it’s better to pay, and it’s not because they’re stupid or malicious. They actually have money on the line too, unlike almost everyone who would criticise them for paying.
I strongly disagree. Paying the ransom will put everyone in danger.
I would totally agree with you if we lived in a hypothetical world where ransomware payments aren’t super common anyway.
Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
> Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
Paying the ransom is not exactly legal, is it? Surely the attackers don't provide you with a legitimate invoice for your accounting. As a company you cannot just buy a large amount of crypto and randomly send it to someone.
Paying the ransoms is almost always legal in basically all western countries unless the recipient has been sanctioned.
> As a company you cannot just buy a large amount of crypto and randomly send it to someone.
You can totally do that, why wouldn’t you be able to?
Because it's fraud. You cannot just take money out of the company; you have to put something in your books.
So you obviously put “ransomware payment” in the books.
Most of the time the company doesn't pay directly.
They hire a third party, sometimes their cyber insurance provider, to "cleanup" the ransomware. That third party then pays another third party who is often located in a region of the world with lax laws to perform the negotiations.
At the end of the day nobody breaks any laws and the criminals get paid.
Depends. Not paying ransom decreases the likelihood of being attacked in the future.
Probably not that significantly; these are primarily crimes of opportunity. An attacker isn't likely to do much research on the company until they already have access, and at that point they might as well proceed (especially since getting hit a second time would be doubly awkward for the company, presumably dramatically increasing the chances of payment).
And selling the data from companies like Checkout.com is generally still worth a decent amount, even if nowhere close to the bigger ransom payments.
You mean as a customer you'd feel better if the company that fell victim to the ransom helped fund the very group that put the business and your data in jeopardy?
Of course, it makes my data and my customers data less likely to end up public on the internet.
It’s not great, but it’s the least shitty option.
What makes you think they won't get the money and sell the database in the dark web?
This is like falling victim to a scam and paying more on top of it because the scammers promised to return the money if you pay a bit more.
I see no likelihood game to be played there, because you can't trust criminals by default. Thinking otherwise is just naive and wishful. Your data is out in the wild, nothing you can do about that. The sooner you accept that, the better your chances of doing damage reduction.
Their incentives are well known. You don’t have to trust them to assume that they will act rationally.
Picking up hundreds of thousands at best (very few databases would be worth so much) when your main business pays millions or tens of millions per victim simply isn’t worth it, selling the data would jeopardise their main business which is orders of magnitude more profitable.
Absolutely no IR company will advise their clients to pay if the particular ransomware group is known to renege on their promises.
Did some research, and indeed there is a sort of "honor among thieves" kind of vibe when it comes to ransom attacks.
Still, it's illegal or quite bureaucratic in some places to pay up.
And idk... It still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with taking the ransom money and then selling the db later.
It's also not a given that even with the decrypt tools you'd be able to easily recover data at scale, given how janky these tools are.
I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only so it can strike others, and also risk litigation.
Ah yes let's fund literal criminal groups so they have an incentive to keep hacking people
Completely useless take in the real world, where these payments are common; it makes no difference whatsoever if an individual victim, or even the vast majority of victims, stops paying. Ransomware will remain incredibly lucrative until payments are outlawed.
The cost of an attack like this is in the thousands of dollars at most, the ransom payments tend to be in the millions. The economics of not paying just don’t add up in the current situation.
How do you know it isn't illegal when you pay the ransom?
You could very well be making a payment to a sanctioned individual or country, or a terrorist organization etc.
There are best practices for this, you normally hire a third party to handle the negotiations, payment process and the necessary due diligence.
For example the UK government publishes guidelines on how to do this and which mitigating circumstances they consider if you do end up making a payment to a sanctioned entity anyway https://www.gov.uk/government/publications/financial-sanctio...
They directly state as follows:
> An investigation by the NCA is very unlikely to be commenced into a ransomware victim, or those involved in the facilitation of the victim’s payment, who have proactively engaged with the relevant bodies as set out in the mitigating factors above
i.e. you're not even going to be investigated unless you try to cover things up.
This is a solved problem, big companies with big legal departments make large ransomware payments every day. Big incident response companies have teams of negotiators to work through the process of paying, and to get the best possible price.
The donation is more or less virtue signaling rather than actual insight.
The problem cannot be helped by more research against cybercrime. Proper practices for protection are well established and known; they just need to be implemented.
The amount donated should rather have been invested into better protections / hiring a person responsible for this at the company.
(Context: The hack happened on a not properly decommissioned legacy system.)
> The donation is more or less virtue signalling rather than actual insight.
I see it more as a middle finger to the perps: “look, we can afford to pay, here, see us pay that amount elsewhere, but you aren't getting it”. It isn't signalling virtue as much as it is signalling “fuck you and your ransom demands” in the hope that this will mark them as not an easy target for that sort of thing in future.
It also serves as a proxy for a punishment. They are, from one perspective, paying a voluntary fine based on their own assessment of their security failings.
For customers it signals sincerity and may help dampen outrage in their follow up dealings.
Yes but I think it's a good virtue to signal considering the circumstances. If they paid the ransom that would signal that ransoming this company works, incentivizing more ransoms. If they refuse to pay the ransom it might signal that they care more about money than they do integrity. Taking the financial hit of the ransom, but paying it to something that signals their values, is about the best move I can imagine.
At the stage we're at, I would far prefer virtue signalling to the more widespread vice signalling.
It is virtue signaling, especially considering the fact that doing the hard to swallow thing of paying the ransom would probably be the best outcome from a customer perspective.
Yes there are negative externalities in funding ransomware operations, not paying is still much more likely to hurt your customers than paying.
Paying a ransomware ransom is never the smart move unless you happen to trust what cyber criminals tell you.
You send them the payment, they tell you they deleted the data, but they also sell the data to 10 other customers over the dark-web.
Why would you ever trust people who are inherently untrustworthy and who are trying to screw you? While also encouraging further ransomware crimes in the future.
It’s a sliding scale.
If you don’t pay, the odds they will publish your data are closer to 100%. If you do pay, the odds have historically been much closer to 0% than 100%
You aren’t paying to be sure, but to improve your chances.
> Proper practices for protections are well established and known
Endpoint security is a well-known open problem for which no sufficient practices and protections exist.
What is the problem with virtue signaling? By all means signal virtue! Perhaps you are concerned by cheap virtue signals, which have little significance.
The point here is that this is an expensive virtue signal. Although, it would be more effective if we knew how expensive it was.
They should have watched Ransom (1996).
https://www.youtube.com/watch?v=xllIU0lPgqs
I don't know what virtue signaling means. I think you mean they just did it out of spite.
There is not much to research. If companies want security, they should pay for security.
> If companies want security, they should pay for security.
Or just properly follow best practice, and their own procedures, internally.⁰
That was the failing here, which in an unusual act of honesty they are taking responsibility for in this matter.
--------
[0] That might be considered paying for security, indirectly, as it means having the resources available to make sure these things are done, and tracked so it can be proven they are done, making slips difficult to happen and easy to track and hopefully rectify when they inevitably still do.
Security is an arms race. Don't expect a leap; do your part to stay ahead.
> The attackers gained access to a legacy, third-party cloud file storage system.
I think the answer is ok but the "third-party" bit reads like trying to deflect part of the blame on the cloud storage provider.
The whole codebase & tools at every company I have ever worked at was 99% legacy stuff. It's wild...
Oftentimes it would have been easier to rebuild the whole project rather than trying to upgrade 5-6 year old dependencies.
Ultimately the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then it's business as usual, with that one thing fixed and still rolling legacy stuff for everything else.
> Often times it would have been easier to rebuild the whole project
Sure buddy, sure
I inherited a few codebases as solo dev and I am confident in my abilities to refactor each of them in 1-2 months without issues.
I can imagine that in a team that might be harder, but these are glorified todo apps. I am well aware that complete rebuilds rarely work out.
Interesting spin: a core infrastructure provider that deals with the most sensitive part of most businesses tries to bury the lede of getting hacked under a tale of their virtuous refusal to pay a ransom. Is this supposed to make them attractive, or just have people skip over the motivating events? Swing and a miss in my books.
While a nice gesture, I'm not so certain that if I were one of their "less than 25%" of customers impacted that I'd be so pleased. Why not compensate them instead?
"The system was used for internal operational documents and merchant onboarding materials at that time"
To me it seems most likely that this is data collected during the KYC process during onboarding, meaning company documents, director passport or ID card scans, those kind of things. So the risk here for at least a few more years until all identity documents have expired is identity theft possibilities (e.g. fraudsters registering their company with another PSP using the stolen documents and then processing fraudulent payments until they get shut down, or signing up for bank accounts using their info and tax id).
Passport or ID card scans would never be stored alongside general KYB information, e.g. the standard forms PSPs use.
If you read between the lines of the verbiage here, it looks like a general archived dropbox of stuff like PDF documents which the onboarding team used.
Since GDPR etc., items like passports, driving license data, etc., have been kept in far more secure areas that low-level staff (e.g. people doing merchant onboarding) won't have easy access to.
I could be wrong but I would be fairly surprised if JPGs of passports were kept alongside docx files of merchant onboarding questionnaires.
> Passport or ID card scans would never be be stored alongside general KYB information
How do you qualify this statement? Did you mean “should never”? Even then, you’re likely overstating things. Nothing prevents co-locating KYC/KYB information. On the contrary, most businesses conducting KYB are required to conduct UBO and they’re trained to combine them both. Register as a director/officer with any FSI in North America and you’ll see.
Fair point! Yeah, it could be. Although Europe tends to be stricter about those things, i.e. where PII is stored. I was trained way back in like 2018 about ensuring I never have any PII stored on my PC and around the requirements of the GDPR in terms of access to information and right to delete etc.
> docx files of merchant onboarding questionnaires
Why would merchants fill out docx files? They would submit an online form with their business, director and UBO details, that data would be stored in the Checkout.com merchants database, and any supporting documents like passport scans would be stored in a cloud storage system, just like the one that got hacked.
If it was just some internal PDFs used by the onboarding team, probably they wouldn't make such a big announcement.
Another person wrote a good response to this but yeah, I would say, as someone that has worked in fintech, you will almost always have some integrations with systems which require Microsoft word format, as well as obviously PDFs, CSVs, etc.
Every country you operate in has different rules and regulations and you have to integrate with many third party systems as well as governmental entities etc, and sometimes you have to do really really technically backwards things.
Some integrations I remember were stuff like cron jobs sending CSV files via FTP which were automatically picked up.
If you are dealing with financial services (and a payment provider most certainly is), you will be forced to interface with infuriating vendor vetting and onboarding questionnaire processes. The kinds that would make Franz Kafka blush, and the CIA take notice for their enhanced interrogation techniques.
The sheer amount of effectively useless bingo sheets with highly detailed business (and process) information boggles the mind.
Some time ago I alluded to existence and proliferation of these questionnaires in another context: https://bostik.iki.fi/aivoituksia/random/crowdstrike-outage-...
I don't understand some of the cynicism in this thread. This is a bold move and I support it. It is impossible to not have incidents like this, and until there's a proper post mortem we won't really know how much of it can be attributed to carelessness. They could have just kept it hush hush, but I appreciate that they came forward with it and also donated money to academia. The research will be open and everybody benefits.
I don't think they meant OXCIS, that seems to be a centre for Islamic Studies https://en.wikipedia.org/wiki/Oxford_Centre_for_Islamic_Stud...
I can't quite work out who they donated to - it seems there are a number of Oxford Uni cybersec/infosec units. Any idea which one?
I guess it just means this: https://www.cybersecurity.ox.ac.uk/
"Cyber Security Oxford is a community of researchers and experts working under the umbrella of the University of Oxford’s Academic Centre of Excellence in Cyber Security Research (ACE-CSR)."
Probably, I'm not sure it's not https://gcscc.ox.ac.uk/
I don't think it's https://www.infosec.ox.ac.uk/
There's also this AI security research lab, https://lasr.plexal.com/
It looks like Oxford are quite busy in this space.
So, I used to work in the fintech world and it looks to me like what was hacked was merchant KYB documents. I.e. when a merchant signs up for a PSP they have to provide various documentation about the business so the PSP can underwrite the risk of taking on this business. I.e. some PSPs won't deal with porn companies or travel companies or companies from certain regions etc.
This sort of data is generally treated very differently to the actual PANs and payment information (which are highly encrypted using HSMs).
So it's obviously shitty to get hacked, but if it was just KYB (or KYC) type information, it's not harming any individuals. A lot of KYB information is public (depending on country).
Fair play on them for being open about this.
It's not just business data though - usually it will include ultimate beneficial owner and directors' passports, tax ID, etc. So there is a risk of identity theft there of potentially some very wealthy individuals.
When they say "The episode occurred when threat actors gained access to this third party legacy system which was not decommissioned properly. " for me it sounds like a not properly wiped disk that got into the the bad guys hands. It would be interesting to know more to be prepared for proper decommissioning of hardware.
sounds like an S3 bucket that wasn't deleted
Or a cloud server which was never turned off.
They're "sorry", they want to be "transparent" and "accountable", they want your "trust", but not enough to publicly explain what happened or what kind of data got taken (is a full CRM backup from 6 years ago considered "legacy" "internal operational documents"?). There's not even a promise to produce more information about their mistake.
> Jimmy, where did the cookies go?
> Something that was on the counter is gone! I don't know how! It might not even be my fault! But I'm sorry!
What kind of an apology is that? It's not. It's marketing for the public while they contact the "less than 25% of [their] current merchant base" whose (presumably sensitive) information was somehow in "internal operational documents".
Oh, but they also took some of what they charge their customers and gave that (undisclosed?) sum away to a university. They must be really sorry.
Sometimes cyber insurance will come to the rescue. That's why companies don't pay.
Isn't it illegal in many countries to pay a ransom?
(If not, why not?)
(Imho, it would make sense if only the state can pay ransoms)
Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Instead, you would pay (exorbitant) consulting fees to a foreign-based "offensive security" entity, and most of the time get some sort of security report that says if you'd simply plug this and that hole, your systems would now be reasonably safe.
> Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Yes, that's why cryptocurrencies are a gift from heaven for these hacker groups.
Therefore, even if paying ransom money (somehow) must be legal, maybe it should be illegal to use crypto for it. You don't want to make it too easy to run this type of criminal business.
Could this be aws s3?
I’m thinking a SFTP or file sharing gateway. Think MoveIT, GoAnywhere, ShareFile, etc.
IMO, these aren’t safe to use anymore.
I was guessing it's a OneDrive, Google Drive, DropBox or something.
Probably someone was phished and they still had access to an old shared drive which still had this data. Total guess but reading between the lines it could be something like this.
yeh, I am skeptical about "third party"
Giving me MBA vibes. Will they close up shop and go when it's the remaining 75% of their infrastructure next time?
If everyone refuses to pay, such incidents would largely reduce.
They are downplaying the severity of the data theft, which most likely includes user identification documents, the most dangerous type of breach, since it directly enables identity theft
Reading between the lines reveals the severity they're obfuscating, with contradictions:
> This incident has not impacted our payment processing platform. The threat actors do not have, and never had, access to merchant funds or card numbers.
> The system was used for internal operational documents and merchant onboarding materials at that time.
> We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators
They stress that "merchant funds or card numbers" weren't accessed, yet acknowledge contacting "impacted" users, which raises the question: how can users be meaningfully "impacted" by mere onboarding paperwork?
Yeah, they keep repeating what wasn't accessed but never say what actually was.
> Checkout.com hacked, refuses ransom payment, donates to security labs
This submission's edited title reads like the "target headline" from The Office (US):
> Scranton Area Paper Company - Dunder Mifflin - Apologizes - to Valued Client - Some Companies - Still Know - How - Business - is - Done
At this point I think we all understand that we will never be able to trust any company in this world with our data.
In most cases they can get away with "We are sorry" and "Trust me, bro" attitude.
> We will be donating the ransom amount to Carnegie Mellon University and the University of Oxford Cyber Security Center (OXCIS) to support their research in the fight against cybercrime.
Can this be tax deducted? Because this sounds like gaslighting to change the narrative.
Something being tax deductible doesn't mean it is free. It still costs money, you just don't pay taxes over that money.
This one doesn't change that much, as others said, but it is still burning money. Universities and their projects waste a lot of money - from buying hardware via complicated processes to projects wasting millions of USD (in the cases I know, EUR). Sponsored by companies like Samsung or Siemens, not releasing anything useful for years, and still extending projects for "further research" :(
It's their money in this case so they can burn it any way they want and great to see they didn't support script kiddies here (assuming it was some leftover files on forgotten object storage bucket, sadly unencrypted or with keys available nearby).
Lots of companies waste money too. I'd rather see universities spend it on research and studies than companies developing useless products and shutting down after a year.
It's not gaslighting. They were transparent enough to own their mistake. The donation isn't really the main story.
Jerry, all these big companies, they write off everything.
I believe you may be misusing the term gaslighting.
To me this looks like getting hacked, donating to some public non-profit, deducting it via taxes (essentially spending nothing), and spinning it online as a positive.
I've met a few people who genuinely believe that 'tax deductible' equates to 'essentially spending nothing' or somehow equate that the amount you donate would be an amount you would otherwise give to the Government in taxes so from your perspective it doesn't change anything.
This is definitely not the case. If you make $100 profit and you would have had to pay 20% corporate tax, then you pay $20 in taxes, you'd be left with $80 to buy chocolate or whatever you want.
If you donate $20 and deduct it from your profit, then your profit is now calculated at $80. So you pay $16 in taxes. You saved $4 but spent $20, meaning you're $16 down and now you only have $64 for chocolate; not "essentially nothing".
What if I buy chocolate as a corporate gift to my clients?! /jk
> deduct it via taxes (essentially spending nothing)
Unless you're positing some very specific, unusual situation, this isn't how tax deductibility works. The dollar amount of a tax deductible donation is subtracted from your taxable income, not from your tax bill. So you're getting a discount on the donation equal to your marginal tax rate.
> deduct it via taxes (essentially spending nothing)
That's not how tax deduction works.
That's not how tax deductions work, because a tax deduction doesn't give you the full amount of your donation back; it only reduces your taxable income, not your tax bill dollar-for-dollar.
Example:
You earn $100,000.
You donate $10,000 to a qualifying charity.
You can now deduct that $10,000, i.e. you’ll be taxed as if you earned $90,000, not $100,000.
If your marginal tax rate is 30%, you’ll save 30% of $10,000 = $3,000 in taxes. So you’re still out $7,000 in real money.
Though if that 100K to 90K move had actually changed your tax bracket, you'd stand to maybe save a bit more.
It changes nothing. If you get taxed 20% up to 90k and 30% above that, then donating 10k still saves you 3k in taxes; you're still out 7k, and you're still paying 18k in taxes on the 90k.
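A toy calculation matching the numbers above, assuming the hypothetical 20%/30% brackets from this example (not any real tax code):

    def tax(income: float) -> float:
        # 20% up to 90k, 30% on the portion above 90k (illustrative rates only)
        return 0.20 * min(income, 90_000) + 0.30 * max(income - 90_000, 0)

    income, donation = 100_000, 10_000
    saved = tax(income) - tax(income - donation)   # 21,000 - 18,000 = 3,000
    net_cost = donation - saved                    # 10,000 - 3,000 = 7,000
    print(f"tax saved: {saved:,.0f}, real cost of the donation: {net_cost:,.0f}")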
Even if it were, it'd be much more than anything other companies that got hacked have been doing.
This should be law. Any company that is hacked should be required by law to make a sizeable investment in a third-party security research company.
Security research lab during the day, ransomware org at night: conspiracy coming soon.
"Firefighter arson is a persistent phenomenon involving a very small minority of firefighters who are also active arsonists ... It has been reported that roughly 100 U.S. firefighters are convicted of arson each year."
Interesting, that number is much higher than I would expect.
It wouldn't require a conspiracy for these companies to 'invest' in security companies they have ties to. Throw in tax incentives and loopholes and whatnot and it turns out not to hurt the original company at all.
I have the checkout.me domain, and it is for sale. Email me if you want it.