1) If you make legal disclosure too hard, the only way you will find out is via criminals.
2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.
3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that my PII is secure. I'm not sure what the full list of industries would be, but insurance companies should be categorically required to undergo a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all affected users. That would change the incentives so that the most basic vulnerabilities disappear, and software engineers become more economical than lawyers.
I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.
I guess I should feel lucky they didn't try to have me criminally prosecuted.
Well, it is. A quick search revealed the name of a certain big player, although there are some other local companies whose policies can be extended to "extreme sports".
If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.
There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
> Wanna trash the site's design, you should open a top level thread instead.
Or better, don't[1]:
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Get a better browser I'd say. Firefox Reader mode makes short work of such sites, including the submission. I use it very often, so I can enjoy the content rather than get frustrated over styling issues.
As well as contrast issues, could also be that there was a javascript error on their end (or they don't whitelist sites for JS by default). This is unfortunately one of those sites that renders a completely blank page unless you use reader mode, enable JS, or disable CSS.
> every account was provisioned with a static default password
Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone gets the job while making worse mistakes, and there are plenty of such systems in production handling real people's data.
Literally found the same issue in a password system, on top of passwords being stored in clear text in the database... cleared all passwords, expanded the db field to hold a longer hash (the pw field was like 12 chars), set up a "recover password" feature, and emailed all users before end of day.
My own suggestion to anyone reading this... version your password hashing mechanics so you can upgrade hashing methods as needed in the future. I usually use "v{version}.{salt}.{hash}" where salt and the resulting hash are a base64 string of the salt and result. I could use multiple db fields for the same, but would rather not... I could also use JSON or some other wrapper, but feel the dot-separated base64 is good enough.
I have had instances where hashing was indeed upgraded later, and a password was (re)hashed at login with the new encoding if the version changed... after a given time-frame, will notify users and wipe old passwords to require recovery process.
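The "v{version}.{salt}.{hash}" scheme and rehash-at-login flow described above could be sketched roughly like this, using PBKDF2 from Python's standard library. The version table, parameter choices, and function names here are illustrative assumptions, not the commenter's actual implementation:

```python
# Minimal sketch of a versioned password-hash format: "v{version}.{salt}.{hash}",
# where salt and hash are base64. Bumping CURRENT_VERSION lets you upgrade cost
# factors later; old hashes stay verifiable and get flagged for rehash at login.
import base64
import hashlib
import hmac
import os

# Hypothetical version table mapping a version number to hashing parameters.
VERSIONS = {
    1: {"algo": "sha256", "iterations": 100_000},
    2: {"algo": "sha256", "iterations": 600_000},
}
CURRENT_VERSION = 2


def hash_password(password: str, version: int = CURRENT_VERSION) -> str:
    params = VERSIONS[version]
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        params["algo"], password.encode(), salt, params["iterations"]
    )
    b64 = lambda b: base64.b64encode(b).decode()
    return f"v{version}.{b64(salt)}.{b64(digest)}"


def verify_password(password: str, stored: str) -> tuple[bool, bool]:
    """Return (matches, needs_rehash); rehash at login when the version is old."""
    ver_s, salt_b64, hash_b64 = stored.split(".")
    version = int(ver_s.lstrip("v"))
    params = VERSIONS[version]
    salt = base64.b64decode(salt_b64)
    expected = base64.b64decode(hash_b64)
    digest = hashlib.pbkdf2_hmac(
        params["algo"], password.encode(), salt, params["iterations"]
    )
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(digest, expected), version < CURRENT_VERSION
```

At login, if `needs_rehash` comes back true, you would re-store `hash_password(password)` under the current version, since that is the one moment the plaintext is available.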
FWIW, I really wish there were better guides for moderately good implementations of login/auth systems out there. Too many applications for things like SSO, etc. just become a morass of complexity that isn't always necessary. I did write a nice system for a former employer that is somewhat widely deployed... I tried to get permission to open-source it, but couldn't get buy-in over "security concerns" (the irony). Maybe someday I'll make another one.
Years ago I worked for a company that bought another company. Our QA folks were asked to give their site a once-over. What they found is still the butt of jokes in my circle of friends/former coworkers.
* account ids are numeric, and incrementing
* included in the URL after login, e.g. ?account=123456
* no authentication on requests after login
So anybody moderately curious can just increment to account_id=123457 to access another account. And then try 123458. And then enumerate the space to see if there is anything interesting... :face-palm: :cold-sweat:
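The flaw above is a classic insecure direct object reference (IDOR). A toy sketch of both the broken pattern and the fix, with all names hypothetical:

```python
# Toy illustration of the IDOR above: the vulnerable handler serves whatever
# account id appears in the URL; the fixed one authorizes every request against
# the logged-in session. Data and names are made up for illustration.

ACCOUNTS = {
    123456: {"owner": "alice", "balance": 100},
    123457: {"owner": "bob", "balance": 250},
}


def get_account_vulnerable(query_account_id: int) -> dict:
    # Trusts ?account=... blindly -- anyone can increment the id and enumerate.
    return ACCOUNTS[query_account_id]


def get_account_fixed(session_account_id: int, query_account_id: int) -> dict:
    # Authorize on every request: the id in the URL must belong to the
    # authenticated user, not merely be a valid id.
    if query_account_id != session_account_id:
        raise PermissionError("not your account")
    return ACCOUNTS[query_account_id]
```

Using non-sequential identifiers helps a little, but the real fix is the server-side ownership check on every request, not obscurity of the ids.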
When you are acting in good faith and the person/organization on the other end isn't, you aren't having a productive discussion or negotiation, just wasting your own time.
The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.
Agree - yet security researchers and our wider community also need to recognize that vulnerabilities are foreign to most non-technical users.
Cold-approach vulnerability reports quite frankly scare non-technical organizations. It might be like someone you've never met telling you the door on your back bedroom balcony can be opened with a dummy key, and they know because they tried it.
Such organizations don't know what to do. They're scared, thinking maybe someone also took financial information, etc. Internal strife and lots of discussion usually occur, with plenty of wild speculation (as is the norm), before any communication back happens.
It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" seems to be taken as malice.
If this were in Costa Rica, the appropriate course would be to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).
Here, all databases with personal information must be registered there, and the data must be kept secure.
One way to improve cybersecurity is to let cyber criminals loose like predators hunting prey. Companies need to feel fear that any vulnerability in their systems will be weaponized against them. Only then will they appreciate an email telling them about a security issue that has not been exploited yet.
There should exist a vulnerability disclosure intermediary. It could function as a barrier to protect the scientist/researcher/enthusiast and do everything by the book for the different countries.
I’ve worked in IT for nearly three decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.
I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tells me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
> I came across a pretty serious security concern at my company this week. The ramifications are alarming. […] Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y, then we'll prioritise those issues”, which I was young and scared enough to let happen, plus some tugging on heartstrings (“if we don't get this sale some people will have to go; we can't risk that happening to [redacted] and her new kids, can we?”), I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time, and when they tried to butter me up with the promise of some training courses, I had already taken and passed some of those exams and had the rest booked in (the look of “good <deity>, he has an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).
The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).
I wish I could recall the name of a pen test company I worked with when I wrote my auth system... They were pretty great and found several serious issues.
At least compared to our internal digital security group, who couldn't fathom that "your test is wrong for how this app is configured; that path leads to a different app and default behavior" meant their canned test for a PHP exploit wasn't actually a failure. The app wasn't PHP; it was an SPA and always delivered the same default page unless you were on an /auth/* route.
After that my response became: show me an actual exploit with an actual data leak, and I'll update my code instead of your test.
I read that post as him talking about their company, in the sense of the company they were working for. If that was the case, then an exploit of an unfixed security issue could very much affect them either just as part of the company if the fallout is enough to massively harm business, or specifically if they had not properly documented their concerns so “we didn't know” could be the excuse from above and they could be blamed for not adequately communicating the problem.
For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.
[1] oh, the things I could do if it wasn't for conscience and empathy :)
> These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
I had a bit of a feral journey into tech, poor upbringing => self taught college dropout waiting tables => founded iPad point of sale startup in 2011 => sold it => Google in 2016 to 2023
It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.
Put more concretely, couple vignettes:
- Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."
- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
> A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
Maybe not when it is as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process - it didn't change what the client finally got, but it seemed to work especially if they thought they'd forced us to spend time on performance issues (those talking to us at the client side could report this back up their chain as a win).
I've seen into some moderately high levels of "prestigious" business and government circles and I've yet to find any level at which everyone suddenly becomes as competent and sharp as I'd have expected them to be, as a child and young adult (before I saw what I've seen and learned that the norm is morons and liars running everything and operating terrifically dysfunctional organizations... everywhere, apparently, regardless how high up the hierarchy you go). And actually, not only is there no step at which they suddenly become so, people don't even seem to gradually tend to brighter or generally better, on average, as you move "upward"... at all! Or perhaps only weakly so.
Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.
This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?
FWIW, some types of applications may be better served by encryption than hashing for password access. Email is one of them: given the varying ways to authenticate, it gets pretty funky to support. This is why in things like O365 you have a separate password issued for use with legacy email apps.
>whether this is something that should be reported to Apple, if Apple will do anything
Lmao Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" then never contact you again. Ask me how I know. To their credit, I suspected they ran it through useless rudimentary automated checks which passed and they were back in business like a day later.
If your expectation is they will do something about shitty coding practices half the App Store would be banned.
> Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off"
Ask while you are in an EU country, request appeal and initiate Out-of-court dispute resolution.
Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)
Typical shakedown tactic. I used to have a boss who would issue these ridiculous emails with lines like "you agree to respond within 24 hours else you forfeit (blah blah blah)"
This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.
This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.
I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.
I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.
The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.
Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.
As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.
Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data without permission. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly and were written at a time when most access was either direct dial-up or internal. The meaning of abuse can be twisted to mean rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize about the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
Instead of understanding all of this, and when it does or does not apply, it's probably better to disclose vulnerabilities anonymously over Tor.
It's not worth the hassle of being forced to hire a lawyer, just to be a white hat.
Part of the motivation of reporting is clout and reputation. That sounds harsh or critical but for some folks their reputation directly impacts their livelihood. Sure the data controller doesn't care, but if you want to get hired or invited to conferences then the clout matters.
I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.
So you never report to the actual organization, but to the security organization, like the author did. And they would be more equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.
So you're a researcher, you report your finding, and you can't be sued or bullied by the organization that is at fault in the first place.
If the government wasn't so famous for also locking people up that reported security issues I might agree, but boy they are actually worse.
Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.
The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.
That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.
This is not how CVEs work at all. You can be pretty vague when registering it. In fact they’re usually annoyingly so and some companies are known for copy and pasting random text into the fields that completely lead you astray when trying to patch diff.
Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes but in the end you just tell them to set the CVE to public at some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE assigned a CVE.
Does it have to be a government? Why not a third party non-profit? The white hat gets shielded, and the non-profit has credible lawyers which makes suing them harder than individuals.
The idea is to make it easier to fix the vulnerability than to sue to shut people up.
For credit assignment, the person could direct people to the non profit’s website which would confirm discovery by CVE without exposing too many details that would allow the company to come after the individual.
This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.
I find these tales of lawyerly threats completely validate the hacker's actions. They reported the bug to spur the company to resolve it. The company's reaction all but confirms that reporting it to them directly would not have been productive. Their management lacks good stewardship. They are not thinking about their responsibility to their customers and employees.
Maintaining cybersecurity insurance is a big deal in the US; I don't know about Europe. So vulnerability disclosure is problematic for data controllers because it threatens their insurance and premiums. Today much of enterprise security is attestation-based, and vulnerability disclosure potentially exposes companies to insurance fraud. If they stated that they maintained certain levels of security, and a disclosure demonstrably proves they do not, that is grounds for dropping a policy or even a lawsuit to reclaim paid funds.
So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.
Malta has been mentioned? As a person living here, I can say that the workflow of the government here is bad. Same as in every other place, I guess.
By the way, I have a story about accidentally hacking an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.
I believe that in every single system like that it's fairly possible to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the advent of AI: since it has zero understanding of what the code actually does, it will make mistakes that cause more data leaks.
Even if you don't consider yourself an evil person, would you stay the same knowing about a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook way".
Wish they named them. Usually I don't recommend it. But the combination of:
A) in EU; GDPR will trump whatever BS they want to try
B) no confirmation affected users were notified
C) aggro threats
D) nonsensical threats, sourced to Data Privacy Officer w/seemingly 0 scruples and little experience
Due to B), there's a strong responsibility rationale.
Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.
EU GDPR has very little enforcement. So while the regulation in theory prevents that, in practice you can just ignore it. If you're lucky a token fine comes up years down the line.
Not sure what the name of your complex is, maybe groveling deference to legalese? Whatever it is, I'm sure I would have applied it to your entire country of origin if I knew where you're from, and if I were developmentally around the age of twelve.
He did everything exactly by the book and in the end was even nice enough to not publish the company's name, despite the legal threat being bullshit and him being entirely in the right.
How do you know? Some of the text has a slightly LLM-ish flavour to it (e.g. the numbered lists) but other than that I don’t see any solid evidence of that
Edit: I looked into it a bit and things seem to check out; this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don’t have solid proof that it’s not AI generated either, making accusations like this based on no evidence doesn’t seem good at all
Not them but the formatting screams LLM to me. Random "bolding" (rendered on this website as blue text) of phrases, the heading layout, the lists at the end (bullet point followed by bolded text), common repeats of LLM-isms like "A. Not B". None of these alone prove it but combined they provide strong evidence.
While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true) - I would say that it's very likely that OP used an LLM to edit/write the post.
The HN comment section's new favourite sport: trying to guess whether an article was generated by an LLM. It's completely pointless. Why not focus on what's being said instead?
I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.
I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.
> This is an LLM-generated article, for anyone who might wish to save the "15 min read" labelled at the top. Recounts an entirely plausible but possibly completely made up narrative of incompetent IT, and contains no real substance.
Nothing in the original message refers to it being clickbait; the core complaint is the LLM-like tone and the lack of substance, which, ironically, you also just threw out there without references.
> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?
It's alright as long as it's not based on faith or guesswork.
It is not based on guesswork. For whatever it's worth, I have gotten 7 LLM accounts banned from HN in the past week based on accurately detecting and reporting them to moderation[1]. Many of these accounts had between dozens and 100 upvotes, some with posts rated to the top of their threads that escaped detection by others. I have not once misidentified and reported an account that was genuinely human. I am aware that other people have poorly-tuned heuristics and make false accusations, but it is possible to build the skill to detect LLM output reliably, and I have done so. In the end, it is up to you whether you believe me, but I am simply trying to offer a warning for people who dislike reading generated material, nothing more.
[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.
Congrats, and thanks for your work, but you should be aware that HN comments are completely different from articles. What makes you think the skills/automations required to identify LLM generated HN comments will work seamlessly with submitted articles? You have to do a statistical analysis of this, otherwise it's just guesswork.
You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM generated posts and interact with chatbots, the higher the influence of LLM style in their writing -- the whole "delve" comes to mind, and double dashes. So even if you have a machine that correctly identified LLM generated posts, you can't be sure it'll keep working.
Those are a lot of words to say you guessed. And the banning comment is nice, I guess, but pretty meaningless. Does moderation really always report back to you when you make such an accusation? Who's to say all the banned accounts were even LLMs? You know what would happen if I got banned because someone accused me of being an LLM? Nothing. I'd take it as a sign to do other things.
Let's say you are the LLM detecting genius you paint yourself to be. Well guess what? You're human and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than to guess this, you probably shouldn't say anything at all. Like you said, it's not even against the rules.
You know I had a thoughtful comment written in response to this that wouldn’t post because your comment got flagged to death when I tried to submit it!
Your firebrand attitude is doing a disservice to everyone who takes vibe hunting vibecraft seriously!
The intended audience doesn’t even care that this is LLM-assisted writing. Whether the narrative is affected by AI is second to the technical details. This is technical documentation communicated through a narrative, not personal narrative about someone’s experience with a technical problem. There’s a difference!
Can you share how you confirmed this is LLM generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.
I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.
(my experience is roughly a decade in cybersecurity and risk management, ymmv)
The headers alone are a huge giveaway. It spams repetitive sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent usage of bold type all throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end that reads exactly like helpful LLM guidance. Many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed. Just giving a heads-up for people who prefer not to read generated articles.
Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.
Pangram[0] thinks the closing part is AI generated but the opening paragraphs are human. Certainly the closing paragraphs have a bit of an LLM flavor (a header titled "The Pattern", eg)
There are no automated AI detectors that work. False positives and false negatives are both common, and the false positives particularly render them incredibly dangerous to use. Just like LLMs have not actually replaced competent engineers working on real software despite all the hysteria about them doing so, they also can't automate detection, and it is possible to build up stronger heuristics as a human. I am fully confident and would place a large sum of money on this article being LLM-generated if we could verify the bet, but we can't, so you'll just have to take my word for it, or not.
I'm very sensitive to this but disagree vehemently.
I saw one or two sigils (ex. a little eager to jump to lists)
It certainly has real substance and detail.
It's not, like, generic LinkedIn post quality.
You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly." and react as "Jeez, what a waste of time, I've heard 1000 of these stories."
I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.
But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.
Additionally, I don't think what people will interpret from what you wrote is what you meant, necessarily. Note the other reply at this time, you're so confident and dismissive that they assume you're indicating the article should be removed from HN.
The insurance company would not cover a decompression chamber for someone who has severe decompression sickness; it is a life-threatening condition that requires immediate remediation.
The idea that you possibly have neurological DCS and must argue on the phone with an insurance rep about whether you need to be life-flighted to the nearest chamber is just... mind-blowing.
It is probably among the standard forms required to participate in a diving class/excursion for travelers from other countries; and, Malta was probably chosen as the official HQ for legal or liability shelter reasons.