Kickstarter is full of projects like this where every possible shortcut is taken to get to market. I’ve had some good success with a few Kickstarter projects but I’ve been very selective about which projects I support. More often than not I can identify when a team is in over their heads or think they’re just going to figure out the details later, after the money arrives.
For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity that they could hire out to the lowest bidder after they got the money. The product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in because engineering was harder than they thought.
I think we’re in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas people had previously that were shelved because software is hard are getting another look from people who think they’re just going to prompt Claude until the product looks like it works.
The people doing these kickstarters are outsourcing the work because they can’t do it themselves. If they use an LLM, they don’t know what to look for or even ask for, which is how they get these problems where the production backend uses shared credentials and has no access control.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
> they'd rather vibe code themselves than trust an unproven engineering firm
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools. Despite their simple appearance, and the protestations of their biggest fans and haters alike, the dominant factor in how effective an LLM tool is on a problem is still whether or not you're holding it wrong.
LLMs definitely write more robust code than most devs. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests.
I had 5.3-Codex take two tries to satisfy a linter on TypeScript type definitions.
It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS over every single field in the API response object, applying a "looksLikeHttpsUrl" regex and hoping that the first valid https:// URL it found would be the correct key to use.
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
The discourse around LLMs has created this notion that humans are not lazy and write perfect code. LLMs get compared to an ideal programmer instead of to real devs.
LLMs at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith other than what can be proven beyond a shadow of a doubt with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to the programmatically illiterate: trust this only insofar as you can prove it works, and you can stay ahead of the machine. Dabble if you want, but to use something safely enough to rely on, you need to be 10% smarter than it is.
I'm the founder of neurotech/sleeptech company https://affectablesleep.com, and this post shows the major issue with current wellness device regulation.
I believe there was some good that came from last month's decision to be more open about what apps and data can claim without going through huge regulatory processes (though because we apply auditory stimulation, this doesn't apply to us). However, there should at least be regulatory requirements for data security.
We've designed all of our algorithms and processing to happen on device, which is required anyway because of the latency a Bluetooth connection would introduce, but even the data sent to the server is encrypted. I'd think that would be the basics. How do you trust a company with monitoring, and apparently providing stimulation, if they don't take these simple steps?
How about complaining that brain waves get sent to a server?
I'm a neuroscientist, so I'm not going to say that the EEG data is mind reading or anything, but as a precedent, the non-privacy of brain data is very bad.
How useful could something like this be for research? I'm not a neuroscientist so I have no clue, but it seems like the only justification I can think of...
The general idea of an EEG system that posts data to a network?
Very, but there are already tons of them at lots of different price, quality, and openness levels. A lot of manufacturers have their own protocols; there are also quasi-standards like Lab Streaming Layer for connecting to a hodgepodge of devices (minimal sketch at the end of this comment).
This particular data?
Probably not so useful. While it’s easy to get something out of an EEG set, it takes some work to get good quality data that’s not riddled with noise (mains hum, muscle artifacts, blinks, etc.). Plus, brain waves on their own aren’t particularly interesting; it’s seeing how they change in response to some external or internal event that tells us about the brain.
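If you want a feel for the LSL route, here's a minimal sketch with the pylsl bindings, assuming some device or bridge on the LAN is already publishing an EEG stream (nothing here is specific to the mask in the article):

```python
from pylsl import StreamInlet, resolve_stream

# Find any stream on the local network advertising itself as type "EEG".
# (A device vendor or an LSL bridge app must already be publishing one.)
streams = resolve_stream('type', 'EEG')
inlet = StreamInlet(streams[0])

while True:
    # pull_sample blocks until data arrives; returns (channel values, timestamp)
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```

The nice part is that the same inlet code works for any LSL-speaking device, so experiments aren't married to one vendor's headset.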
Not a neuroscientist either but I would imagine that raw data without personal information would not be useful for much. I can imagine that it would be quite valuable if accompanied with personal data plus user reports about how they slept each night, what they dreamed about if anything, whether it was positive dreams or nightmares etc. And I think quite a few people wouldn’t mind sharing all of that in the name of science, but in this case they don’t seem to have even tried to ask.
Millions of people voluntarily use Gmail which gives a lot more useful data than EEG output to DHS et al without a warrant under FAA702. What makes you think people who “have nothing to hide” would care about publishing their EEG data?
Ok, obviously unethical to do it, but this sounds like you've got the power to create some sci-fi shared dreaming device, where you can read people's brainwaves and send signals to other people's masks based on those signals. Or send signals to everyone at the same time and suddenly people all across the world experience some change in their dream simultaneously.
Like, don't actually do it, but I feel like there's inspiration for a sci-fi novel or short story there.
I have deployed open MQTT to the world for quick prototypes on non-personal (and non-healthcare) data. Eventually my cloud provider told me to stop, because they didn't like that it could be used for relay DDoS attacks.
I would not trust the sleep mask company even if they somehow manage to have some authentication and authorisation on their MQTT.
I would love to see the prompt history. I'm always curious how much human intervention/guidance is necessary for this type of work, because when I read the article I come away thinking you just prompt Claude and out come all these results. For example: "So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx." All by itself, or did the author have to suggest and fiddle with bits?
The shared hardcoded credentials pattern isn't just an IoT problem. I work in AWS security and see the same thing constantly. Teams hardcode a single set of AWS access keys into their application, share them across every environment, and hope nobody runs strings on the binary. Same logic, same laziness, same outcome.
The difference is when it's a sleep mask, someone reads your brainwaves. When it's a cloud credential, someone reads your customer database. Per-device or per-environment credential provisioning isn't even hard anymore. AWS has IAM roles, IoT has device certificates, MQTT has client certs and topic ACLs. The tooling exists. Companies skip it because key management adds a step to the assembly line and nobody budgets time for security architecture on v1.
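For illustration, here's the shape of the fix in boto3 terms. This is a sketch; the key strings are placeholders, not anything from the article:

```python
import boto3

# Anti-pattern: one long-lived key pair baked into every build.
# Anyone who runs `strings` on the binary now owns the account.
bad = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",        # placeholder; shared across every environment
    aws_secret_access_key="...",
)

# Better: no credentials in code at all. The default credential chain
# picks up the IAM role attached to the EC2 instance / ECS task / Lambda,
# with short-lived credentials that rotate automatically.
good = boto3.client("s3")
```

The "good" version is less code than the bad one, which is what makes skipping it so hard to excuse.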
> nobody budgets time for security architecture on v1
It’s quite literally why the internet is so insecure: at many points all along the way, “hey, should we design and architect for security?” is/was met with “no, we have people to impress and careers to advance with parlor tricks to secure more funding; besides, security is hard and we don’t actually know what we are doing, so toe the line or you’ll be removed.”
Nevermind. I have just described my iPhone as a “generic Chinese mobile device” to Claude, and he successfully gained root access with admin privileges to my iPhone, and even captured a couple minutes of EEG from 30 generic mobile devices in my neighborhood. Seems like iPhones are tracking your thoughts; Claude could prove that, just ask it to tell you everything.
The author doesn’t spell out why they are not naming them, but my guess is they are trying not to promote the product to malicious actors who would be interested in the sleep data of others.
I guess that’s not a huge problem, though, since all users are presumably at least anonymous.
This feels like a reason to buy the device to me? I would want to block all of the data going to the cloud and would only want operations happening locally. But the MQTT broadcast then allows me to create a local only integration in Home Assistant with all of the data.
What's the real risk profile? Robbers can see you are asleep instead of waiting until you aren't home?
I have not implemented MQTT automations myself, but is there a way to encrypt them? That would be a nice-to-have.
Sounds like you cannot control which MQTT endpoint it is headed to? It just goes to the vendor's server. Assuming you could modify the firmware, you could point it at a local MQTT broker instead.
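And to the encryption question above: MQTT itself is plaintext, but it runs fine over TLS. If you did get the mask pointed at your own broker (firmware mod, or DNS if it doesn't pin the hostname), the local-only Home Assistant side is a few lines of paho-mqtt. A sketch, assuming a local Mosquitto with your own CA; the broker hostname and topic layout are made up:

```python
import ssl
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Hand the payload to Home Assistant, a database, whatever you like.
    print(msg.topic, msg.payload)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
# TLS with a CA you control; nothing ever leaves the LAN.
client.tls_set(ca_certs="local-ca.crt", cert_reqs=ssl.CERT_REQUIRED)
client.on_message = on_message
client.connect("mqtt.lan", 8883)    # local Mosquitto, not the vendor cloud
client.subscribe("mask/+/eeg")      # hypothetical topic layout
client.loop_forever()
```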
The narrator in the article acts as a third-person observer and identifies "Claude" as the active hacker. So assuming the (unidentified) company that sells/manages the product wants to prosecute a CFAA violation, who do they go after? Was Claude the one responsible for all of the hacking?
This guy bought an internet-connected sleep mask, so it's not surprising that it was collecting all kinds of data, or that it was doing so insecurely (everyone should expect anything IoT to be a security nightmare). To me the surprising thing is that the company actually bothered to worry about saving bandwidth/power and went through the trouble of using MQTT. Probably not the best choice, and they didn't bother to do it securely, but I'm genuinely impressed that they even tried to be efficient while sucking up people's personal data.
"The ZZZ mask is an intelligent sleep mask — it allows you to sleep less while sleeping deeper. That’s the premise — but really it is a paradigm breaking computer that allows full automation and control over the sleep process, including access to dreamtime."
or if this is another sci-fi variation on the same theme, with some dev-like embellishments.
the shared MQTT credentials pattern is unfortunately super common in budget IoT. seen the exact same thing in smart plugs and air quality sensors. the frustrating part is per-device auth is not even hard to set up, mosquitto supports client certs and topic ACLs with minimal config. manufacturers skip it because per-device key provisioning adds a step to the assembly line and nobody wants to think about key management. so they hardcode one set of creds and hope nobody runs strings on the binary.
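for reference, roughly what that minimal config looks like. the directives here are real mosquitto options; the paths and topic layout are placeholders:

```
# mosquitto.conf -- per-device client certificates
listener 8883
cafile   /etc/mosquitto/ca.crt
certfile /etc/mosquitto/server.crt
keyfile  /etc/mosquitto/server.key
require_certificate true
use_identity_as_username true   # CN of the client cert becomes the username
acl_file /etc/mosquitto/acl

# /etc/mosquitto/acl -- each device only sees its own topics
# %u expands to the username, i.e. the cert CN (e.g. device-00427)
pattern readwrite devices/%u/#
```

with that in place, even a leaked device cert only exposes that one device's topics instead of the whole fleet.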
I discovered a very similar vulnerability in Mysa smart thermostats a year ago, also involving MQTT, and also allowing me to view and control anyone's thermostat anywhere in the world:
https://news.ycombinator.com/item?id=43392991
Also discovered during reverse-engineering of the devices’ communications protocols.
This smells like bullshit to me, although I am admittedly not experienced with Claude.
I find it difficult to believe that a sleep mask exists with the features listed: "EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio." while also being something you can strap to your face and comfortably sleep in, with battery capacity sufficient for several hours of sleep.
I also wonder how Claude probed Bluetooth. Does Claude have access to a Bluetooth interface? Why? Perhaps it wrote a secondary program and then ran that, but the article describes it as Claude probing directly.
I'm also skeptical of Claude's ability to produce an accurate reverse-engineered Bluetooth protocol. This is at least a little more of an LLM-appropriate task, but I suspect a lot of chaff was also produced that the article's author separated from the wheat.
If any of this happened at all. No hardware mentioned, no company, no actual protocol description published, no library provided.
It makes a nice, vague, futuristic cyberpunk story, but there's no meat on those bones.
Claude could access anything on your device, including system or third-party commands for networking or signal processing; it may even have their manuals/sites/man pages in the training set. It's remarkably good at figuring things out, and you can watch the reasoning output. There are MCP tools for reverse engineering that can give it even higher-level abilities (Ghidra is a popular one).
Yesterday I watched it try to work around some filesystem permission restrictions; it tried a lot of things I would never have thought of, and it was eventually successful. I was kinda goading it, though.
A lot of BLE peripherals are very easy to probe. And there are libraries available for most popular languages that allow you to connect to a peripheral and poke at any exposed internals with little effort.
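For instance, in Python the Bleak library gets you from nothing to a full GATT dump in about a dozen lines. A sketch; the device name is hypothetical:

```python
import asyncio
from bleak import BleakClient, BleakScanner

async def main():
    # Scan for advertising peripherals nearby
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

    # Connect to one and dump its whole GATT table
    target = next(d for d in devices if d.name == "SleepMask")  # hypothetical name
    async with BleakClient(target.address) as client:
        for service in client.services:
            print("service", service.uuid)
            for char in service.characteristics:
                print("  char", char.uuid, char.properties)

asyncio.run(main())
```

Any characteristic flagged readable or notifiable can then be polled or subscribed to, which is usually enough to start correlating bytes with device behaviour.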
As for the reverse engineering, the author claims that all it took was dumping the strings from the Dart binary to see what was being sent to the Bluetooth device. It's plausible, and I would give them the benefit of the doubt here.
Found that in seconds. EEG, electrical stimulation, heat, audio, etc. Claims a 20-hour battery.
As to the Claude interactions, like others I am suspicious and it seems overly idealized and simplified. Claude can't search for BT devices, but you could hook it up with an MCP that does that. You can hook it up with a decompiler MCP. And on and on. But it's more involved than this story details.
That appears to be more than a centimeter thick, and not particularly flexible. It's more like ski goggles than a sleep mask.
So yeah, a product exists that claims to be a sleep mask with these features. Maybe someone could even sleep while wearing that thing, as long as they sleep on their back and don't move around too much. I remain skeptical that it actually does the things it claims and has the battery life it claims. This is kickstarter after all. Regardless, this would qualify as the device in question for the article. Or at least inspiration for it.
Without evidence such as Wireshark logs, programs, or protocol documentation, I'm not convinced that any of this actually _happened_.
Claude, or any good agent, doesn't need MCP to do things. As long as it has access to a shell it can craft any command that it needs to fulfill its prompt.
There are no shell commands to do what is described. I could get Claude to interact with BLE devices, but it did it by writing and running various helper applications, for instance using the Bleak library. So I guess not an MCP per se.
I was originally going to ask something similar, but from a different angle.
These blog posts now making the rounds on HN are the usual reverse engineering stories, but made a lot more compelling simply because they involve using AI.
Never mind that the AI part isn't doing any heavy lifting and is probably just as tedious as not using AI in the first place. I am confused why the author mentions it so prominently. Past authors would not have been so dramatic; they would just have waved their hands that they had some trial and error before finding out how the app was built. The focus would have been on the lack of auth and the funny stuff they did before reporting it to the devs.
Is this some kind of joke? Claude hallucinated everything, including the capacity of the device to accurately measure EEG brain waves, and hallucinated the process of decoding the APK for some paranoid user, who posted his conspiracy-level AI-hallucinated "finds" to his blog, and everyone is like "Yeah, Claude can do this". Is everyone here insane? Am I insane?
As an aside, it seems cool that LLMs have lowered the bar to reverse engineering. Maybe we'll get to take full control of many of these "smart" devices that require proprietary/spyware apps and use them in a fully private way. There's no excuse for an app that exists solely to interact with a local device, like a dishwasher's, to need an internet connection.
It's disappointing to see. It doesn't take much work to configure an MQTT server to require client certificates for all connections. It does require an extra provisioning step to give each device a client certificate, but for a commercial product, skipping that is inexcusably negligent.
Then there's hardening your peripheral and central device/app against the kinds of spoofing attacks that are described in this blog post.
If your peripheral and central device can securely [0] store key material, then (in addition to the standard security features that come with the Bluetooth protocol) one may implement mutual authentication between the central and peripheral devices and, optionally, encryption of the data that is transmitted across that connection.
Then, as long as your peripheral and central devices are programmed to only ever respond when presented with signatures that can be verified by a trusted public key, the spoofing and probing demonstrated here simply won't work (unless somebody reverse engineers the app running on the central device to change its behaviour after the signature verification has been performed).
To protect against that, you'd have to introduce server-mediated authorisation. On Android, that would require things like the Play Integrity API and app signatures. Then, if the server verifies that the instance of the app running on the central device is unmodified, it can issue a token that the central device can send to the peripheral for verification in addition to the signatures from the previous step.
Alternatively, you could also have the server generate the actual command frames that the central device sends to the peripheral. The server would provide the raw command frame and the command frame signed with its own key, which can be verified by the peripheral.
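A minimal sketch of that last scheme with Ed25519 via the cryptography package. Key handling is compressed for illustration: in practice the private key stays on the server (ideally in an HSM), the public key is burned into the peripheral's firmware, and the signed frame would include a counter or nonce to block replays:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Server side: sign the raw command frame before handing it to the app.
server_key = Ed25519PrivateKey.generate()   # illustration only; lives server-side
frame = bytes([0x02, 0x10, 0x64])           # hypothetical "set heat level" frame
signature = server_key.sign(frame)

# Peripheral side: verify against the trusted public key in firmware.
trusted_public_key = server_key.public_key()
try:
    trusted_public_key.verify(signature, frame)
    # signature checks out: execute the frame
except InvalidSignature:
    # drop it; a spoofing central can't mint valid signatures
    pass
```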
I guess I got a bit carried away here. Certainly, not every peripheral needs that level of security, and I'm not sure into which category this device falls. On the one hand, it's not a security device, like an electronic door lock. And on the other hand, it's a very personal peripheral with some unusual capabilities like the electrical muscle stimulation gizmo and the room occupancy sensor.
[0]: Like with the Android KeyStore and whichever HSMs are used in microcontrollers, so that keys can't be extracted by just dumping strings from a binary.
Interesting project. Here's a thought which I've always had in the back of my mind, ever since I saw something similar in an episode of Buck Rogers (70s-80s)! Many people struggle with falling asleep due to persistent beta waves; natural theta predominance is needed but often delayed. Imagine an "INEXPENSIVE" smart sleep mask that facilitates sleep onset by inducing brain wave transitions from beta (wakeful, high-frequency) to alpha (8-13 Hz, relaxed) and then theta (4-8 Hz, stage 1 light sleep) via non-invasive stimulation. A solution could be a comfortable eye mask with integrated headphones (unintrusive) and EEG sensors.
It could use binaural beats or similar audio stimulation to "inject" alpha/theta frequencies externally, guiding the brain to a tipping point for abrupt sleep onset. Sensors would detect the current waves; app-controlled audio ramps from alpha-inducing beats down to theta, ensuring natural predominance. If it could be designed, it could accelerate the sleep transition and improve sleep quality, non-pharmacologically.
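The audio half of this is easy to prototype, whatever one makes of the entrainment evidence. A minimal numpy sketch of a theta-range binaural beat (the carrier and offset values are illustrative):

```python
import numpy as np
from scipy.io import wavfile

fs = 44100        # sample rate (Hz)
duration = 60     # seconds
carrier = 220.0   # audible base tone (Hz)
beat = 6.0        # theta-range difference (Hz)

t = np.arange(fs * duration) / fs
left = np.sin(2 * np.pi * carrier * t)            # one ear hears 220 Hz
right = np.sin(2 * np.pi * (carrier + beat) * t)  # the other hears 226 Hz
stereo = np.stack([left, right], axis=1)

# The brain perceives the 6 Hz difference between the ears as a slow beat.
wavfile.write("theta_beat.wav", fs, (stereo * 0.5 * 32767).astype(np.int16))
```

The hard part of the product idea isn't this; it's the closed loop, i.e. reading clean EEG through a fabric mask and deciding when to ramp the audio.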
> For obvious reasons, I am not naming the product/company here, but have reached out to inform them about the issue.
Coward. The only way to challenge this garbage is to name and shame. Light a fire under their asses. That fire can encourage them to do right, and serve as a warning to all other companies.
Doesn't disclosing this to the world at the same time as you disclose it to the company immediately send hundreds of black hats to their terminals to see how much chaos they can create before the company implements a fix?
Perhaps the author is not a coward, but is giving the company time to respond and commit to a fix for the benefit of other owners who could suffer harm.
So your plan is to let the blackhats in the know attack user devices, rather than send out a large warning to "Quit using immediately"?
If we applied a similar analogy to an E. coli contamination of food, your recommendation amounts to: "If we say the company name, the company would be shamed and lose money, and people might abuse the food."
People need to know this device is NOT SAFE on your network, paired to your phone, or anything. And that requires direct and public notification.
I did consider naming, but they were very responsive to the disclosure and I was not entirely familiar with potential legal implications of doing so. (For what it's worth, it is not Luuna)
It's good that they were responsive in the disclosure, but it's still a mark of sloppiness that this was done in the first place, and I'd like to know so I can avoid them.
Even if naming and shaming doesn't work, I sure want to know so I can always avoid them for myself and my family. Thanks for the call-out and the educated guess.
There should be two separate lines of products: one in which privacy is the priority and which adheres to government regulations (around privacy) and probably costs 2x, and one with zero government intervention (around privacy) which costs less and gets to market faster.
I don't want a few irrationally paranoid people bottlenecking progress and access to the latest technology and innovation.
I'm happy to broadcast my brainwaves on an open YouTube channel for the ZERO people who are interested in it.
> I don't want a few irrationally paranoid people bottlenecking progress and access to the latest technology and innovation.
Paranoid? Is there not enough evidence posted almost daily on HN that tech companies are constantly spying on their users through computers, Internet-of-Shit devices, phones, cars and even washing machines? You might not care about the brainwave data specifically, but there is bound to be information on your devices that you expect remains private.
Things have become so bad that I now refuse to use computers that don't run a DIY Linux distro like Arch that allows users to decide what goes into their system. My phone runs GrapheneOS because Google and Apple can't be trusted. I self host email and other "cloud" services for the same reason.
It’s kinda like “qualified investors”: you want to make sure people who are willing to do something extremely stupid can afford it and acknowledge their stupidity.
We don’t need regulation to protect those that can afford to buy protection: we need it for those who can’t.
It is also technically a user failure to have purchased a connected device in the first place. Does the device require a closed-source proprietary app? Closed-source non-replaceable OS? Do not buy it.
Very few options available, if any, if you actually do that. The IoT market is unfortunately small and dominated by vendors that don’t want an open ecosystem at all. That would hinder their ability to force you to pay for a subscription, which is where all the money is.
Yes, that’s right, don’t buy any new car, any phone, any television. Hell, don’t buy any x86 laptop or desktop computer, since you can’t disable or replace Intel ME/etc.