Maybe I'm in the minority... but this seems like an extremely compelling offering for certain use-cases. Not for enterprises, but for individuals and small businesses.
My off-site backup is a ThinkPad X230 with a 1 TB HDD. It's currently at my friend's house, and I access it with Tailscale. €7/month to colocate this in a datacenter with stable (and fast) internet + power seems like a pretty good deal.
I can understand some of the concerns with user-provided hardware. Maybe a better model would be for CoLaptop to offer hardware themselves. This would allow them to standardize on a few models, which opens up many possible improvements such as central DC power, power-efficient BIOS settings, enclosures with cooling ducts, etc. They could still follow the "old laptop as a server" model by buying off-lease laptops from the corporate world.
Doing what many sites do may actually invalidate your copyright notice. You have to list every year in which you made copyrightable edits to the page. A range like 2010-2025 is only allowed if every single year in that range is included.
1) You don't have to keep copyright notices up to date (and in fact you don't have to include them at all). 2) Every single startup I've seen on HN is sketchy af. Racking laptops in a cage at a Hetzner DC is probably the least sketchy product I've seen here.
And honestly, not a terrible idea, I have old laptops that would work as a VPS. $7/month for somebody to host a public server for me, and not on my crappy residential isp? All I have to lose is an old laptop I haven't touched in 5 years? Sign me up
(they do need a real domain before I'll give them money tho, lol)
Yeah, but for $6/mo you can get a tiny Linode or DigitalOcean droplet and not worry about hardware failing. It's true that a laptop probably has more resources than the smallest VMs, but it has no remote management interface and can't scale if you suddenly have a surge of traffic.
> Yeah but for $6/mo you can get a tiny linode or digital ocean droplet
That gets you, what, 1 "vCPU" with maybe a gig of ram and a couple of dozen gig of disk.
If you (or a friend) work for a company of any size, there's probably a cupboard full of laptops that won't upgrade to Win11 sitting there doing nothing, which you could get for free just by asking the right person. One of those will have 4 or 8 cores, each of which is more powerful than the "vCPU" in that droplet. It'll have 8 or maybe 16 GB of RAM and at least half a TB of disk, and depending on the laptop it can quite likely be configured with half a TB of fast NVMe storage and a few TB of slower spinning-rust storage.
If you want 8vCPUs/cores, 16GB of ram, and 500GB of SSD, all of a sudden Digital Ocean looks more like $250/month.
If you are somewhere in that grey area where you need more than 1 vCPU and 1 GB of memory, grabbing the laptop out of the cupboard that your PM or one of the admin staff upgraded from last year and shipping it off to a datacenter with your flavour of Linux installed seems worth considering.
Hell, get together with a friend and have two laptops hosted for €14/month between you, and be each other's "failing hardware" backup plan...
I bet colos will plug a KVM into your hardware and give you remote access to that KVM. I also bet rachelbythebay has at least one article that talks about the topic.
> ...can't scale if you suddenly had a surge of traffic.
1) If your public server serves entirely or nearly-entirely static data, you're going to saturate your network before you saturate the CPU resources on that laptop.
2) Even if it isn't, computers are way faster than folks give them credit for when you're not weighing them down with Kubernetes and/or running swarms of VMs. [0]
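A quick back-of-the-envelope check of the bandwidth-vs-CPU claim above; the link speed and page size here are illustrative assumptions, not measurements:

```python
# If the uplink is 1 Gbit/s and a typical static page is ~100 KiB, the
# network caps request throughput at roughly 1,200 req/s -- a ceiling
# even an old laptop's CPU can comfortably reach serving static files.

LINK_BITS_PER_S = 1_000_000_000   # assumed 1 Gbit/s uplink
PAGE_BYTES = 100 * 1024           # assumed 100 KiB static page

ceiling = LINK_BITS_PER_S / 8 / PAGE_BYTES
print(f"network-limited ceiling: {ceiling:.0f} req/s")
```

Anything beyond that rate is bottlenecked by the pipe, not the hardware.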
Yeah. I got bored a couple of hours after I posted that speculation and found several other colo facilities that mentioned that they'd do remote KVM. I'd figured that it was a common thing (a fair chunk of hardware you might want to colo either doesn't have IPMI or doesn't have IPMI that's worth a damn), but wasn't sure.
You (the person paying to co-locate hardware) don't buy the KVM that the colo facility uses. The colo facility hooks up the KVM that they own to your hardware and configures it so that you can access it. Once you stop paying to colo your hardware, you take your hardware back (or maybe pay them to dispose of it, I guess) and they keep the KVM, because it's theirs.
k8s doesn't really weigh you down, especially if tuned for the low-end use case (k1s). It encourages some dumb decisions that do, such as running the Prometheus stack with default settings, but by itself it just eats a lot of RAM.
Now, using CPU limits in k8s with cgroups v1 does hurt performance. But doing that would hurt performance without k8s too.
> Website copyright is out of date by two years...
Can you explain how a copyright can be "out of date by two years"?
I always thought the copyright notice should reflect the year of creation, and that it's actually bad (from a legal POV) to always show the current year through scripting.
The problem is that the website says they are still working out the logistics details. If the company has been around for 2 years they should have figured that out and updated the page by now.
So many people want to believe in this sort of thing, for various reasons, that I get fatigued at the very thought of trying to explain to people who believe in it earnestly that it is not a good idea. (e.g. commercial hosting services are really competitive; for a long time the cost of computing has been going down over time, though I don't know if that is reversing because we've hit the end of the real Moore's law [1] or if it is a temporary blip)
[1] the motor behind it is cost reduction, once that stops it stops because we can't afford it anymore!
Well, it exists, but only if you're willing to buy server hardware on eBay, hustle to get old parts working together, negotiate a good deal on a cabinet, get address space from ARIN and announce it, and so on. There are probably 10-50x cost efficiencies vs. renting 5-year-old CPU families on AWS at huge markup.
A laptop isn’t the way to do that though. And your typical VC-fueled startup isn’t going to know how to do it either. It takes a very narrow slice of competence to be able to do that correctly.
I think it's most likely testing the waters for a real offering. It's not that weird. Many colo data centers already have policies about hosting laptops because it's already something that happens. It just isn't common and usually isn't for hosting servers.
If the battery in the laptop is still good, it comes with its own UPS. My MBPs haven't had an ethernet port in a minute, so do you have to supply your own adapters as well??? You could fit ~15 MBPs on their edge in 9 RU. That'd be an interesting-looking rack. Not quite a blade chassis. It'd be rather boring looking, as there are no blinky-blinkies.
I didn't really think that any of what I wrote would be taken seriously to the point of needing a retort. I mentioned blade servers and knew rack-unit measurements, which, as context clues, should have suggested I was familiar with actual data center equipment.
If you got creative with cable management you might be able to double up front and rear. It would probably be a PITA to manage but you could probably get some halfway decent density
Looks like they were proposing supplying usb Ethernet adapters, which doesn’t seem crazy, they’re cheap.
Also, isn't this just a huge fire hazard if they actually do what they claim? Or will they remove the batteries from these old, continually plugged-in, poorly cooled laptops?
Colocation itself, though, isn't new at all. There are lots of different ways to host, including servers and Mac minis; laptops are conceivable too because they share the same kinds of parts that Mac minis have.
It’s OKish as a starting point into the self-hosted world, but overall not ideal. The battery is a fire risk and the entire thermal design isn’t really geared towards 24/7 operation.
Not really something I’d colocate unless it was a DC physically near me, so that stopping by is easy.
> The battery is a fire risk and the entire thermal design isn’t really geared towards 24/7 operation.
I remember having this old Dell Latitude, where you could easily swap out the battery pack with a button/tab thing on the back, without having to open anything else up - I even got a spare bigger capacity battery, but it would work without one altogether when connected to the power brick.
I unironically think that all laptops should be built like that.
First thing I learned attempting the same is that lid open vs closed are two very different situations in terms of thermals.
But overall without aggressive throttling these devices work a maximum of half an hour before the components get saturated with heat and performance tanks.
Yup. Gaming laptops with dedicated GPUs tend to fare better on this because internally the CPU and GPU are often bridged with a heat pipe and share cooling.
So the thermals are spec'd for both running at the same time, but you only need the CPU for a home server, so it shouldn't throttle.
> Your old laptop packs more CPU power, RAM, and storage than their entry-level offerings - and with us, you'll pay just €7/month for professional hosting
This is basically the same price as the cheapest options on Hetzner: https://snipboard.io/C9epWo.jpg. Sure, my old laptop does have more RAM and a bigger SSD, but I bet it's also less reliable than Hetzner's servers, and is likely to suddenly die some day. So is the tradeoff really worth it? It's hard for me to believe that this is a genuine improvement for most things. The only definite winning case I can think of is if I have a process I want to run but don't care if it suddenly stops working. But when would that ever be the case? And to save a couple of dollars per month?
> I bet it's also less reliable than Hetzner's servers, and is likely to suddenly die some day
I’m a happy Hetzner customer but I have had servers that I rented from them die a couple of times.
I rent physical servers from them that have been previously rented to other customers. At some point hard drives fail.
However, I have a solid backup setup in place (ZFS send and recv to other physical hosts in different physical locations) with that in mind, so I haven’t lost data with Hetzner. But if I naively had no backups, then data would have been lost a couple of times.
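The replication setup described above amounts to piping an incremental `zfs send` into `zfs recv` on each remote host. A minimal sketch of one replication step; all dataset, snapshot, and host names here are hypothetical:

```python
# Build the shell pipeline for one incremental ZFS replication step.
# Names are placeholders; -i sends only the delta between two snapshots,
# and -F on recv rolls the target back to the prior snapshot first.

def replication_cmd(dataset, prev_snap, new_snap, remote, remote_dataset):
    send = f"zfs send -i {dataset}@{prev_snap} {dataset}@{new_snap}"
    recv = f"ssh {remote} zfs recv -F {remote_dataset}"
    return f"{send} | {recv}"

print(replication_cmd("tank/data", "daily-0101", "daily-0102",
                      "offsite-host", "backup/data"))
```

In practice you'd run a step like this from cron after snapshotting, one pipeline per remote location.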
> I rent physical servers from them that have been previously rented to other customers. At some point hard drives fail.
The comparison in this case is to Hetzner's VPS offerings, which are probably less powerful than the average "old laptop" but have a significant advantage in terms of hardware reliability. It's still possible for the host running the VPS to have problems which result in a crash or the VM equivalent of a hard power off but the VM hosts and their underlying storage should be redundant such that the virtual hardware never fails.
That's not to say rebooting from a crash-consistent state will always work, you should always keep backups even with a high-quality VPS host, but the odds of recovering cleanly from a hardware problem are orders of magnitude better than an old laptop. For the sort of hobby project or personal tinker box that would be reasonable to host on a random laptop shoved in a rack you probably wouldn't even notice the downtime until you saw the event notification email your provider sends you.
I've run a 7-figure business from an SSD shoved in a SATA2 DVD-ROM slot in a DC, because the end customer was being obtuse about upgrading from their "high end, best practice" RAID 10 disks.
You use so many big words for nothing. All you need are backups. When it dies you restore. Nobody will care.
Of course. Just pointing out that even if the hardware might be server grade, doesn’t mean one can assume that the risk of hardware failure is negligibly low. And that one always needs to have offsite backups.
Not sure how Hetzner works, but do they have iDRAC-type access to their servers and/or remote hands available to fix stuff? I guess you'd be on the hook for that sort of thing here, making the Hetzner price more appealing if they do include that kind of functionality.
The advantage of a laptop is exactly that you can easily host it at home and own everything. I have one - with a UPS that also holds the router, the fiber equipment, and an external HDD. I'm actually working right now on version 2.0, a beefed-up version - still a used laptop (found a great deal on a Lenovo P1), but slightly more expensive, and I'm waiting on some parts to upgrade. It should be able to hold even the production environment in a pinch.
Ah, and obviously you put Claude/Codex on it, so your actual work is just... installing Claude, and maybe Linux. The rest is done by the AI - setup, scripting, etc.
As a colocated option... I see it work for some people. But it'd be a niche offering, when the whole value proposition is "make my own, with blackjack and hookers".
It's OK if you can physically remove the battery. I'm pretty sure I've read multiple times that laptop thermals and battery engineering are optimized for daily use in open areas, not for safely running workloads 24/7 in a closet.
My home lab rack is a toast rack. Literally, that's how I hold the laptops vertically so they get decent airflow, and it also makes for very easy access. As soon as you go past one laptop it's a thing.
There are laptops with ECC RAM, but they are uncommon.
Otherwise, the effect of memory errors depends on the use case.
If the laptop or mini-PC is used as a router/firewall/Internet gateway, then memory errors are usually not important, because they would result in corrupted network packets that are likely to be detected at the endpoints of a network connection.
If the laptop or mini-PC is used as an e-mail server or a Web server, then a fraction of the memory errors may result in a stored file that becomes corrupted.
At the small amounts of memory typical for a laptop or mini-PC, unless the PC is many years old, there should be no more than a few memory errors per year. The majority of those errors might not result in file corruption, but sometimes they may cause weird behavior requiring a reboot.
Anecdotally, over the years I have seen on the Internet a non-negligible number of big files, e.g. movies, which appear to have bit flips likely caused by their hosting on servers without ECC memory. Fortunately, in movies a small number of bit flips will not cause severe quality degradation.
With more valuable data, one must use ECC memory to avoid such problems.
It's a page hosted on Cloudflare's "pages.dev" service. Their method of contact is a Google Form, which does have an email address on the domain "CoLaptop [dot] com", but that domain does not work as a web address.
Maybe it's someone's old project idea that they never got around to finishing, and OP randomly found it and posted it here. Maybe it was never meant to be shared.
I work for IPinfo and we operate a distributed network of around 1,400 servers. I think we have reached a point where it is extremely hard for us to purchase VPSes from interesting ASNs.
To support lots of ISPs, universities, and other organizations, we have been asking them if they have an old laptop lying around that they can host our software on. The goal is to reach 70,000 probes within the next couple of years.
It is simple probe software, and we share some data or can pay 20-30 bucks a month for it. We have a couple of NUCs in remote regions but no laptops yet. Basically, we are even happy if an ISP (or anyone) hosts our software on a laptop dangling by a charging cable from a socket in some random corner.
We can send over an RPi or NUC, but with remote hands, setup, and all that, it can get quite expensive. So we always first ask if they have an old laptop lying around where they can install our software.
For us, at least, we are not interested in the hardware aspect. We are interested in the network. The old laptop approach only acts as a last resort. We will be more than happy to go with the predictability of a traditional VPS hosted in a traditional data center. Colocation, no matter what form it takes, involves a lot of moving parts.
Interesting challenge! My first thought: 70k probes is a lot, and having to set that up is quite a task. Why not develop a phone app with exit-node capabilities (similar to Tailscale) so you can use that for probing? The real win is that people move around, getting you even more data points from other networks.
We actually have app-based data collection capabilities and initiatives. Our goal, or more appropriately, vision, is to map the internet in real time. This involves SSH access to devices to run different forms of measurements at a very high frequency and have control over those devices.
Managing 70k probes is not going to be super hard.
Managing 1,400 servers is just a normal business operation, not a technical challenge. Each probe has a standard OS-level configuration. Automation and configuration are deployed from a central system. Each probe is actively monitored and troubleshot. Data is dumped to a data warehouse. We make incremental improvements to our network. When servers go down, we talk to vendors.
We do a lot of novel engineering across the infrastructure, data, and research teams. Having a nearly identical set of servers really allows us to focus on product and performance engineering, not troubleshooting engineering. Application-based probing, I assume, would complicate things quite a bit, as there are different operating systems, different devices, etc.
For us, lately the challenge is not technical. It has been exclusively procurement. This quarter (https://ipinfo.io/blog/probenet-q1-2026-expansion), we exclusively focused on regional diversity which involved outreach to national ISPs or telecoms. Securing servers from telecoms is an extremely bureaucratic and expensive process. So, we are hoping to partner up with eyeball networks and the larger NOG community.
Suppose we set aside the concerns in this thread about the legitimacy of this.
How would this work when the old hardware inevitably needs to be serviced (mechanical hard drive failure, memory errors, dust buildup, etc)?
Would they have technicians on-site available to service whatever random laptop you send them? If your laptop dies do they ship it back to you so you can fix it and send it back?
Or what if you bork the OS by accident? Will their KVM solution allow you to upload an ISO and plug it in over some USB drive emulation?
Funny, I had a similar idea this morning in the shower. I was thinking about how distributed digital infrastructure could be achieved in practice. Running a music-streaming and photo server on an old laptop at home that I access via Tailscale has proved surprisingly smooth. I feel there is some future in empowering users by giving them access to a cloud on hardware that is actually owned by the user. It would be a way to achieve absolute digital freedom, no lock-in, and, if done in a secure way, privacy friendliness. Hell, it's the OG idea of the internet! The question is how to bring this to non-technical users. I know many people who are getting sick of paying each month to both Apple and Google for storing their ever-growing pile of pictures. This solution of course does imply some sort of lock-in, as you're tied to a subscription and it's probably quite the hassle to get your laptop back. Also, the fire hazard seems like a legit concern. I nevertheless do hear some music here.
Old laptops as low-cost servers? Absolutely: build a homelab in your own basement, rent a cheap VPS, set up WireGuard and voilà - instant data center for tens of dollars per month. It's not production-grade, but you'll learn a ton.
But colocation?
Strip away the learning component and add production uptime requirements - why would you even consider using crusty old laptops for this? If you have production grade needs, look to a standard cloud provider or, at the very least, a colo facility where you can put production-grade equipment.
I don't see it. Hobby projects can use a VPN tunnel to make a data center from local equipment. Real projects that choose colocation have uptime requirements that simply can't be met by random consumer hardware. The venn diagrams don't intersect.
There's no middle ground where you try to run a real business on old laptops. That's insane. You either keep things small/hobby and stay simple, or graduate to production-grade equipment once you have real requirements.
The middle ground, taking on production colocation problems plus the unreliability of random hardware, sounds like the worst of both worlds. There are both simpler and more robust options.
Initially I believe Google was known for getting unreliable hardware with good software to manage it (a single laptop probably won't cut it, but a bunch of laptops scattered around the globe could be interesting -- when you grow things fail all the time anyways).
Who are they even targeting? It looks like no one at all.
Just do the math: to cover a measly €2000 a month - the salary of a cashier in Amsterdam - you already need 285 clients, and that's before taxes and overhead.
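Spelled out, that break-even arithmetic looks like this (the €2000 salary and €7 fee are from the comment above; rounding up to whole clients gives 286):

```python
import math

salary = 2000   # EUR/month to cover
fee = 7         # EUR/month per colocated laptop

clients = math.ceil(salary / fee)   # can't have a fractional client
print(clients)  # 286
```

And that covers one salary only, before rack space, bandwidth, insurance, or any margin.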
I have always dreamed of replacing a really expensive rack of servers with a couple of elderly laptops, with their built-in UPSes, handy screens, keyboards, and trackpads. However, for pet projects, I now have a better way of being a cheapskate.
Some ecommerce software stacks really need gargantuan amounts of RAM and CPU, which gets expensive on the cloud. However, with some software it's possible to have everything massively cached in the cloud, with the origin server in my basement accessible only from the cache layer, which keeps the setup reasonably secure and cheap.
Downsides to this, having customer details in the basement rather than a secure facility, but how many developers have huge customer databases just casually lying around on USB sticks and whatnot? It happens.
Just gonna point this out since I noticed it a few weeks ago and the notice is still there: Hetzner has paused selling new colocation service: https://www.hetzner.com/colocation/
I am surprised a serious facility would be happy having 100 old LiPo batteries in a rack. That is a (nasty) fire waiting to happen, IMO. These are old batteries that may even have minor physical damage from being dropped, and they'll sit in a ~25-30°C environment.
What's the difference between this and 1) setting up my laptop at home and 2) connecting to it through Tailscale?
I lose ownership of my laptop, you install whatever software you want on it (with the security risks that entails), and in turn "you let me connect to my computer"?
I do this in my homelab and it's a really fun thing to do. Collect old laptops and install Linux, or Tart for macOS, and suddenly you have a fleet of computing power you'd otherwise pay thousands to AWS for. Building reliability and failover is actually a fun engineering problem; use CockroachDB and RustFS. Adding capacity is just a matter of scouting for second-hand e-waste.
> Your laptop should be fully functional with a working power supply and either an ethernet port or USB port for connectivity. Age isn't a factor. We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.
So they're going to open the laptop up and make hardware modifications to random laptops sent in? May as well have a VPS at that point.
A far better business offering would have been to offer pre-selected physical devices where such things are well known.
I have never colo'd my laptop, but I do work off my Windows laptop from my Mac via Parsec (remote-viewing software for gaming), by flipping system settings so my Windows machine never turns off when plugged in with the lid closed. There are obviously hiccups (if the internet goes out, if Windows decides to restart for an update, etc.), but it mostly just works, and I think I've only had 2 instances in the past 3 months where it's gone offline. I use Tailscale on top to provide a universal mouse server for my 3D mouse, and I'm able to magically CAD from my Mac.
Highly recommend if you need to use one OS/machine for some specific software (especially if it's beefy/heavy) but prefer using another as your daily driver.
Old laptops are also notorious[1] for being fire bombs with bad batteries.
If I was a Hetzner customer, I'd be pissed if my server burned because someone's 2-minute-battery-life, 10-year-old school PC was hosted in the neighbouring rack.
Doesn't seem like a great business idea.
[1]: anecdotally, seems everyone has a laptop lying around with a cursed battery
There is no way they are partnering with Hetzner, or charging just €7/month flat rate... they specifically want to know the model of the laptop, and offer to send a courier to your door...
I would be really surprised if this was a scam. It doesn't have the smell of a scam at all. Who would target a very tech savvy audience just to get old laptops?
Given that the "sign up" link goes to a survey form, my guess is this is just some idea someone had and they made this page to see if anyone actually wants it before they put any effort into making it happen.
Colo scams are pretty common. Some percentage of people will offer to send expensive laptops, and the scammers can discard the rest of "interested customers".
It is not viable to colo old laptops - a regulatory nightmare - and Hetzner would NEVER accept those in their datacenters. It is also absurd to think they are partnering with Hetzner to begin with.
It makes no sense to believe they will even EXPORT laptops from Europe to the US if you choose the US location. It just makes no sense, so I don't get why I am getting downvoted.
Any recommendations for inexpensive colo for personal projects/servers? A few years ago I ran across a few links for places to host a box, and I didn't save them, and have regretted it.
ISTR one was basically just industrial office space running a lower-tier colo, and another was some guys in a metro area who got a rack in a data center and were spreading the cost around with other like-minded folks. At my work I have machines in an Iron Mountain facility, but for personal projects I don't need anything like that - I'd just like something more capable than the couple of AWS VMs I'm paying $80/mo for.
EC2 is pricey if it's all you're getting from AWS. If you have weird requirements, colo may be a good option. Otherwise, just get a VPS or 3 and be done with it. You'll get a virtual KVM that lets you boot off an ISO and set it up the way you want.
Vultr, DigitalOcean, Linode, ... are long established VPS players.
I'm cheap and buy VPSes off deals on lowendtalk.com. e.g. my backups are on a VPS with 3TB disk, 2GB RAM, 1 vCPU, USD7/mo. I suspect your USD80/mo budget would stretch to something amazing, by comparison.
The core density is really low. You can run a 96 core Epyc from the previous generation at 700 W and that’s a lot of compute. It makes sense for a home server (and I have an old Mac playing that role at home) but otherwise I don’t think it makes sense unless you’re taking off the display and racking them super tight.
Even then, you’re probably better off with Cloudflare tunnel and using it as a home server.
Not sure if this is legit... I could see it working well enough if they require the laptop to support at least, say, Thunderbolt 3/USB4 - then they can use a single connection to a management/dock interface that includes a network connection (1 GbE/2.5 GbE).
The trouble is that a lot of laptops won't power on with the screen closed, and they have heavy sleep/suspend behaviors in general. Then there's the general airflow in whatever shelving system is used, assuming 2-4 laptops per shelf per 1U. And one would probably want some means of ensuring appropriate driver support, or an appropriate Linux or other setup for said hardware.
While I can see it working, depending on shipping costs can definitely see some problematic bits.
I don't want to crap on peoples ideas. Really, I don't.
But getting some computer with unknown hardware out of a closet and turning it into a server, at scale, is an impossible scheme.
The only way to make it work would be to buy hundreds of laptops at once, refurb them, fit new storage, and standardize on custom power delivery - because who wants hundreds of laptop PSUs plugged into power strips? And those do in fact die.
And then there's the horror of manually removing WiFi hardware and batteries. Battery disposal is an issue. And having worked on hundreds of laptops, some of them are major pains in the neck to get to the battery - consumer HPs come to mind. The bottom cover can be difficult to remove without breaking any of the clips.
Is it? It doesn't sound outrageous to me, given that they provide you with however much power you need, and also networking and people to maintain the facility.
I asked ChatGPT to estimate how much drawing 15 W continuously in Amsterdam would cost per month, and it came up with a range of €2.58-3.41. So that's potentially more than half of their fee.
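That estimate is easy to reproduce; the €/kWh rates below are my own assumptions in the ballpark of Dutch residential prices, not quoted tariffs:

```python
# Monthly energy cost of a constant 15 W draw.
watts = 15
kwh_per_month = watts * 24 * 30 / 1000   # 10.8 kWh

for rate in (0.24, 0.32):                # assumed EUR per kWh
    print(f"EUR {kwh_per_month * rate:.2f}/month at EUR {rate}/kWh")
```

At those rates you land around €2.60-3.46/month, so power alone really does eat a big chunk of a €7 fee.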
If your laptop is particularly power efficient, you'd also be subsidizing higher-powered laptops. As far as I see there's nothing preventing you from sending a 400W gaming laptop and mining crypto or running an LLM agent 24/7.
The cheapest USB KVM-over-IP costs about €50 - that's over 7 months of colo fees gone.
Colo 'remote hands' in western countries can cost €120/hour once all expenses and overheads are taken into account. Admittedly, that's for someone to drop what they're doing and rush to your server. But getting that laptop unpacked, checked over, labelled, installed in a rack, associated with a customer account, powered up, and working is going to cost 3 months of fees at least.
One laptop gets lost or damaged during shipping, or shows up mysteriously broken when the customer claims it worked when they sent it? That's a €200 device gone, 28 months of colo fees. You can argue your way out of it, but the guy doing the arguing is the €120/hour remote hands guy.
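To put rough numbers on that onboarding cost: the €120/hour rate and €7/month fee come from the comment above, while the half-hour setup time is a pure guess.

```python
fee = 7              # EUR/month colo fee
hands_rate = 120     # EUR/hour for remote hands, fully loaded
setup_hours = 0.5    # guessed time to unpack, check, label, rack one laptop

setup_cost = hands_rate * setup_hours
print(f"setup ~EUR {setup_cost:.0f}, i.e. {setup_cost / fee:.1f} months of fees")
```

Even under these generous assumptions, onboarding a single laptop consumes the better part of a year's fees before any ongoing costs.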
All the big 3 cloud providers suck if you use them purely as VPS. I’ve tried AWS Lightsail (basically, slightly cheaper EC2) and it’s so much slower than what I’d expect from a similar spec VM from a normal hosting provider.
Hetzner, DigitalOcean, OVH, Vultr are some of the better-known ones. Personally, I’m very happy with SSD Nodes. Paying $90/yr for a 4 vCPU[1] / 16 GB / 320 GB SSD, had some downtime exactly once in two years (they’ve had to switch their IPv4 space in Tokyo). Affiliate link: https://ale.sh/r/ssdnodes
[1]: Intel Xeon E5-2650 v4 (4) @ 2.199GHz – not great, I know, but to reiterate: that’s for $90 a year.
There is one scenario it would be good for: people running stock-trading programs often need a better network and a more reliable always-on environment than they can get at home.
I’m curious if they remove the displays. Not every laptop works with the display closed and it might cause heat issues that throttle the CPU or reduce the life of the machine to run it like that long-term.
What’s the point if I can just connect it to my router and not pay any money to anyone, except for electricity, which would be like ten times cheaper? My old laptop is capable of a gigabit connection, and so is my home internet. That’s plenty for anything I can imagine.
Redundancy, I hear you saying! What if I had no electricity for an hour? OK then, I’d have another laptop at someone else’s place, and have two powerful servers for still like one fifth of the price. Can you beat that?
They remove the battery! That was my first question.
I have an old Lenovo laptop that works fine with the battery completely removed--but I have to disconnect the power and reconnect it before the soft power-on switch will work. I wonder how they handle powering on finicky laptops with those "soft" power buttons.
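One common workaround for finicky soft power buttons, assuming the laptop's NIC and firmware support it, is to enable "Power On After Power Loss" and/or Wake-on-LAN in the BIOS, then wake the machine remotely with a magic packet. A minimal sketch of the packet format (the MAC address below is hypothetical):

```python
# Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the target MAC
# repeated 16 times, sent as a UDP broadcast (commonly to port 9).
# Assumes WoL is enabled in the laptop's firmware and NIC.
import socket

def build_magic_packet(mac: str) -> bytes:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16  # 102 bytes total

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

pkt = build_magic_packet("00:1a:2b:3c:4d:5e")  # hypothetical MAC
print(len(pkt))  # 102
```

Whether this actually wakes an old laptop varies by model; many only honor WoL with AC attached, which is exactly the finicky behavior described above.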
+ The usual limiting factor in data centers is power, and laptops tend to be optimized for more compute per watt than comparable old servers.
+ Laptops are generally compact and so achieve greater rack densities than individual co-lo servers. I'm thinking 34 or 51 laptops could be stored in 9 or 10U, either 2 or 3 rows deep by 17 wide.
+ Shipping a laptop to a co-lo data center is cheaper than a 1U server.
~ Reusing electronics saves e-waste and reduces unnecessary consumption, either old servers or old laptops.
- Laptops lack ECC RAM.
- Laptops typically don't use nearly as fast CPUs or RAM as contemporaneous servers.
- Laptops are limited in their storage options.
- Laptops lack remote, lights-out management of real servers.
- Repairing old failed laptop components is more difficult than old servers.
~ Old laptops tend not to have usable batteries, so there's unlikely to be much of an inherently distributed battery-backup capability.
- Old laptop batteries of various origins could be a li-ion NMC fire hazard at scale.
~ Reusing old stuff at any sort of scale calls for standardization, and it's sometimes difficult to amass many units of the same discontinued model.
Conclusion: Do it if it works for you. It's kinda cool.
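The density estimate in the list above is easy to sanity-check. Every dimension below is a rough assumption for a typical 14-inch business laptop stood on its side edge, not a measurement:

```python
# Rough rack-density arithmetic for laptops stood on edge in a 19" rack.
# All dimensions here are assumptions for illustration.
import math

RACK_WIDTH_MM = 450    # usable interior width of a 19" rack
RACK_DEPTH_MM = 800    # usable mounting depth (varies a lot by rack)
U_MM = 44.45           # one rack unit

LAPTOP_THICKNESS_MM = 26   # spans the rack width when stood on edge
LAPTOP_DEPTH_MM = 230      # runs front-to-back
LAPTOP_WIDTH_MM = 340      # becomes the vertical dimension

per_row = RACK_WIDTH_MM // LAPTOP_THICKNESS_MM   # laptops side by side
rows = RACK_DEPTH_MM // LAPTOP_DEPTH_MM          # rows front-to-back
total = per_row * rows
units = math.ceil(LAPTOP_WIDTH_MM / U_MM)        # height, before clearance

print(per_row, rows, total, units)  # 17 3 51 8
```

That lines up with the 17-wide, 2-or-3-deep guess (51 at three rows deep), with the 9-10U figure leaving a unit or two for airflow and cabling.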
I think it's one of those ideas that only works with nostalgia or hoarding impulses to support it.
I think normal virtualization approaches are far more power efficient, at a fleet level, than any kind of cluster of laptop scenarios. You can pile in the cores and amortize the costs of memory controllers etc. over a large set of guests.
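The amortization argument can be made concrete with some illustrative numbers (all assumed, not benchmarked):

```python
# Fleet-level watts-per-guest comparison. All figures are assumptions
# chosen for illustration, not measurements.
SERVER_WATTS = 600       # modern 2-socket virtualization host under load
SERVER_CORES = 128
OVERSUBSCRIPTION = 4     # vCPUs scheduled per physical core

LAPTOP_WATTS = 20        # one old laptop serving one tenant

guests_per_server = SERVER_CORES * OVERSUBSCRIPTION  # 512 one-vCPU guests
watts_per_guest_server = SERVER_WATTS / guests_per_server
watts_per_guest_laptop = LAPTOP_WATTS / 1

print(watts_per_guest_server, watts_per_guest_laptop)  # 1.171875 20.0
```

Even with generous numbers for the laptop, the shared memory controllers, PSUs, and fans of one big host dominate once the guest count is high; a laptop only wins if the tenant genuinely needs a whole physical machine.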
It is a funny way to get features of both worlds. One reason to want colo (rather than VMs) is for predictability, but laptops still give you the funny throughput problems, because of thermal throttling instead of competing guests.
Sure, that can work for individuals and small groups with physically separate high availability. It may be faster and simpler to find a replacement, but I'm thinking about it from a permaculture perspective: sometimes old-parts inventory exists somewhere for cheap, or it's only a small broken component that could be fixed, avoiding unnecessary e-waste and spending more money on consumption to fix a problem.
The typical enterprise server lifecycle of 4-6 years purposefully throws away uncertain remaining value, because budgets need to be spent, because of risk aversion to repairing what's considered "outdated", and to acquire faster, more energy-efficient equipment. I would guess the lifecycle length is about the same for enterprise and personal laptops too.
Eeek, I can't imagine what this is like if it scales. What happens to the fire risk when there are 20,000 laptops with aging batteries all sitting together? I hope they take the batteries out; however, many laptops use batteries to smooth out power fluctuations.
Laptops aren't designed to be servers: peg your laptop CPU and GPU at 100% and see how long it lasts. I've done this before, and the answer is about two months. Yep, sure, this effort isn't targeting that workload, but how many bad apples does it take to start a fire? On their page they say "kubernetes server - no problem". Kubernetes DOES keep the CPUs busy; not pegged, but busy enough that they won't step down their frequency.
I admire the effort to reuse old tech, but boy oh boy would I not want to be a sysadmin here!
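If you do run sustained load on a laptop, it's worth watching for throttling before it becomes a longevity problem. A minimal sketch of the check; the threshold is an assumption, and on Linux the real readings would come from `/sys/class/thermal/thermal_zone*/temp` and the cpufreq sysfs files:

```python
# Heuristic throttle check: hot AND running below base clock usually means
# the firmware is stepping the CPU down to shed heat.
# The 95 C limit is an illustrative assumption; real limits vary by CPU.
def is_throttling(temp_c: float, cur_mhz: float, base_mhz: float,
                  temp_limit_c: float = 95.0) -> bool:
    return temp_c >= temp_limit_c and cur_mhz < base_mhz

# Hypothetical readings:
print(is_throttling(temp_c=97.0, cur_mhz=1800, base_mhz=2600))  # True
print(is_throttling(temp_c=62.0, cur_mhz=3100, base_mhz=2600))  # False
```

A cron job logging these two readings over a week of load would show quickly whether a given model can sustain the workload or is quietly cooking itself.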
My old Lenovo t420 has been running 24/7 pegged as a multi-camera DVR since 2011, no issues whatsoever. Of course the battery is removed, but I don't see many decent laptops struggling running under load for prolonged periods.
I worked for a place that did something akin to this in the early 2010s. Someone figured out how to add 32-bit company laptops to the virtualization cluster (likely because they were using one as a stand-in for a server that was in the works but not yet purchased), and once that work had been done they just kept "retiring" unserviceable company laptops to the cluster. Imagine a standard wire metro rack crammed into a telecom closet beside a normal server rack. Now imagine that metro rack literally full of Toshiba Satellite Pros from about 2005-9. The cluster hosted virtual machines for testing.
No fires, no hardware problems. No special cooling other than the mini-split that was in the closet to cool the server rack. They just kept trucking. But modern hardware is much more high strung and I don't doubt you'd have weird failures.
Edit: Back then VMs were how things were done and RAM was seemingly always the bottleneck by a mile, so the cluster did add up to a meaningful amount of extra performance compared to not having it.
Yea, this is a stupid idea. Old laptops don't have good performance per watt compared to new servers once you factor in that they are many, many times slower.
A ton of old batteries in one place. The batteries themselves are probably not a concern, but if something happens to the facility, then you have a ton of problems.
Security of the facility is a concern if someone can get in and walk out with an armful of laptops.
Laptops don’t scale from a stacking standpoint. Sure, close the lids and line them up; then you’ll have a lot of failures. Older laptops are intended to cool through the keyboard and the top vents by the screen.
That surely depends on the country. A data centre is still better in theory, but in practice I can hardly imagine using a gigabit connection all to myself.
This particular project implies a financial commitment (it's not like you can walk into the data center right now with a random assortment of laptops to be set up today, without reserving a rack for a month or so); using free hosting means they didn't even spend the minimum.
If your 'project' can't allocate $15 for a domain name then you have a bigger problem with your project. Especially if your project involves taking money from customers.
uh yeah, i mean we 'colo' at work because it's cheaper than buying a Windows server with multiple RDP licenses. We have some legacy stuff that must be run on site... so we buy $200 laptops and people can remote in for years.