257 points by bwannasek 20 hours ago | 38 comments
embedding-shape 19 hours ago
I've been on something of a binge moving a bunch of stuff to self-hosting at home. Yesterday I finally completed my self-hosted Forgejo instance at home, together with Linux, Windows (via VM) and macOS (via Mac Mini) runners/workers for CI/CD, so everything finally lives in-house (literally), instead of all source code + Actions being on GitHub while the infrastructure actually lives locally.

This is probably the first time I felt vindicated with my self-hosting move literally the day after I finished the migration, very pleasant feeling. Usually it takes a month or two before I get here.

koyote 14 hours ago
And once you start self-hosting, you realise how slow the 'modern' web actually is.

I host Forgejo on a single NUC alongside a bunch of other stuff in Proxmox, and the page loads in 6ms! Immich is not quite as fast, but still a ton faster than Google Photos.

oefrha 2 hours ago
That’s not a given. Self-hosted GitLab on my pretty good hardware is still slow. I just opened a very small repo with ~20 files and ~5 commits. The page spun for 5s+ before showing me the directory listing and readme. Subsequent loads are faster (~1s) but still not instant.
cedws 19 hours ago
The idea of a homelab is appealing to me, but then I actually start building one and get tired of it quickly. When I’ve been fixing broken systems at work all day I don’t really want to have to be my own sysadmin too.

I’ve got a nice and powerful Minisforum on my desk that I bought at Christmas and haven't even switched on.

embedding-shape 19 hours ago
I've tried for 15 years to keep a homelab, but in the past I always got lost in the complexity after a year or so. About 3 years ago I gave NixOS a try for managing everything instead, which suddenly made everything easier (counter-intuitively, perhaps), as now I can come back after months and still understand where everything is and how it works just by reading.

Setting up Forgejo + runners declaratively is probably ~100 lines in total, and it doesn't matter if I forget how it works; I just have to spend five minutes reading to catch up when I come back in 6 months to change/fix something.
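
A sketch of what that looks like (hostnames, ports, and paths are placeholders, not my actual setup; Forgejo runners are configured through the `services.gitea-actions-runner` module, which works for both forges):

```nix
{ config, ... }:
{
  services.forgejo = {
    enable = true;
    settings.server = {
      DOMAIN = "git.example.home";   # placeholder hostname
      HTTP_PORT = 3000;
    };
  };

  # One local runner, registered against the instance above.
  services.gitea-actions-runner.instances.local = {
    enable = true;
    name = "local-runner";
    url = "http://git.example.home:3000";
    tokenFile = "/var/lib/runner/token.env";  # file containing TOKEN=...
    labels = [ "debian-latest:docker://node:20-bookworm" ];
  };
}
```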

I think the trick to avoid getting tired of it is trying to just make it as simple as humanly possible. The less stuff you have, the easier it gets, at least that's intuitive :)

Cyph0n 15 hours ago
Just to echo what others are saying: NixOS and Proxmox are the answer.

I run both right now, but I am in the process of just running NixOS on everything.

NixOS really is that good, particularly for homelabs. The module system and the ability to share modules across machines is a real superpower. You essentially end up with a base config that all machines extend. The same idea applies to users and groups.

One of the other big benefits, particularly for homelabs, is that your config is effectively self-documenting. Every quirk you discover is persisted in a source controlled file. Upgrades are self-documenting too: upstream module maintainers are pretty good about guiding you towards the new way to do things via option and module deprecation.

WestCoader 14 hours ago
I mean this in a good way, but I'm slightly chuckling to myself that it reads like people are just discovering IaC...on HN. That's all Nix configs are, at the end of the day.

No matter the tool, manage your environment in code, your life becomes much easier. People start and then get addicted to the ClickOps for the initial hit and then end up in a packed closet with a one way ticket to Narnia.

This happens in large environments too, so not at all just a home lab thing.

Cyph0n 13 hours ago
I and many other NixOS users know what IaC is :)

A NixOS config is a bit different because it’s lower level and is configuring the OS through a first-party interface. It is more like extending the distro itself as opposed to configuring an existing distro after the fact.

The other big difference is that it is purely declarative vs. a simulation of a declarative config a la Ansible and other tools. Again, because the distro is config aware at all levels, starting from early boot.

The last difference is atomicity. You can (in theory) rely on an all or nothing config switch as well as the ability to rollback at any time (even at boot).
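
Concretely, the switch and the rollback are each a single command (this assumes a flake-based config; `myhost` is a placeholder):

```shell
# Build and switch to the new generation atomically;
# the previous generation stays selectable in the boot menu.
sudo nixos-rebuild switch --flake .#myhost

# If the new generation misbehaves, switch back to the previous one.
sudo nixos-rebuild switch --rollback
```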

On top of all this are the niceties enabled by Nix and nixpkgs. Shared binary caches, run a config on a VM, bake a live ISO or cloud VM image from a config (Packer style), the NixOS test framework, etc.

0cf8612b2e1e 17 hours ago
Unless you actually need hardware (local LLM host, massive data transformation jobs), it is also easy to fall into the many-machines trap. A single old laptop, N97, OptiPlex, etc. sitting in a corner actually has a huge amount of computing power that will rival most cloud offerings. A single machine can do so much.
httpsterio 15 hours ago
Yeah true. I have an old Asus X550L from 2014, a very budget / mid basic home laptop with the battery removed running as my server. I do some dev on it with VSCode remoting into it and Claude Code, run Jellyfin, Audiobookshelf, Teamspeak, IRC and TS bots, nginx, SyncThing and some static websites.

I'm still usually under 10% cpu usage and at 25% ram usage unless I'm streaming and transcoding with Jellyfin.

It's been fun and super useful. Almost any old laptop from the past 15 years could run and solve several home computing needs with little difficulty.

dml2135 18 hours ago
Yup this is what I've got up and running recently and it's been awesome.

My setup is roughly the following.

- Dell optiplex mini running Proxmox for compute. Unraid NAS for storage.

- Debian VM on the Proxmox machine running Forgejo and Komodo for container management.

- Monorepo in Forgejo for the homelab infrastructure. This lets me give Claude access to just the monorepo on my local machine to help me build stuff out, without needing to give it direct access to any of my actual servers.

- Claude helps me build out deployment pipeline for VMs/containers in Forgejo actions, which looks like:

  - Forgejo runner creates NixOS builds => Deploy VMs via Proxmox API => Deploy containers via Komodo API

- I've got separate VMs for:

  - gateway for reverse-proxy & authentication

  - monitoring with a Prometheus/Loki/Grafana stack

  - general-use applications

Since storage is external with NFS shares, I can tear down and rebuild the VMs whenever I need to redeploy something.

All of my docker compose files and nix configs live in the monorepo on Forgejo, so I can use Renovate to keep everything up to date.

Plan files, kanban board, and general documentation live adjacent to Nix and Docker configs in the monorepo, so Claude has all the context it needs to get things done.

I did this because I got tired of using Docker templates on Unraid. They were a great way to get started, but it's hard to pin container versions and still keep them up-to-date (Unraid relies heavily on the `latest` tag). I've been moving stuff over to this setup bit by bit and really enjoying it so far.

ostacke 6 hours ago
Isn't the simplest homelab humanly possible just... no homelab?
cedws 18 hours ago
Thanks. Yeah, I've probably been overcomplicating it before. I was running Kubernetes on Talos, thinking that at least it would be familiar. Power tools like that for running simple workloads on a single node are an invitation to headaches.
altmanaltman 6 hours ago
Yeah this is the way.

The problem is that people never stop tinkering and keep trying to make their homelab better, faster, etc. But its purpose is not to be a system that you keep fine-tuning (unless that's what you're actually doing it for); its purpose is to serve your needs as a homelab.

The best homelabs are boring in terms of tech stacks, imo. The unfortunate paradox is that once you do start getting into homelabs, it's hard to get out of the mentality of constantly trying out new stuff.

skydhash 14 hours ago
Maybe my needs are simpler, but I just made do with systemd services and apt (Debian). I've also set up Incus for the occasional software testing and playing around. After using OpenBSD as a daily driver, I'm more keen on creating a native package for the OS/distro than wrangling docker compose files.
ryandrake 13 hours ago
Yea, it's always weird to see people say "I'm simplifying my life and reducing my cloud dependencies by running a homelab and self-hosting!" and then they list the dozens of alphabet soup software they're running on it that they're now relying/depending on. "Oh I run 20 VMs and containers and Docker orchestration and Nextcloud and Syncthing and Jellyfin and Plex and Forgejo and Komodo and Home Assistant and Immich and Trilium and Audiobookshelf and another Nextcloud and This Stack and That Pipeline" and oh my god haven't you really just made your computing even worse?

My "homelab" is basically Linux + NFS, with standard development tools.

embedding-shape 1 hour ago
Depends on your requirements. I'm jealous you can get away with something so simple; I can't. I also have a poor memory, so having it all described in code has been most helpful: if I SSH into a server after months of not touching it, I barely remember what's on it anymore.

I think the most important thing for me is that I choose when I have time to upgrade; it's no longer forced upon me. That's why I prefer to depend on myself rather than 3rd-party services for things that are essential. There have been so many times I've had to put other (more important) things on hold because some service somewhere decided to change something, and to get stuff working again you need to migrate. I just got so tired of not being in control of that schedule.

VerTiGo_Etrex 15 hours ago
> When I’ve been fixing broken systems at work all day I don’t really want to have to be my own sysadmin too.

There’s only one solution to this.

Quit your job.

kivle 16 hours ago
With the help of coding agents it's easier than ever. Just get Claude/Codex to create Helm charts / Docker Compose files for you. Struggling with some command-line juggling to fix some obscure error? An agent can mostly help you in no time.
prmoustache 15 hours ago
There isn't much work or maintenance to do, really. When you are the sole user everything is oversized, and if it is only accessible at home you can be lazy with updates and security anyway.
snailmailman 12 hours ago
I've been running my own private Forgejo instance for a while now. I host all my own private side projects and stuff there. It's a much more pleasant experience than GitHub, if only because it has higher than 90% uptime. The UI is mostly identical otherwise.

The number of consistent issues I've had with anything GitHub-related lately is crazy. Even just browsing their site is difficult sometimes, with slow loads that often just hang entirely.

lone-cloud 10 hours ago
Are you me? Somebody was talking about Gitea on here yesterday and I also ended up self-hosting and moving all of my private projects to Forgejo yesterday after a bit of research. I can't bring myself to move public projects due to job prospects + the GitHub network effect. Otherwise I'm role-playing as a sysadmin now, with 20 local services for various things I need. I think the most important thing is to have regular backups, as you're now in charge of keeping your data from getting lost.
johnmaguire 18 hours ago
I recently did this as well, and one of the things that has struck me is just how fast Actions are compared to GitHub!

That said, I've got Linux and macOS set up with a Mac Mini (using a Claude-generated Ansible task file), but configuring a Windows VM seemed a bit painful. You didn't happen to find anything to simplify the deployment process here, did you?

embedding-shape 15 hours ago
> You didn't happen to find anything to simplify the deployment process here, did you?

No, unfortunately not; the Windows VM setup + Forgejo Windows runner was the most painful thing for me to set up, no doubt. It's just such a hassle to reliably set things up; even getting logs out of it was trouble... To be fair, my Mac Mini was manually set up at first, then I have Nix on top of it, while Windows I've 100% automated, so it's not an entirely fair comparison; automating the Mac Mini setup would be similarly harsh, I think. But it's a mix of Nix for configuring the VM and booting it, XML files for the "autounattend" setup, .ps1 bootstrapping scripts, and a .cmd script for finalizing. A big mess.

lisplist 19 hours ago
The only problem I've found with Forgejo is a lack of fine-grained permissions, and the lack of an API for pulling action invocations. The actions log API endpoints are present in Gitea, from what I can tell.
mfenniak 18 hours ago
Forgejo 15 was just released last week with repo-specific access tokens. More to come in the future.
dietr1ch 17 hours ago
My Raspberries (and OrangePi) have better availability than GitHub, and if they were to be down I'd be out of power/internet and wouldn't be able to work much anyway.
yakattak 19 hours ago
I moved my forge to my home; outside of a little stress getting all the containers wrangled, it was pretty effortless to set up Forgejo.

I do need a good backup solution though, that’s one thing I’m missing.

TranquilMarmot 10 hours ago
I use https://github.com/garethgeorge/backrest to manage nightly encrypted backups of my Forgejo instance to a Hetzner Storage Box. It's <$4/mo for 500GB of storage. It's also where I back up my Immich library.

Immich automatically dumps its DB every day; for Forgejo I have a little script that runs as part of the Backrest backup and does a pg_dump of the database before the backup runs.

It works great; I even had to do disaster recovery on it once and it went smoothly.

lone-cloud 10 hours ago
I use rclone + backblaze. You get 10GB for free which is more than enough for self-hosted stuff.
neilv 18 hours ago
I self-host Forgejo for personal and indie-startup purposes, and like it well enough.

The downside with that is it misses one of the key purposes of GitHub: posturing for job-hunting/hopping. It's another performative checkbox, like memorizing Leetcode and practicing delivery for brogrammer interviews.

If you don't appear active on GitHub specifically (not even Codeberg, GitLab, nor something else), you're going to get dismissed from a lot of job applications, with "do you even lift, bro" style dissing, from people who have very simple conceptions of what software engineers do, and why.

johnmaguire 18 hours ago
There is a fairly straightforward feature in Forgejo to sync your repos to GitHub, if that's what you want to do. It's not perfect, of course, but it should help advertise your projects and keep your activity heatmap green.

I mostly use Forgejo for my private repos, which are free on GitHub but with many limitations. One month I burned all my private CI minutes on the 1st due to a hung Mac runner. Love not having to worry about this now!

nextaccountic 16 hours ago
or you can just have two remotes and push to both sites and enjoy git's distributed nature
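
A minimal sketch, assuming an existing `origin` remote (hostnames and paths are placeholders): give `origin` two push URLs so a plain `git push` updates both forges. Note that the first explicit push URL replaces the implicit one, so the original host has to be added back explicitly too.

```shell
# Fetch still comes from the original URL; pushes go to both forges.
git remote set-url --add --push origin git@github.com:you/project.git
git remote set-url --add --push origin git@forgejo.example.home:you/project.git

# Verify: one fetch URL, two push URLs.
git remote -v
```
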
TranquilMarmot 10 hours ago
I do this, but beware if you have LFS files. You can easily get into weird states with LFS pushing up to two different remotes and it's really not fun to fix.
8cvor6j844qw_d6 18 hours ago
> If you don't appear active on GitHub specifically... you're going to get dismissed from a lot of job applications

Sometimes wonder if my coursemates back in the day, who automated commits to private repos just to keep the green boxes packed, actually got any mileage out of it.

gill-bates 18 hours ago
I get that. To counter it I usually try to have at least one public repo on my Forgejo instance and link to that on my resume/LinkedIn. It helps that I'm angling for security/infra positions so the self-hosting aspect actually helps but even without that I would imagine it signals something. Maybe not ideal for the most mainstream jobs (whatever that even means...), but I suspect some people will be intrigued by the initiative.

Edit: to the "do you even lift bro", the response becomes "yeah man, I've built my own gym - oh, you go to Planet Fitness? Good luck."

bmitc 13 hours ago
Fine with me. Not the type of jobs I want anyway.
colechristensen 18 hours ago
Instability aside I found several things about GitHub awkward, annoying, or missing features so I spent a month building my own. I think we're going to be seeing a lot more of this.
rvz 18 hours ago
Self hosting was the correct solution.

6 years early [0] and you have better uptime than GitHub.

[0] https://news.ycombinator.com/item?id=22867803

shevy-java 19 hours ago
Interesting. I speculated not long ago that Microsoft is really taking a dive here, and that other companies may look to provide better alternatives to GitHub. Today I read your comment about self-hosting; while that is not quite what I had in mind, it is interesting to read about people who go that route. Microsoft has really been putting itself into trouble in the last year or two. Some things no longer work, so much is clear.
LorenDB 19 hours ago
https://mrshu.github.io/github-statuses/ says they are down to 88.15% uptime. Even when you consider uptime of individual components, their best is 99.78%, so two nines.
eddyg 1 hour ago
The scale of growth they’re dealing with is insane.

“There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)

GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.”

Source: GitHub COO on April 3, 2026. https://x.com/kdaigle/status/2040164759836778878

jpleger 12 hours ago
I wonder if there is any correlation with their move towards Azure.

https://thenewstack.io/github-will-prioritize-migrating-to-a...

frakkingcylons 12 hours ago
They are dealing with vastly more activity as a result of AI usage. It's that simple.
sharts 5 hours ago
They’re pushing out AI slop as production services
Ygg2 15 hours ago
I see Microsoft-mandated AI is doing wonders. For self-hosters and Linux enthusiasts.
hedayet 14 hours ago
Is GitHub losing any significant business from all these outages?

Curious, because for a long time we as an industry maintained that reliability and brand value are business-critical; but it seems like they're cared about very little nowadays.

Happy to be corrected about my perception too.

bandrami 9 hours ago
And as recently as two or three years ago it was universally agreed that the only way to reliably and securely deliver software was via repeatable builds with an attested chain of custody and an auditable bill of materials, and everybody just gave up on that completely when the LLMs got somewhat better.
braiamp 13 hours ago
They are entrenched enough that it's written off as a cost of doing business. Big businesses have their internal instances, so they are "insulated"; everyone else either isn't as critical or has the resources to build an internal solution or move.
ozarkerD 9 hours ago
Org instances have been affected by these outages too, including the one today.
datadrivenangel 9 hours ago
They have on-prem / dedicated instances? I thought that microsoft only offered that through their Azure DevOps git offering.
jamesfinlayson 9 hours ago
GitHub Enterprise has existed for a while: https://docs.github.com/en/enterprise-server@3.20/admin/over...

I'm pretty sure it still does; I used it at a previous job, and somewhere I interviewed recently said they used GitHub (given their size and somewhat regulated industry, I can't imagine they rely on github.com).

dankobgd 19 hours ago
Don't worry, the status page says it's 100% working: green color, all good. Even though I can't access a static page.
eshack94 7 hours ago
At this point, I feel like there should be a HN post whenever there ISN'T an issue with some GitHub service. Otherwise, it's business as usual...
cjonas 19 hours ago
It would be wild if they dropped below the "two 9's" metric. I think they would need an additional ~16hr of outage in the 90 day rolling period.
waiwai933 19 hours ago
https://mrshu.github.io/github-statuses/ suggests that their combined uptime doesn't even meet 1 nine, let alone 2.
CamouflagedKiwi 18 hours ago
The intersection of uptime across every possible service they offer isn't a particularly great metric. I get the point that they are doing badly, but it makes it look worse than I think it really is.

What I would like to see is a combined uptime for "code services", basically Git+Webhooks+API+Issues+PRs, which corresponds to a set of user workflows that really should be their bread & butter, without highlighting things you might not care about (Codespaces, Copilot).

dijit 17 hours ago
Depends how integrated those features are.

A service's availability is capped by its critical dependencies; this is textbook SRE stuff (see Treynor et al., The Calculus of Service Availability). Copilot may well be off to the side of it (and has the worst uptime, dragging everything down), but if Actions depends on Packages then Actions can be "up" while in reality the service is not functional. If your release pipeline depends on Webhooks, then you're unable to release.

The obvious one is git operations: if you don't have git ops then basically everything is down.

So; you're right about Copilot, but the subset you proposed (Git+Webhooks+API+Issues+PRs) has the exact same intersection problem. If git is at one nine, that entire subset is capped at one nine too, no matter how green the rest of it looks.
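
The cap is just multiplication: for serially-required components, combined availability is the product of the parts. A quick sketch with illustrative numbers (not GitHub's actual per-service figures, apart from the 98.98% for git):

```shell
# Chain of hard dependencies: availability multiplies.
# Even two healthy 99.78% services behind a 98.98% git layer
# leave the whole workflow at roughly one nine.
awk 'BEGIN {
  combined = 0.9898 * 0.9978 * 0.9978
  printf "combined availability: %.2f%%\n", combined * 100
}'
```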

And to be clear: git operations is sitting at 98.98% on the reconstructed dashboard linked above[1]. That is one nine. GitHub stopped publishing aggregate numbers on their own status page, which... tells you something.

[1]: https://mrshu.github.io/github-statuses/

CamouflagedKiwi 16 hours ago
Well yes you could do that on a status page, but it's basically just lying to put Actions as green if it's actually down because it depends on Packages which is red.

With that set, I wasn't proposing a set of totally independent services to be grouped together; I was talking about a set of things that I think represent pretty core services for GitHub users. If Git is dragging the rest of those down, fine; PRs are useless without it. In fact it is worse than some, but it's not the worst of that group, and it is still a lot better than the dregs of Actions and Copilot.

Having said that, the numbers are of course terrible. Two nines on a couple of things and one on everything else would be bad for a startup; it's an utter embarrassment for a company that's been doing this for over a decade.

cjonas 19 hours ago
Also, I had never considered that breaking your uptime into a bunch of different components is just a strategy to make your SRE look better than it actually is. The combined uptime tells the real story (88%!). Thanks for the link.
femiagbabiaka 19 hours ago
The number of nines assigned to a suite of services is not indicative of the quality of SRE at any given company, but rather a reflection of the tradeoffs a business has decided to make. Guaranteed there's a dashboard somewhere at Github looking at platform stickiness vs. reliability and deciding how hard to let teams push on various initiatives.
cjonas 9 hours ago
This is fair. I should have just said "site reliability", as it's almost certainly out of the engineers' control.
cjonas 19 hours ago
Ya, I was just doing the math on their chart for git operations. I added up 14.93 combined hours of downtime, which puts them WAY lower than the reported 99.7 metric they show right next to it.

So based on their own reporting, the uptime number should be 99.31%. Which means only about 6 additional hours and they'd fall below 99.0%.
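
The arithmetic behind those numbers, for reference (14.93 hours of downtime over a 90-day rolling window):

```shell
# 90 days = 2160 hours; 14.93h down => ~99.31% uptime.
# The 99.0% floor allows 21.6h of downtime, so ~6.7h of headroom remain.
awk 'BEGIN {
  window = 90 * 24
  down   = 14.93
  printf "uptime: %.2f%%\n", (1 - down / window) * 100
  printf "headroom before 99.0%%: %.1f hours\n", window * 0.01 - down
}'
```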

roblh 14 hours ago
GitHub is going for “eight 8’s” at this rate.
frereubu 15 hours ago
We have pretty basic needs - git repos + actions - and a bit of downtime here and there doesn't really affect us too much because we're not constantly committing and deploying, but even we're looking around for alternatives now.

Also, looks like people might be pummelling the SourceHut servers looking for an alternative: https://sr.ht/ is down. (Edit: was down when I wrote that, back up now).

nerdypepper 14 hours ago
tangled.org maybe?
agartner 14 hours ago
There was another really bad incident today: https://www.githubstatus.com/incidents/zsg1lk7w13cf

> We have resolved a regression present when using merge queue with either squash merges or rebases. If you use merge queue in this configuration, some pull requests may have been merged incorrectly between 2026-04-23 16:05-20:43 UTC.

We had ~8 commits get entirely reverted on our default branch during this time. I've never seen a github incident quite this bad.

nulltrace 13 hours ago
Downtime is one thing. Silently reverting commits on your default branch is something else entirely.
darknavi 13 hours ago
Similar here. Somewhat ironic that a tool that was supposed to be preventing merge conflicts was authoring completely mangled commits to our mainline branch.
lucasqueiroz 8 hours ago
We've also seen quite a few commits disappear from main while the status of the PRs stayed "merged". It was stressful.
robertwt7 11 hours ago
Yeah, this is crazy; we had many PRs reverted as well, on many repos. Downtime is one thing, but reverting PRs is a failure on another level.
x0ruman 8 hours ago
We lost about a day of git history across several repos on Bitbucket a while back; not an outage, a data issue on their side. Local clones saved most of it, but issues and PRs from that window were just gone. That's roughly why I started building gitbacker as a side project. Turns out the 'back up the repo' part is easy; the metadata is where it gets interesting.
Groxx 14 hours ago
So... three incidents today, all of them ~1h or longer, and everything's green for the day with "no recorded downtime".

These don't really look any different than past incidents which have red bars on their respective days, except maybe that those tended to be several hours.

What do the green bars even mean? Are they changed to non-green retroactively if people complain enough or something? As far as I can tell, literally none of the previous green days have any incident shown in the mouse-over, but there are multiple for today only, so I have to assume either the mouse-overs are conveniently "forgotten", or all incidents become non-green later and they just don't bother updating the page the same day. Either way it seems intentionally misleading.

delusional 15 hours ago
Once that 10x developer velocity from AI kicks in, I'm sure github stability improves. Did you know AI finally makes it economical to fix all the little bugs?
CamouflagedKiwi 18 hours ago
I wondered. For most of today we'd seen that Actions were slow to trigger, and I had at least one run that was just missed. It felt like something was definitely off, but the status was green all day until this.
BhavdeepSethi 8 hours ago
I'd really like to read the post-mortem doc for this. I'm still shocked that they allowed this to happen in production.
jonnonz 17 hours ago
Well, I suppose they are finding out that if you lay off too many people, the knowledge of how the system works goes out the door with them.
throwatdem12311 19 hours ago
Seems like they just can’t deal with the absolute deluge of AI vomit being uploaded every day.

Good riddance I hope it completely destroys them.

mijoharas 19 hours ago
Are you talking about what they write to run the service? Because looking at the uptime, and considering it's Microslop, I wouldn't be surprised.
throwatdem12311 16 hours ago
What they write and the extra demand from vibe coders.
argee 19 hours ago
I moved to Gitlab a while ago. It's a whole new level of freedom not having to pay for self-hosted CI runners.
deferredgrant 14 hours ago
These outages are also a good test of how much local resilience teams actually built. My guess is most shops are much more dependent on GitHub than they like to admit.
bakies 19 hours ago
I definitely have better uptime hosting my own Gitea instance. It's faster too. It's basically a knock-off GitHub. Plus with privacy concerns, I'm just happier overall. Easy setup: all I did was deploy the Helm chart.
redwood 2 hours ago
Guessing the sheer volume of pull requests related to AI code jitter is leading to instability of this Microsoft product.
jasoncartwright 19 hours ago
Just cancelled my annual GitHub Copilot Pro+ subscription. The removal of Opus 4.6 stung, but the repeated downtime makes it unusable for me. Very disappointed.

No fuss instant refund of my unused subscription (£160) appreciated.

miltonlaxer 19 hours ago
What will you use now?
jasoncartwright 19 hours ago
Claude Code
tgrowazay 18 hours ago
Doesn’t GitHub Copilot Pro+ only have a month-to-month payment option?

Only Pro (without the plus) can be paid annually, for some reason.

pkaye 17 hours ago
Pro+ did have an annual plan, but recently they paused or dropped the annual plans because they are trying to adjust the pricing model.
jasoncartwright 18 hours ago
I paid 390 USD for a year Pro+ subscription in November 2025.

I used all the 'Premium Requests' every month on (mainly) Opus 4.5 & 4.6. From what I've read on here it seems I was probably a rather unprofitable customer - it felt like a steal.

djeastm 17 hours ago
Yes, it was definitely a good value for devs using those models. I was hoping that, since GitHub Copilot was rarely talked about compared to the Anthropic/OpenAI offerings, MS would continue to subsidize it to encourage people to move over, but maybe it just got too expensive.
fishgoesblub 19 hours ago
At this point it'll be better to have alerts for when GitHub is online, rather than offline.
AnkerSkallebank 19 hours ago
Some of my jobs are completing, some are failing. Seems to be random. Kind of wish they would just fail outright, instead of running for 10 minutes and then failing.
surya2006 19 hours ago
Even Vercel has more downtime nowadays.
supakeen 19 hours ago
I mean; this is the normal mode of operation for GitHub at this point.
napolux 19 hours ago
0 nines.
causal 19 hours ago
9 nines found somewhere after the decimal point if you measure with enough precision
surya2006 19 hours ago
What are the good alternatives available to GitHub? I've found some, but as long as people widely use GitHub I can't really use another service; I can't share my alternative with another developer and force them to use it for my sake. So I feel locked in: even if I want to move, I can't.
nosioptar 15 hours ago
I'm probably going to use source hut in the future. It allows contributions via email without an account requirement.

https://sourcehut.org/

shimman 17 hours ago
codeberg.org is a thing, and it's perfectly suited for open source projects. Many Neovim plugins and home lab tech I use are hosted on Codeberg with no issues. If you just want to use GitHub as social media, you will never be happy.
nerdypepper 14 hours ago
Give tangled.org a go, perhaps. It's got the self-hostability of cgit/Forgejo and the social bits of GitHub.
mghackerlady 17 hours ago
gitlab is about as close as you'll get
pxc 16 hours ago
GitLab annoys me in tons of ways, but I still feel it's generally better than GitHub.
embedding-shape 19 hours ago
Huh? Why not? Say "my git repository is here: $URL"; if they want to visit and/or clone it, they'll do that, otherwise they won't. Why does it matter?

Sure, if you're after reaching the most people, gaining stars, or otherwise trying to attract "popularity" rather than just sharing and collaborating on code, then I'd understand what you mean. But then I'd question your motivation first; that's a deeper issue than what SCM platform you use.

tomjen3 19 hours ago
At this point it should almost be news when it works.
ChrisArchitect 19 hours ago
Multple 9s
buildbot 19 hours ago
Anyone also seeing Active Directory/Entra issues?
chrisweekly 11 hours ago
mods: title tupo "multple"
josefritzishere 19 hours ago
Microslop is destroying Github
nhhvhy 17 hours ago
If the day ends in Y…
linhns 19 hours ago
Business as usual.
0xbadcafebee 19 hours ago
I am this > < close to just running Gogs or Forgejo on some Hetzner boxes, quitting my job, and charging people for access. Why aren't there like 10 startups doing this yet? Please? I want to give you my money. Just give me a git host that doesn't suck. (All the current ones suck.)
apparatur 13 hours ago
Gitea has a paid plan. On Forgejo forums you can find 3rd party offers of paid hosting as well.
0xbadcafebee 12 hours ago
I actually tried to use Gitea and their login page wouldn't work, so that told me all I needed to know about them
pocksuppet 14 hours ago
Codeberg and Sourcehut are doing it for free, for open source. Corporate probably won't ever move off Github, because they need the prestige of using Github - the actual service quality is completely irrelevant. This is an aspect of the enshittocene epoch - I repeat, quality is irrelevant to corporates.
0xbadcafebee 12 hours ago
Sourcehut isn't free and has weird UX, Codeberg is free but has poor performance and weirdly over-moderates discussions. I know corporate will always suck, I'm just talking about having something that approximates the "old GitHub" for personal/professional use
gsliepen 4 hours ago
The SourceHut UI looks weird compared to commercial offerings, but every time I use it I am pleasantly surprised how fast it is and how little clutter there is.
apparatur 13 hours ago
Corporate will move off GitHub as soon as it loses prestige and they will move on to the next thing.
poplarsol 17 hours ago
Azure webapp deploys are also trash right now. Microsoft needs to stop slathering h1b copilot slop and get basic things like Windows patches working.
shevy-java 19 hours ago
Microsoft again.

I think it is time that Microsoft lets go of GitHub. They are handling it too poorly.

bmd1905 8 hours ago
[dead]
ossa-ma 19 hours ago
Seems like outages are increasingly frequent nowadays. Obviously this is not the best state of affairs, and developers should not be limited by services. In the meantime I've been experimenting with building third spaces for people to chill in while they wait for the services they depend on to come back up.

The first one I've built is a little ASCII hangout for Claude @ https://clawdpenguin.com but threads like this make me want to build it for Github too.