Is this Microsoft stating that they aren't able to get acceptable reliability from Azure? (I mean, I think a lot of us have heard that, but it's interesting to hear it from Microsoft themselves.)
Pretty damning that two Microsoft subsidiaries - GitHub and LinkedIn - either shelved their forced migration to Azure or are looking at non-Azure options.
Really? I thought retail was. It's been almost a decade since I worked at Prime Video, but I think everything was running on AWS. (Some things didn't use Brazil etc., but I think all the servers etc. were on AWS.)
It's a distinction without a difference. All new development is nAWS (native AWS); legacy is mAWS (not sure about the acronym), which is still AWS under the hood and is mostly just a pool of EC2 instances with preconfigured networks. Nothing made in the last five or six years is on mAWS, and Amazon is a microservice shop, so things are always being built new. If you joined today there's a good chance you'd join a team without any mAWS infra.
MAWS is “Move to AWS”, the name of the internal campaign to get legacy services into a somewhat-retrofitted AWS environment. It was a single VPC at one point.
I just finished a nearly five-year stint at Amazon and didn't realize there was pre-mAWS stuff still around. Never encountered any of it. I was like two months from my yellow badge but, uh, life is really better outside Amazon.
Prime Video does use some AWS services, but live and on-demand are two entirely different beasts.
While Azure feels like a Temu clone of a cloud.
There's no intrinsic reason they should be vulnerable to themselves.
But GitHub doesn't have that rationale.
Man, you should have been there 6 months ago when they decided to start tearing down GitHub's own data centers and move everything exclusively to Azure. Seems they themselves realized this after they started moving, but imagine if you could have helped them realize this before they even started :)
> Seems they themselves realized this after they started moving
I guess most people at GitHub knew exactly that it made no sense, but they didn't really have a choice. Maybe some voiced their concerns, got "we hear you" in response, and were told to proceed anyway.
Yeah, I don't know how it went down, but I also know exactly how it went down:
Microsoft Execs: Everyone needs to move to Azure!
GitHub developers: But Azure is not gonna be able to handle our load, we literally have our own data centers!
Microsoft Execs: Sure, but you're Microsoft now, please publish blog post about how in half a year you'll be 100% on Azure.
Few months later...
GitHub Developer: We've tried our best, users are leaving in droves and Azure can't keep up!
Microsoft Execs: Ok fine, you can use something else too, but only if you mainly use Azure and continue publishing blog posts about how great Azure is.
That sounds like the worst of both worlds? The Azure division that can't even reliably provide decent infrastructure products from their own data centers is now trying to do the same on a bespoke data center.
You’d think they could have had the existing GitHub continue as-is on whatever it runs on now (maybe for paying customers) while all the new AI inrush goes to the Azure setup.
Then it's up to Azure how they manage this.
XXXXL-size project. May not ever deliver. But if it fails, it will only do so after years of grinding through people, resources, etc.
The entire concept of multi-cloud is amusing if you think about what cloud was originally supposed to be. They could call them meta-clouds (might infringe trademarks), and with the current growth trajectory of AI-generated code, eventually multi-meta-clouds, renamed to beyond-clouds, and then multi-beyond-clouds. I see no limits.
There was somewhat recently a post here about how priorities, pressure, and management subverted Dave Cutler's vision for Azure (which was to have near-zero human involvement) - my Google fu isn't strong enough to find it. Supposedly, someone running over to, or opening a serial console to, a rack/VM is now typical operational procedure.
https://isolveproblems.substack.com/p/how-microsoft-vaporize...
OpenAI, Anthropic, Google and a plethora of Chinese models all end up pushing code into GitHub. You can discuss whether GPT 5.5 is better than Opus 4.7, but for GitHub it doesn't matter: they'll be receiving the code no matter which LLM spits it out.
Amazing on one hand, quite scary on the other for GitHub and all other forges, if this continues - and there is no reason why it wouldn't.
And/or provide a baseline free tier, corresponding to how much a typical human user would at most push/clone etc. They have pre-LLM statistics on that.
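A rough sketch of what that baseline limiter could look like, with made-up numbers (assume, hypothetically, that pre-LLM telemetry says a human tops out around 200 pushes/day; the real figure would come from their stats):

```go
package main

import (
	"fmt"
	"time"
)

// Per-user token bucket refilled at the assumed human baseline rate.
// Anything beyond it gets metered or rejected.
type bucket struct {
	tokens   float64
	max      float64
	perSec   float64 // refill rate
	lastSeen time.Time
}

func (b *bucket) allow(now time.Time) bool {
	// refill proportionally to elapsed time, capped at max
	b.tokens += now.Sub(b.lastSeen).Seconds() * b.perSec
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.lastSeen = now
	if b.tokens < 1 {
		return false // over the human baseline: charge for it or throttle
	}
	b.tokens--
	return true
}

func main() {
	perDay := 200.0 // hypothetical human baseline, pushes/day
	b := &bucket{tokens: perDay, max: perDay, perSec: perDay / 86400, lastSeen: time.Now()}
	for i := 0; i < 3; i++ {
		fmt.Println("push allowed:", b.allow(time.Now()))
	}
}
```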
Glad that they released some data about new repos/issues/commits over the last years. It confirms what everyone else already believed from the outside: agents are putting a lot of extra, sudden pressure on GitHub. It's like a startup that is growing exponentially, with the difference that they already have a large user base to serve - and that keeps them in the bullseye - and probably a not-so-fast-moving organization when it comes to changes. On the other side of the coin, they also have a lot of talent, infra and money a startup might not have yet.
Stop subsidizing tokens now that we've extracted enough training data from you and have enough agentic-junkie business to keep the flywheel going, and cut the loss leaders. [0]
[0] https://news.ycombinator.com/item?id=47923357
Wild
> While we were already in progress of migrating out of our smaller custom data centers into public cloud, we started working on path to multi cloud. This longer-term measure is necessary to achieve the level of resilience, low latency, and flexibility that will be needed in the future.
It's kind of hard to read this with a straight face.
The unlabelled graph with big numbers on top, the priorities that don't match with what we're experiencing, and a list of things that they're doing without a real acknowledgement of the _dire_ uptime over the last 12 months....
What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale, when 10x YoY is not enough?
As a business user, our costs have gone up while service has gone down dramatically. Meanwhile, our marginal cost to GitHub has hardly changed. Where our costs to them have increased, they mostly charge us per CPU minute, so they obviously aren't making any kind of loss on our account.
I’m sure they’re experiencing scaling issues across the platform, but it’s unacceptable for that to have a negative impact on us when we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.
I understand that, and maybe GitHub became a bad deal because of that.
But if anything, their post and your reply are precisely an endorsement of usage based billing.
The bit that's growing 13x YoY (and which they expect will easily blow past that) is unmetered - commits. The bit that is metered (for some, not all, folks) - Actions minutes - grew only 2x YoY.
GitHub was not built to limit the number of commits, checkouts, forks, issues, PRs, etc. - nor do we want them to - but that's what's growing ridiculously as people unleash hordes of busy beaver agents on GitHub, because they're either free or unlimited.
Where there are limits - or usage based billing - people add guardrails and find optimizations.
Because for all the talk, agents don't bring a 10x value increase; otherwise, they'd justify a 10x cost increase.
Besides, other forges are having issues too. Even running your own. We have Anubis everywhere protecting them for a reason.
That sounds bad. Paying users don't want huge and ever-growing numbers of freeloaders reducing the return for each dollar they spend...
That would only lead to further and further degradation of service until the paying customers were absolutely desperate to find a deal that didn't require them to lug around such a heavy ball and chain.
It all made sense at the beginning, when GitHub was free for OSS and OSS was thriving, but now these billions of commits are mostly incredibly low value. I'd bet the average commit now doesn't create 1/10th of the value the average commit did in, say, 2018.
You know, you can just host your own code forge. Or you can just drop gitolite on a server. Or pull directly from each others' dev machines on a LAN.
GitHub is not git.
So start a GitHub competitor that bills $50/dev/yr for solving this easy problem and make a lot of money?
I'm curious how Azure DevOps reliability has been, for comparison. My current job is managing stories in DevOps with SCC in GitHub Enterprise. While I like GitHub slightly more, I've been curious about the decision.
We use Azure DevOps at work for a few things. It's been pretty rock solid, since none of the agents recommend it and it's a different architecture.
It's also legacy at this point, since Microsoft is pouring all resources into GitHub, but most people/companies could probably use Azure DevOps just fine.
These numbers should have been in the blog post, not the graphs that are present.
> What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale
I think you're putting words in my mouth here; I didn't say either of those things. I'm saying that this blog post is a meaningless platitude when the github stability issues predate this, and that all this post says is "we hear you're having issues".
I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).
Either those charts are a bald-faced lie (the tweet could be as well), or there is no way for that chart to be showing anything other than exponential growth.
The only way to fake exponential growth like that would be to use an inverse log scale (which would be a bald-faced lie).
It doesn't even really matter what the y-axis baseline is, unless we really think growth was huge in 2020, then cratered to zero by 2023, and is now back to the previous normal.
As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.
You can already see people complaining loudly where, instead of "we'll do better", they decided to limit usage.
> I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).
The problem is that these charts show the massive exponential growth in 2026. But this didn't start in 2026; this has been going on since early last year. My team had more build failures in 2025 due to Actions outages or "degraded performance" than _any other reason_, and that includes PRs that failed linting or tests that developers were working on.
> As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.
IMO, this needed to be written 6 months ago (around the time that the memo about them prioritising the migration to Azure was released), and then this post should have been "We're still struggling, this isn't good enough. Here's the amount of growth, here's what we've done to try and fix it, and here's what we're planning over the next 3-6 months", instead of "Our priorities are clear: availability first, then capacity, then new features" and "We are committed to improving availability, increasing resilience, scaling for the future of software development, and communicating more transparently along the way." This isn't transparency (yet).
These are not the worst graphs in the world... Sure the bottom left axis is not labeled, but it still conveys the point correctly. The growth from 2023->2024->2025->2026 is accelerating quickly. And at the end of 2025/beginning of 2026, they say, there was more growth than in the three years before, combined!
You don't need to know the bottom left axis number. We do have to assume the graph is linear, and not some kind of negative exponent log graph. But given the rest of the content, I think that is safe to assume.
Any company that experiences significantly more growth than they were planning for will have capacity issues.
The priorities are mostly in line with that. They are way beyond the point where they can just add more hardware. They need to make the backend more efficient, and all the stated goals are about helping there.
> These are not the worst graphs in the world... Sure the bottom left axis is not labeled, but it still conveys the point correctly.
No, they're completely useless. Using the "New repos per month" one as an example: if the bottom left is 1m, then that's a 20x increase in 2 years, which is a lot. If the bottom left is 19m, it's a 5% increase in 2 years, which is nothing (quick arithmetic in the sketch below).
The massive surge on their labelled X axis starts in 2026, and these issues have been going on for a lot longer than that. GHA has been borderline unusable for a year at this point, if not longer.
> But given the rest of the content, I think that is safe to assume.
The rest of the content is "we're working on it", and "here's two outages in the last 14 days, one of which caused actual data loss"
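To make the baseline point concrete: the same chart top under two assumed bottoms, every number hypothetical since the axis is unlabeled:

```go
package main

import "fmt"

func main() {
	top := 20_000_000.0 // pretend the unlabeled chart tops out at 20M
	for _, bottom := range []float64{1_000_000, 19_000_000} {
		// same pixels on the chart, wildly different stories
		fmt.Printf("bottom %2.0fM -> top %.0fM: %.1fx, i.e. +%.0f%%\n",
			bottom/1e6, top/1e6, top/bottom, (top-bottom)/bottom*100)
	}
}
```

That prints a 20.0x increase in one case and roughly +5% in the other, from the identical-looking graph.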
> You don't need to know the bottom left axis number.
We very much do. The graph suggests insane growth in PRs, from almost zero to 90M. Now compare this misleading graph with this much clearer one, which shows that the growth over the last three years has been less than 80%: https://github.blog/wp-content/uploads/2025/10/octoverse-202...
Personally, I’m sympathetic. We know that GitHub did a huge amount of work over the last decade to make Git scale, which has benefited us all. These new scaling challenges are real; 30x growth would be a nightmare for any system that was already pushing the limits of what was possible. I think we are being far too hard on GitHub; they deserve a little grace.
GitHub's scaling issues are caused by their own vendor-lock-in approach and monopoly. Yes, of course _their_ goal is to be even bigger and even more all-consuming, so _they_ have to deal with the scale. Why would a user be sympathetic to that?
The user (and not a big tech monopoly) answer to scaling issues is almost always to stop scaling and start federating and interoperating.
For all the negatives about GitHub, I agree. They offer a lot of free stuff, and LLMs seem likely to massively increase their costs with no guarantee they'll be making money off it. I can't think of many (any?) large businesses which could scale up to meet so much new demand without some significant growing pains along the way.
I'm biased (founder of tangled.org), but the future really should be federated forges. Host repositories on sovereign infra with global identity + federated "metadata" (issues, pulls, etc.).
Global indices for this should be trivial to spin up so availability is never a concern (we're working towards this!).
Love the idea; would replace the LLM-generated content on your site, though.
I recently migrated to Codeberg because I'm okay with self-hosting big runners, while using Codeberg's available runners for smaller cron-based things (they even have lazy runners for this).
But, there are? I can host a repo on GitHub, Codeberg and self host it too. Then I need to watch over main to keep it consistent between those. After that's established, I can do updates from wherever. Link'em in the README.
There are distributed forges? Yes, git is distributed, but often everything around it isn't. The case the parent is trying to make is that the rest ("federated forges") should also be distributed, not just git.
Disclaimer: the author is a colleague of mine.
Though to be fair, what the parent meant by federated forges is different than this approach.
https://stackoverflow.com/questions/849308/how-can-i-pull-pu...
I'd say we have emails, mailing lists and bug trackers. Or maybe: what is the missing killer feature that needs federation?
> what is the missing killer feature that needs federation?
Issues, pull requests, collaboration/permissions/access, "starring"/"favoriting", etc.
I think ultimately the goal is that people can run their own forges, yet still collaborate on repositories hosted in other forges, leveraging your existing authentication so you no longer need to sign up individually for each forge.
Yeah sorry it's marketing BS speak for self-hosted or just infra that you control. It could be a VPS, it could be a Raspberry Pi at home. Your repos live on your servers. (And we support this on Tangled today!)
But a VPS isn't actually infrastructure you control, you essentially have as much control over it as "cloud", so I don't think that'd be counted as "sovereign", would it?
So if a company self-hosts their physical infrastructure, which will burn down once a fire breaks out, they are more "sovereign" than a company running on a redundant cloud? I definitely would not want to be "sovereign" then.
Point is: This discussion is much more multi-dimensional than some suggest.
I know it's just marketing speak, but the term made me think of the scenes in the Matrix where what's left of humanity (ignoring all the cyclical lore that was added on top of it) has to make sure the machines can't remote in to any of their tech.
I would love it if coding agents didn't default to GitHub for their deep VCS integration.
If I could get the same bells and whistles by wiring up another forge, so long as it offered a decent API and/or sent events over a webhook, I'd have everything self-hosted.
The agents would need to expose an interface on their own end, but as long as you implemented it with a plugin, it'd take away the dependency on GitHub, and you could use MCP or skills for the rest of it.
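The receiving end really can be tiny. A sketch, assuming hypothetical payload field names rather than any forge's actual schema:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Most forges (Gitea/Forgejo, GitLab, etc.) can POST push/PR events as
// JSON to a URL you configure; the agent side just accepts that and kicks
// off whatever workflow it runs for GitHub today.
type forgeEvent struct {
	Kind string `json:"kind"` // assumed field names, not any forge's real schema
	Repo string `json:"repo"`
	Ref  string `json:"ref"`
}

func main() {
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		var ev forgeEvent
		if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		log.Printf("event %s on %s@%s -> waking agent", ev.Kind, ev.Repo, ev.Ref)
		// here: enqueue the repo for the agent, clone via plain git, etc.
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```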
The neat thing about Tangled is it's built on an open protocol (https://atproto.com)—this allows us to effectively build an API-free system, since all data on Tangled can be ingested via the AT Protocol firehose.
Which is to say, this is perfect for agents given they don't need any bespoke SDK from us: simply write Tangled records for issues, pulls, whatever to your PDS and it'll show up on Tangled. We plan to start working on some exemplar agents first-party that would 1. enhance Tangled itself, 2. showcase cool things you can do with an open data firehose.
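A sketch of how small the agent side gets (com.atproto.repo.createRecord is the real XRPC write method; the collection NSID and record fields below are illustrative guesses, not Tangled's documented lexicon):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The record is written to your own PDS; Tangled picks it up from the firehose.
	body, _ := json.Marshal(map[string]any{
		"repo":       "did:plc:example",       // your account DID
		"collection": "sh.tangled.repo.issue", // hypothetical NSID
		"record": map[string]any{
			"$type": "sh.tangled.repo.issue", // must match the collection
			"title": "agent-filed issue",
			"body":  "found while running tests",
		},
	})
	req, _ := http.NewRequest("POST",
		"https://pds.example.com/xrpc/com.atproto.repo.createRecord",
		bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <access-jwt>") // from com.atproto.server.createSession
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```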
The internet should not be centralised, but you can't make a billion dollar company without capturing the world and selling your company to a trillion dollar company
It's a cute idea, but most people don't want to host their own stuff.
And if they are using 3rd parties to host their stuff, inevitably 1-3 big players will show up offering that as a service.
And even if you do host your own stuff to avoid availability problems, the big actors can still fail just like GH, and you can't do shit coz your dependencies need it.
So the solution is the same as it is now: proxy or mirror everything you use (rough sketch below).
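A read-through proxy can start absurdly small. A sketch (upstream and port are illustrative; real use would add caching and auth):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Point CI at this instead of the upstream forge/registry.
	upstream, err := url.Parse("https://github.com")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	d := proxy.Director
	proxy.Director = func(r *http.Request) {
		d(r)
		r.Host = upstream.Host // make upstream see its own hostname
	}
	http.Handle("/", proxy)
	log.Fatal(http.ListenAndServe(":8090", nil))
}
```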
* we had to resolve a variety of bottlenecks that appeared faster than expected, e.g. by moving webhooks to a different backend (out of MySQL)
* everything from redesigning the user session cache to redoing authentication and authorization flows, to substantially reduce database load
* we accelerated migrating parts of the performance- or scale-sensitive code out of the Ruby monolith into Go.
I'd like to know what database backend they migrated to. I was also surprised to read that the migration from Ruby to a more performant language had not already been completed. I assume this is because it is a large code base with many moving parts, etc.
Hah, love that now they say "Our priorities are clear: availability first, then capacity, then new features" when 6 months ago, it was seemingly exactly the same except Azure supposedly was gonna save them:
> GitHub Will Prioritize Migrating to Azure Over Feature Development - GitHub is working on migrating all of its infrastructure to Azure, even though this means it'll have to delay some feature development.
> In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example. And it was "existential" 6 months ago yet they keep stumbling on the exact same issue today?
Even if they're focused exclusively on reliability and uptime, we get the experience that we have today. Kind of incredible how a company with the resources of Microsoft seemingly is unable to stop continuously shooting itself in the foot. It's kind of impressive actually. As icing on the cake, they've decided to buy up all popular developer services and then migrate them all to the same platform; great idea too.
That's a delayed April fool's, right?
They did that as a panic mode hack to mitigate performance: https://news.ycombinator.com/item?id=47912521
> So the currently delayed feature development is now gonna be further delayed, yet almost every week we see new features and changes, just the other day the single issues view was changed, as just one example.
This seems uncharitable. Priorities aren't exclusive, especially at scale across large engineering orgs like GitHub. It could be that these are the top level priorities, but teams or individuals who aren't able to contribute to these priorities will work on other things like new features.
Ditto. I agree though, just because the priority is reliability, doesn't mean others can't work on features, especially features that might help with reliability, which I read was the motivation behind the new single-issue view, so that's my bad, might have been a bit much.
I still think the rest of my point stands, especially the last one, which is the move that has the biggest impact on most of us developers.
Agree that priorities aren't exclusive and there may be teams/individuals that aren't able to contribute if they stay in their current teams/roles
Where it becomes questionable though is when enough progress isn't being made on the top priority (reliability). If Github is being true to their word, they need to be pulling people off of teams that are working on features to work on reliability so that top priority gets the resourcing it needs.
Given the pace of improvement, and the cited example of moving to Azure from months ago, it's not super clear they are doing that. Also not clear that they aren't, maybe the move to Azure is just a more than 6mo project no matter how many people are on it.
Sure, but frontend devs fundamentally cannot contribute to the structural reliability issues.
The person who rewrote the issue page view probably doesn't know anything about multi-cloud scaling for millions of users with Azure-crippling throughput. That's an incredibly specialized set of knowledge and experience that is utterly disjunct to frontend work.
But at the same time, given the state that GitHub is in, I personally wouldn't want to allow any devs to push anything to prod that doesn't immediately affect stability. I'd completely freeze frontend work until the infrastructure is more stable. But then again I write C for microcontrollers so what do I know?
I don't know their architecture, but I would bet that if FE devs want to contribute to availability in a capacity-constrained world (as GH's CTO mentions), they could focus on profiling and optimization: backend access patterns, for example, caching, etc. Maybe they already have people dedicated to that, but if they are coming out of a "new features first" operating regime, I would bet there's some fruit to pick there.
It's entirely possible the move to Azure has made the availability problems worse. Dedicated hardware is much more predictable than cloud. "Let's not move to Azure and instead buy a few more racks" was likely a decision beyond the pay grade of github's management.
This entire exercise, if anything, is a huge indictment of Azure.
But that doesn't matter, because the kind of person that buys Azure, just like the kind of person that buys MS Teams, is entirely driven by price and does not care about anything else.
I might buy that argument if Azure compensated for its awful availability and security with lower prices.
But the kind of person who buys Azure is the kind of person who buys Windows and Teams, perfectly happy to pay a premium for all the extra abuse.
There is so much workload running on Azure, I never heard of VMs going away.
If Microsoft can source hardware for Azure, Microsoft can source hardware for GitHub.
I've had Windows Server VMs soft crash and hard crash on Azure. Some soft-lock, and a restart via Azure gets them back. Sometimes the only fix has been to power off / deprovision, then power on again (i.e. a restart didn't fix it). It's not common, but I've encountered it multiple times. These are with operating systems that were created in Azure from their images.
There's a lot that can go wrong with a hypervisor, even including hiding hardware issues from the guest OS.
We don't think about it because we've been quite spoiled with excellent virtual machine platforms (KVM, Xen and even VMWare).
Those who have worked a lot with VirtualBox will be aware of this; it can be deeply unnerving that VM technology is the default way to deploy things after you've spent sufficient time with VirtualBox (which is very good for its original purpose, but not for reliability).
The question is: Does Azure use something more like VirtualBox, or more like KVM?
Hyper-V exhibits properties closer to VirtualBox.
If they had not added or changed any features on GitHub for the past 5 years, nobody would be upset, and yet they keep changing it. It's a website that doesn't need to be reworked every five minutes. I assume the main development teams maintaining GitHub's codebase are run by managers who cannot justify their jobs unless they deliver new features for the sake of delivering new features to keep their jobs going, and/or in the hopes of getting new people to join GH, when in reality the more they wind up breaking, the more the opposite becomes true.
They severely nerfed their search, I'm not sure why every other major tech company (Google - Search and YouTube) keeps breaking search for everything when it was working fine previously.
What's a bigger joke is Microsoft has Azure DevOps, which looks like it might be abandoned? But then you also have GitHub... My least favorite thing about both is the ticketing system. I cannot believe that I'd ever utter the phrase "I miss Jira", when every Jira project I've ever been in had been so inconsistently set up. Every, single, one.
>What's a bigger joke is Microsoft has Azure DevOps which looks like it might be abandoned?
My favorite was trying to figure out how to publish debug symbols with NuGet packages to Azure DevOps artifact feeds. Horrible documentation and I was never able to get it figured out.