GitLab is right there, and overall it provides a better product than GitHub, if nothing else on these two points:
* You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Access controls and CI variables are also inherited down through the namespaces.
* GitLab CI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the NPM/JS approach, where every step is an action, usually one you pull in from a third party, with shoddy versioning, tons of transitive dependencies, etc. In GitLab CI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all.
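As a sketch of that last point, a hypothetical `.gitlab-ci.yml` (the image name and template path here are made up) that depends only on an internally built image and an in-repo template, with no external actions at all:

```yaml
# Hypothetical pipeline: image registry and template path are illustrative.
include:
  - local: ci/lint-template.yml   # in-repo template, no third-party dependency

build:
  stage: build
  image: registry.example.com/our-team/node-build:18  # custom, internally built image
  script:                         # plain shell, fully under your control
    - npm ci
    - npm run build
```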
SSO, access tokens, secrets are all bound to the Organization level - if you work on multiple Organizations you have to log in separately... You also cannot have nested Organizations.
Yeah, agreed it's not great for that. I'm not real happy with GitHub's worsening UX either, but it'll at least show the _names_ of all the files in the PR.
With GitLab, when you hit the rate limit, any file "past" that limit doesn't even show that it exists in the MR. It just looks like the MR is missing a bunch of stuff, with no workaround available. :( :( :(
I’ve personally been deeply unappreciative of GitHub’s changes over the last few years that automatically collapse diffs for “large files” so you have to click to open them - a threshold that seems to keep shrinking. Maybe three screenfuls of content is the limit per file now. It’s crazy.
I mean more like a full GitHub competitor. GitLab exists, but more competition is generally better for the consumer, and it looks like GitHub's lead is starting to falter with all these incidents.
To be fair, it feels like the DNS service has been the most reliable part of our Azure infra. Never really had issues with it, whether with traffic or API calls.
More seriously, keeping a local cache of external npm packages, and a local artifact storage for internal npm packages looks like a wise thing to have done long ago. Might be cheaper in the long run.
Ironically, both Nandu and Verdaccio are implemented in TypeScript and install via npm.
(Same logic obviously applies to Python packages, Docker images, etc.)
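For reference, a minimal setup sketch using Verdaccio (one of the registries mentioned above). Port 4873 is Verdaccio's default; in practice the registry would live on an internal host rather than localhost:

```shell
# Sketch: run Verdaccio as a local npm proxy/cache and point npm at it.
npm install -g verdaccio
verdaccio &                                    # serves http://localhost:4873 by default
npm config set registry http://localhost:4873/
# External packages are now fetched (and cached) through the proxy; internal
# packages can be published to it:
#   npm publish --registry http://localhost:4873/
```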
Caching npm was easier when you could pull from the CouchDB replication API. Afaik that's gone, and now you just have to send a bazillion HTTP requests instead.
Does IPFS support content eviction now? If not, that could go wrong really fast. You get a compromised package out there and then, I think, literally every node needs to unpin it or it remains.
Presumably, however you mark a version as latest would also be how you mark one as compromised. IPFS files are immutable and keyed by hash. But this seems like overengineering.
At my former job we had a private registry that mirrored npm’s, with an approval gate for packages devs would request, and it always pinned versions.
I took that for granted back then and just assumed it was standard enterprise policy
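The client side of such a policy can be sketched in two `.npmrc` lines (the mirror URL is hypothetical; `save-exact` is the real npm setting that makes installs record pinned versions):

```ini
; .npmrc — sketch of the client side of the policy above
registry=https://npm-mirror.internal.example.com/
save-exact=true        ; `npm install foo` records "1.2.3", not "^1.2.3"
```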
Multiple previous jobs had this too (a local Packagist is one option, Artifactory is another), but my current job got rid of theirs. Seemed a little short-sighted given the risks, but I don't make the decisions.
> a local artifact storage for internal npm packages looks like a wise thing to have done long ago
Deno already does this invisibly by default.
All packages are stored in the global cache.
No need to store multiple versions of the same dependencies across projects.
To the code in your projects: there is no such thing as a global cache. Just import your dependencies like normal and deno maps them to the global cache.
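As a sketch of what that looks like in practice (the version range here is illustrative), a project's `deno.jsonc` maps bare specifiers to registry URLs, and every project resolving the same URL shares one copy in the global cache:

```jsonc
// deno.jsonc — illustrative import map; the jsr version range is made up
{
  "imports": {
    "@std/assert": "jsr:@std/assert@^1.0.0"
  }
}
```

Project code then just writes `import { assertEquals } from "@std/assert";`, and `deno install` populates the shared cache under `$DENO_DIR` rather than a per-project `node_modules`.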
libc is still working just fine, as is the Linux kernel. Mayhaps having 2000 dependencies on 3000 packages from 4000 unvetted sources was a mistake after all?
They seem to be doing a pretty good job at wrecking both GitHub and npm at the same time.
> * You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Access controls and CI variables are also inherited down through the namespaces.
> * GitLab CI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the NPM/JS approach, where every step is an action, usually one you pull in from a third party, with shoddy versioning, tons of transitive dependencies, etc. In GitLab CI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all.
so, while you’re technically right, these features are apparently paywalled heavily on github.
ime you get more features on gitlab for the same price (or less). i switched fully two years ago and im not going back.
> SSO, access tokens, secrets are all bound to the Organization level - if you work on multiple Organizations you have to log in separately... You also cannot have nested Organizations.
It's a problem they know about, but have no plan to fix before 2027.
Thus, we're moving off GitLab.
The "surprise, you can't review all the files in your PR" experience with GitLab's standard web-based tooling makes it a no-go.
> With GitLab, when you hit the rate limit, any file "past" that limit doesn't even show that it exists in the MR. It just looks like the MR is missing a bunch of stuff, with no workaround available. :( :( :(
https://developers.cloudflare.com/artifacts/
If it's not DNS it's MTU if you're a person and BGP if you're a company.
Both yarn and pnpm support HTTP/2, which speeds up the bazillion requests quite a bit.
:)
Keep up the good work Microsoft.
Let's shoot for 100% downtime though. Thanks.