Show HN: LocalLLM – Recipes for Running Local LLMs (Need Contributors) (locallllm.fly.dev)
I built localLLLM: a small community project collecting recipes for running local models.
Live: https://locallllm.fly.dev
The goal is simple: given a model + OS + GPU + RAM combination, you should get steps that actually work (ideally a one-liner).
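To give a feel for the format (my own illustrative example, not a recipe from the site), on a machine that already has Ollama installed, a recipe could be as short as:

  ollama run llama3.2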
I need help populating and validating guides.
If you run local models, please submit one working recipe (or report what failed). I'd love to hear general feedback as well!
*Edit: op has been vibe coding all over; couldn't op just vibe code the guides themselves, without human input?