Can I run AI locally? Cairn checks your GPU against Llama, DeepSeek, Qwen and 50+ open-weight LLMs
Same question every time a new model drops: can my GPU actually handle this? Stumbled on Can I Run AI, which compares your hardware (VRAM, RAM, CPU) against the requirements of popular open models — Llama, Mistral, Qwen, Stable Diffusion, etc. It also estimates rough tokens-per-second for your setup.
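For anyone who wants the back-of-envelope version of what a tool like this does: VRAM needed is roughly parameter count times bytes per parameter at your quantization level, plus some overhead for KV cache and activations. A minimal sketch (the constants, model entries, and 20% overhead figure here are my own illustrative assumptions, not the tool's actual data or method):

```python
# Rough "can I run it" VRAM check. All numbers are rule-of-thumb
# assumptions for illustration, not taken from any specific tool.

# Approximate bytes per parameter at common quantization levels.
QUANT_BYTES = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def estimate_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Estimate GB of VRAM to load the weights, with ~20% overhead
    for KV cache and activations (a common rule of thumb)."""
    return params_billion * QUANT_BYTES[quant] * overhead

def can_run(params_billion: float, quant: str, vram_gb: float) -> bool:
    """True if the estimated footprint fits in the given VRAM."""
    return estimate_vram_gb(params_billion, quant) <= vram_gb

# A 7B model at 4-bit on an 8 GB card: 7 * 0.5 * 1.2 = 4.2 GB, so it fits;
# a 70B model at fp16 on a 24 GB card does not.
print(can_run(7, "q4", 8))
print(can_run(70, "fp16", 24))
```

This ignores context length (KV cache grows with it) and CPU offloading, which is exactly why a tool with real per-model data beats the napkin math.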
Curious what tooling other people lean on for this.
Not affiliated, sharing in case it's useful — definitely cleaner than spreadsheet-checking VRAM tables by hand every time HuggingFace ships something new.