45 points by andyyyy64 1 hour ago | 7 comments
pornel 5 minutes ago
It looks nice. I've been searching for something like this recently, and was frustrated with rankings that lack the latest models or don't clearly distinguish between quantizations.

Showing quality loss per quantization is nice.

I'd prefer this as a website, since I'd handle running the model with a dedicated inference server anyway.

It would be nice to see the maximum context length that can fit on top of the baseline.
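Roughly the calculation I have in mind, as a sketch (the layer/head numbers and the overhead are assumptions, not any real model's config; bytes_per_elem drops if the KV cache is quantized):

    # Estimate how much context fits in the VRAM left over after the weights.
    def kv_bytes_per_token(n_layers, n_kv_heads, head_dim, bytes_per_elem=2.0):
        # K and V per layer; ~2 bytes/elem for an f16 KV cache, ~1 for q8_0
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

    def max_context(total_vram_gb, weights_gb, overhead_gb=1.0, **model):
        free = (total_vram_gb - weights_gb - overhead_gb) * 1024**3
        return int(free // kv_bytes_per_token(**model))

    # hypothetical 8B model with GQA: 32 layers, 8 KV heads, head_dim 128
    print(max_context(24, 5.0, n_layers=32, n_kv_heads=8, head_dim=128))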

I was surprised how much token generation speed tanks with very long context. 30 tokens/s can drop to 2 tokens/s. A single speed metric didn't prepare me for that.

I was also positively surprised that some models scale well with batch parallelism: I can get a 4x speed improvement by running 8 requests in parallel. But this affects memory requirements and doesn't apply to all models and inference engines, so it would be nice to show that too. Some sites fold it into "what's your workflow", but that's too opaque.

KV cache quantization also makes a difference for speed, VRAM usage and max usable context.

On Apple Silicon, MLX-compatible model builds make a difference, so I'd like benchmarks that reassure me the numbers are based on the fastest implementation.

Multi-token prediction is another aspect that can substantially change speed.

sleepyeldrazi 4 minutes ago
I love this community. I started building a simple website for exactly this a couple of hours ago, and you've already made an even more advanced version. Hats off to you, sir.

If I ever decide to actually publish the site, is it alright if I mention you somewhere, along the lines of "If you want a more accurate estimation, check out this project: <your repo>"? I think there's value in a simple website that estimates this for you and gives you instructions and common flags for starting it yourself (plus a prompt you can optionally hand to an LLM to set it up for you). But I'm going off a simple "choose an OS, GPU/VRAM, here's a list of options" flow rather than actually scanning the machine (which is a lot more accurate).
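For context, my side of it is basically just a lookup like this (the model names and sizes are placeholders I made up, not real measurements):

    # Minimal sketch of the "choose your VRAM, get a list" approach.
    CATALOG = {
        "some-7b-q4": 4.5,    # approx GB needed, placeholder numbers
        "some-13b-q4": 8.0,
        "some-70b-q4": 40.0,
    }

    def models_that_fit(vram_gb, headroom_gb=2.0):
        budget = vram_gb - headroom_gb  # leave room for KV cache + runtime
        return [name for name, gb in CATALOG.items() if gb <= budget]

    print(models_that_fit(16))  # -> ['some-7b-q4', 'some-13b-q4']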

llagerlof 14 minutes ago
What's new here compared to llmfit?

https://github.com/AlexsJones/llmfit

rvz 11 minutes ago
Other than it (whichllm) being written in Python, nothing.

I just use llmfit.

Jasssss 39 minutes ago
The plan command is clever. How do you handle the VRAM estimation for models with sliding window attention vs full context? Something like Mistral at 32k context uses way less KV cache than Llama at the same context length, but from the README it looks like the estimation is based on a fixed context size. Does it account for that?
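Back-of-the-envelope, the kind of gap I mean (the layer count, heads, and window size here are illustrative, not the real configs of either model):

    # Sliding-window layers cap the KV cache at the window size; full-attention
    # layers grow linearly with the whole context.
    def kv_gb(n_layers, n_kv_heads, head_dim, tokens, bytes_per_elem=2):
        return 2 * n_layers * n_kv_heads * head_dim * tokens * bytes_per_elem / 1024**3

    ctx, window = 32_000, 4_096
    full = kv_gb(32, 8, 128, ctx)               # every layer sees all 32k tokens
    swa  = kv_gb(32, 8, 128, min(ctx, window))  # every layer capped at the 4k window
    print(round(full, 2), round(swa, 2))        # ~3.91 vs ~0.5 GB

A fixed-context estimate would report the first number for both.
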
Bigsy 25 minutes ago
Brew install is broken

It seems pretty rubbish, I have to say: it's recommending me loads of Qwen 2.5 models, which are really old, while I'm easily running qwen3.5 and 3.6 models on this Mac at decent quants.

macwhisperer 13 minutes ago
can you add in the other quants like IQ3_M?

also my personal simple rule of thumb for local ai sizing is:

max model size (GB) = ram (GB) / 1.65
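just applying that formula across common RAM sizes:

    for ram in (16, 32, 64, 128):
        print(ram, "GB RAM ->", round(ram / 1.65, 1), "GB max model")
    # 16 -> 9.7, 32 -> 19.4, 64 -> 38.8, 128 -> 77.6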

kramit1288 21 minutes ago
accurate memory estimation is key here. it will crash if the estimate isn't accurate, and it can't be generic for all local llms; each local llm has different context memory needs.