105 points by lostmsu 2 hours ago | 14 comments
mstaoru 10 minutes ago
I periodically try to run these models on my MBP M3 Max 128GB (which I bought with a mind to run local AI). I have a certain deep research question (in a field that is deeply familiar to me) that I ask when I want to gauge a model's knowledge.

So far Opus 4.6 and Gemini Pro are very satisfactory, producing great answers fairly fast. Gemini is very fast at 30-50 sec; Opus is very detailed and comes in at about 2-3 minutes.

Today I ran the question against the local qwen3.5:35b-a3b - it puffed away for 45 (!) minutes, produced a very generic answer with errors, and made my laptop sound like it's going to take off at any moment.

I wonder what I'm doing wrong... How am I supposed to use this for any agentic coding on a large enough codebase? It would take days (and a 3M Peltor X5A) to produce anything useful.

aspenmartin 1 minute ago
Well, Opus and Gemini are probably running on multiple H200 equivalents, maybe hundreds of thousands of dollars of inference equipment. Local models are inherently inferior; even the best Mac that money can buy will never hold a candle to latest-generation Nvidia inference hardware, and the local models, even the largest, are still not quite at the frontier. The ones you can plausibly run on a laptop (where "plausible" really means "45 minutes and making my laptop sound like it is going to take off at any moment") are further behind still. Like they said, you're getting Sonnet 4.5 performance, which is two generations ago; speaking from experience, Opus 4.6 is night and day compared to Sonnet 4.5.
furyofantares 1 minute ago
[delayed]
culi 1 minute ago
Well, you can't run Gemini Pro or Opus 4.6 locally, so aren't you just comparing a locally run model to cloud platforms?
notreallya 5 minutes ago
Sonnet 4.5 level isn't Opus 4.6 level, simple as
zozbot234 8 minutes ago
Running local AI models on a laptop is a weird choice. The Mini and especially the Studio form factors have better cooling, lower prices for comparable specs, and a much higher ceiling on performance and memory capacity.
alexpotato 46 minutes ago
I recently wrote a guide on getting:

- llama.cpp

- OpenCode

- Qwen3-Coder-30B-A3B-Instruct in GGUF format (Q4_K_M quantization)

working on a M1 MacBook Pro (e.g. using brew).

It was a bit finicky to get all of the pieces working together, so hopefully this can be reused with these newer models.

https://gist.github.com/alexpotato/5b76989c24593962898294038...

kpw94 3 minutes ago
On my 32GB Ryzen desktop (recently upgraded from 16GB before the RAM prices went up another +40%), did the same setup of llama.cpp (with Vulkan extra steps) and also converged on Qwen3-Coder-30B-A3B-Instruct (also Q4_K_M quantization)

On the model choice: I've tried the latest Gemma, Ministral, and a bunch of others, but Qwen was definitely the most impressive (and much faster at inference thanks to the MoE architecture), so I can't wait to try Qwen3.5-35B-A3B if it fits.

I've no clue which quantization to pick, though... I picked Q4_K_M at random; was your choice of quantization more educated?
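A rough way to compare quants is bits per weight. The sketch below estimates GGUF file size from parameter count; the bits-per-weight figures are commonly quoted community approximations, not authoritative llama.cpp numbers, so treat the output as a ballpark only:

```python
# Rough GGUF size estimate from parameter count and bits per weight.
# The bits/weight values below are approximate community figures,
# NOT authoritative llama.cpp numbers.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # assumption: commonly quoted average
    "Q5_K_M": 5.7,   # assumption
    "Q8_0": 8.5,     # assumption (8 bits + per-block scale overhead)
    "BF16": 16.0,
}

def est_size_gb(params_billions: float, quant: str) -> float:
    """Estimated model file size in GB (decimal), ignoring metadata overhead."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billions * 1e9 * bits / 8 / 1e9

print(round(est_size_gb(30, "Q4_K_M"), 1))   # -> 18.2
print(round(est_size_gb(122, "BF16"), 1))    # -> 244.0
```

By this estimate a 30B model at Q4_K_M lands around 18 GB, which is roughly why it fits (with some context) in 24-32 GB but not 16 GB, and why a 122B model at BF16 comes out in the ~244 GB range.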

freeone3000 17 minutes ago
We can also run LM Studio and get it installed with one search and one click, exposed through an OpenAI-compatible API.
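Both LM Studio and llama.cpp's llama-server speak the OpenAI chat-completions wire format, so any OpenAI-style client works against them. A minimal sketch; the base URL (LM Studio's documented default port is 1234) and the model name are assumptions, so substitute whatever your local server reports:

```python
import json

# Minimal OpenAI-compatible chat request for a local server.
BASE_URL = "http://localhost:1234/v1"  # assumed LM Studio default; check yours

def make_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

body = make_chat_request("qwen3.5-35b-a3b", "Write a haiku about MoE models.")
# To actually send it (requires the `requests` package and a running server):
#   requests.post(f"{BASE_URL}/chat/completions", json=body).json()
print(json.dumps(body, indent=2))
```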
copperx 45 minutes ago
How fast does it run on your M1?
robby_w_g 18 minutes ago
Does your MBP have 32 GB of ram? I’m waiting on a local model that can run decently on 16 GB
solarkraft 1 hour ago
Smells like hyperbole. A lot of people making such claims don't seem to have continued real-world experience with these models, or seem to have very weird standards for what they consider usable.

Up until relatively recently, while people had already long been making these claims, it came with the asterisk of "oh, but you can't practically use more than a few K tokens of context".

tempest_ 44 minutes ago
Qwen3-Coder-30B-A3B-Instruct is good, I think, for inline IDE integration or operating on small functions or library code, but I don't think you'll get too far with the one-shot feature implementation that people are currently doing with Claude or whatever.
andy_ppp 16 minutes ago
I have been adding a one-shot feature to a codebase with ChatGPT 5.3 Codex in Cursor. It worked out of the box, but then I realised everything it had done was super weird and didn't work under a load of edge cases. I've tried being super clear about how to fix it, but the model is lost. This was not a complex feature at all, so hopefully I'm employed for a few more years yet.
solarkraft 1 hour ago
What are the recommended 4 bit quants for the 35B model? I don’t see official ones: https://huggingface.co/models?other=base_model:quantized:Qwe...

Edit: The unsloth quants seem to have been fixed, so they are probably the go-to again: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks

sunkeeh 38 minutes ago
Qwen3.5-122B-A10B BF16 GGUF = 224GB. The "80GB VRAM" mentioned here will barely fit Q4_K_S (70GB), which will NOT perform as shown on benchmarks.

Quite misleading, really.

mark_l_watson 1 hour ago
The new 35B model is great. That said, it has slight incompatibilities with Claude Code. It is very good for tool use.
johnnyApplePRNG 56 minutes ago
Claude Code is designed for Anthropic models. Try it with OpenCode!
kristianpaul 46 minutes ago
Or Pi
copperx 44 minutes ago
Or Oh My Pi
kristianpaul 38 minutes ago
https://unsloth.ai/docs/models/qwen3.5#qwen3.5-27b: "Qwen3.5-27B: For this guide we will be utilizing Dynamic 4-bit, which works great on 18GB RAM"
kristianp 28 minutes ago
18GB was an odd three-channel one-off for the M3 Pros. I guess there are a bunch of them out there, but how slow would the 27B be on one, given that it's not an MoE model?
gunalx 34 minutes ago
Qwen 3.5 is really decent, outside of some weird failures on some scaffolding with seemingly differently trained tools.

Strong vision and reasoning performance, and the 35B-A3B model runs pretty OK on a 16GB GPU with some CPU layers.

erelong 1 hour ago
What kind of hardware does HN recommend or like to run these models?
suprjami 1 hour ago
The cheapest option is two 3060 12G cards. You'll be able to fit the Q4 of the 27B or 35B with an okay context window.

If you want to spend twice as much for more speed, get a 3090/4090/5090.

If you want long context, get two of them.

If you have enough spare cash to buy a car, get an RTX Ada with 96G VRAM.

barrkel 35 minutes ago
RTX 6000 Pro Blackwell, not Ada, for 96GB.
andsoitis 49 minutes ago
For fast inference, you’d be hard pressed to beat an Nvidia RTX 5090 GPU.

Check out the HP Omen 45L Max: https://www.hp.com/us-en/shop/pdp/omen-max-45l-gaming-dt-gt2...

laweijfmvo 32 minutes ago
I never would have guessed that in 2026, data centers would be measured in Watts and desktop PCs measured in liters.
dajonker 56 minutes ago
Radeon R9700 with 32 GB VRAM is relatively affordable for the amount of RAM and with llama.cpp it runs fast enough for most things. These are workstation cards with blower fans and they are LOUD. Otherwise if you have the money to burn get a 5090 for speeeed and relatively low noise, especially if you limit power usage.
zozbot234 44 minutes ago
It depends. How much are you willing to wait for an answer? Also, how far are you willing to push quantization, given the risk of degraded answers at more extreme quantization levels?
elorant 44 minutes ago
Macs or a strix halo. Unless you want to go lower than 8-bit quantization where any GPU with 24GBs of VRAM would probably run it.
xienze 1 hour ago
It's less than you'd think. I'm using the 35B-A3B model on an A5000, which is something like a slightly faster 3080 with 24GB VRAM. I'm able to fit the entire Q4 model in memory with 128K context (and I think I would probably be able to do 256K since I still have like 4GB of VRAM free). The prompt processing is something like 1K tokens/second and generates around 100 tokens/second. Plenty fast for agentic use via Opencode.
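KV cache growth is what usually eats the VRAM budget at long context, so here's a back-of-envelope sketch. The layer/head numbers below are illustrative placeholders, not Qwen3.5-35B-A3B's actual config:

```python
def kv_cache_gb(ctx_tokens: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache size in GB: 2 tensors (K and V) per layer, one vector
    of n_kv_heads * head_dim elements per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_tokens / 1e9

# Illustrative placeholder dimensions, NOT the model's real config:
size = kv_cache_gb(ctx_tokens=131072, n_layers=48, n_kv_heads=4, head_dim=128)
print(round(size, 1))  # -> 12.9 (GB at fp16 with these placeholder numbers)
```

Small KV-head counts (grouped-query attention) are what keep this linear-in-context cost manageable, and llama.cpp can additionally quantize the KV cache (e.g. to q8_0), which halves the fp16 figure.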
rahimnathwani 1 hour ago
There seem to be a lot of different Q4s of this model: https://www.reddit.com/r/LocalLLaMA/s/kHUnFWZXom

I'm curious which one you're using.

suprjami 58 minutes ago
Unsloth Dynamic. Don't bother with anything else.
rahimnathwani 33 minutes ago
UD-Q4_K_XL?
msuniverse2026 56 minutes ago
I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?
pja 2 minutes ago
> I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?

Sure. llama.cpp will happily run these kinds of LLMs using either HIP or Vulkan.

Vulkan is easier to get going with the Mesa OSS drivers under Linux; HIP might give you slightly better performance.

wirybeige 37 minutes ago
The Vulkan backend for llama.cpp isn't that far behind ROCm for prompt-processing and token-generation speeds.
CamperBob2 45 minutes ago
I think the 27B dense model at full precision and 122B MoE at 4- or 6-bit quantization are legitimate killer apps for the 96 GB RTX 6000 Pro Blackwell, if the budget supports it.

I imagine any 24 GB card can run the lower quants at a reasonable rate, though, and those are still very good models.

Big fan of Qwen 3.5. It actually delivers on some of the hype that the previous wave of open models never lived up to.

MarsIronPI 35 minutes ago
I've had good experience with GLM-4.7 and GLM-5.0. How would you compare them with Qwen 3.5? (If you have any experience with them.)
PunchyHamster 18 minutes ago
I asked it to recite "potato" 100 times because I wanted to benchmark CPU vs GPU speed. It's on line 150 of planning. It has recited the requested thing 4 times already and started drafting a 5th response.

...yeah I doubt it

lachiflippi 6 minutes ago
Qwen3.5 pretty much requires a long system prompt; otherwise it goes into a weird planning mode where it reasons for minutes about what to do and double- and triple-checks everything it does. Both Gemini's and Claude Opus 4.6's prompts work pretty well, but they are so long that whatever you're using to run the model has to support prompt caching. Asking it to "Say the word "potato" 100 times, once per line, numbered.", for example, results in the following reasoning, followed by the word "potato" on 100 numbered lines, using the smallest (and therefore dumbest) quant, unsloth/Qwen3.5-35B-A3B-GGUF:UD-IQ2_XXS:

"User is asking me to repeat the word "potato" 100 times, numbered. This is a simple request - I can comply with this request. Let me create a response that includes the word "potato" 100 times, numbered from 1 to 100.

I'll need to be careful about formatting - the user wants it numbered and once per line. I should use minimal formatting as per my instructions."
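A practical consequence of the prompt-caching point: keep the long system prompt byte-identical across requests, so a server that caches by common prefix (llama.cpp's llama-server can reuse a matching cached prefix) only pays the processing cost once. A minimal sketch; the model name and system text are placeholders:

```python
# Keep the expensive system prompt byte-identical across requests so an
# OpenAI-compatible server with prefix caching only processes it once.
SYSTEM_PROMPT = "You are a coding agent. <several thousand tokens of rules>"  # placeholder

def chat_body(user_msg: str) -> dict:
    """Build a chat-completions body with a constant system prefix."""
    return {
        "model": "qwen3.5-35b-a3b",  # assumed local model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},  # identical every call
            {"role": "user", "content": user_msg},
        ],
    }

a = chat_body('Say the word "potato" 100 times, once per line, numbered.')
b = chat_body("Now do it in French.")
# Identical prefix -> a prefix-caching server skips reprocessing it.
assert a["messages"][0] == b["messages"][0]
```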

lumirth 14 minutes ago
Well, hold on now, maybe it's onto something. Do you really know what it means to "recite" "potato" "100" "times"? Each of those words could be pulled apart into a dissertation-level thesis and analysis of language, history, and communication.

Either that, or it has a delusional level of instruction following. Doesn't mean it can't code like Sonnet, though.

PunchyHamster 2 minutes ago
It's still amusing to see seemingly simple things like this put it into a loop; it is still going.

> do you really know what it means to “recite” “potato” “100” “times”?

Asking the user a question is an option. Sonnet did that a bunch when I was trying to debug some network issue. It also forgot facts that had been checked for it and that it had been told before...

kristianpaul 47 minutes ago
They work great with kagi and pi
aliljet 1 hour ago
Is this actually true? I want to see actual evals that match this up with Sonnet 4.5.
magicalhippo 42 minutes ago
The Qwen3.5 27B model did almost the same as Sonnet 4.5 in this[1] reasoning benchmark, results here[2].

Obviously there's more to a model than that but it's a data point.

[1]: https://github.com/fairydreaming/lineage-bench

[2]: https://github.com/fairydreaming/lineage-bench-results/tree/...

lostmsu 1 hour ago
Not exactly, but pretty close: https://artificialanalysis.ai/models/capabilities/coding?mod...

Somewhere between Haiku 4.5 and Sonnet 4.5

CharlesW 1 hour ago
> Somewhere between Haiku 4.5 and Sonnet 4.5

That's like saying "somewhere between Eliza and Haiku 4.5". Haiku is not even a so-called 'reasoning model'.¹

¹ To preempt the easily-offended, this is what the latest Opus 4.6 in today's Claude Code update says: "Claude Haiku 4.5 is not a reasoning model — it's optimized for speed and cost efficiency. It's the fastest model in the Claude family, good for quick, straightforward tasks, but it doesn't have extended thinking/reasoning capabilities."

pityJuke 50 minutes ago
Haiku 4.5 is a reasoning model. [0]

[0]: https://www-cdn.anthropic.com/7aad69bf12627d42234e01ee7c3630...

> Claude Haiku 4.5, a new hybrid reasoning large language model from Anthropic in our small, fast model class.

> As with each model released by Anthropic beginning with Claude Sonnet 3.7, Claude Haiku 4.5 is a hybrid reasoning model. This means that by default the model will answer a query rapidly, but users have the option to toggle on “extended thinking mode”, where the model will spend more time considering its response before it answers. Note that our previous model in the Haiku small-model class, Claude Haiku 3.5, did not have an extended thinking mode.

CharlesW 39 minutes ago
Sure, marketing people gonna market. But Haiku's 'extended thinking' mode is very different from the reasoning capabilities of Sonnet or Opus.

I would absolutely believe mar-ticles that Qwen has achieved Haiku 4.5 'extended thinking' levels of coding prowess.

DetroitThrow 31 minutes ago
>Sure, marketing people gonna market.

Oh HN never change.

CharlesW 24 minutes ago
I'm marketing people, I can say that.
pinum 59 minutes ago
Looks much closer to Haiku than Sonnet.

Maybe "Qwen3.5 122B offers Haiku 4.5 performance on local computers" would be a more realistic and defensible claim.

xenospn 2 hours ago
Are there any non-Chinese open models that offer comparable performance?
MarsIronPI 34 minutes ago
I think you could look into Mistral. There's also GPT-OSS, but I'm not sure how well it stacks up.

What's your problem with Chinese LLMs?

u1hcw9nx 2 hours ago
[flagged]
ramon156 1 hour ago
Ironically, Chinese models have so far been less lobotomized than OAI's and Anthropic's models.
u1hcw9nx 58 minutes ago
Qwen has been broadly aligned to give positive messages about China in English. https://chinamediaproject.org/2026/02/09/tokens-of-ai-bias/

An Analysis of Chinese LLM Censorship and Bias with Qwen 2 Instruct https://huggingface.co/blog/leonardlin/chinese-llm-censorshi...

joe_mamba 9 minutes ago
Impressive, very nice. Now let's see the odds that the US models developed in SV are also highly positive about California and Democratic politics.
andsoitis 48 minutes ago
Does it matter for your work?
andsoitis 2 hours ago
Yes.