I am a physics professor and often use Gemini to check my papers. It is a formidable tool: it found a clerical error (a missing imaginary unit in a complex mathematical expression) that I had been unable to spot for days, and it often highlights connections between concepts and ideas that I had overlooked.
However, it often makes conceptual errors that I can spot only because I have good knowledge of the topic under discussion. For instance, in 3D Clifford algebras it repeatedly confuses exponentials of bivectors and of pseudoscalars.
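For concreteness, here is the distinction it keeps tripping over, sketched in Cl(3,0) (my summary, not Gemini's output):

    % In Cl(3,0), both a unit bivector B (e.g. B = e_1 e_2) and the pseudoscalar
    % I = e_1 e_2 e_3 square to -1, so the exponentials look formally identical:
    \[ e^{\theta B} = \cos\theta + B\sin\theta, \qquad
       e^{\theta I} = \cos\theta + I\sin\theta. \]
    % But B generates a rotation in its own plane, via the rotor
    \[ R = e^{-\theta B/2}, \qquad v \mapsto R\, v\, \tilde{R}, \]
    % whereas I is central in Cl(3,0): it commutes with everything, so
    % conjugation by e^{\theta I} leaves every vector fixed.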
Good to know that ChatGPT 5.5 Pro can produce a publishable paper, but from what I have seen so far with Gemini, it seems to me that it is better to consider LLMs as very efficient students who can read papers and books in no time but still need a lot of mentoring.
I assume you're using the "regular" Pro version of Gemini 3.1 for the above, rather than the Deep Think mode, which is more comparable to GPT-5.5 Pro. To my knowledge, regular 3.1 Pro is a tier below and often makes mistakes.
Moreover, there's no reason to believe the progress of LLMs, which couldn't reliably solve high-school math problems just 3–4 years ago, will stop anytime soon.
You might want to track the progress of these models on the CritPt benchmark, which is built on *unpublished, research-level* physics problems:
This could be right for the current architecture of LLMs, but you can come up with specialized large language models that can more efficiently use tokens for a specific subset of problems by encoding the information differently.
So if, instead of text, we come up with a different representation for mathematical or physical problems, that could both improve the quality of the output and reduce the amount of transformer capacity needed for encoding and decoding I/O and for internal reasoning.
There are also different inference methods, like autoregressive and diffusion, and maybe others we haven't discovered yet.
Combine those variables with the internal arrangement of layers, the parameter count and the actual dataset, and you have such a large search space of possible models that no one can reliably tell whether LLM performance is going to flatline or continue to improve exponentially.
If higher-bandwidth networking consisted primarily of running more and more Ethernet lines in parallel, you would most certainly agree that "networking has stagnated".
"Reasoning" and now "Agentic" AI systems are not some fundamental improvement on LLMs, they're just running roughly the same prior-gen LLMS, multiple times.
Hence the conclusion that LLM improvement has slowed down, if not stagnated entirely, and that we should not expect the improvements of switching to these "reasoning" systems to keep happening.
I agree, and I'd put it this way: LLMs sound convincing, presenting the work they do in rose-colored terms and promising to give you more if you keep going.
There is a 50/50 chance that it turns out to be right, or that it lets you jump off a cliff.
Either way, the trip stays the same beautiful five-star-plus travel.
Also, spotting an error and telling the LLM about it makes things worse in most cases, because the LLM wants to please you and goes on to apologize and change course.
The moment I find myself in such a situation, I save or cancel the session and, in most cases, start from scratch or pivot with drastic measures.
Gemini to me is the most unpredictable LLM while GPT works best overall for me.
Gemini lately gave me two different answers to the same question. This was an intentional test because I was bored and wanted to see what happens if you simply open a new chat and paste the same prompt everything else being the same.
Reasoning doesn't help much in the coding domain for me, because what the LLM comes up with as an explanation is very high-level and formally correct.
I google more because of LLMs than I did before, because essentially what I'm witnessing is someone producing something I have to check before I press the button it comes with. However, you only find out shortly afterwards whether the polished button started working or gave you a warm welcome to hell.
Reusing the same prompt several times is something I've started doing too. The contrast is often illuminating.
In one case, it made a thoroughly convincing argument that an approach was justified. The second time it made exactly the opposite argument, which was equally compelling.
I was using Copilot and asked it a question about a PDF file (a concept search). It turned out the file was images of text. I was anticipating that and had the text ready to paste in.
Instead, it started writing an OCR program in python.
I stopped it after several minutes.
Often Copilot says it can't do something (sometimes it's even correct); that's preferable to the try-hard behaviour here.
> Gemini to me is the most unpredictable LLM while GPT works best overall for me.
This nails an important thing IMHO. I've absolutely noticed this, for better or worse. Gemini can produce surprisingly excellent things, but its unpredictability makes me go for GPT when I only want to ask once.
LLMs are at their best when you have an expectation for their output. I generally know the shape of the correct response, and that allows me to evaluate the output on its "vibes" rather than line by line. If there's no expectation, then I have to take everything at face value, and now I'm at the mercy of the machine.
Exactly. If I generate a large chunk of software, I'm going to have expectations about what it will do, how it will do it, etc. You don't just accept the statement that "it's done" as fact; you start looking for evidence.
A scientific approach here is to try to falsify the statement. You start asking questions, running tests, experiments, etc. to prove the claim that it's done wrong. And at some point you run out of such tests, and it's probably done for some useful notion of done-ness.
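A minimal sketch of that loop, to make the idea concrete (the function and test names are made up for illustration, not any real framework):

    # Run every falsification attempt; any failure refutes "it's done".
    def probably_done(artifact, falsification_tests):
        for test in falsification_tests:
            if not test(artifact):
                return False, test.__name__   # found a way it is NOT done
        # Survived every attempt we could think of: probably done,
        # for this particular notion of done-ness.
        return True, None

    def has_no_todos(source: str) -> bool:    # one example falsification test
        return "TODO" not in source

    ok, failed_by = probably_done("def f(): return 42", [has_no_todos])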
I've built some larger components and things with AI. It's never a one-shot kind of deal. But the good news is that you can use more AI to do a lot of the evaluation work. And if you align your agents right, the process kind of runs itself, almost. Mostly I just nudge it along. "Did you think about X? What about Y? Let's test Z"
> Mostly I just nudge it along. "Did you think about X? What about Y? Let's test Z"
Exactly - you need to constantly have your sceptic's glasses on, and you need to be exacting in terms of the structure you want things to follow. Having and enforcing "taste" is important, and you need to be willing to spend time on that phase because the quality of the payoff depends entirely on it.
I recently planned a major refactor. The discussion with Claude went on for almost two days. The actual implementation was done in 10 minutes. It has probably made some mistakes that I will have to check for during review, but given the level of detail that plan document had, it is certainly 90-95% there. After pouring in that much opinion, it is a fairly good representation of what I would have written, while still being faster than doing everything by hand.
I agree, but I would add that they can be very useful even if you do not have clear expectations but have some solid ways to verify their claims. Often in doing this verification I came up with new ideas.
I'm no physics professor but this aligns with the way I use the tools in my "senior engineer" space. I bring the fundamentals to sanity-check the trigger-happy agent and try to imbue other humans with those fundamentals so they can move towards doing the same. It feels like the only way this whole thing will work (besides eventually moving to local models that do less but companies can afford).
Using the word “Mentoring” is anthropomorphic and subconsciously makes you think it will learn. It does not, and it is a formidable task for the human brain to remember that something as smart as an LLM does not learn. I keep catching myself making the same mistake.
It’s also because it is so annoying to have to manage the memory of the LLM with custom prompts/instructions manually.
I have not yet played with the long term memory feature, but I fear it will be even less reliable than prompts, simply because in one year or two years so much will have changed again that this “memory” will have to be redone multiple times by then.
Current LLM architecture doesn't learn - and you're right this is a huge piece that normal folks fail to understand, since in many ways, it's the opposite of what years of AI research has been trying to create.
However, I think it's important to remember that LLMs are embedded in larger systems, and those larger systems do learn.
They can form new associations between concepts via their input prompts and thinking text. That is a form of learning. Just not very durable. I liken it to https://en.wikipedia.org/wiki/Anterograde_amnesia
I hear you. I think we are already seeing some middle ground with agentic systems using RAG, skills.md files, etc. It's a sort of disassociated card catalog memory. An engineer's notebook. Not the integrated, correlated, pre-processed set of relationships in the model. How to go backward from the notebook -> model cheaply without tanking performance is definitely one of those billion dollar questions.
> Using the word “Mentoring” is anthropomorphic and subconsciously makes you think it will learn.
I think this is a bit pedantic. Obviously the parent you’re replying to is referring to the concept of “in-context learning”, which is the actual industry / academic term for this. So you feed it a paper, and then it can use that info, and it needs steering / “mentoring” to be guided into the right direction.
Heck the whole name of “machine learning” suggests these things can actually learn. “reasoning” suggests that these things can reason, instead of being fancy, directed autocomplete. Etc.
In other news: data hydration doesn’t actually make your data wet. People use / misuse words all the time, and that causes their meaning to evolve.
I agree it’s pedantic, and personally I don’t get bent out of shape over people anthropomorphizing LLMs. But I do think you get better results if you keep the text-prediction-machine mental model in your head as you work with them.
And that can be very hard to do, given that the UI we mostly interact with them through is a chat session.
I mostly agree, though after a mentoring session you can ask it to write a skill or a memory, and that can be reasonably durable. For Claude at least, the memories work pretty well (though I am still at a small scale with them; as they grow it might start to break somewhat). It doesn't always work, but it has often enough that I thought it worth a mention.
Hi ziotom!
I was wondering about your work in 3D Clifford algebras. Would you share some links to the research you do? I am also interested in this topic, which I research on my own.
Just in case you don't want to disclose your name, my email is northzen@gmail.com
This doesn't surprise me since the coding agents are similar. I've previously compared them to very fast, ambitious junior programmers. I think they are probably mid-level coders now, but they continue to make mistakes that a senior programmer wouldn't. Or at least shouldn't.
We've got a rather extensive AI setup through our equity fund, and I've set up a group of agents for data architecture at scale. One is the main agent I discuss with; it's set up to know our infrastructure and has access to image generation tools, web search, hand-off agents and other things. I tend to use Opus (4-6 currently) and I find it to be rather great. As you point out, it comes with the danger of making mistakes, and again, as you point out, that's not an issue for things I'm an expert on. What I rely on it for, however, is analysing how specific tools would fit into our architecture. In the past you would likely have hired a group of consultants to do this research, but now you can have an AI agent tell you what the advantages and disadvantages of Microsoft Fabric would be in your setup. Since I don't know the capabilities of Fabric, I can't tell whether the AI gives me a correct analysis of a Lakehouse and a Warehouse (Fabric tools).
What I do to mitigate this is have fact-checking agents, configured to be extremely critical and unbiased, on Opus, Gemini and GPT, which are handed the entire conversation to review. Then it's handed off to an Opus agent which is set up to assume everything is wrong. After this, and if I'm convinced something is correct, I'll hand the entire thing off to a Sonnet agent, which is set up to go through the source material and give me a compiled list of exactly what I'll need to verify.
It's ridiculously effective, but I do wonder how it would work for someone who couldn't challenge the analytic agent on domain knowledge it gets wrong. Because despite knowing our architecture and needs, it'll often make conceptual errors in the "science" (I'm not sure what the English word for this is) of data architecture. Each iteration gets better though, and with the image generation tools, "drawing" the architecture for presentations, from C-level to nerds, is ridiculously easy.
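A minimal sketch of that hand-off flow (the model names, prompts and the call_model helper are illustrative placeholders, not our actual configuration):

    # Hypothetical glue code; wire call_model to whatever provider API you use.
    CRITICS = ["opus", "gemini", "gpt"]

    def call_model(model: str, system: str, prompt: str) -> str:
        raise NotImplementedError("connect to your provider here")

    def review(conversation: str) -> str:
        # Stage 1: independent, deliberately critical reviews.
        critiques = [
            call_model(m, "Be extremely critical and unbiased.", conversation)
            for m in CRITICS
        ]
        # Stage 2: an adversarial pass that assumes every claim is wrong.
        adversarial = call_model(
            "opus",
            "Assume everything below is wrong until proven otherwise.",
            conversation + "\n\n" + "\n\n".join(critiques),
        )
        # Stage 3: a cheaper model compiles what a human must verify by hand.
        return call_model(
            "sonnet",
            "Go through the source material and list exactly what needs "
            "human verification.",
            adversarial,
        )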
Gemini’s smug and over-confident “this is the gold standard in 2026” definitely leaves little space for nuance if you don’t know the subject matter. Human students would, hopefully, know they don’t know everything.
Anthropomorphizing these systems is dangerous, whether coming from the bullish or bearish perspective. The output is statistically generated by a machine lacking the capability to be smug.
It's only "statistically generated" in the same way that your brain is just "neurons firing." That's the low-level description of what's happening, but on a higher level, it's correct to say that it's being smug.
Gemini feels deep and philosophical, especially for product management. Tell it you're a product manager and that the two of you are a team.
But regular reminder - All LLMs can be wrong all the time. I only work with LLMs in domains I'm expert in OR I have other sources to verify their output with utmost certainty.
Or when you don't care about results being very correct.
When I'm cooking meatballs with sauce and the recipe calls for frying them, I'll have an LLM guesstimate how long to cook them and which program to use in an air fryer to mimic the frying pan, based on a picture of the balls in a Pyrex. That way I can just move on with the sauce, instead of spending time browsing websites and stressing about getting it perfect.
I used to hate these non-deterministic instructions, now I treat it as their own game. When I will publish my first recipe, I'll have an LLM randomize the ingredient amounts, round them up to some imprecise units and also randomize the times. Psychologists say we artists need to participate and I WILL participate.
ChatGPT and Gemini are actually fairly comparable.
Claude has been utterly useless with most math problems in my experience because, much like less capable students, it tends to get overly bogged down in tedious details before it gets to the big picture. That's great for programming, not so much for frontier math. If you're giving it little lemmas, then sure it's great, but otherwise you're just burning tokens.
Seriously, it’s not worth reaching for less intelligence. Use Extended Pro 100% of the time for anything you’d spend as much time on as GP spent writing their post.
Chiming in to agree, but to clarify that the latest SOTA models are no better than Gemini.
I put my stuff through several SOTA models and round-robin them in adversarial collaboration, and they are all useful even though, fundamentally, they don’t “understand” anything. But they are super useful delegates as long as deciding on the problem, approach and solution all sits safely in your head, so you can challenge them and steer them.
So I know the article is about one particular new model acing something, and each vendor wants these stories to position their model as now good enough to replace humans and all other models. But working somewhere where I am lucky enough to be able to use all the SOTA models all the time, I can say that they all keep making obvious mistakes, and using them all adversarially is way better than trusting just one.
I look forward to the day a small open model that we can run ourselves outperforms the sum of all of today’s models. That’s when enough is enough and we can let things plateau.
I would guess it's because ChatGPT Pro allows for 80-minute "thinks". I've never had even remotely similar think times with Gemini Deep Think. It's generally around 10-15 minutes for math problems, and gets increasingly shorter with continued interaction.
> in 3D Clifford algebras it repeatedly confuses exponentials of bivectors and of pseudoscalars.
I have no idea what any of those words even mean. I'm sure LLMs make similar obvious-to-professors mistakes in every domain. Not long ago, we didn't even have chatbots capable of basic conversation...
This is close to my experience with code. LLMs can pick out small mistakes in giant code changes with surprising accuracy, or slowly narrow down a weird bug. On the other hand, I've seen them bravely soldier on under completely incorrect conceptual models of what they're working with and consequently churn around in circles, spin up giant piles of slop to re-implement something they decided was necessary but didn't bother to search for, or outright dismiss important error signals as just "transient failures". Unlimited stamina, low wisdom.
I've been watching the automation of things like flight control systems for the past decade, and the evolution of the fallback to a real pilot in the event of an emergency is what's most concerning about where LLMs are being embedded.
Right now, we have a lot of smart people who have trained for decades to understand where these things go wrong and how to nudge them back, but that pool of people is slowly going to be replaced by less knowledgeable ones.
At some point, a Rubicon will be crossed where these systems can't fall back to a human operator and will fail spectacularly.
Watching a teenager approach their homework: instead of struggling with questions they don't know, they ask Gemini. Unfortunately, I think the mental struggle to reach an answer is where much of the learning happens. They also miss out on the reward for persistence, of seeing things fall together.
It is troubling. It suggests a plateauing of human understanding.
What that means practically is that we've got a generation - 25 years or less - to evolve these things not to need the fallback. If such a thing is possible.
It's a very long post with a mix of technical (math) and philosophical sections. Here are the most striking points to reflect upon IMHO.
> It seems to me that training beginning PhD students to do research [...] has just got harder, since one obvious way to help somebody get started is to give them a problem that looks as though it might be a relatively gentle one. If LLMs are at the point where they can solve “gentle problems”, then that is no longer an option. The lower bound for contributing to mathematics will now be to prove something that LLMs can’t prove, rather than simply to prove something that nobody has proved up to now and that at least somebody finds interesting.
Training must start from the basics, though. Everybody's training in math starts with summing small integers, which calculators have been doing without any mistakes for a long time.
The point is perhaps confirmed by another comment further down in the post:
> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders
People pay coders to build stuff that they will use to make money and I can happily use an AI to deliver faster and keep being hired. I'm not sure if there is a similar point with math. Again from the post
> suppose that a mathematician solved a major problem by having a long exchange with an LLM in which the mathematician played a useful guiding role but the LLM did all the technical work and had the main ideas. Would we regard that as a major achievement of the mathematician? I don’t think we would.
> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders
Yes but it's not just that if you solved a problem yourself, you're better at solving other problems; it's also that you actually understand the problem that you solved, much better than if you simply read a proof made by somebody (or something) else.
I see this happening in the enterprise. People delegate work to some LLM; work isn't always bad, sometimes it's even acceptable. But it's not their work, and as a result, the author doesn't know or understand it better than anyone else! They don't own it, they can't explain it. They literally have no value whatsoever; they're a passthrough; they're invisible.
Are you a cutting edge research scientist or something? Everyone I know works in the same domain every day. The problems are the same. People aren't solving brand new problems to humanity every day. We make budgets and look at ticket counts. Roll out patches. Replace hardware. Upgrade software packages. Make a new dashboard to track a project. I guess if every day is a completely novel thing for you, ok. I feel like the goalposts have moved to an absolutely ridiculous place. Oh no, I won't have a bunch of random error log numbers memorized anymore? Who gives a shit. I just want to afford a place to live so I can play my guitar and make something good for dinner. Maybe I'm just old, but I don't see why the average person needs to be a fuckin genius problem solver.
Sure, but the point is that at some point (e.g. when starting a PhD) one needs to do research, not learn the basics. And LLMs make that harder, because they solve the "easy research" part.
Take a young lion "fighting/playing" with another young lion as a way to learn how to fight, and later hunt. And suddenly they get TikTok and are not interested in playing anymore. Their first encounter with hunting will be a lot harder, won't it?
> People pay coders to build stuff that they will use to make money and I can happily use an AI to deliver faster and keep being hired.
Again, that's true but missing the point: if you never get to be a "good coder", you will always be a "bad vibe coder". Maybe you can make money out of it, but the point was about becoming good.
> Here’s a thought experiment: suppose that a mathematician solved a major problem by having a long exchange with an LLM in which the mathematician played a useful guiding role but the LLM did all the technical work and had the main ideas. Would we regard that as a major achievement of the mathematician? I don’t think we would.
This is a cultural choice. It makes sense that in the mathematics culture we currently have, this is alien. But already, other fields, and many individuals, would disagree and say that the human did have a major achievement here.
As long as human-AI collaborations are producing the best results, there is meaningful contribution by the humans, and people that are deeper experts and skilled LLM whisperers should be able to make outsized contributions. The real shoe drops when pure AI beats humans and human-AI collaboration.
I replied to a comment about AI in sports, and I'll build on that here.
We praise car drivers even though most of the performance in their sport comes from the car. The driver makes the difference when two cars are close in performance, through brilliance or mistakes. Horse riders too.
In the case of math, the human can lead the LLM on the right track, point it to a problem or to another one. So it deserves some praise.
Then the team that built the car, cared about the horse, built the AI might deserve even more praise but we tend to care more about the single most visible human.
>So if your aim in doing mathematics is to achieve some kind of immortality, so to speak, then you should understand that that won’t necessarily be possible for much longer — not just for you, but for anybody.
I don't know that it's that disappointing. I doubt most of the great mathematicians were actually doing it to achieve immortality. I suspect most of them were either after (possibly indirect) practical applications (via the math -> physics -> engineering pipeline) or just "for the love of the game", appreciation of the beauty of math and the intellectual joy of doing it. AI might also take over the practical application side, but the other aspects are still there for the taking.
Exactly. Gowers is in the unique position to think about the "glory" of frontier mathematics, but for essentially everybody (especially those working outside of number theory), that dream died long ago. There are far too many mathematicians now.
Many mathematicians work because they love the breakthrough (a certain quote of Villani comes to mind). They love finding new results, uncovering new mysteries. From that point of view, having an AI that can build on your basic ideas and refine them into more powerful arguments is awesome, regardless of who gets the credit. There are those that treat it more like solving puzzles so the result is not of interest. From that point of view, I can see the dissatisfaction. But I have found those with that viewpoint don't tend to make it as far in academia as those with the other viewpoint.
Sports are safe. Machines long ago surpassed runners (MotoGP, Formula 1), and yet we cheer the winners of the 100 m at the Olympic Games. Fully autonomous bikes and cars won't change that. AIs destroy chess players; we still cheer the world champion.
Robot MotoGP would be amazing to see just how far the limits could be pushed without risking the life of a human though. Or even full size remote control.
Sadly, I don't think there are any safe tracks for proper no-limits autonomous car racing... Still, it would be interesting to see the absolute best you could do if the rules specified only, say, a minimum number of wheels and maximum vehicle dimensions.
As a TCS assistant professor from Eastern Europe, I am always a little jealous of the biggest names in math having such easy access to the expensive, long-thinking models.
Paying for Pro from any of my current academic budgets is completely out of the realm of reality here -- all budgets tend to have restricted uses, and software payments fit into very few categories. Effectively, I'd have to ask for a brand new grant and hope the grant rules allow for large software payments and that I won't encounter an anti-AI reviewer; such a thing would take at least a year.
As a final nail in the coffin, I was "denied" all Claude Opus recently as part of Microsoft's clampdown on individual (and academic) use of Copilot.
(ChatGPT 5.5 Plus does not seem sufficient for any deeper investigation into new research topics; I've tried.)
While this sounds generous (and in some ways it is), it does not address the general point that GP is making: the systematic disadvantage that large parts of humanity have with respect to access to these tools. You could say they can't drive a Lamborghini either, but that doesn't solve the problem.
An aside: It was a very nice gesture and completely unexpected by me, so even if it doesn't work out, it made my day. I personally believe that kind gestures have a lot of power.
Back on topic: There is a real danger of the gap between rich and poor universities significantly widening in all fields if the rich can afford Pro level models, or even hardware that can run their own comparable models, and this being fiscally inaccessible to the rest.
One can sweep this under the rug by blaming the educational funding but this just shoots down all discussion. Even if GDP of a country goes up by a lot -- such as Poland -- it takes time before any budget benefit trickles to the education budget, and with some governments it might never do.
I believe Microsoft et al do have the most power here to boost affordable access to AI for researchers on a large scale; the fact that they cut some too expensive models (Opus, 5.5) from their academic benefits package is a grim omen. I do realize they would like universities to pay them also, and ultimately the universities should do that -- but then we are back at the institutional level of the problem.
It's a problem of the individual institutions and countries. The budget required for AI tools is currently negligible compared to other university expenses. We don't need to call everything a systemic disadvantage when the disadvantaged (at the institution level) have agency here.
Can you tell me what is the budget necessary to supply AI tools capable of substantial research assistance to all academic staff at a university?
You seem to have a good estimate in your head; I definitely do not.
From personal experience, ChatGPT 5.5 (the Plus tier) is excellent for programming tasks and also for various teaching-related tasks, but I have not observed the research benefits that Tim Gowers has when I asked it questions in my area of expertise. So the costs are definitely higher than a few dozen dollars a month per PhD student/professor.
You might be right that universities should immediately spring into action and demand funding for research-level AI resources and hardware. One thing you might be mistaken about is flexibility: public universities are unfortunately very inflexible institutions, partly because they have a large internal leadership structure AND they are funded by the state, so even if the entire university agrees on something, the funding is at the whim of the ministry of education and thus the current political leadership.
> Can you tell me what is the budget necessary to supply AI tools capable of substantial research assistance to all academic staff at a university?
I think the GP meant that *if the tools provide substantial benefit* to staff, their costs can be compared to salaries and other large expenses of the university. The $100/month subscription costs less than your office space.
I mean, I don't think OpenAI should be wading into the policies and practices of foreign institutions and governments. Look at all the blowback we see from the collision of Anthropic or OpenAI and the US government.
At present, the tools are available to whoever wants to buy them. It's not OpenAI's fault that the parent commenter's government and/or institution's policies haven't been updated to allow for their purchase and use.
I'd argue that the OpenAI dude's/dudette's level of generosity is appropriate given the circumstances.
I will leave the contact up for a bit longer if people want to get in touch and share their experience with the research gap of the models -- or anything, really -- but I do not think there is any need of further support. Like I said elsewhere, the offer of support made my day and the gesture is enough.
You know what, I'm ashamed that I didn't think of this. I'll sponsor three months. Email in my hn profile. I don't understand the math in the article, but I'd love to help you make progress in it.
At my university, everyone had to pay their AI subscriptions out of their own pocket, until a communal AI service was introduced recently. It took 2 years to set up and only serves gpt-oss-120b, so everyone is still using other services. But at least some admin can scatter the word "AI" all over the university's website now and has an excuse to reject any requests for AI subscriptions because "we already have AI".
It’s a classic example of the best positioned people being in the best position to keep reaping all the rewards.
There’s the example of a poor person and a rich person buying boots. The poor person’s boots wear out and have to be replaced, while the rich person’s boots last for many years due to higher-quality craftsmanship. Over the years, the poor person pays more for boots.
I know the example, but as a counter-argument: often more expensive boots are not more durable. It’s about spending time to learn to spot the quality.
Of course if you are really poor, then you have to take expensive shortcuts, but for most people that shouldn’t be the case. Learning to do more with less money isn’t as bad as many people think. It’s also good for the brain to be a bit more creative.
Here I think it's less about "poverty" (non-US academic budgets are still high, though not in the same sphere); it's about the red tape around software. My experience doing a PhD in Japan was: everything you can touch was basically a free-for-all, including $500 keyboards and $10k Mac Pros, especially if you were a valued researcher. But software, oh man, how can we prove receipt of goods to accounting...
OpenRouter lets you pay by the token only (no subscription), has all the frontier models (including Opus 4.7, GPT-5.5) and most of the others, and if you use it sparingly it usually turns out to be quite cheap.
API pricing for Claude is about an order of magnitude more expensive than subscriptions (numbers: https://she-llac.com/claude-limits). But it may be worth it with DeepSeek V4 Pro, which is currently on discount.
Depends very much on usage! If you connect it to tools like Cursor, etc. then yes a subscription is probably cheaper -- although, you'd have to subscribe to each provider if you want to use them all.
But if you ask questions occasionally, (and don't resend, for example, your whole codebase with each request), then the API feels really cheap, even for the frontier models.
My problem with pay-by-the-token is that it discourages me from using the thing ("oh, the prompt will cost me $0.10"), so I pay for a subscription which I'm pretty sure costs me about two to three times what I'd pay in API costs, but which encourages me to use it more ("oh, I have a subscription already, better make use of it").
I believe ChatGPT 5.5 Pro access is available for $100/month, is that an unrealistic level of expense for someone in your position and geography? Even if the university won't pay for it, it seems you'd like to use this tool for your own goals.
I'm not trying to shame here, just curious whether this is completely unattainable for most researchers in your area.
I fully understand your rant! I pay ~20€/month for the Pro account, as my university has a deal with Microsoft and only seems to recognize Copilot, so it’s very hard to use one’s own funding to pay for anything else.
The average European salary is around $4000/month; in Eastern Europe it's half that, and the median is probably lower still. Makes me want to quit visiting places like Reddit where everybody claims to be making $100k+/year.
All salary discussions need cost-of-living context. Yes, in Europe you earn a bit less, but the public services are much better than in the US, and one emergency (e.g. healthcare) won't ruin you, as it's mostly a public system.
I'll take a Euro salary and quality of life over a FIRE-type salary and the daily fear of falling into the abyss any day.
Given the topic, and the fact that LLM providers charge global rates, the absolute take-home money is much more relevant. Even if you live like a king on $1000/mo, 5.5 Pro is still $200.
Their loss if they don't move to regional pricing. AI will continue to remain an upper-management luxury then, and won't reach the mass adoption required to justify their outsized valuations.
Regional pricing makes sense for products that don’t have ongoing costs, or where most of the input cost can be offset by local labor. You’re not buying server racks or electricity at 1/3 of the price to serve poorer markets.
That’s what most people spend on their phone and Internet connections per month in the US. That’s what the average American family spends on just five days of food.
People spend much more than that on just commuting to work. If you can spend $200 a month to supercharge what you do at work and 1000x your productivity, it’s a no-brainer.
From what money? Just pause the health insurance for a while? Stop paying the rent? No diapers for the kid?
Your entire story only makes sense if you have many hundreds of dollars/euros of entirely disposable income every month left, after all unavoidable expenses have been paid for. I understand that this holds for you and everyone you know but I’d like you to appreciate that for very many people it doesn’t.
37% of Americans would be unable to cover a $400 unexpected expense without using one or more credit cards. 13% would flat out be unable to cover it. [1]
Are you honestly saying most families would be able to justify $200 a month for ChatGPT?
There is a significant gap between what academics are paid across European countries, and since most top universities here are public institutions, you are right -- Eastern European government employees tend to be on the poorer side.
There are several other philosophical arguments against what you propose but I do not wish to go down that route.
Bruh, $200/m for most people in the US is also a hard "no!". That's a lot of money. Plus Anthropic isn't doing good deals with orgs that spend less than 250k a month. It's ridiculous.
As a graduate student, this piece made me sad. I always believed that my work speaks for itself and transcends my limited time in this cosmic experience. This notion of immortality was just a small intangible bonus I hoped for when I jumped into grad school. AI is making me feel less worthy.
As someone who is much further down the track, I would kindly suggest you drop that line of thought. I've seen far too many brilliant and ambitious people drop into depression because of it.
You are worthy of doing this work because you are able to do it. Do the work because you love it and because you love the mystery. Enjoy every moment that you get to do it. Find joy in the great fortune you have to do this work while others toil away on tasks that bring them no satisfaction. Sometimes it's tedious, but sometimes it's incredibly rewarding in its own right.
Don't work for the possibility of eternal glory though, it just doesn't exist anymore.
You are worthy. You will hone your skills in grad school and be able to command these AIs better than somebody who hasn’t struggled with hard problems for a long time.
I feel bravery transcends time better than the odd scientific breakthrough, which is often attributed to one person but whose roots came from a "lesser" unknown.
I saw Tim Gowers give a talk at the AMS-MAA joint meeting in Seattle about ten years ago where he predicted that in 100 years humans would no longer be doing research mathematics. I wonder if he’s adjusted his timeline.
At the time I thought the key missing tool was a natural language search that acted like mathoverflow, where you could explain your problem or ideas as you understood them and get references to relevant literature (possibly outside your experience or vocabulary).
> "Even though I can motivate it in retrospect, ChatGPT’s idea to use h^2-dissociated sets to control relations of order at most h feels quite ingenious. As far as I can tell, this idea is completely original."
The question that keeps bothering me is: can an LLM generate an idea that is truly novel? How would or could that actually happen? But then that leads to the question: what are we actually doing when we think?
Perhaps it's as simple as the ability to just make mistakes that matters, the same things that powers evolution. As long as the LLM can make mistakes, it's capable of generating something genuinely novel. And it can make more mistakes much faster than we can.
Some people like to parrot "next token prediction", "LLMs can only interpolate", and other nonsense, but it is obviously not true for many reasons, in particular since we introduced RL.
Humans do not have a monopoly on generating novel ideas; modern AI models, using post-training, RL, etc., can arrive at them the same way we do: exploration.
See also verifier's law [0]: "The ease of training AI to solve a task is proportional to how verifiable the task is. All tasks that are possible to solve and easy to verify will be solved by AI."
This applied to chess, go, strategy games, and we can now see it applying to mathematics, algorithmic problems, etc.
It is incredibly humbling to see AI outperform humans at creative cognitive tasks, and realise that the bitter lesson [1] applies so generally, but here we are.
For my paper about ME/CFS, I let an LLM integrate lots of findings of other scientific papers.
Then I ask the LLM to "creatively brainstorm", given all we know of ME/CFS and the newly integrated paper, to generate new hypotheses, treatment ideas or any other kind of insight it can think of.
This works really well.
Now, it's clear that I have no idea how much of this is something we would consider new and original, and how much is a kind of systematic, but not novel, way of thinking.
What I couldn't do so far is get an LLM to generate a truly new maths theory, with new abstract concepts and dimensions and points of view. The kind that is not just a combination of existing theories and logic.
My own take veers into the philosophy of mathematics, but there's a long-standing debate about whether mathematics is "invented" or "discovered".
If it's "invented", then it requires ingenuity.
If it's "discovered", then it was always already there, just waiting for the right connections to be made for it to be uncovered and represented in a way we can understand.
Invention requires ingenuity, but discovery does not. So if LLMs can generate truly novel mathematics, for me that settles it: mathematics is indeed discovered, since LLMs are quite capable of discovery, yet I don't consider them capable of invention.
I like this distinction, but it would then seem the only 'invention' would be the axioms of your mathematics. There exists numbers (natural, imaginary...), there exist shapes (a point, a line...). All the work from that point on could be 'discovered'. I agree that I don't see LLMs inventing in this way. But again, it raised the question - what are our brains doing when we 'invent' something?
Trivially the answer is yes by the infinite monkey theorem. If we allow the sampler to pick any token then any stream of arbitrary tokens can be generated. Therefore if an original idea can be represented with written words then a LLM can generate it. That is perhaps not the most satisfying answer, but if you want a better one you'll need to provide a function that determines if an idea is original.
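Spelled out, under the assumption that sampling uses a temperature above zero with no top-k/top-p truncation (so no token ever has zero probability):

    % Every conditional probability is strictly positive, so any finite
    % token sequence t_1 ... t_n has positive probability of being emitted:
    \[ P(t_1, \dots, t_n) \;=\; \prod_{i=1}^{n} p(t_i \mid t_1, \dots, t_{i-1}) \;>\; 0. \]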
It's about the ability to combine ideas in novel ways, without breaking the rules in relevant frameworks. Sometimes the idea may even be to contradict existing theories where they are weak.
> The lower bound for contributing to mathematics will now be to prove something that LLMs can’t prove, rather than simply to prove something that nobody has proved up to now and that at least somebody finds interesting.
5.5 Pro is amazing, but this implication might not be true, and it is the core argument of this piece.
AI will prove all sorts of things: interesting, boring and incorrect.
Sorry, I'm reposting a comment I made yesterday that seems fitting:
> This reminds me of Antirez's "Don't fall into the anti-AI hype". In a sentence: These foundation models are really good at optimizing these extremely high level, extremely well defined problem spaces (ie multiply matrices faster). In Antirez's case, it's "make Redis faster".
>> but it was definitely a non-trivial extension of those ideas, and for a PhD student to find that extension it would be necessary to invest quite a bit of time digesting Isaac’s paper
The "non-trivial" is for human abilities. The weights lifted by a crane are also "non-trivial". People keep getting amazed at machine's abilities. Just like a radio telescope can see things humans can't, microscope can see the detail humans can't, we need not be amazed. The sensory perception of patterns is at different level for AI. It's a machine.
Too many people are wrapped around the ego axle thinking (assuming) their ideas are both them and somehow unique and special.
It usually takes dissolving that, often through difficult experiences, before they can see it as a machine, something that could be separated from them.
I found the section on publishing very interesting. Even if the quality of the output is up to snuff, where should it go? arXiv doesn't allow AI-written work. The author proposes that only work that has been certified by a human should be published. However, now the field is in the same boat as software engineering, where we are facing a glut of pull requests and not enough time and people to review them.
I feel like this experiment was successful because those prompting the AI were knowledgeable enough to ask the right questions and verify the output was correct. This shows that there is still a place for expertise, even if the LLM does the actual research.
I feel my input to LLMs is most valuable in the initial idea, big picture design tweaks, and the vast majority of my usefulness is negative feedback. This looks wrong, you've gotten off track, you're cheating with workarounds, you're falling into a rabbithole, etc.
Makes sense, as a mathematician basically has two powers: (1) their intuition and (2) an enormous amount of mental stamina. A mathematician builds their intuition by reading maths books. It is thus not surprising that an LLM is well equipped to take over the tasks of the mathematician.
The question of where the creative input lies was a big thing around Experiments in Musical Intelligence and co-composing. But perhaps it's a transient state we needn't spend too much effort on. The machine has repeatedly failed to disappoint. Perhaps this is as far as it gets, or perhaps we will be like the people in "Catching Crumbs from the Table" by Ted Chiang, where almost all science is the interpretation of papers written by vastly greater intellects.
On complex problems with lengthy proofs, the first step I would have taken is to ask 5.5 Pro, in a new, unrelated session, to be very critical and to try to find flaws in the arguments.
And certainly not to send it to a fellow colleague for their opinion first.
LLMs are certainly becoming capable of coding, finding vulnerabilities and solving mathematical problems, but we need to avoid putting their work into production, or in front of other humans, without assessing it by every possible means.
Otherwise tech leads, maintainers and experts get overwhelmed, and this is how the "AI slop" fatigue begins.
To be clear I’m talking about this step:
> That preprint would have been hard for me to read, as that would have meant carefully reading Rajagopal’s paper first, but I sent it to Nathanson, who forwarded it to Rajagopal, who said he thought it looked correct.
> but we need to avoid putting their work into production, or in front of other humans, without assessing it by every possible means.
I think this is good advice in general, maybe with an emphasis on public vs. private, friendly contact. Having zero-thought AI slop thrown at you out of the blue is rude. "Could have been a prompt" indeed. But having a friend/colleague ask for a quick glance at something they know you handle well is another story for me.
If I've worked on a subject for a few years, and know the particulars in and out, I'd have no trouble skimming something that a friend or a colleague sent me. I am sparing those 5-10 minutes for the friend, not for what they sent. And for an expert in a particular domain, often 5 minutes is all it takes for a "lgtm" or "lol no".
The post talks about LLM+human contributions being recognized in some different category from human-only. But is it possible to spot the difference between the two?
This is certainly interesting, though I would say that, based on my understanding of how the current models work, combinatorial problems would be an area where they could be particularly successful. They are pretty good at combinatorial creativity; it's the exploratory and transformational aspects that are still pretty tricky, and I expect those would come to bear in other areas of mathematics.
I think mathematicians like LLMs because this is the first time we have something like a computer for the kinds of math most people do, high level, hand wavy abstractions that are (relatively) easy for people to grok but hard to explain to traditional computers.
One thing I was wondering: if LLMs are word completers that seemingly come up with new solutions, could this just be because stuff that was kept secret no longer is, due to ingestion? I don't know enough about it, though.
Why would anyone keep this particular mathematical idea secret? It's not extraordinarily important, it's not on the path to some other major result, and it doesn't seem useful in financial trading. Even the author calls it a good, reasonable problem for a PhD thesis.
The M3 module was formalized fully, purely from experimental data and a nudge from earlier versions of Codex, in 15-30 minutes in a simple write/compile/fix-first-error loop. I was a bit surprised how fast it picked up the pattern, but given there was a paper from the '70s, it became clear why later.
This is of enormous importance but is still being actively ignored by many professionals, or dismissed as a minor issue.
Our emotional human brains are very enthusiastic about these new kind of "intelligent" products ("partners") and we want to believe so hard that they are finally "there" that we tend to ignore how big of a problem it is that LLMs carry a fundamental design problem with them that will make them produce errors even when we use a grotesque amount of resources to build "bigger" versions of them. The potential for errors will never go away with the current AI architecture.
This is a fundamental paradigm shift in computing. Instead of putting a lot of energy into building an architecture that will produce reliable results, we are now maximizing on a system / idea that will never give us 100% reliable results.
Basically it is just a marketing stunt. Probably the computer science people building it knew very well that they would still need some fundamental breakthroughs to get to a real product, but the marketing people saw that there was still potential to make a lot of money by selling a product that produces correct results only 80% of the time.
The marketing guy was right and marketing is now dominating science, but humanity will pay a big price for that.
Putting enormous amounts of money into a fundamentally flawed system that we can not optimize to produce reliably error free results is just stupid.
The big achievement of "classical" computing is that the results are reliably error-free. We still have some known issues, e.g. with floating point math and bad blocks on disk / bit flipping etc., but these are observable and we can handle or avoid them. Generally, "non-AI computing" was made so reliable that we can depend on it for many very important things. This came not by accident but was created by a lot of people who put a lot of resources into research to achieve that result.
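The floating-point case is the textbook example of an error source that is fully observable and has a standard remedy (a minimal Python illustration):

    import math

    # IEEE 754 binary floats cannot represent 0.1 exactly, so:
    print(0.1 + 0.2 == 0.3)               # False: a known, documented artifact
    print(math.isclose(0.1 + 0.2, 0.3))   # True: the standard way to handle it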
LLMs introduce a level of uncertainty and unreliability into computing that makes them practically useless.
Because if you have enough knowledge to verify the result, and AI is only quicker at producing it, what is the point of putting so many resources into it (besides making money by re-centralizing computing, of course)? Verifying a lot of results that were produced quickly is still slow, so the people who are now just AI verifiers should produce the results themselves; that makes the whole process quicker.
AI is only of value if it can produce results about things that you or your organization does not know anything about. But these results you can not verify and therefore potentially wrong results can be fatal for you, your organization and all the people that are affected by actions generated based on these wrong results.
Many people have already been killed because decision makers are not able to follow that very simple logic.
So we can still create "interesting and enjoyable results", but ultimately it is a gigantic misallocation of resources of historic idiocy. It fits, of course, very well in a timeline where grifters are on top of societies around the world.
It is a fundamentally wrong path that should not be followed and scientists around the world should articulate exactly that instead of producing marketing blog posts for a system with such fatal inherent issues.
Undergraduate? No. We've had calculators able to solve undergraduate problems for decades. AI doesn't change the need to understand how calculus works any more than calculators did. The foundations remain valuable.
90% of the final grade should come from in-room examinations with proctors, maybe two sets of exams, midterms and finals, that the vast majority of the final grade comes from. This is already how most of East and South Asia does it anyway, and it's probably the best approach.
For publications and theses, as long as the final results hold and can be replicated and validated, I don’t see why we shouldn’t allow the wholesale use of LLMs
> 90% of the final grade should come from in-room examinations with proctors, maybe two sets of exams, midterms and finals, that the vast majority of the final grade comes from.
This is really just a glorified undergraduate education; the real point of graduate school is to learn to do real-world-relevant research. For the latter, I think LLM use will be accepted, but there will be a heavy expectation on the author to make the result very easily digestible for human mathematicians and to link it thoroughly with the existing literature - something that LLMs are very much not successful at, but that a student might be able to do quite well with a mixture of expert guidance and personal effort.
I don’t think it’s just mathematics. We don’t hear enough about this, but if I think back to my undergraduate years, which were less than 10 years ago, every homework assignment and every take-home exam I had would be trivial for LLMs to solve at this point. I wonder what is actually happening on the ground.
Well... here's something from "boots on the ground": I teach in a bachelor's degree where programming is a smallish facet of the curriculum. My course is the last in a series of 3 courses which progressively introduce more concepts and try to make practical implementations more feasible. I've been able to grade the course purely on returned take-home exercises, some of which are complex, some trivial.

When ChatGPT (& Co.) came along, I was still able to do that, but with a major added workload for me (suddenly everyone started producing mountains of code, often nonsensical, but I still had to read it all). I always requested targeted, atomic changes to code (vs. rewrites), which served me well up to a point (I was still able to grade fairly). I originally requested them to avoid "GitHub copies", but that worked kind of OK with ChatGPT too.

However, when Claude Code came along, it was obvious to me that I'm losing the battle. It does not particularly matter to me whether students use AI or not, as long as the rows they add and alter in the assignments make sense. But the "last nail in the coffin" problem now with Claude Code is that in the latest batch (this spring) it is clear some students "pay themselves" a good grade (i.e. they pay for Claude Code, thus bypassing the need to actually learn). I cannot make assignments that are both complex enough to trip Claude Code on something and still humane for those who do not use AI or only use free chatbot options. Essentially, Claude Code plays havoc with the whole grading process: students not using it (whether they write code fully manually or ChatGPT-assisted) are left with far fewer points than students who just push all the code I give to Claude Code and "let it rip" for some 15 minutes. This really irks me.

So, my solution? Still working on it and hoping to find one! For sure, no more points from most take-home assignments: the lowest grades will still be achievable through them (the trivial ones), but that's it; the rest is preparation for an exam. Practically, this already means anyone with ChatGPT is going to pass, no doubt about it... As for the higher grades, I'm now desperately figuring out how to even make a meaningful paper-based exam for my course for the autumn. I myself completed a master's degree writing C on paper with a pencil. I sure did not want to start doing that to others, but here we are. Besides, back in my youth the only "library" was pretty much the ANSI parts of C! I'm not sure what kind of 2-inch-thick stack of papers I'd have to hand my students as reference material in the exam these days. One horrible aspect is that students are now far more dependent on compiler errors to spot pretty much anything and everything... I worry the first paper exam from me will be a total horror story for us all. In any case, interesting times.
I wish people would stop generating stuff they don't understand only to forward it to someone who does. Something about that really rubs me the wrong way.
May I remind you that this is Timothy Gowers. He says he doesn't understand, but he most certainly has far greater capacity than most to distinguish complete junk from a maybe-plausible argument. His colleague is even better able to judge this, which is why he sent it to him.
Also, if he did send me complete junk, I would still parse it for multiple days to see what is there.
> Conversely, for problems where one’s initial reaction is to be impressed that an LLM has come up with a clever argument, it often turns out on closer inspection that there are precedents for those arguments, so it is still just about possible to comfort oneself that LLMs are merely putting together existing knowledge rather than having truly original ideas. How much of a comfort that is I will not discuss here, other than to note that quite a lot of perfectly good human mathematics consists in putting together existing knowledge and proof techniques.
This is exactly what leads me to believe that the real impact of LLMs on human history is yet to come. My work as a researcher was mostly spent on two classes of workloads: reading recently published papers to gather ideas and keep up with the state of the art, and working on a selection of ideas gathered from those papers to build my research upon. It turns out that LLMs excel at the most critical component of both workloads: parsing existing content and using it, when prompting the model, to generate additional content under specific goals and constraints. I mean, papers are already a way to store and distribute context.
"It is the sort of idea I would be very proud to come up with after a week or two of pondering, and it took ChatGPT less than an hour"
This comment about time is very interesting to me. I know it's "just" doing mathematical proofs, but the possibilities for speeding up planning, proposals, and decision-making in the physical world should excite people.
I honestly can't say this isn't AGI anymore.
AGI shouldn't be a bar so sacred that it requires extreme capability in every domain. What human has that?
This is as AGI as it needs to be to get my vote. And it's scary.
To quote Demis Hassabis: "these models can solve frontier problems in math, but also fail in really dumb ways at trivial questions - the car wash question".
Basically medical science too. My wife was able to diagnose her own anemia that the doctors kept missing, and has since been able to have iron infusions.
The human doctors kept ignoring the signals, kept putting it down to 'diet' and 'exercise' (even though she does plenty of both).
This is beyond ridiculous to say considering whose blog this is.
For those that don't know, this is Timothy Gowers. He is one of the most accomplished mathematicians in the world. Like Terence Tao, he is considered one of the world leaders in mathematics and tends to have good judgement about where the field is going.
Even without that knowledge, no, this article is certainly not AI generated. It has none of the tells.
> quite a lot of perfectly good human mathematics consists in putting together existing knowledge and proof techniques
Creativity is connecting ideas from different domains and seeing if something from one field applies to another. I do think AI is generally overhyped; but a major benefit of AI could be that, after ingesting all existing human knowledge (something no single human can ever hope to achieve), it would "mix and connect" it and come up with novel insights.
Most published research sits ignored and unread; AI can uncover and use everything.
> Creativity is connecting ideas from different domains and seeing if something from one field applies to another.
That's true. The question is whether the produced pattern has any value. LLMs are incapable of determining this; they still often hallucinate and make baseless claims that can convince anyone except human domain experts. And that remains a difficult challenge: a domain expert is still needed to verify the output, which in some fields is very labor-intensive, especially if the subject is at the edge of human knowledge.
The second, related issue is the lack of reproducibility. The same LLM given the same prompt and context can produce different results, as the toy sketch below illustrates, and the likelihood of divergence grows with more input and output tokens and with more obscure subjects.
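To make the reproducibility point concrete, here is a minimal Python sketch of temperature sampling (the logits and temperature are made-up toy numbers, not any real model or vendor API): generation is a random draw from a distribution over tokens, so any temperature above zero means identical inputs can produce different outputs.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    # Softmax with temperature: higher T flattens the distribution,
    # so lower-probability tokens get picked more often.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# The same "prompt" (i.e. the same logits) sampled five times will
# rarely give the same sequence of choices twice.
logits = [2.0, 1.5, 0.3, 0.1]  # hypothetical scores for four candidate tokens
print([sample_next_token(logits) for _ in range(5)])
```

Even at temperature 0, floating-point non-associativity and batching effects on GPU inference are reported to introduce some run-to-run variation in practice, so determinism is not guaranteed simply by turning sampling off.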
The tools are certainly improving, but these two issues are still major hurdles that don't get nearly as much attention as "agents", "skills", and whatever adjacent trend influencers are pushing today.
And can we please stop calling pattern matching and generation "intelligence"? This farce has gone on long enough.
"After 16 minutes and 41 seconds, it came back" ... "further 47 minutes and 39 seconds" ... "After 13 minutes and 33 seconds" ... "After 9 minutes and 12 seconds" ... "After 31 minutes and 40 seconds" ... plus other computations
Anyone spotting the issue here? What did that really cost?
I am not against compute being used for scientific or other important problems. We did that before LLMs. However, the major LLM gatekeepers want to make all industries and companies dependent on their models. And, at some point, they will need to charge them the actual, unsubsidized cost of the compute. In the meantime, companies restructure in the hope that compute costs remain cheap.
> "After 16 minutes and 41 seconds, it came back" ... "further 47 minutes and 39 seconds" ... "After 13 minutes and 33 seconds" ... "After 9 minutes and 12 seconds" ... "After 31 minutes and 40 seconds" ... plus other computations
> Anyone spotting the issue here? What did that really cost?
Whatever the joules (convert to $ using your preferred benchmark price), it is a fraction of what it costs to feed and sustain a human Ph.D. for the weeks they might spend working on the same problem; the back-of-envelope sketch below makes the comparison concrete. The economics of LLMs is just unbeatable (sadly) compared to us humans.
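For scale: the session times quoted above sum to roughly two hours. Here is a back-of-envelope sketch in Python; every constant is an assumption invented for illustration (hardware size, power draw, electricity price, researcher cost), not data from the blog post.

```python
# Back-of-envelope cost comparison. All constants are assumptions
# chosen only to illustrate the order of magnitude.
quoted_minutes = (16 + 41/60) + (47 + 39/60) + (13 + 33/60) \
               + (9 + 12/60) + (31 + 40/60)   # times quoted in the post
hours = quoted_minutes / 60                   # ~2 h of model "thinking" time

gpu_count = 8          # assumed size of one inference node
gpu_watts = 700        # assumed draw per accelerator
usd_per_kwh = 0.15     # assumed electricity price
llm_energy_usd = gpu_count * gpu_watts / 1000 * hours * usd_per_kwh

researcher_usd_per_week = 2000   # assumed fully-loaded weekly cost
weeks_of_pondering = 2           # the "week or two" quoted earlier
human_usd = researcher_usd_per_week * weeks_of_pondering

print(f"LLM electricity: ~${llm_energy_usd:.2f}; human: ~${human_usd:,}")
# Electricity alone comes out to a few dollars; the real question is the
# unsubsidized price of the compute and the capital cost behind it,
# which is exactly the point the reply below raises.
```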
Compute in science was already subsidized by public funding or by donations; most supercomputers are financed this way, and that's a good thing. If you have a good science problem that can be computed, apply for compute time. There is nothing wrong with applying that to LLMs as well, as I wrote in my initial post. The human is still required to identify problems that are worth computing, to write prompts the LLM can act on, and to verify the results. But OpenAI providing compute for basically free is tied to a different incentive: to fuel the hype and capture the market while distorting/obfuscating the real costs. That's also why we cannot claim that 'the economics of LLMs is just unbeatable'. It depends on the problem and on the reason for the prompt.
Still not as bad for the environment as animal agriculture, and animal agriculture is absolutely not necessary and only causes harm and suffering for taste pleasure. At least with LLMs we get many positive advancements. I don't see these sorts of comments every time someone posts a burger review.