> The cost-benefit ratio of mathematical research has been off-scale. The Federal government spends about $250 million/year on mathematics research. Yet in the US there are 40 million MRI scans per year, incurring tens of billions in Medicaid, Medicare and other Federal costs. The financial benefits of the roughly 10-to-1 productivity improvements now being seen in MRI could soon far exceed the annual NSF budget for mathematics research.
Pricing a scan based on scanner time doesn’t really work.
Specifically, using a linear approach (like PCA, but slightly fancier), we find that stimulus-related information is present along many, many dimensions of the neural response---many more than previously expected or reported. (A toy illustration of the general idea follows after the link below.)
[1] https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
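To make that concrete, here's a minimal toy sketch of the split-half logic (synthetic data, numpy/scikit-learn assumed; this is not the paper's actual method): a dimension "counts" if its per-stimulus scores replicate across independent halves of the data.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_stim, n_reps, n_units = 50, 20, 200

    # Synthetic data: a stimulus-specific mean pattern plus per-trial noise.
    means = rng.normal(size=(n_stim, 1, n_units))
    trials = means + rng.normal(scale=2.0, size=(n_stim, n_reps, n_units))

    # Split repetitions into two independent halves; average within each.
    half_a = trials[:, 0::2].mean(axis=1)   # (n_stim, n_units)
    half_b = trials[:, 1::2].mean(axis=1)

    # Fit PCA on one half, project both halves into that space.
    pca = PCA(n_components=40).fit(half_a)
    sa, sb = pca.transform(half_a), pca.transform(half_b)

    # A dimension carries stimulus information if its per-stimulus scores
    # replicate across the two halves.
    rel = [np.corrcoef(sa[:, k], sb[:, k])[0, 1] for k in range(40)]
    print("dimensions with split-half reliability > 0.2:", sum(r > 0.2 for r in rel))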
I tend to think of fMRI data as some highly nonlinear transform of whatever neural activity is occurring in a particular region of the brain, at pretty coarse spatial resolution (~1-3 mm) and pretty bad temporal resolution (~5-15 s).
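To give a sense of that temporal blurring, a toy sketch assuming numpy/scipy and a standard double-gamma approximation to the hemodynamic response (not any particular package's exact kernel):

    import numpy as np
    from scipy.stats import gamma

    dt = 0.1                                  # seconds per sample
    t = np.arange(0, 30, dt)
    # Double-gamma HRF approximation: peak ~5 s, undershoot ~15 s.
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
    hrf /= hrf.max()

    neural = np.zeros(600)                    # 60 s of "neural activity"
    neural[100:105] = 1.0                     # a 0.5 s burst at t = 10 s

    bold = np.convolve(neural, hrf)[:len(neural)]
    peak = np.argmax(bold) * dt
    print(f"burst at 10 s; simulated BOLD peaks at ~{peak:.1f} s")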
Sure, it's no direct measure of neurons firing, but that doesn't mean there isn't information in the signal that we can interpret and maybe use (see [1] for a recent example of reconstructing seen images from brain activity).
As a cognitive neuroscientist, I tend to abstract away a ton of the details (neurons, molecules) and focus on more general computational principles: how do we get complex behavior from many simple interacting units---voxels in fMRI, for instance?
Regarding the specific paper you posted: I saw some of the discourse around it but haven't read it carefully myself (it's not my area of expertise). I saw a recent re-analysis of that data [2] that argues the result isn't valid, but I need to look at it more carefully.
[1]: https://www.nature.com/articles/s41598-025-89242-3 [2]: https://www.biorxiv.org/content/10.64898/2026.04.21.719913v1
But the pattern of activity of thousands of voxels across cortex does contain reliable information! And a decent amount of it too, at least in sensory cortices.
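For a flavor of what that means in practice, a minimal decoding sketch (purely synthetic data standing in for voxel patterns; scikit-learn assumed):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 1000

    # Two stimulus classes whose mean patterns differ only slightly
    # relative to the trial-to-trial noise in any single voxel.
    y = rng.integers(0, 2, n_trials)
    class_means = rng.normal(scale=0.05, size=(2, n_voxels))
    X = class_means[y] + rng.normal(size=(n_trials, n_voxels))

    # Pooling weak signal across thousands of voxels decodes the
    # stimulus well above the 50% chance level.
    acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5)
    print(f"decoding accuracy: {acc.mean():.2f}")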
Constant low-draw devices (chargers, lights, speakers and such) are going to be harder to distinguish, though.
Neuralink is doing interesting BCI research, with decent hardware, but it's not really a step-change above and beyond the rest of the field.
There's definitely a lot of promise in using BCIs for rehabilitation of patients with brain injuries, but their input-output capabilities are still incredibly crude: for example, we can't reliably "write" to the brain to make people perceive things beyond very simple stimuli (e.g. a phantom touch sensation or a visual phosphene).
This is understandable: the brain has a bajillion neurons and we only have ~1,000 electrodes, which aren't particularly precise in how/where they zap the brain---and even if they were, we don't understand how the brain works well enough to "control" perception finely.
Other problems for BCIs include (i) "representational drift", where the brain's code changes over time, so you need to keep fine-tuning your interface in some sort of closed-loop fashion, and (ii) damage/scarring to neural tissue.
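Here's a cartoon of the drift problem and why periodic recalibration helps (entirely synthetic; a linear decoder and an additively wandering encoding matrix are stand-ins, not how any real BCI works):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_units = 100

    def session(W, n_trials=300):
        # Intended 2-D movement -> neural activity via encoding matrix W.
        intent = rng.normal(size=(n_trials, 2))
        neural = intent @ W + 0.5 * rng.normal(size=(n_trials, n_units))
        return neural, intent

    W = rng.normal(size=(2, n_units))
    X0, y0 = session(W)
    fixed = Ridge().fit(X0, y0)                  # calibrated once, on day 0

    for day in range(1, 6):
        W = W + 0.4 * rng.normal(size=W.shape)   # cartoon drift of the code
        X, y = session(W)
        fresh = Ridge().fit(X[:150], y[:150])    # daily recalibration
        r_old = np.corrcoef(fixed.predict(X[150:])[:, 0], y[150:, 0])[0, 1]
        r_new = np.corrcoef(fresh.predict(X[150:])[:, 0], y[150:, 0])[0, 1]
        print(f"day {day}: fixed decoder r = {r_old:.2f}, recalibrated r = {r_new:.2f}")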
> Is there enough signal for this to really work?
I'm not quite sure what Neuralink's marketing claims are, so I'm not sure what you mean by "this" here. But intracranial electrodes do have a surprising amount of signal, especially relative to non-invasive methods (I'm currently collecting some iEEG data myself!).
I really want the sci-fi future where we have brain-computer interfaces that augment our cognition and perception, but we're nowhere close---though we're getting better.
I don't immediately see how that paper's assertion (that some areas' fMRI response is influenced by baseline oxygenation and cerebral blood flow) relates to the reliability of an information-modeling experiment?
https://medarc-ai.github.io/mindeye/
Recent studies have demonstrated using fMRI data to reconstruct images of what the person being scanned is seeing. There's enough information there to produce a highly plausible reconstruction: if someone is looking at a picture of a zebra, the software shows a zebra, but it won't get the stripe pattern exactly right.
fMRI provides a useful but noisy proxy for neural activity. Fortunately, the brain is redundant enough that the pattern of regions activated is distinctive enough in aggregate that you can extract enough good information to do things like MindEye and so on. And recent AI breakthroughs have made extremely high-dimensional geometry relatively simple to handle, with millions or billions of dimensions processed into semantically useful representations.
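As a hedged cartoon of that pipeline (synthetic stand-ins for voxel data and image embeddings; real systems like MindEye map to CLIP-style embeddings and then decode with generative models, not this exact setup):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_imgs, n_voxels, emb_dim = 500, 2000, 64

    # Stand-in embeddings for the seen images (real systems use CLIP etc.).
    emb = rng.normal(size=(n_imgs, emb_dim))
    # Stand-in voxel responses: a noisy linear readout of those embeddings.
    readout = rng.normal(size=(emb_dim, n_voxels))
    voxels = emb @ readout + 3.0 * rng.normal(size=(n_imgs, n_voxels))

    train, test = slice(0, 400), slice(400, 500)
    model = Ridge(alpha=100.0).fit(voxels[train], emb[train])
    pred = model.predict(voxels[test])

    # "Reconstruct" by retrieval: pick the candidate whose embedding is
    # closest to the prediction (real systems feed a generative model).
    sims = pred @ emb[test].T
    top1 = (sims.argmax(axis=1) == np.arange(100)).mean()
    print(f"top-1 retrieval among 100 candidates: {top1:.0%} (chance 1%)")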
MRI is, in general, a lot harder than people often imagine. It uses complicated physics to measure convoluted physiological changes that indirectly reflect brain activity, which is stupefyingly involved on its own--and then you have to relate that to other, often complicated, factors like behavior, lifestyle, or disease state.
I think it's reasonably well known that the BOLD response is complex and doesn't directly reflect "average" spiking activity. Some studies find that it's sensitive to the amount of synchrony (= more neurons firing together in time) rather than to the rate. The paper you mention shows another dissociation: neurons can get more fuel by extracting oxygen more efficiently OR by having more overall oxygen available to extract at the same rate. Thus, it's not noise, but it is complicated.
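A toy simulation of the rate-vs-synchrony point (synthetic spike trains; just to show that an aggregate signal can dissociate from the mean firing rate):

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_bins, rate = 1000, 10000, 0.02

    # Asynchronous population: each neuron fires independently.
    asyn = rng.random((n_neurons, n_bins)) < rate

    # Synchronous population: same mean rate per neuron, but spiking is
    # gated by a shared input, so neurons tend to fire in the same bins.
    gate = rng.random(n_bins) > 0.5
    sync = rng.random((n_neurons, n_bins)) < (2 * rate * gate)

    for name, s in [("asynchronous", asyn), ("synchronous", sync)]:
        pop = s.sum(axis=0)   # summed population activity per time bin
        print(f"{name}: mean = {pop.mean():.1f} spikes/bin, std = {pop.std():.1f}")

Same mean rate, several times larger coordinated fluctuations: any aggregate measure sensitive to neurons firing together will see these two populations very differently.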
There are some notable exceptions -- Donoho, Vershynin -- but most of them are doing good old-fashioned Brunn-Minkowski theory, which is fundamental but a hard sell in its most truthful form.