It is hard to describe how painful it was to read about the overwhelming evidence of study manipulation by Masliah and others a few months ago. My father-in-law was diagnosed with this terrible disease in his late fifties, but before that he went through several misdiagnoses, including depression. He lost his job, and I am now convinced it was because he was in the early stages of the disease, which affected his memory and ability to communicate. I later learned this is being studied because it is apparently a pattern: https://www.nytimes.com/2024/05/31/business/economy/alzheime... This triggered all kinds of troubles, as you might imagine.
After that, he went through a violent period, and within a few months he could no longer speak or eat on his own. He now wears diapers, and we had to hire a professional caregiver to help with his daily routines. The impact on our family has been dramatic: we are not a large family, so we had to spend a significant amount of resources to help his wife, who is his main caregiver. We have since received some assistance from the public healthcare system, but it took time, and the support did not keep pace with the rapid progression of his symptoms.
I have seen relatives pass away from other causes, but this is by far one of the cruelest ways to die. After a few years of dealing with this disease, I cannot fathom any justification - good or bad - for the massive deception orchestrated, apparently for the sake of Masliah and others' careers. I hope they are held accountable and brought to trial soon for the damage they have caused to society and science.
Science needs an intervention similar to what the CRM process (https://en.wikipedia.org/wiki/Crew_resource_management) did to tamp down cowboy pilots flying their planes into the sides of mountains because they wouldn't listen to their copilots who were too timid to speak up.
...on the evening of Dec 28, 1978, they experienced a landing gear abnormality. The captain decided to enter a holding pattern so they could troubleshoot the problem. The captain focused on the landing gear problem for an hour, ignoring repeated hints from the first officer and the flight engineer about their dwindling fuel supply, and only realized the situation when the engines began flaming out. The aircraft crash-landed in a suburb of Portland, Oregon, over six miles (10 km) short of the runway.
It has been applied to other fields:
Elements of CRM have been applied in US healthcare since the late 1990s, specifically in infection prevention. For example, the "central line bundle" of best practices recommends using a checklist when inserting a central venous catheter. The observer checking off the checklist is usually lower-ranking than the person inserting the catheter. The observer is encouraged to communicate when elements of the bundle are not executed; for example, if a breach in sterility has occurred.
Maybe not this system exactly, but a new way of doing science needs to be found.
Journals, scientists, funding sources, universities and research institutions are locked in a game that encourages data hiding, publish or perish incentives, and non-reproducible results.
The current system relies on the marketplace of ideas - i.e. if you publish rubbish, a competitor lab will call you out. It's not the same as the two people in an aircraft cockpit - in the research world, that plane crashing is all part of the market adjustment, weeding out bad pilots/academics.
However, it doesn't work all the time, for the same reasons that markets don't work all the time: people tend to form cosy cartels to avoid that harsh competition.
In academia these cartels form around grants, either directly (are you inside the circle?) or indirectly ("the idea obviously won't work, as the 'true' cause is X").
Not sure you can fully avoid this - but I'm sure there might be ways to improve it around the edges.
How is that correction mechanism supposed to work though? Do you mean the peer review process?
Friends in big labs tell me they often find issues with competitor labs' papers, not necessarily nefarious, but like "ah no, they missed thing x here so their conclusion is incorrect"... but the effect of that is just that they discard the paper in question.
In other words: the labs I'm aware of filter papers themselves on the "inbound" path in journal clubs, creating a vetted stream of papers they trust or find interesting for themselves... but that doesn't provide any immediate signal to anyone else about the quality of the papers.
Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.
There actually are checklists you have to fill out when publishing a paper. You have to certify that you provided all relevant statistics, have not doctored any of your images, have provided all relevant code and data presented in the paper, etc. For every paper I have ever published, every last item on these checklists was enforced rigorously by the journal. Despite this, I routinely see papers from "high-profile" researchers that obviously violate these checklists (e.g. no data released, and not even a statement explaining why the data was withheld), so it seems that they are not universally enforced. (And this includes papers published in the same journals around the same time, so they definitely had to fill out the same checklists I did.)
Not to mention that scientists spend a crazy amount of time writing grant proposals instead of doing science. Imagine if programmers spent 40% of their time writing documents asking for money to write code. Madness.
Indeed. You do need some idea of what you are going to do before being funded.
The tricky bit is that in research, and this is a bit like the act of programming, you often discover important stuff in the process of doing - and the more innovative the area, the more likely this is to happen.
Big labs deal with this by having enough money to self-fund prospective work, or support things for extra time. The real problem is that new researchers, who often have the new ideas, are the most constrained.
If this was applied in science we'd still be flying blind with regard to stomach ulcers, because a lot of 'researchers' thought bacteria couldn't live in the stomach (which is obviously a BS reason).
Yes, CRM procedures are very good in some cases, and I would definitely apply them in healthcare for things like procedures, the issues mentioned, etc.
I work in neurotech and sleep; our focus is slow-wave enhancement, and there have recently been 3 papers looking at its impact on Alzheimer's.
I'm not a scientist or expert, but we do speak with experts in the field.
What I've gathered from these discussions is that Alzheimer's is likely not a single disease but multiple diseases which are currently being lumped under the one label.
The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is no direct test for Alzheimer's, which makes sense, because we don't really know what it is - which is why we have the Amyloid Hypothesis, the Type 3 Diabetes hypothesis, etc.
I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypothesis but, at the same time, to take action against those who falsify research.
Here's some of the recent research in sleep and Alzheimer's
> The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is,
Correction here: while other tests are sometimes given to rule out additional factors, there is an authoritative, direct test for Alzheimer's: clinically detectable cognitive impairment in combination with amyloid and tau pathology (as seen in cerebrospinal fluid or PET scan). This amyloid-tau copathology is basically definitional to the disease, even if there are other hypotheses as to its cause.
> likely multiple diseases which are currently being lumped into the one label
You've also just described "cancer" as it was, what, 30 years ago?
We knew the symptoms, and we knew some rough classification based on where it first appeared. It took readily accessible diagnostic scans and genetic typing to really make progress.
This - it's more like an end-stage failure mode of a self-regulating, dynamic system which has drifted into dysfunction.
A case of "lots of things have to work perfectly, or near enough, or a brain drifts into this state". This seems to be very much the case with cancers: essentially, unless everything regulates properly, the system on its own devolves into cancer.
Like a gyroscope which will only spin if balanced, only this gyroscope has 10^LARGE moving parts.
And when you consider the ageing process, you're talking about multiple systems operating at 50%, 75%, 85% effectiveness, all of which interact with one another, so it's inevitable that self-regulating mechanisms start to enter failure cascade.
In terms of interventions, a lot of the time it seems like the best fix is to look at which of the critical systems is most deteriorated, and try to bring that one up. So, for example, diet and exercise can restore a degraded circulatory system by a meaningful amount, but you can be an Ironman triathlete and still develop Alzheimer's in your 60s. If we can find reliable ways to do the same for sleep, that will be worthwhile, and likely there are other systems where we might do the same - immune, liver, kidneys and so on.
It sounds like corrosion in metals. There are many different damage mechanisms and protective effects, but at the end of the day you see weakened, oxidised metal.
The sleep bit is what we are working on. We increase slow-wave delta power, increasing the effectiveness of the glymphatic system to flush metabolic waste from the brain.
There is more than a decade of research into this process. The studies I pointed at earlier focused on older adults, who see a larger improvement than a younger population, but there are also lots of studies in university-aged subjects, due to the nature of research.
So in the clinic, what we usually do is a detailed neuropsychology battery. Also a patient history, but the neuropsych battery does provide some quantitative measures.
If there's clear amnestic memory loss, verbal fluency decline, and visuospatial processing decline, it's more probably Alzheimer's; vs. if there are other features in terms of frontal dysexecutive functioning, behavioral changes, etc., then you think FTD, or possibly LBD if there are reports of early visual hallucinations.
Amyloid PETs are getting a bit better so there's that. Amyloid-negative PETs w/ amnestic memory loss are being lumped under this new LATE (Limbic-Predominant Age-Related TDP-43 Encephalopathy) but that definition always felt a bit...handwavey to me.
A pilot paper with 11-32 adults is good, but you have to raise funding for Phase II and III trials.
We're not the researchers; we are developing the technology to support research. They are somewhat hamstrung by the currently available technology. There are other benefits to slow-wave enhancement, non-clinical and beyond dementia. It has been suspected this could play a role in prevention of AD, and we strongly suspected (and still somewhat suspect) that we could have a direct impact in treating AD.
Having said that, the paper that looked at people with AD saw improvement in sleep, so even if we can help them subjectively feel less exhausted, that could be a quality of life benefit, even if non-clinical.
I completely agree that proving effectiveness as a treatment is a long road, but we're going for a non-clinical use first. If the research works out, we can look into clinical use at a later date.
>I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypotheses, but at the same time, take action against those who falsify research.
The problem is that as many of these studies build upon each other, many other studies are tainted. A thorough review would be necessary to sort this mess out - but with US research institutions in complete disarray, we are years away from such progress.
So as a lay person with an active interest in the topic, my reading of 2) is that in the treatment group, some people showed improved sleep physiology AND improved memory, and this was attributed to the treatment, but the group as a whole did not.
If some improve, and the group average score remains unchanged, does that mean some got worse, or is it a case of the group average not being statistically significant?
What this suggests to me is that there is _surely_ a link between sleep quality and memory performance, but that whether or not the proposed treatment makes any difference - that is, whether the treatment caused the sleep improvement - is doubtful. At best it seems to be "it works moderately well for some people, and not at all for others". Am I reading it correctly?
You are reading that correctly, however, it is likely a limitation of the technology they used in the study.
Strangely, they didn't mention how they decided on stimulation volume; however, most studies will either set a fixed volume for the whole study, or measure the user's hearing while they are awake and then set a fixed volume for that person.
Our technology (and we're not the first to do this) adapts the volume based on brain response during sleep in real-time.
When you don't do this you risk either having the volume so low that it doesn't evoke a response, or so high that you decrease sleep depth, and don't get the correct response.
Therefore, anyone who did not get the appropriate volume would end up as a non-responder.
It is also more challenging for previously used algorithms to detect a slow-wave in older adults because the delta power is lower, so some of these participants may have had limited stimulation opportunities.
We've developed methods which improve on the state of the art, but we have not validated those in a study yet.
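For a sense of what the closed-loop adaptation means in practice, here is a minimal sketch in Python; the function, field names, and thresholds are hypothetical illustrations (not our actual algorithm), and a real system works on EEG features rather than single scalars:

    # Illustrative closed-loop volume adaptation: rather than a fixed volume
    # chosen while the user is awake, nudge the stimulus level based on the
    # sleeping brain's measured response after each stimulation.
    def adapt_volume(volume, evoked_delta_power, sleep_depth,
                     target_response=1.0, min_depth=0.8, step=0.05):
        if sleep_depth < min_depth:
            return max(volume - step, 0.0)  # too loud: sleep is lightening, back off
        if evoked_delta_power < target_response:
            return volume + step            # too quiet: no evoked response, nudge up
        return volume                       # in the effective window: hold

    # After each detected slow-wave and stimulation:
    #   volume = adapt_volume(volume, measured_response, estimated_depth)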
I feel that sleep will ultimately be the answer and the cure. Personal anecdote - one of my uncles is affected by this disease. One day, my aunt could not wake him up in the morning as hard as she tried. He would mumble and try to go back to sleep. When he finally awoke after 10 minutes of my aunt basically yelling at and slapping him, he was, in her words - "back to the man I used to know". Completely lucid and able to keep up with conversation, remembering everything etc. Two days later he was back to his old confused baseline.
I'm not as optimistic as you that sleep will be a cure, but I'd be very surprised indeed if sleep quality weren't preventive. (Proving this might be more difficult, though - correlation/causation).
It's almost an argument by process of elimination - why else would literally every living thing with a brain need to spend so much of its time asleep? How is it that we still don't fully know what sleep (as distinct from either rest or unconsciousness) is actually for?
Multiple studies show that night shift work is bad for the brain - and for those with a habit of working nights (probably quite a few of us on HN, from time to time), if a recreational drug made your brain feel as bad as an all-nighter can, that would surely be one you'd put in the "treat with great caution" category, no?
No doubt the glymphatic system (a central part of higher animal physiology which was only discovered in the last 25 years) has a role to play. It may be that, as with cancer, once the degenerative process gets beyond a certain point, it's hard to stop - but I'm hopeful that science will unlock a good deal of understanding around prevention over the next decade or so - even if that's not much more than an approach to sleep hygiene analogous to "eat your 5 fruit and veg a day, don't have too much alcohol or HFCS, and make sure to do a couple of sessions of cardio and a few weights every week".
The higher-level problem is that there are tons of scientific papers with falsified data and very few people who care about this. When falsified data is discovered, journals are very reluctant to retract the papers. A small number of poorly-supported people examine papers and have found a shocking number of problems. (For instance, Elisabeth Bik, who you should follow: @elisabethbik.bsky.social) My opinion is that the rate of falsified data is a big deal; there should be an order of magnitude more people checking papers for accuracy and much more action taken. This is kind of like the replication crisis in psychology but with more active fraud.
Unfortunately as you spend more time investigating this problem it becomes clear that replication studies aren't the answer. They're a bandage over the bleeding but don't address the root causes, and would have nearly no impact even if funded at a much larger scale. Because this suggestion comes up in every single HN thread about scientific fraud I eventually wrote an essay on why this is the case:
(press escape to dismiss the banner). If you're really interested in the topic please read it but here's a brief summary:
• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.
• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.
• The idea implicitly assumes that very few researchers are corrupt thus the replicators are unlikely to also be. This isn't the case because replication failures are often due to field-wide problems, meaning replications will be done by the same insiders who benefit from the status quo and who signed off on the bad papers in the first place. This isn't an issue today because the only people who do replication studies are genuinely interested in whether the claims are true, it's not just a procedural way to get grant monies.
• Many papers aren't worth replicating because they make trivial claims. If you punish non-replication without fixing the other incentive problems, you'll just pour accelerant on the problem of academics making obvious claims (e.g. the average man would like to be more muscular), and that just replaces one trust destroying problem with another.
Replication failure is a symptom not a cause. The cause is systematically bad incentives.
>>>• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.
>>>• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.
Incentives are themselves dangerous. We should treat incentives like guns. Instead we apply incentives to all manner of problems and are surprised they backfire and destroy everything.
You have to give people fewer incentives and simply more time to do their basic job.
I think this hits the nail on the head. Academics have been treated like assembly line workers for decades now. So they’ve collectively learned how to consistently place assembled product on the conveyor belt.
The idea that scientific output is a stack of publications is pretty absurd if you think about it for a few minutes. But try telling that to the MBA types who now run universities.
You do need to incentivize something. If you incentivize nothing that's the same thing as an institution not existing and science being done purely as a hobby. You can get some distance that way - it's how science used to work - but the moment you want the structure and funding an institution can provide you must set incentives. Otherwise people could literally just stop turning up for work and still get paid, which is obviously not going to be acceptable to whoever is funding it.
Einstein, Darwin, Linnaeus. All science as a hobby. I don't think we should discount that people will in fact do it as a hobby if they can, and make huge contributions that way.
Einstein spent almost all of his life in academia living off research grants. His miracle year took place at the end of his PhD and he was recruited as a full time professor just a few years later. Yes, he did science as a hobby until that point, but he very much wanted to do it full time and jumped at the chance when he got it.
Still, if you want scientific research to be done either as a hobby or a corporate job, that's A-OK. The incentives would be much better aligned. There would certainly be much less of it, though, as many fields aren't amenable to hobbyist work at all (anything involving faraway travel, full-time effort, or expensive equipment).
I think it’s an interesting feature of current culture that we take it as axiomatic that people need to be ‘incentivized’. I’m not sure I agree. To me that axiom seems to be at the root of a lot of the problems we’re talking about in this thread. (Yes, everyone is subject to incentives in some broad sense, but acknowledging that doesn’t mean that we have to engineer specific incentives as a means to desired outcomes.)
I think there is some misunderstanding here. Incentives are not some special thing you can opt to not do.
Who do you hire to do science? When do you give them a raise? Under which circumstances do you fire them? Who gets a nicer office? There are a bunch of scientists each clamouring for some expensive equipment (not necessarily the same piece): who gets their equipment and who doesn't? A scientist wants to travel to a conference: who can travel, and where can they travel? We have a bunch of scientists working together: who can tell the others what to do and what not to do?
Depending on your answers to these questions you set one incentive structure or another. If you hire and promote scientists based on how nicely they do interpretive dance, you will get scientists who dance very well. If you hire and promote scientists based on how articulate they are about their subject matter, you will get very well-spoken scientists. If you don't hire anybody, then you will get approximately nobody doing science (or only the independently wealthy dabbling here and there out of boredom).
If you pay a lot to the scientists who do computer stuff, but approximately no money to the people who do cell stuff, you will get a lot of computer scientists and no cell scientists. Maybe that is what you want, maybe not. These things don't happen from one day to another. You are not going to get more "cancer research" tomorrow out of the existing cancer researchers if you hike their salary 100-fold today. But on the order of decades you will definitely see many more (or many fewer) people working on the problem.
> My opinion is that the rate of falsified data is a big deal
Have anything that backs that up? Other than what you shared here?
I would be very interested in the rate on a per author level, if you have some evidence. Fraud "impact" vs "impact" of article would be interesting as well.
All of those examples have no relative meaning. If there are millions of papers published per year, then 1000 cases over a decade isn't very prevalent (still bad).
‘It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgement of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine.’ — Marcia Angell
0.04% of papers are retracted. At least 1.9% of papers have duplicate images “suggestive of deliberate manipulation”. About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud.
“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” — Richard Horton, editor of the Lancet
The statcheck program showed that “half of all published psychology papers…contained at least one p-value that was inconsistent with its test”.
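(The core of that kind of check is easy to sketch. statcheck itself is an R package that parses APA-formatted results; a rough Python equivalent of the underlying test, with made-up example numbers, might look like this:)

    # Recompute the p-value implied by a reported t statistic and degrees of
    # freedom, and flag results whose reported p doesn't match.
    from scipy import stats

    def t_result_consistent(t_value, df, reported_p, tol=0.005):
        recomputed = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p
        return abs(recomputed - reported_p) <= tol

    # A paper reporting "t(28) = 2.20, p = .01" fails the check:
    # the recomputed two-tailed p is about .036.
    print(t_result_consistent(2.20, 28, 0.01))  # False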
The GRIM program showed that of the papers it could verify, around half contained averages that weren’t possible given the sample sizes, and more than 20% contained multiple such inconsistencies.
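(GRIM is even simpler: a mean of integer responses from n subjects, reported to a given precision, must round-match some integer total divided by n. A minimal sketch, again with made-up numbers:)

    # GRIM consistency test for a reported mean of integer-valued responses.
    def grim_consistent(reported_mean, n, decimals=2):
        k = round(reported_mean * n)  # nearest achievable integer total
        return round(k / n, decimals) == round(reported_mean, decimals)

    print(grim_consistent(3.48, 25))  # True:  87/25 = 3.48 exactly
    print(grim_consistent(3.51, 25))  # False: no integer total over 25 gives 3.51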
The fact that half of all papers had incorrect data in them is concerning, especially because it seems to match Richard Horton’s intuitive guess at how much science is simply untrue. And the GRIM paper revealed a deeper problem: more than half of the scientists refused to provide the raw data for further checking, even though they had agreed to share it as a condition for being published.
After some bloggers exposed an industrial research-faking operation that had generated at least 600 papers about experiments that never happened, a Chinese doctor reached out to beg for mercy: "Hello teacher, yesterday you disclosed that there were some doctors having fraudulent pictures in their papers. This has raised attention. As one of these doctors, I kindly ask you to please leave us alone as soon as possible … Without papers, you don't get promotion; without a promotion, you can hardly feed your family … You expose us but there are thousands of other people doing the same."
Yet despite decades of research, no treatment has been created that arrests Alzheimer’s cognitive deterioration, let alone reverses it.
Nowhere in the article does it mention that anti-amyloid therapies such as donanemab and lecanemab have so far successfully slowed decline by about 30%. They may not yet be "arresting" (fully stopping) the disease, but it's pretty misleading for the article to completely omit reference to this huge success.
We are currently in the midst of a misguided popular uprising against the amyloid hypothesis. There were several fraudulent studies on amyloid, and those responsible should be handled severely by the scientific community. But these fraudulent studies do not constitute the foundational evidence for the amyloid hypothesis, which remains very solid.
From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".
If they are so good at cleaning up the amyloid, why don't people have more of an improvement? I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.
> From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".
After many decades of research, we've gone in the last few years from no ability whatsoever to affect the underlying disease, to 30% slowdown. To be clear, that's a 30% slowdown in clinical, cognitive endpoints. Whether you call that "meaningful" is a bit subjective (I think most patients would consider another couple years of coherent thinking to be meaningful), and it has to be weighed against the costs and risks, and there's certainly much work to be done. But it's a huge start.
> If they are so good at cleaning up the amyloid, why don't people have more of an improvement?
No one is expected to improve after neurodegeneration has occurred. The best we hope for is to prevent further damage. Amyloid is an initiating causal agent in the disease process, but the disease process includes other pathologies besides amyloid. So far, the amyloid therapies which very successfully engage their target have not yet been tested in the preclinical phase before the amyloid pathology initiates further, downstream disease processes. This is the most likely reason we've seen only ~30% clinical efficacy so far. I expect much more efficacy in the years to come as amyloid therapies are refined and tested at earlier phases. (I also think other targets are promising therapeutic targets; this isn't an argument against testing them.)
> I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.
To be clear, the evidence for the amyloid hypothesis is causal. The association between amyloid and Alzheimer's has been known since Alois Alzheimer discovered the disease in 1906. The causal evidence came in the 1990s, which is why the scientific community waited so long to adopt that hypothesis.
Would it be fair to say that it's causal in terms of process, but perhaps not in terms of initiation?
That is, there's a feedback loop involved (or, likely, a complex web of feedback processes), and if a drug can effectively suppress one of the steps, it will slow the whole juggernaut down to some extent?
Am reminded a little of the processes that happen during/after TBI - initial injury leads to brain swelling leads to more damage in a vicious cycle. In some patients, suppressing the swelling results in a much better outcome, but in others, the initial injury, visible or not, has done too much damage and initiated a failure cascade in which treating the swelling alone won't make any difference to the end result.
Reading between the lines, if we gave people those drugs before they show any symptoms, we should be able to do even better. Has this been tested? How safe are those drugs? What should the average person be doing to avoid accumulating amyloids in the first place?
There were some earlier prevention failures with solanezumab and crenezumab, but these antibodies worked differently and never showed much success at any stage.
> How safe are those drugs?
There are some real safety risks from brain bleeding and swelling, seemingly because the antibodies struggle to cross the blood-brain barrier, accumulating in blood vessels and inducing the immune system to attack amyloid deposits in those locations rather than the more harmful plaques in brain tissue. A new generation of antibodies including trontinemab appears likely to be both more effective and much safer, by crossing the BBB more easily.
> What should the average person be doing to avoid accumulating amyloids in the first place?
There's not much proven here, and it probably depends on your individualized risk factors. There's some evidence that avoiding/properly treating microbial infection (particularly herpes viruses and P. gingivalis) can help, since amyloid beta seems to be an antimicrobial peptide which accumulates in response to infection. There may also be some benefit from managing cholesterol levels, as lipid processing dysfunction may contribute to increased difficulty of amyloid clearance. Getting good sleep, especially slow wave sleep, can also help reduce amyloid buildup.
Those quoting the 30% figure may want to research where that figure comes from and what it actually means:
“Derek Lowe has worked on drug discovery for over three decades, including on candidate treatments for Alzheimer’s. He writes Science’s In The Pipeline blog covering the pharmaceutical industry.
“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'” he told Big Think.
“Although the effect of the drug will be described as being about a third, it consists, on average, of a difference of about 3 points on a 144-point combined scale of thinking and daily activities,” Professor Paresh Malhotra, Head of the Division of Neurology at Imperial College London, said of donanemab.
What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.
“That’s a minimal difference, and people are unlikely to perceive any real alteration in cognitive functioning,” Alberto Espay, a professor of neurology at the University of Cincinnati College of Medicine, told KFF Health News.
At the same time, these potentially invisible benefits come with the risk of visible side effects. Both drugs caused users’ brains to shrink slightly. Moreover, as many as a quarter of participants suffered inflammation and brain bleeds, some severe. Three people in the donanemab trial actually died due to treatment-related side effects.”
“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'”
It's not quite that simple, and the amyloid hypothesis doesn't claim it to be. It does, however, claim that amyloid is the upstream cause of the disease, and that if you stop it early enough, you stop the disease. But once you're already experiencing symptoms, there are other problems which clearing out the amyloid alone won't stop.
> What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.
As I point out in another comment, the decline (from a baseline of ~3 points worse than a perfect score) during those 18 months is only 1.66 points in the placebo group. It's therefore very misleading to argue that because this is an 18-point scale, a 0.45-point benefit isn't clinically meaningful. A miracle drug with 100% efficacy would only achieve a 1.66-point slowdown.
“But once you're already experiencing symptoms, there are other problem which clearing out the amyloid alone won't stop.”
Ok, maybe we’re just arguing different points here. I’ll grant that amyloids have something to do with all of this. I’m having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer’s patient at a point where it can no longer help.
Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?
> I’m having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer’s patient at a point where it can no longer help.
My central claim is that the drugs help quite a lot, by slowing down the disease progression by 30%, and that it's highly misleading to say "only 0.45 points benefit on an 18 point scale", since even a literal 100% halting of the disease could only have achieved 1.66 points of efficacy in the 18-month clinical trial.
This is like having a 100-point measure of cardiovascular health, where patients start at 90 points and are expected to worsen by 10 points per year, eventually dying after 9 years. If patients given some treatment only worsen by 7 points per year instead of 10, would you say "only 3 points benefit on a 100 point scale"?
> Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?
I do believe that they will be more (close to 100%) efficacious when used in this way, yes.
Downvoters, are you sure you have a rational basis for downvoting this informative post? Do us HNers really know enough to discredit the amyloid hypothesis when 99.9% of us know nothing other than it's gotten some bad press in recent years?
I googled lecanemab and it does have the clinical support claimed. I don't see anyone questioning the data. I'm as surprised as anyone else, even a little suspicious, but I have to accept this as true, at least provisionally.
For anyone who wants to start grappling with the true complexity of this issue, I found a scholarly review [1] from October 2024.
"Lecanemab resulted in infusion-related reactions in 26.4% of the participants and amyloid-related imaging abnormalities *with edema or effusions in 12.6%*."
"After 18 months of treatment, lecanemab slowed cognitive decline by 27% compared with placebo, as measured by the Clinical Dementia Rating–Sum of Boxes (CDR-SB). This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"
Sum of boxes is a 19 point scale. So, for those keeping track at home, this is an incredibly expensive treatment that requires premedication with other drugs to control side effects, as well as continuous MRIs, for a ~2.3% absolute reduction in the progression of dementia symptoms compared to placebo, with a 12% risk of cerebral edema.
Now, I'm no neurologist, but I'd call that pretty uninspiring for an FDA-approved treatment.
"This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"
…
Sum of boxes is a 19 point scale.
It's an 18 point scale, but more to the point: the decline in the placebo group was only 1.66 points over those 18 months, and the mean score at baseline was just over 3 points. So even 100% efficacy could only possibly have slowed decline by 1.66 out of 18 points (what you would call a 9.2% absolute reduction) in the 18 months of that experiment. And full reversal (probably unattainable) would have only slowed decline by about 3 points.
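To make the competing framings concrete, using the numbers quoted above:

    relative slowing:  0.45 / 1.66 ≈ 27%   (the headline figure)
    absolute framing:  0.45 / 18   ≈ 2.5%  of the full scale
    trial ceiling:     1.66 / 18   ≈ 9.2%  (even a 100% halt of decline)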
I agree that the side effects of anti-amyloid therapies are a serious concern. The reasons for this are being understood and corrected in the next generation of such therapies. For example, I expect trontinemab to achieve better efficacy with much greater safety, and there is already preliminary evidence of that. Furthermore, there are improved dosing regimens of donanemab which improve side effects significantly.
Note that my claim is not that the existing drugs are stellar, and certainly not that they're panaceas. Simply that the amyloid hypothesis is true and there has been tremendous progress based on that hypothesis as of late.
To emphasize your point, I don't think anyone will notice if someone's Alzheimer's is 2.3% better.
These rating scales like CDR-SB (invented by drug companies or researchers who are funded by drug companies) are very good at making the tiniest improvement sound significant.
And that is the core problem with what happened. There may actually be a grain of truth, but now there is a backlash. I'd argue, though, that the mounds of alternative explanations that weren't followed up on should get some priority right now: since we know so little about them, there is a lot to learn, and we are likely to have a lot of surprises there.
I see this as the same problem with UCT (Upper Confidence bounds applied to Trees) based algorithms. If you get a few initial random rollouts that look positive, you end up dumping a lot of wasted resources into that path, because the act of looking optimizes the tree of possibilities you are exploring (it was definitely easier to pursue amyloid lines of research than other ideas because of the efforts already put into it). Meanwhile the other possibilities you have barely been exploring slowly become more interesting as you add a few resources to them. Eventually you realize that one of them is actually a lot more promising and ditch the bad rut you were stuck in, but only after a lot of wasted resources. To switch fields, I think something similar happened to AlphaGo when it lost a game because it was very confident in a bad move.
Basically, UCT-type algorithms assume every rollout should optimize the long-run return, so they only balance exploration with exploitation. When it comes to research, though, that value signal is wrong: you need to search the solution space, because your goal is not to make every trial find the most effective treatment, it is to eventually find the actual answer and then use that going forward. The individual trial values do not matter. This means you should balance exploration, exploitation AND surprise. If a trial gives you very different results than you expected, then you have shown that you don't know much there, and maybe it is worth digging into; even if it returned less value than some other path, its potential value could be much higher. (Yes, I did build this algorithm. Yes, it does crush UCT-based algorithms. Just use variance as your surprise metric, then beat AlphaGo.)
People intrinsically understand these two modes. In our day-to-day lives we pretty much exclusively balance exploration and exploitation, because we have to put food on the table while still improving; but when we get to school we often take classes that 'surprise' us, because we know that the goal at the end is to have gained -some- skill that will help us. Research priorities need to take surprise into account to avoid the UCT rut pitfalls. If they had for the amyloid hypothesis, maybe we would have hopped over to other avenues of research faster. 'The last 8 studies showed roughly the same effect, but this other path has varied wildly. Let's look over there a bit more.'
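A minimal sketch of that selection rule; the weights and node fields are my own illustrative assumptions, not the poster's actual algorithm:

    import math

    # UCB1 child selection plus a variance-based "surprise" bonus: high
    # variance in a child's observed returns means we understand that branch
    # poorly, so it earns extra visits beyond what plain UCB1 would give it.
    def select(children, parent_visits, c=1.4, s=0.5):
        def score(ch):
            mean = ch.value_sum / ch.visits                          # exploitation
            explore = c * math.sqrt(math.log(parent_visits) / ch.visits)
            var = max(ch.value_sq_sum / ch.visits - mean ** 2, 0.0)
            surprise = s * math.sqrt(var / ch.visits)                # surprise bonus
            return mean + explore + surprise
        return max(children, key=score)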
yeeeess... but when you look at the slope of the decline in the NEJM papers describing the clinical trials of lecanemab and donanemab... are you really slowing the decline?
To be clear, I think you're asking whether maybe the drugs just provide a temporary "lift" but then the disease continues on the same basic trajectory, just offset a bit?
The studies aren't statistically powered to know for sure, but in lecanemab figure 2, the between-group difference on CDR-SB, ADAS-Cog14, ADCOMS, and ADCS-MCI-ADL (the four cognitive endpoints) widens at each successive visit. Furthermore, while not a true RCT, the lecanemab-control gap also widens up to 3 years in an observational study: https://www.alzforum.org/news/conference-coverage/leqembi-ca...
On donanemab figure 2, there is generally the same pattern although also some tightening towards the end on some endpoints. This could be due to the development of antidrug antibodies, which occurs in 90% of those treated with donanemab; or it could be statistical noise; or it could be due to your hypothesis.
There is anecdotal evidence and perhaps even some small studies showing that a keto diet can halt and even reverse Alzheimer's symptoms.
Compared to that, reducing the speed of decline isn't terribly impressive. It's better than nothing to be sure! But what people want is BIG progress, and understandably so. Billions have been spent.
Billions have been spent because it's a challenging disease to understand and treat. I want big progress too. But we shouldn't let our desire for big progress cause us to lose our ability to objectively evaluate evidence.
I have no opposition to a properly conducted randomized controlled trial of the keto diet, or of other proposed therapies (many of which have been conducted, for targets other than amyloid which are completely compatible with the amyloid hypothesis). Until a proper RCT of keto is conducted, anecdotal claims are worth very little compared to the evidence I referred to.
I'm far, far more interested in anecdotes about completely halting or reversing decline than I am in rock solid data about a 30% reduction in decline speed.
Antibiotics started out as an anecdote about something whose effect was so stark it couldn't be missed. Chasing promising anecdotes is far more valuable (in my opinion) than attempting to take a 30% effect to a 100% effect.
Others are free to feel differently, of course. I'm open to hearing about 100 different times that a tiny effect was grown and magnified into a huge effect that totally changed medicine. I'm just not aware of many at this point.
You can be interested in what you want. But the interest in anti-amyloid therapy came from the basic science indicating amyloid pathology as the critical but-for cause of the disease. It wasn't just a blind shot in the dark.
To my knowledge, there's no such basic science behind a keto diet for Alzheimer's.
Turns out there are enough studies for a meta-analysis. Is that basic science?
Basic science in this context means research investigating the underlying disease process to develop knowledge of how it works mechanistically, as distinguished from (and as a precursor to) developing or testing treatments for the disease. This helps us direct resources in plausibly useful directions rather than merely taking shots in the dark, and it also helps us to interpret later clinical findings: e.g. if we see some cognitive benefit in a three-month trial, is that because the underlying disease process was affected (and hence the benefit might persist or even increase over time), or might it be because there was some symptomatic benefit via a completely separate mechanism but no expectation of a change in trajectory? For example, cholinergic drugs are known to provide symptomatic benefit in Alzheimer disease but not slow the underlying biological processes, so that worsening still continues at the same pace. Or if we see results that are statistically borderline, is it still worth pursuing or was the very slight benefit likely a fluke?
So a meta-analysis of ketogenic diets in Alzheimer disease is not basic science, though that doesn't mean it's useless. But what I'm saying is it's really helpful to have a prior that the treatment you're developing is actually targeting a plausible disease pathway, and the amyloid hypothesis gives us that prior for amyloid antibodies in a way that, to my knowledge, we don't have for ketogenic diets.
Thanks, I just took a look at this meta-analysis. The studies with the strongest benefits on the standard cognitive endpoints of MMSE and ADAS-Cog — Taylor 2018, Qing 2019, and Sakiko 2020 — all lasted only three months, which makes me suspect (especially given the context of no theoretical reason to expect this to work that I'm aware of) this is some temporary symptomatic benefit as with the cholinergic drugs I mentioned above.
But it's enough of a hint that I'd support funding a long-term trial just to see what happens.
If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?
I've argued elsewhere in the thread that it does make quite a difference, but there's still a lot of work to do, and I've said what I think that work is (mainly: improving BBB crossing and administering the drugs earlier).
There was absolutely no theoretical reason that some mold would kill bacteria, but thankfully Fleming noticed what happened and we got antibiotics.
There was no theoretical reason that washing your hands would do anything to combat the spread of disease, and all the smart doctors knew otherwise. Some kooky doctor named Semmelweis proposed in 1847 that doctors should wash their hands between childbirths, 14 years before Pasteur published his findings on germ theory in 1861. When some doctors listened to him, maternal mortality dropped from 18% to 2%.
I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.
It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined. That's too bad.
The effect of keto diet interventions in Alzheimer's is pretty strong, to my understanding.
Which should be aggressively hinting that there's likely some as-yet unknown causality that's worth investigating. We don't have to spend billions to do that. But we do need more funding for it which is hard to get while all the amyloid hypothesis folks are really invested and clamoring.
> There was absolutely no theoretical reason that some mold would kill bacteria, but thankfully Fleming noticed what happened and we got antibiotics.
Again, I'm in favor of people investigating all sorts of random shit.
I agree that sometimes unexpected things pan out. If you want to run a carefully conducted, large long-term trial on ketogenic diets in Alzheimer's, I support you. I'm just skeptical it'll pan out, and on priors I'll put greater expectation on the approach with a scientifically demonstrated mechanistic theory behind it.
> I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.
> It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined.
What are you basing this on? The p-value on lecanemab's single phase 3 trial was below 0.0001. And the causal role (not mere association) of amyloid in the disease had been demonstrated for years before significant efforts were invested in developing therapies to target amyloid in the first place; most convincingly via the genetic mutations in APP, PSEN1, and PSEN2.
I agree more science is certainly better than less. But patentable therapies will always get a disproportionate amount of funding for big science (versus a basically-free dietary change).
It's possible you might adopt a different attitude if one day you're diagnosed with rapid-onset Alzheimer's. At that stage you'd be forgiven for muttering 'basic science be blowed'. Keto (or whatever) offered some relief for my friend Bill; I'll give it a try, given it's my survival at stake.
Plate tectonics was suggested in 1913 and not supported (to put it politely) at that point by 'basic science'. It took until the 1960s to be accepted. A paradigm shift was needed, as Kuhn explained.
concludes "Research conducted has indicated that the KD can enhance the mental state and cognitive function of those with AD, albeit potentially leading to an elevation in blood lipid levels. In summary, the good intervention effect and safety of KD are worthy of promotion and application in clinical treatment of AD."
I'm not aware of any RCT showing long-term improvement of Alzheimer's symptoms from any treatment. I am aware of 1) long-term slowing of worsening (not improvement) from anti-amyloid therapy, 2) short-term benefits but no change in long-term trajectory from other therapies, and 3) sensational claims without an RCT behind them.
Sadly this stems from a structural problem in biology and medicine, and is far from exclusive to the field of Alzheimer's. Some reforms are urgent; otherwise progress is going to be incredibly slow. The same pattern is repeated again and again: someone publishes something that looks novel, but is either an exaggeration or a downright lie.
This person begins to attract funding, grant reviews and article reviews. Funding is used to expand beyond reasonable size and co-author as many articles as possible. Reviews mean this person now has the power to block funding and/or publication of competing hypotheses. The wheel spins faster and faster. And then, we know what the outcome is.
The solution is to make reviews truly independent, plus impose limits on funding, group size, and competing interests. I think that tenured researchers who receive public funding should have no stock options nor receive "consulting" fees from private companies. If they want to collaborate with industry, that's absolutely fine, but it should be done pro bono.
Furthermore, if you are a professor who publishes 2 articles per week while simultaneously "supervising" 15 postdocs and 20 PhD students at 2 different institutions then, except in very few cases, you are no longer a professor but a rent seeker who has no clue what is going on. You just have a very well-oiled machine for stamping your name onto as many articles as possible.
The NIH already has total funding limits for grant eligibility, and the issue of competitors blocking your publications is pretty much eliminated by asking them to be excluded as reviewers, because we almost always already know who is going to do that. A competent editor will also see right through that.
I did my postdoc in a very well funded lab that was larger than even your examples- and they legitimately could do big projects nobody else could do, plus postdocs and grad students had a lot more autonomy, which helped them become better scientists. The PI worked at a distant/high level, but he was a good scientist and a skilled leader doing honest and valuable research, and had economies of scale that let him do more with the same research dollars. It was the least toxic and most creative and productive lab I’ve ever seen. Banning that type of lab would be to the massive detriment of scientific progress in my opinion.
I also disagree about banning consulting and startups for PIs- that is arguably where research ends up having the highest impact, because it gets translated to real world use. It also allows scientists to survive in HCOL areas with much less government funding. Frankly, I could make 4x the salary in industry, and if I were banned from even consulting on the side it would be much harder to justify staying an academic while raising a family in a HCOL area.
I am also very upset about academic fraud and have seen it first hand - but I think your proposed solutions would be harmful and ineffective. I'm not sure what the solution is, but usually students, postdocs, and technicians know if their PI is a fraud and would report it if it were safe for them to do so; they just don't have enough power. Fixing that would likely solve this. Even for a junior PI, reporting on a more senior colleague would usually be career-ending for them, but not for the person they are reporting on.
I'm not a fan of many of the practices you complain about here, but I will say this: we get paid too little for what we do, for way too long. 6 years of grad school ($24k/yr) and a 6-year postdoc ($42k/yr) in California, when I was in those positions anyway. Today, at UC Davis, assistant professors in the UC system start at $90,700 [1, for salary scale], which is often around 12 years after their undergraduate degree. That's in California, where a mortgage costs you $3,000 a month, minimum.
Why do employees keep voluntarily accepting that type of abuse? Low wages aren't a secret and the employees doing that work aren't idiots so they must know what they're getting into. Are they doing it out of some sort of moral duty, or as immigrants seeking permanent resident status, or is there some other reason? Presumably if people stopped accepting those wages then the wages would have to rise.
Those numbers aren't accurate anymore; they're out of date, and pay is now much higher. Also, I voluntarily pay my students and postdocs 2-3x those numbers currently.
But ultimately (1) those are seen as training positions that lead to a tenured faculty position, which pays fairly well, and has a lot of job security and freedom; (2) certain granting agencies limit what you can spend on students and postdocs, to levels that are too low for HCOL areas.
I'll add that it's a buyer's market. There are plenty of postdocs without work (who want to work in the academic space), so if you don't want the post, there are plenty in line who do.
There's no post-doc-research union to set and enforce reasonable pay scales, but equally a union would have difficulty adjusting rates to local cost of living.
Put another way - supply and demand baby, supply and demand.
Says $60,000 for the University of Arkansas, or two-thirds of what was listed for California.
I'm not an academic, nor do I live in California (or Arkansas), but $90,000/year after 6 years working below minimum wage and 6 more barely above it doesn't sound that great in <economic terms>. Hopefully people are getting benefits from teaching or research.
Looking at some random assistant professors in relevant departments, I see:
One who earns $113K
Another who earns $94K
Another who earns $96K.
These are regular departments, (biochemistry, biology, etc). Not medical departments. I'm sure those ones get paid more (e.g. one I personally knew in Houston got $180K in a public university in Dallas).
So mid $90's would be my guess.
Then note that these are 9 month salaries, and the typical deal is they get up to a third more from grants (their "summer" salary). So total compensation for assistant professors would be about $120K.
Faculty salaries are not that bad, as a rule. What's really bad is the number of years they spend trying to get a postdoc.
For biomedical, it is what your grant stipulates. E.g., way back when, K23s paid $98k for 75% time (or maybe 70%) and your institution agreed to pay the rest. Sometimes they would actually pay you more to try to get close to fair market value, but that is if the department is generous. For famous institutions, like UCLA or Brigham and Women's, the law of supply and demand is not on your side: if you don't like the low salary, there's a giant line of wannabes waiting to take your spot.
Published historical pay will be 12-month salaries. Most non-tenured professors will refuse the optional summer salary and work all summer for free, because they have to pay for it from their own grants; it means hiring one less student, and less chance of getting tenure.
Fairly certain. It's also in line with salaries I know from other departments. This is the "fixed" amount of the salary. The grant portion is variable, and also not paid by the state, so it's usually not required by the law to disclose.
Except that is where the majority of research is done, where the prestigious schools and students are, and where those people want to live. Plenty of good research gets done in Arkansas, and Texas, and North Carolina, but not in the cheap parts. It happens in Research Triangle, or Austin, or Fayetteville. It doesn't happen at Ouachita Baptist University in Arkadelphia.
The somewhat good news is that people get into science and medicine because they believe in them, and they're often willing to work for peanuts so that big pharma can take their work and charge Medicare $80k/yr for a new drug that might work.
There are huge problems in academia and its incentive structure, but I don't think they're related to being in urban vs. rural America (they exist just as badly in Europe, China, and India).
Those places you list have gone up painfully in a relative sense (like everywhere lately) but are nothing like the absurdity of California. You don't need two highly-paid professional incomes to afford a house with a long commute. There's also Atlanta, Houston, Dallas, Chicago, central Florida, many others.
It's a very small world for various reasons and sometimes, there's a good combination between a PI and a hosting institution. Sometimes there's not. If the guy who's doing what you want to do has his lab at UAB, you go to UAB. That being said, once you get your K23 or R01, because of NIH matching funds, you have more of a choice of where you go.
That one state is where the apex of the system is. It's where a lot of, maybe most, research happens, it's where perhaps most tech development happens, it's even where a lot of our popular culture is determined. It's where ~everyone is aiming for, even if only a fraction of them will make it there, so it affects the whole system.
All that given, most people still live in the eastern US: something like 80% of Americans live east of Nevada. Culture is arguable.
For Americans, there is a clear difference between what behavior might be normal in the Bay Area/Silicon Valley down through LA and what is normal in NY, Boston, Houston, Miami, Detroit, etc.
I'd even assert "most tech development" is just plain wrong. It's certainly where many companies are HQ'd, but those same companies have offices all over the states, and each one offers/specializes in different products.
It also depends on what you mean by "tech development" of course. R&D projects and new developments, maybe there's an edge. I have a much stronger feeling that more research is performed in the Boston -> DC metropolis than the equivalent (as in distance) metropolis spanning from LA -> Silicon Valley.
I was contextualizing my response because cost of living is higher in California, and some of those numbers may seem more reasonable if it were in Arkansas, for example.
It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.
I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.
I'm no expert, but I suspect it is a longer process to remove someone from a tenured professor position than to remove them as President. We don't know that it won't eventually happen.
There are betrayals so severe that a grindingly slow due process is itself an additional betrayal. Not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.
Those who would condemn "science" need to explain why their concept is different from listening to the weather report and doing the exact opposite -- that's not a recipe for success, and "science" has gotten far more right than wrong over the years.
Now journalists dunk on science with publication-fraud coverage, claiming it's more widespread than it could possibly be! It will certainly get engagement!
More burn-the-bridges science journalism: "the worst that can happen is somehow commonplace"; erode trust in institutions more, and so on and so forth.
We are currently plummeting into nihilism, see you at the bottom, hope the clicks were worth it.
There is a trend of flagging many other categories of media on this site, even if it's "factual and interesting". I was making a plea to the void of this community to stop eating up this category of bullshit.
The big problem is not science per se but capitalist pharma. Science eventually self-corrects, but capitalism creates a huge inertia, driven by the fact that people do not want to lose money and will push things as far as they can. So much investment went into the amyloid hypothesis.
The story of $SAVA is paradigmatic. Every neuroscientist knew that stuff was based 100% on fraudulent results, but they nevertheless managed to reach a multi-billion-dollar capitalisation.
Cassava Sciences is mentioned in the article, and their problem is that the drug they backed was based on fraudulent academic research. The professor has been indicted by the DOJ for defrauding the NIH of $16M in grants. This isn't an indictment of capitalist pharma, because the original fraud wasn't done in a capitalist system.
Based on the scale and impact of fraudulent results, I wonder if some form of LLM-based approach with supervised fine-tuning could help highlight the actually useful research.
Papers are typically weighted by citations, but in the case of fraud, citations can be misleading. Perhaps there's a way to embed all the known Alzheimer's research, then fine-tune the embeddings using negative labels for known fraudulent studies.
The resulting embedding space (depending on how it's constructed; perhaps with a citation graph under the hood?) might be one way to reweight the existing literature?
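A minimal sketch of the idea (toy made-up data; a linear probe over frozen embeddings stands in for the fine-tuning step, and the model name is just a common default):

    # pip install sentence-transformers scikit-learn
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical toy corpus: abstracts labeled 1 if tied to known fraud.
    abstracts = [
        "Abeta*56 oligomers impair spatial memory in aged rats ...",
        "Tau propagation follows connectome pathways in mouse models ...",
        "Novel plaque quantification shows dramatic cognitive rescue ...",
        "Longitudinal CSF biomarkers in a 10-year population cohort ...",
    ]
    labels = [1, 0, 1, 0]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    X = model.encode(abstracts)                  # dense vector per abstract

    probe = LogisticRegression().fit(X, labels)  # "fraud-likeness" probe

    def reweighted_citations(abstract, citations):
        # Discount raw citation counts by predicted fraud probability.
        p_fraud = probe.predict_proba(model.encode([abstract]))[0, 1]
        return citations * (1.0 - p_fraud)

The citation-graph refinement would sit on top of this, propagating the discount to papers that lean heavily on the flagged ones.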
Highlighting research that's useful is probably too difficult, but highlighting research that definitely isn't should be well within the bounds of the best existing reasoning LLMs.
There are a lot of common patterns in papers that you learn to look for after a while, and it's now absolutely within reach of automation. For example, papers whose conclusions section doesn't match the conclusions in the abstract are a common problem, likewise papers that contain fake citations (the cited document doesn't support the claim, or has nothing to do with the claim, or sometimes is even retracted).
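For example, the abstract-vs-conclusions check is only a few lines with an LLM as the judge (model name and prompt are illustrative, and a real screener would first need to extract the sections from the PDF):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def conclusions_match(abstract: str, conclusions: str) -> bool:
        # Ask the model whether the two sections make consistent claims.
        prompt = (
            "Do the following abstract and conclusions section of a paper "
            "make consistent claims? Answer only YES or NO.\n\n"
            f"ABSTRACT:\n{abstract}\n\nCONCLUSIONS:\n{conclusions}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")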
For that matter you can get a long way with pre-neural NLP. This database tracks suspicious papers using hand-crafted heuristics:
What you'll quickly realize though is that none of this matters. Detection happens every day, the problem is that the people who run grant-funded science itself just don't care. At all. Not even a little bit. You can literally send them 100% bulletproof evidence of scientific fraud and they'll just ignore it unless it goes viral and the sort of media they like to read begins asking questions. If it merely goes viral on social media, or if it gets picked up by FOX News or whatever, then they'll not only ignore it but double down and defend or even reward the perpetrators.
Me neither. But it’s very much in keeping with other seriously-intended suggestions I’ve heard. Optimism is fine until it becomes just dreaming and wishing out loud.
The amyloid hypothesis, as described to me by someone working in the field, is not only wrong but harmful to the patients. His research suggests it is more probable that the plaques are actually protective and do not directly cause the memory loss and other disease symptoms. This idea was pushed aside and ridiculed for years, all because of the greed and lies of people like Eliezer Masliah.
I am going to point out how much the vibes have shifted here. When earthquake scientists gave false reassurances in the run-up to the 2009 earthquake at L'Aquila, likely contributing to the many deaths there, Italian criminal proceedings resulted. At that time, though, the usual suspects decided that this was science under attack, and there was high-level protest and indignation that scientists might face legal consequences for their statements. https://en.wikipedia.org/wiki/2009_L%27Aquila_earthquake#Pro...
First, keep in mind that while those scientists were originally convicted, they were exonerated on appeal. The appellate court's opinion was that the responsibility was on the public official who announced that the area was safe without first consulting the scientists.
Secondly, this case isn't clear cut. Some still fault the scientists for not correctly interpreting the seismology data and not speaking up against the public official who was supposedly to blame. There's a big question around whether these scientists were correct or not in their judgement.
At any rate, this is pretty far from scientific fraud. Seismology (as far as I, a layman, understands it) is not a science where you can make exact predictions. Being wrong about the future isn't fraud, it's just unlucky.
The issue isn't whether someone else had a contrary opinion: the issue is that (just going by the linked reports) Bernardo De Bernardinis came out from the meeting with the scientists and informed the public that there was "no danger". Now, either the scientists felt that this was a reasonable summary of what they had said, or they didn't: either of those is bad, in different ways.
Okay, honest question: in what world is a magnitude 5.9 a "major" earthquake? 5.x earthquakes happen multiple times per day somewhere, and the same area had known much worse earthquakes in the preceding century.
How bad do your buildings have to be to get a 3-digit death toll from something that weak? I'd expect a 3-digit toll for "number of fine-china plates broken".
Christchurch was absolutely fucked by a 6.1 Mw(USGS) magnitude earthquake in 2011 - shallow depth and close to the city so we had severe outcomes. We have reasonably good earthquake building codes in New Zealand.
Poorer countries with more fragile infrastructure could be devastated by a smaller quake - depending on local circumstances.
The only thing that saved Christchurch from far far worse outcomes was that the most dangerous buildings were already empty because of damage from a larger earthquake in 2010.
I admit the log scale means that 6.1 is far bigger than 5.9 (a 0.2 magnitude step is roughly a doubling of released energy). However, you handwaved "5.x", which shows an unfamiliarity with log scales.
We had plenty of aftershocks below 6.1 - they are extremely unpleasant and have awful psychological effects on some people.
And we are hundreds of kilometers away from the major Southern Alps faultline - the equivalent to San Andreas. So we were blessed with some newly discovered minor faultlines.
https://archive.is/URjfk
In academia these cosy cartels form around grants, either directly (are you inside the circle?) or indirectly: the rival idea obviously won't work, since the "true" cause is X.
Not sure you can fully avoid this, but I'm sure there might be ways to improve it around the edges.
How is that correction mechanism supposed to work though? Do you mean the peer review process?
Friends in big labs tell me they often find issues with competitor labs' papers, not necessarily nefarious, but like "ah no, they missed thing x here so their conclusion is incorrect"... but the effect of that is just that they discard the paper in question.
In other words: the labs I’m aware of filter papers themselves on the “inbound” path in journal clubs, creating a vetted stream of papers they trust or find interesting for themselves.. but that doesn’t provide any immediate signal to anyone else about the quality of the papers
Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.
There actually are checklists you have to fill out when publishing a paper. You have to certify that you provided all relevant statistics, have not doctored any of your images, have provided all relevant code and data presented in the paper, etc. For every paper I have ever published, every last item on these checklists was enforced rigorously by the journal. Despite this, I routinely see papers from "high-profile" researchers that obviously violate these checklists (e.g., no data released, and not even a statement explaining why data was withheld), so it seems that they are not universally enforced. (And this includes papers published in the same journals around the same time, so they definitely had to fill out the same checklist as I did.)
Not to mention that scientists spend a crazy amount of time writing grant proposals instead of doing science. Imagine if programmers spent 40% of their time writing documents asking for money to write code. Madness.
Project managers and consultants do actually write those documents/specifications justifying the work before the programmers get to do it.
Indeed. You do need some idea of what you are going to do before being funded.
The tricky bit is that in research, and this is a bit like the act of programming, you often discover important stuff in the process of doing; and the more innovative the area, the more likely this is to happen.
Big labs deal with this by having enough money to self-fund prospective work, or to support things for extra time. The real problem is that new researchers, who often have the new ideas, are the most constrained.
Kinda making my point :P
If your org does this, that's a problem.
If this were applied in science, we'd still be flying blind with regard to stomach ulcers, because a lot of 'researchers' thought bacteria couldn't live in the stomach (an obviously BS reason).
Yes, CRM procedures are very good in some cases, and I would definitely apply them in healthcare for things like procedures, or the issues mentioned, etc.
I work in neurotech and sleep, and our focus on slow-wave enhancement has recently had 3 papers looking at the impact in Alzheimer's.
I'm not a scientist or expert, but we do speak with experts in the field.
What I've gathered from these discussions is that Alzheimer's is likely not a single disease but multiple diseases currently being lumped under the one label.
The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is, which is why we have the Amyloid Hypothesis, the Diabetes Type 3 hypothesis, etc etc.
I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypothesis, but at the same time, take action against those who falsify research.
Here's some of the recent research in sleep and Alzheimer's
1) Feasibility study with a surprisingly positive result - take with a grain of salt - https://pubmed.ncbi.nlm.nih.gov/37593850/
2) Stimulation in older adults (non-AD) shows positive amyloid response with corresponding improvement in memory - https://pmc.ncbi.nlm.nih.gov/articles/PMC10758173/
> The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is,
Correction here: while other tests are sometimes given to rule out additional factors, there is an authoritative, direct test for Alzheimer's: clinically detectable cognitive impairment in combination with amyloid and tau pathology (as seen in cerebrospinal fluid or PET scan). This amyloid-tau copathology is basically definitional to the disease, even if there are other hypotheses as to its cause.
> likely multiple diseases which are currently being lumped into the one label
You've also just described what "cancer" was, what, 30 years ago?
We knew the symptoms, and we knew some rough classification based on where it first appeared. It took readily accessible diagnostic scans and genetic typing to really make progress.
And the brain is a lot harder imaging target.
This - it's more like an end-stage failure mode of a self-regulating, dynamic system which has drifted into dysfunction.
A case of "lots of things have to work perfectly, or near enough, or a brain drifts into this state". This seems to be very much the case with cancers: essentially, unless everything regulates properly, the system will on its own devolve into cancer.
Like a gyroscope which will only spin if balanced, only this gyroscope has 10^LARGE moving parts.
And when you consider the ageing process, you're talking about multiple systems operating at 50%, 75%, 85% effectiveness, all of which interact with one another, so it's inevitable that self-regulating mechanisms start to enter failure cascade.
In terms of interventions, a lot of the time it seems like the best fix is to look at which of the critical systems is most deteriorated, and try to bring that one up. So, for example, diet and exercise can restore a degraded circulatory system by a meaningful amount, but you can be an Ironman triathlete and still develop Alzheimer's in your 60s. If we can find reliable ways to do the same for sleep, that will be worthwhile, and likely there are other systems where we might do the same: immune, liver, kidneys and so on.
It sounds like corrosion in metals. There are many different damage mechanisms and protective effects, but at the end of the day you see weakened, oxidised metal.
Yes, except that metal isn't a dynamic, self-regulating structure. The body is in a constant state of actively fighting against its own decay.
The sleep bit is what we are working on. We increase slow-wave delta power, increasing the effectiveness of the glymphatic system to flush metabolic waste from the brain.
There is more than a decade of research into this process. The studies I pointed at earlier focus on older adults, who see a larger improvement than a younger population, but there are lots of studies in university-aged subjects due to the nature of research.
We have links to more of the research papers on our website https://affectablesleep.com/research
So I absolutely think you're on the right track, but also you've got a product to sell.
Certainly piqued my interest at least, depending on price point, how long it takes to demonstrate effectiveness and so on.
So in the clinic - what we usually do is a detailed neuropsychology battery. Also a patient history, but the neuropsych does provide some quantitative measures.
If there's clear amnestic memory loss, verbal fluency decline, and visuospatial processing decline, it's more probably Alzheimer's; vs. if there are other features in terms of frontal/dysexecutive functioning, behavioral changes, etc., then you think FTD, or possibly LBD if there are reports of early visual hallucinations.
Amyloid PETs are getting a bit better so there's that. Amyloid-negative PETs w/ amnestic memory loss are being lumped under this new LATE (Limbic-Predominant Age-Related TDP-43 Encephalopathy) but that definition always felt a bit...handwavey to me.
11-32 adults is a good pilot paper but you have to raise funding for Phase II and III trials.
Awesome clarification! Thank you.
We're not the researchers, we are developing the technology to support research. They are somewhat hamstrung with the currently available technology. There are other benefits to slow-wave enhancement, non-clinical and beyond dementia. It has been suspected this could play a role in prevention of AD, we were very suspicious (and still are a bit) that we could have a direct impact in treating AD.
Having said that, the paper that looked at people with AD saw improvement in sleep, so even if we can help them subjectively feel less exhausted, that could be a quality of life benefit, even if non-clinical.
I completely agree that proving effectiveness in treatment is a long road, but we're going through a non-clinical use first. If the research works out, we can look into clinical use at a later date.
>I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypotheses, but at the same time, take action against those who falsify research.
The problem is that, as many of these studies build upon each other, many other studies are tainted. A great review would be necessary to sort this mess out, but with US research institutions in complete disarray, we are years away from such progress.
So as a lay person with an active interest in the topic, my reading of 2) is that in the treatment group, some people showed improved sleep physiology AND improved memory, and this was attributed to the treatment, but the group as a whole did not.
If some improve, and the group average score remains unchanged, does that mean some got worse, or is it a case of the group average not being statistically significant?
What this suggests to me is that there is _surely_ a link between sleep quality and memory performance, but that whether or not the proposed treatment makes any difference - that is, whether the treatment caused the sleep improvement - is doubtful. At best it seems to be "it works moderately well for some people, and not at all for others". Am I reading it correctly?
You are reading that correctly, however, it is likely a limitation of the technology they used in the study.
Strangely, they didn't mention how they decided on stimulation volume; however, most studies will either set a fixed volume for the study, or measure the user's hearing while they are awake and then set a fixed volume for that person.
Our technology (and we're not the first to do this) adapts the volume based on brain response during sleep in real-time.
When you don't do this you risk either having the volume so low that it doesn't evoke a response, or so high that you decrease sleep depth, and don't get the correct response.
Therefore, anyone who did not get the appropriate volume would end up as a non-responder.
It is also more challenging for previously used algorithms to detect a slow-wave in older adults because the delta power is lower, so some of these participants may have had limited stimulation opportunities.
We've developed methods which improve on the state of the art, but we have not validated those in a study yet.
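The control loop is conceptually simple; a toy sketch with invented signal names and thresholds (not our actual algorithm):

    def adapt_volume(volume_db, evoked_delta, arousal_detected,
                     step_db=1.0, floor_db=20.0, ceiling_db=55.0,
                     target_delta=1.2):
        # All thresholds here are invented for illustration.
        if arousal_detected:
            volume_db -= 2 * step_db      # too loud: sleep depth dropping
        elif evoked_delta < target_delta:
            volume_db += step_db          # too quiet: no evoked response
        return max(floor_db, min(volume_db, ceiling_db))

Run something of this shape on every detected slow-wave and the volume settles near each sleeper's own response threshold, rather than a fixed level chosen while awake.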
I feel that sleep will ultimately be the answer and the cure. Personal anecdote - one of my uncles is affected by this disease. One day, my aunt could not wake him up in the morning as hard as she tried. He would mumble and try to go back to sleep. When he finally awoke after 10 minutes of my aunt basically yelling at and slapping him, he was, in her words - "back to the man I used to know". Completely lucid and able to keep up with conversation, remembering everything etc. Two days later he was back to his old confused baseline.
Certainly pop-sci but great on this topic:
https://www.waterstones.com/book/why-we-sleep/matthew-walker...
I'm not as optimistic as you that sleep will be a cure, but I'd be very surprised indeed if sleep quality weren't preventive. (Proving this might be more difficult, though - correlation/causation).
It's almost an argument by process of elimination - why else would literally every living thing with a brain need to spend so much of its time asleep? How is it that we still don't fully know what sleep (as distinct from either rest or unconsciousness) is actually for?
Multiple studies show that night shift work is bad for the brain - and for those with a habit of working nights (probably quite a few of us on HN, from time to time), if a recreational drug made your brain feel as bad as an all-nighter can, that would surely be one you'd put in the "treat with great caution" category, no?
No doubt the glymphatic system (a central part of higher animal physiology which was only discovered in the last 25 years) has a role to play. It may be that, as with cancer, once the degenerative process gets beyond a certain point, it's hard to stop - but I'm hopeful that science will unlock a good deal of understanding around prevention over the next decade or so - even if that's not much more than an approach to sleep hygiene analogous to "eat your 5 fruit and veg a day, don't have too much alcohol or HFCS, and make sure to do a couple of sessions of cardio and a few weights every week".
The higher-level problem is that there are tons of scientific papers with falsified data and very few people who care about this. When falsified data is discovered, journals are very reluctant to retract the papers. A small number of poorly-supported people examine papers and have found a shocking number of problems. (For instance, Elisabeth Bik, who you should follow: @elisabethbik.bsky.social) My opinion is that the rate of falsified data is a big deal; there should be an order of magnitude more people checking papers for accuracy and much more action taken. This is kind of like the replication crisis in psychology but with more active fraud.
This is why funding replication studies and letting people publish null results and reproductions of important results is fundamental.
It will introduce a strong incentive to be honest. Liars will get caught rather quickly. Right now, it often takes decades to uncover fraud.
Unfortunately as you spend more time investigating this problem it becomes clear that replication studies aren't the answer. They're a bandage over the bleeding but don't address the root causes, and would have nearly no impact even if funded at a much larger scale. Because this suggestion comes up in every single HN thread about scientific fraud I eventually wrote an essay on why this is the case:
https://blog.plan99.net/replication-studies-cant-fix-science...
(press escape to dismiss the banner). If you're really interested in the topic please read it but here's a brief summary:
• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.
• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.
• The idea implicitly assumes that very few researchers are corrupt thus the replicators are unlikely to also be. This isn't the case because replication failures are often due to field-wide problems, meaning replications will be done by the same insiders who benefit from the status quo and who signed off on the bad papers in the first place. This isn't an issue today because the only people who do replication studies are genuinely interested in whether the claims are true, it's not just a procedural way to get grant monies.
• Many papers aren't worth replicating because they make trivial claims. If you punish non-replication without fixing the other incentive problems, you'll just pour accelerant on the problem of academics making obvious claims (e.g. the average man would like to be more muscular), and that just replaces one trust destroying problem with another.
Replication failure is a symptom not a cause. The cause is systematically bad incentives.
>>>• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.
>>>• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.
Isn't this what peer review is for?
Incentives are themselves dangerous. We should treat incentives like guns. Instead we apply incentives to all manner of problems and are surprised they backfire and destroy everything.
You have to give people fewer incentives and more time to just do their basic job.
I think this hits the nail on the head. Academics have been treated like assembly line workers for decades now. So they’ve collectively learned how to consistently place assembled product on the conveyor belt.
The idea that scientific output is a stack of publications is pretty absurd if you think about it for a few minutes. But try telling that to the MBA types who now run universities.
You do need to incentivize something. If you incentivize nothing that's the same thing as an institution not existing and science being done purely as a hobby. You can get some distance that way - it's how science used to work - but the moment you want the structure and funding an institution can provide you must set incentives. Otherwise people could literally just stop turning up for work and still get paid, which is obviously not going to be acceptable to whoever is funding it.
Einstein, Darwin, Linnaeus: all science as a hobby. I don't think we should discount that people will in fact do it as a hobby if they can, and make huge contributions that way.
Einstein spent almost all of his life in academia living off research grants. His miracle year took place at the end of his PhD and he was recruited as a full time professor just a few years later. Yes, he did science as a hobby until that point, but he very much wanted to do it full time and jumped at the chance when he got it.
Still, if you want scientific research to be done either as a hobby or a corporate job, that's A-OK. The incentives would be much better aligned. There would certainly be much less of it though, as many fields aren't amenable to hobbyist work at all (anything involving far away travel, full time work or that requires expensive equipment).
I think it’s an interesting feature of current culture that we take it as axiomatic that people need to be ‘incentivized’. I’m not sure I agree. To me that axiom seems to be at the root of a lot of the problems we’re talking about in this thread. (Yes, everyone is subject to incentives in some broad sense, but acknowledging that doesn’t mean that we have to engineer specific incentives as a means to desired outcomes.)
I think there is some misunderstanding here. Incentives are not some special thing you can opt to not do.
Who do you hire to do science? When do you give them a raise? Under which circumstances do you fire them? Who gets a nicer office? There are a bunch of scientists, each clamouring for some expensive equipment (not necessarily the same piece); who gets their equipment and who doesn't? A scientist wants to travel to a conference; who can travel, and where? We have a bunch of scientists working together; who can tell the others what to do and what not to do?
Depending on your answers to these questions you set one incentive structure or another. If you hire and promote scientists based on how nicely they do interpretive dance, you will get scientists who dance very well. If you hire and promote scientists based on how articulate they are about their subject matter, you will get very well-spoken scientists. If you don't hire anybody, then you will get approximately nobody doing science (or only the independently wealthy, dabbling here and there out of boredom).
If you pay a lot to the scientists who do computer stuff, but approximately no money to people who do cell stuff, you will get a lot of computer scientists and no cell scientists. Maybe that is what you want, maybe not. These shifts don't happen from one day to another. You are not going to get more "cancer research" tomorrow out of the existing cancer researchers if you hike their salary 100-fold today. But on the order of decades you will definitely see many more (or many fewer) people working on the problem.
> Right now, it often takes decades to uncover fraud.
Where? Many in this thread are talking as if there is monoculture in academia.
> My opinion is that the rate of falsified data is a big deal
Have anything that backs that up? Other than what you shared here?
I would be very interested in the rate on a per author level, if you have some evidence. Fraud "impact" vs "impact" of article would be interesting as well.
See, for example, a paper mill that churned out 400 papers with potentially fabricated images: https://www.science.org/content/article/single-paper-mill-ap...
Influential microbiologist Didier Raoult had 7 papers retracted in 2024 due to faking the ethics approvals on the research. https://www.science.org/content/article/failure-every-level-...
Fazlul Sarkar, a cancer researcher at Wayne State University, had 40 articles retracted after evidence of falsified data and manipulated and duplicated images. https://www.liebertpub.com/doi/10.1089/genbio.2024.29132.ebi
Overall, Elisabeth Bik has found thousands of studies that appear to have doctored images. https://www.newyorker.com/science/elements/how-a-sharp-eyed-...
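Screening for reused images of the kind Bik spots by eye can be partially automated. A rough sketch using perceptual hashes (assumes the third-party Pillow and imagehash packages; real forgeries often need rotation and crop handling too):

    from itertools import combinations
    from PIL import Image
    import imagehash  # pip install imagehash

    def near_duplicate_pairs(paths, max_distance=4):
        # Flag image pairs whose perceptual hashes are within a small
        # Hamming distance: candidates for reused or duplicated figures.
        hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
        return [(a, b) for a, b in combinations(paths, 2)
                if hashes[a] - hashes[b] <= max_distance]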
All of those examples have no relative meaning. If there are millions of papers published per year, then 1000 cases over a decade isn't very prevalent (still bad).
Here's some numbers from insiders with relative meaning.
https://blog.plan99.net/fake-science-part-i-7e9764571422
‘It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgement of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine.’ — Marcia Angell
0.04% of papers are retracted. At least 1.9% of papers have duplicate images “suggestive of deliberate manipulation”. About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud.
“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” — Richard Horton, editor of the Lancet
The statcheck program showed that “half of all published psychology papers…contained at least one p-value that was inconsistent with its test”.
The GRIM program showed that of the papers it could verify, around half contained averages that weren’t possible given the sample sizes, and more than 20% contained multiple such inconsistencies.
The fact that half of all papers had incorrect data in them is concerning, especially because it seems to match Richard Horton’s intuitive guess at how much science is simply untrue. And the GRIM paper revealed a deeper problem: more than half of the scientists refused to provide the raw data for further checking, even though they had agreed to share it as a condition for being published.
After some bloggers exposed an industrial research-faking operation that had generated at least 600 papers about experiments that never happened, a Chinese doctor reached out to beg for mercy: “Hello teacher, yesterday you disclosed that there were some doctors having fraudulent pictures in their papers. This has raised attention. As one of these doctors, I kindly ask you to please leave us alone as soon as possible … Without papers, you don’t get promotion; without a promotion, you can hardly feed your family … You expose us but there are thousands of other people doing the same.“
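(For the curious, both automated checks quoted above are simple at their core. A rough sketch, far less thorough than the real statcheck and GRIM tools:)

    from scipy import stats

    def p_matches_t(reported_p, t, df, decimals=3, tol=0.0005):
        # statcheck-style recomputation: does the reported two-sided
        # p-value agree with the one implied by the t statistic and df?
        recomputed = 2 * stats.t.sf(abs(t), df)
        return abs(round(recomputed, decimals) - reported_p) <= tol

    def grim_consistent(mean, n, decimals=2):
        # GRIM test: the mean of n integer-valued scores must equal k/n
        # for some integer k; check the reported mean is reachable.
        k = round(mean * n)
        return any(c >= 0 and round(c / n, decimals) == round(mean, decimals)
                   for c in (k - 1, k, k + 1))

    # e.g. grim_consistent(5.19, 28) -> False: no sum of 28 integer
    # scores yields a mean that rounds to 5.19.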
The article says:
Yet despite decades of research, no treatment has been created that arrests Alzheimer’s cognitive deterioration, let alone reverses it.
Nowhere in the article does it mention that anti-amyloid therapies such as donanemab and lecanemab have so far successfully slowed decline by about 30%. They may not yet be "arresting" (fully stopping) the disease, but it's pretty misleading for the article to completely omit reference to this huge success.
We are currently in the midst of a misguided popular uprising against the amyloid hypothesis. There were several fraudulent studies on amyloid, and those responsible should be handled severely by the scientific community. But these fraudulent studies do not constitute the foundational evidence for the amyloid hypothesis, which remains very solid.
From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".
If they are so good at cleaning up the amyloid, why don't people have more of an improvement? I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.
> From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".
After many decades of research, we've gone in the last few years from no ability whatsoever to affect the underlying disease, to 30% slowdown. To be clear, that's a 30% slowdown in clinical, cognitive endpoints. Whether you call that "meaningful" is a bit subjective (I think most patients would consider another couple years of coherent thinking to be meaningful), and it has to be weighed against the costs and risks, and there's certainly much work to be done. But it's a huge start.
> If they are so good at cleaning up the amyloid, why don't people have more of an improvement?
No one is expected to improve after neurodegeneration has occurred. The best we hope for is to prevent further damage. Amyloid is an initiating causal agent in the disease process, but the disease process includes other pathologies besides amyloid. So far, the amyloid therapies which very successfully engage their target have not yet been tested in the preclinical phase before the amyloid pathology initiates further, downstream disease processes. This is the most likely reason we've seen only ~30% clinical efficacy so far. I expect much more efficacy in the years to come as amyloid therapies are refined and tested at earlier phases. (I also think other targets are promising therapeutic targets; this isn't an argument against testing them.)
> I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.
To be clear, the evidence for the amyloid hypothesis is causal. The association between amyloid and Alzheimer's has been known since Alois Alzheimer discovered the disease in 1906. The causal evidence came in the 1990's, which is why the scientific community waited so long to adopt that hypothesis.
Would it be fair to say that it's causal in terms of process, but perhaps not in terms of initiation?
That is, there's a feedback loop involved (or, likely, a complex web of feedback processes), and if a drug can effectively suppress one of the steps, it will slow the whole juggernaut down to some extent?
Am reminded a little of the processes that happen during/after TBI - initial injury leads to brain swelling leads to more damage in a vicious cycle. In some patients, suppressing the swelling results in a much better outcome, but in others, the initial injury, visible or not, has done too much damage and initiated a failure cascade in which treating the swelling alone won't make any difference to the end result.
Reading between the lines, if we gave people those drugs before they show any symptoms, we should be able to do even better. Has this been tested? How safe are those drugs? What should the average person be doing to avoid accumulating amyloids in the first place?
> Reading between the lines, if we gave people those drugs before they show any symptoms, we should be able to do even better. Has this been tested?
I do expect early enough anti-amyloid treatment to essentially prevent the disease.
Prevention trials of lecanemab and donanemab (the two antibodies with the clearest proof of efficacy and FDA approval) are ongoing: https://clinicaltrials.gov/study/NCT06384573, https://clinicaltrials.gov/study/NCT04468659, https://clinicaltrials.gov/study/NCT05026866
They have not yet completed.
There were some earlier prevention failures with solanezumab and crenezumab, but these antibodies worked differently and never showed much success at any stage.
> How safe are those drugs?
There are some real safety risks from brain bleeding and swelling, seemingly because the antibodies struggle to cross the blood-brain barrier, accumulating in blood vessels and inducing the immune system to attack amyloid deposits in those locations rather than the more harmful plaques in brain tissue. A new generation of antibodies including trontinemab appears likely to be both more effective and much safer, by crossing the BBB more easily.
> What should the average person be doing to avoid accumulating amyloids in the first place?
There's not much proven here, and it probably depends on your individualized risk factors. There's some evidence that avoiding/properly treating microbial infection (particularly herpes viruses and P. gingivalis) can help, since amyloid beta seems to be an antimicrobial peptide which accumulates in response to infection. There may also be some benefit from managing cholesterol levels, as lipid processing dysfunction may contribute to increased difficulty of amyloid clearance. Getting good sleep, especially slow wave sleep, can also help reduce amyloid buildup.
What about supplementation with curcumin?
>If they are so good at cleaning up the amyloid, why don't people have more of an improvement?
I have zero knowledge in this field, but there's a very plausible explanation that I think is best demonstrated by analogy:
If you shoot a bunch of bullets into a computer, and then remove the bullets, will the computer be good as new?
Have you seen the price of ammunition lately? I think we'll need a huge NIH grant to run that experiment.
Does your computer exhibit any plasticity? After how long are we taking the post-sample?
Those quoting the 30% figure may want to research where that figure comes from and what it actually means:
“Derek Lowe has worked on drug discovery for over three decades, including on candidate treatments for Alzheimer’s. He writes Science’s In The Pipeline blog covering the pharmaceutical industry.
“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'” he told Big Think.
“Although the effect of the drug will be described as being about a third, it consists, on average, of a difference of about 3 points on a 144-point combined scale of thinking and daily activities,” Professor Paresh Malhotra, Head of the Division of Neurology at Imperial College London, said of donanemab.
What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.
“That’s a minimal difference, and people are unlikely to perceive any real alteration in cognitive functioning,” Alberto Espay, a professor of neurology at the University of Cincinnati College of Medicine, told KFF Health News.
At the same time, these potentially invisible benefits come with the risk of visible side effects. Both drugs caused users’ brains to shrink slightly. Moreover, as many as a quarter of participants suffered inflammation and brain bleeds, some severe. Three people in the donanemab trial actually died due to treatment-related side effects.”
https://bigthink.com/health/alzheimers-treatments-lecanemab-...
And here’s a Lowe follow-up on hard data released later:
https://www.science.org/content/blog-post/lilly-s-alzheimer-...
“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'”
It's not quite that simple, and the amyloid hypothesis doesn't claim it to be. It does, however, claim that amyloid is the upstream cause of the disease, and that if you stop it early enough, you stop the disease. But once you're already experiencing symptoms, there are other problems which clearing out the amyloid alone won't stop.
> What's more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients' abilities to think, remember, and perform daily tasks.
As I point out in another comment, the decline (from a baseline about 3 points worse than a perfect score) during those 18 months was only 1.66 points in the placebo group. It's therefore very misleading to say "this is an 18-point scale, so a 0.45-point benefit isn't clinically meaningful": a miracle drug with 100% efficacy would only have achieved a 1.66-point slowdown.
“But once you're already experiencing symptoms, there are other problem which clearing out the amyloid alone won't stop.”
Ok, maybe we're just arguing different points here. I'll grant that amyloids have something to do with all of this. I'm having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer's patient at a point where they can no longer help.
Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?
> I'm having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer's patient at a point where they can no longer help.
My central claim is that the drugs help quite a lot, by slowing down the disease progression by 30%, and that it's highly misleading to say "only 0.45 points benefit on an 18 point scale", since literally 100% halting of the disease could only have achieved 1.66 points of efficacy in the 18-month clinical trial.
This is like having a 100-point measure of cardiovascular health, where patients start at 90 points and are expected to worsen by 10 points per year, eventually dying after 9 years. If patients given some treatment only worsen by 7 points per year instead of 10, would you say "only 3 points benefit on a 100 point scale"?
> Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?
I do believe that they will be more (close to 100%) efficacious when used in this way, yes.
Downvoters, are you sure you have a rational basis for downvoting this informative post? Do us HNers really know enough to discredit the amyloid hypothesis when 99.9% of us know nothing other than it's gotten some bad press in recent years?
I googled lecanemab and it does have the clinical support claimed. I don't see anyone questioning the data. I'm as surprised as anyone else, even a little suspicious, but I have to accept this as true, at least provisionally.
For anyone who wants to start grappling with the true complexity of this issue, I found a scholarly review [1] from October 2024.
[1] The controversy around anti-amyloid antibodies for treating Alzheimer’s disease. https://pmc.ncbi.nlm.nih.gov/articles/PMC11624191
https://www.reddit.com/r/medicine/comments/1057sjo/fda_oks_lecanemab_for_alzheimers_disease/
"Lecanemab resulted in infusion-related reactions in 26.4% of the participants and amyloid-related imaging abnormalities *with edema or effusions in 12.6%*."
https://en.wikipedia.org/wiki/Cerebral_edema
"After 18 months of treatment, lecanemab slowed cognitive decline by 27% compared with placebo, as measured by the Clinical Dementia Rating–Sum of Boxes (CDR-SB). This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"
https://www.understandingalzheimersdisease.com/-/media/Files...
Sum of boxes is a 19 point scale. So, for those keeping track at home, this is an incredibly expensive treatment that requires premedication with other drugs to control side effects, as well as continuous MRIs, for an ~2.3% absolute reduction in the progression of dementia symptoms compared to placebo, with a 12% risk of cerebral edema.
Now, I'm no neurologist, but I'd call that pretty uninspiring for an FDA-approved treatment.
"This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"
…
"Sum of boxes is a 19 point scale."
It's an 18 point scale, but more to the point: the decline in the placebo group was only 1.66 points over those 18 months, and the mean score at baseline was just over 3 points. So even 100% efficacy could only possibly have slowed decline by 1.66 out of 18 points (what you would call a 9.2% absolute reduction) in the 18 months of that experiment. And full reversal (probably unattainable) would have only slowed decline by about 3 points.
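To make the arithmetic explicit (numbers from the trial results quoted above):

    placebo_decline = 1.66   # CDR-SB points lost over 18 months
    treated_decline = 1.21
    scale_points = 18.0

    absolute_benefit = placebo_decline - treated_decline   # 0.45 points
    relative_slowing = absolute_benefit / placebo_decline  # ~27%
    naive_fraction = absolute_benefit / scale_points       # ~2.5% (misleading)
    efficacy_ceiling = placebo_decline / scale_points      # ~9.2%: the most
    # even a perfect drug could show on this scale in 18 months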
I agree that the side effects of anti-amyloid therapies are a serious concern. The reasons for this are being understood and corrected in the next generation of such therapies. For example, I expect trontinemab to achieve better efficacy with much greater safety, and there is already preliminary evidence of that. Furthermore, there are improved dosing regimens of donanemab which improve side effects significantly.
Note that my claim is not that the existing drugs are stellar, and certainly not that they're panaceas. Simply that the amyloid hypothesis is true and there has been tremendous progress based on that hypothesis as of late.
To emphasize your point, I don't think anyone will notice if someone's Alzheimer's is 2.3% better.
These rating scales like CDR-SB (invented by drug companies or researchers who are funded by drug companies) are very good at making the tiniest improvement sound significant.
I did not downvote, but OP failed to provide a link to back up his claim, or to make explicit what "slowing decline by about 30%" even means.
In light of the fraudulent and scandalous approval of aducanumab [0] (which also targeted amyloid), such claims must be thoroughly referenced.
[0] https://en.wikipedia.org/wiki/Aducanumab#Efficacy
How do you know what the downvote status is?
And that is the core problem with what happened. There may actually be a grain of truth, but now there is a backlash. I'd argue, though, that the mounds of alternative explanations that weren't followed up on should likely get some priority right now: since we know so little about them, there is a lot to learn and we are likely to have a lot of surprises there.
I see this as the same problem with UCT (Upper Confidence bounds applied to Trees) based algorithms. If you get a few initial random rollouts that look positive, you end up dumping a lot of wasted resources into that path, because the act of looking optimizes the tree of possibilities you are exploring (it was definitely easier to study amyloid lines of research than other ideas because of the efforts already put into them). Meanwhile the other possibilities you have barely been exploring slowly become more interesting as you add a few resources to them. Eventually you realize that one of them is actually a lot more promising and ditch the bad rut you were stuck on, but only after a lot of wasted resources. To switch fields, I think something similar happened to AlphaGo when it had a game that ended in a draw because it was very confident in a bad move.
Basically, UCT-type algorithms prioritize the idea that every roll should optimize the infinite-horizon return, so they only balance exploration with exploitation. When it comes to research, though, the value signal is wrong: you need to search the solution space, because your goal is not to make every trial find the most effective treatment, it is to eventually find the actual answer and then use that going forward. The individual trial values do not matter. This means you should balance exploration, exploitation AND surprise. If a trial gives you very different results than you expected, then you have shown that you don't know much there, and maybe it is worth digging in: even if it returned less value than some other path, its potential value could be much higher. (Yes, I did build this algorithm. Yes, it does crush UCT-based algorithms. Just use variance as your surprise metric, then beat AlphaGo.)
People intrinsically understand these two modes. In our day-to-day lives we pretty exclusively balance exploration and exploitation, because we have to put food on the table while still improving; but when we get to school we often take classes that 'surprise' us, because we know that the goal at the end is to have gained -some- skill that will help us. Research priorities need to take surprise into account to avoid the UCT-rut pitfalls. If they had for the amyloid hypothesis, maybe we would have hopped over to other avenues of research faster: 'The last 8 studies showed roughly the same effect, but this other path has varied wildly. Let's look over there a bit more.'
Yeeeess... but when you look at the slope of the decline in the NEJM papers describing the clinical trials of lecanemab and donanemab... are you really slowing the decline?
To be clear, I think you're asking whether maybe the drugs just provide a temporary "lift" but then the disease continues on the same basic trajectory, just offset a bit?
The studies aren't statistically powered to know for sure, but in lecanemab's figure 2, the between-group difference on CDR-SB, ADAS-Cog14, ADCOMS, and ADCS-MCI-ADL (the four cognitive endpoints) widens at each successive visit. Furthermore, while not a true RCT, the lecanemab-control gap also widens out to 3 years in an observational study: https://www.alzforum.org/news/conference-coverage/leqembi-ca...
In donanemab's figure 2, there is generally the same pattern, although also some tightening towards the end on some endpoints. This could be due to the development of anti-drug antibodies, which develop in 90% of those treated with donanemab; or it could be statistical noise; or it could be due to your hypothesis.
There is anecdotal evidence and perhaps even some small studies showing that a keto diet can halt and even reverse Alzheimer's symptoms.
Compared to that, reducing the speed of decline isn't terribly impressive. It's better than nothing to be sure! But what people want is BIG progress, and understandably so. Billions have been spent.
Billions have been spent because it's a challenging disease to understand and treat. I want big progress too. But we shouldn't let our desire for big progress cause us to lose our ability to objectively evaluate evidence.
I have no opposition to a properly conducted randomized controlled trial of the keto diet, or of other proposed therapies (many of which have been conducted, and many of which are for targets other than amyloid that are completely compatible with the amyloid hypothesis). Until a proper RCT of keto is conducted, anecdotal claims are worth very little compared to the evidence I referred to.
I'm far, far more interested in anecdotes about completely halting or reversing decline than I am in rock solid data about a 30% reduction in decline speed.
Antibiotics started out as an anecdote about something whose effect was so stark it couldn't be missed. Chasing promising anecdotes is far more valuable (in my opinion) than attempting to take a 30% effect to a 100% effect.
Others are free to feel differently, of course. I'm open to hearing about 100 different times that a tiny effect was found and then grown and magnified into a huge effect that totally changed medicine. I'm just not aware of many at this point.
You can be interested in what you want. But the interest in anti-amyloid therapy came from the basic science indicating amyloid pathology as the critical but-for cause of the disease. It wasn't just a blind shot in the dark.
To my knowledge, there's no such basic science behind a keto diet for Alzheimer's.
Turns out there are enough studies for a meta-analysis. Is that basic science? I'm not sure what counts.
https://www.sciencedirect.com/science/article/pii/S127977072...
If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?
https://scopeblog.stanford.edu/2024/03/13/why-alzheimers-pla...
Turns out there are enough studies for a meta-analysis. Is that basic science?
Basic science in this context means research investigating the underlying disease process to develop knowledge of how it works mechanistically, as distinguished from (and as a precursor to) developing or testing treatments for the disease. This helps us direct resources in plausibly useful directions rather than merely taking shots in the dark, and it also helps us to interpret later clinical findings: e.g. if we see some cognitive benefit in a three-month trial, is that because the underlying disease process was affected (and hence the benefit might persist or even increase over time), or might it be because there was some symptomatic benefit via a completely separate mechanism but no expectation of a change in trajectory? For example, cholinergic drugs are known to provide symptomatic benefit in Alzheimer disease but not slow the underlying biological processes, so that worsening still continues at the same pace. Or if we see results that are statistically borderline, is it still worth pursuing or was the very slight benefit likely a fluke?
So a meta-analysis of ketogenic diets in Alzheimer disease is not basic science, though that doesn't mean it's useless. But what I'm saying is it's really helpful to have a prior that the treatment you're developing is actually targeting a plausible disease pathway, and the amyloid hypothesis gives us that prior for amyloid antibodies in a way that, to my knowledge, we don't have for ketogenic diets.
https://www.sciencedirect.com/science/article/pii/S127977072...
Thanks, I just took a look at this meta-analysis. The studies with the strongest benefits on the standard cognitive endpoints of MMSE and ADAS-Cog — Taylor 2018, Qing 2019, and Sakiko 2020 — all lasted only three months, which makes me suspect (especially given the context of no theoretical reason to expect this to work that I'm aware of) this is some temporary symptomatic benefit as with the cholinergic drugs I mentioned above.
But it's enough of a hint that I'd support funding a long-term trial just to see what happens.
If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?
I've argued elsewhere in the thread that it does make quite a difference, but there's still a lot of work to do, and I've said what I think that work is (mainly: improving BBB crossing and administering the drugs earlier).
There was absolutely no theoretical reason that some stray mold would kill bacteria, but thankfully Fleming noticed what happened on his culture plates and we got antibiotics.
There was no theoretical reason that washing your hands would do anything to combat the spread of disease, and all the smart doctors "knew" otherwise. Some kooky doctor named Semmelweis proposed in 1847 that doctors should wash their hands between deliveries, 14 years before Pasteur published his findings on germ theory in 1861. Where doctors listened to him, maternal mortality dropped from 18% to 2%.
I'm all for basic science once the statistical significance becomes so great that it really starts to look like causality, and then you start figuring stuff out.
It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined. That's too bad.
The effect of keto diet interventions in Alzheimer's is pretty strong, to my understanding. That should be aggressively hinting that there's likely some as-yet unknown causality worth investigating. We don't have to spend billions to do that. But we do need more funding for it, which is hard to get while all the amyloid-hypothesis folks are so invested and clamoring.
There was absolutely no theoretical reason that some stray mold would kill bacteria, but thankfully Fleming noticed what happened on his culture plates and we got antibiotics.
Again, I'm in favor of people investigating all sorts of random shit.
I agree that sometimes unexpected things pan out. If you want to run a carefully conducted, large long-term trial on ketogenic diets in Alzheimer's, I support you. I'm just skeptical it'll pan out, and on priors I'll put greater expectation on the approach with a scientifically demonstrated mechanistic theory behind it.
I'm all for basic science once the statistical significance becomes so great that it really starts to look like causality, and then you start figuring stuff out.
It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined.
What are you basing this on? The p-value on lecanemab's single phase 3 trial was below 0.0001. And the causal role (not mere association) of amyloid in the disease had been demonstrated for years before significant effort was invested in developing therapies to target amyloid in the first place; most convincingly in the genetic mutations in APP, PSEN1, and PSEN2.
I agree more science is certainly better than less. But patentable therapies will always get a disproportionate amount of funding for big science (versus a basically-free dietary change).
For ketones and cognition, also look for studies on MCT oil. Such as https://pmc.ncbi.nlm.nih.gov/articles/PMC10357178/
It's possible you might adopt a different attitude if one day you're diagnosed with rapid-onset Alzheimer's. At that stage you'd be forgiven for muttering 'basic science be blowed'. Keto (or whatever) offered some relief for my friend Bill, so I'll give it a try, given it's my survival at stake.
Continental drift was suggested in 1912 and not supported (to put it politely) at that point by 'basic science'. It took until the 1960s to be accepted. A paradigm shift was needed, as Kuhn explained.
Meanwhile, this paper (2024) https://www.sciencedirect.com/science/article/pii/S127977072... 'Effects of ketogenic diet on cognitive function of patients with Alzheimer's disease: a systematic review and meta-analysis'
concludes "Research conducted has indicated that the KD can enhance the mental state and cognitive function of those with AD, albeit potentially leading to an elevation in blood lipid levels. In summary, the good intervention effect and safety of KD are worthy of promotion and application in clinical treatment of AD."
There are also studies showing a plant-based diet can reverse Alzheimer's symptoms as well. It has to do with atherosclerosis.
Can you provide a source for this?
I'm not aware of any RCT showing long-term improvement of Alzheimer's symptoms from any treatment. I am aware of 1) long-term slowing of worsening (not improvement) from anti-amyloid therapy, 2) short-term benefits but no change in long-term trajectory from other therapies, and 3) sensational claims without an RCT behind them.
Sadly this stems from a structural problem in biology and medicine, and is far from exclusive to the field of Alzheimer's. Some reforms are urgent, otherwise progress is going to be incredibly slow. The same pattern is repeated again and again. Someone publishes something that looks novel, but is either an exaggeration or a downright lie.
This person begins to attract funding, grant reviews and article reviews. Funding is used to expand beyond reasonable size and co-author as many articles as possible. Reviews mean this person now has the power to block funding and/or publication of competing hypotheses. The wheel spins faster and faster. And then, we know what the outcome is.
The solution is to make sure reviews are truly independent, plus limits on funding, group size, and competing interests. I think that tenured researchers who receive public funding should have no stock options and should not receive "consulting" fees from private companies. If they want to collaborate with industry, that's absolutely fine, but it should be done pro bono.
Furthermore, if you are a professor who publishes 2 articles per week while simultaneously "supervising" 15 postdocs and 20 PhD students at 2 different institutions then, except in very few cases, you are no longer a professor but a rent seeker who has no clue what is going on. You just have a very well-oiled machine to stamp your name onto as many articles as possible.
The NIH already has total funding limits for grant eligibility, and the issue of competitors blocking your publications is pretty much eliminated by asking them to be excluded as reviewers, because we almost always already know who is going to do that. A competent editor will also see right through that.
I did my postdoc in a very well-funded lab that was larger than even your examples, and it could legitimately do big projects nobody else could do; plus, postdocs and grad students had a lot more autonomy, which helped them become better scientists. The PI worked at a distant/high level, but he was a good scientist and a skilled leader doing honest and valuable research, and he had economies of scale that let him do more with the same research dollars. It was the least toxic and most creative and productive lab I've ever seen. Banning that type of lab would be to the massive detriment of scientific progress, in my opinion.
I also disagree about banning consulting and startups for PIs: that is arguably where research ends up having the highest impact, because it gets translated to real-world use. It also allows scientists to survive in HCOL areas with much less government funding. Frankly, I could make 4x the salary in industry, and if I were banned from even consulting on the side it would be much harder to justify staying an academic while raising a family in a HCOL area.
I am also very upset about academic fraud and have seen it first hand, but I think your proposed solutions would be harmful and ineffective. I'm not sure what the solution is, but usually students, postdocs, and technicians know if their PI is a fraud, and they would report it if it were safe for them to do so; they just don't have enough power. Fixing that would likely solve this. Even for a junior PI, reporting on a more senior colleague would usually be career-ending for them, but not for the person they are reporting on.
at this moment, what NIH? No study sections for grants since the inauguration...
Instead of amyloid and tau, we now have a bunch of promising new leads:
- insulin and liver dysregulation impacting the brain downstream via metabolic dysfunction
- herpesviruses crossing the blood-brain barrier, e.g. after a light head injury, or traveling along the nervous system
- gut microbiota imbalance causing immune, metabolic, or other dysregulation
- etc.
These same ideas are also plausible for MS, ADHD, etc.
curious if you could link to relevant papers? Thanks!
There’s a good discussion in the previous article discussed on HN, including links to various papers.
1. https://news.ycombinator.com/item?id=42893627
2. https://pmc.ncbi.nlm.nih.gov/articles/PMC8234998/
I'm not a fan of many of the practices you complain about here, but I will say this: we get paid too little for what we do, for way too long. 6 years of grad school ($24k/yr) and a 6-year postdoc ($42k/yr) in California, when I was in those positions anyway. Today, at UC Davis, assistant professors in the UC system start at $90,700 [1, for salary scale], which is often around 12 years after their undergraduate degree. That's in California, where a mortgage costs you $3,000 a month, minimum.
[1] https://aadocs.ucdavis.edu/policies/step-plus/salary-scales/...
Why do employees keep voluntarily accepting that type of abuse? Low wages aren't a secret and the employees doing that work aren't idiots so they must know what they're getting into. Are they doing it out of some sort of moral duty, or as immigrants seeking permanent resident status, or is there some other reason? Presumably if people stopped accepting those wages then the wages would have to rise.
Those numbers aren't accurate anymore; they're out of date, and pay is now much higher. Also, I voluntarily pay my students and postdocs 2-3x those numbers currently.
But ultimately (1) those are seen as training positions that lead to a tenured faculty position, which pays fairly well, and has a lot of job security and freedom; (2) certain granting agencies limit what you can spend on students and postdocs, to levels that are too low for HCOL areas.
A lot of this is defined by the NIH: K23/R01 grant amounts specify what kind of salary they support, and are largely set by the powers that be...
The UC system definitely has more overhead than usual, and there may be some cost-of-living adjustments, but...
Hence why a lot of us went into private practice...
I'll add that it's a buyer's market. There are plenty of postdocs with no work (who want to work in the academic space), so if you don't want the post there are plenty in line who do.
There's no post-doc-research union to set and enforce reasonable pay scales, but equally a union would have difficulty adjusting rates to local cost of living.
Put another way - supply and demand baby, supply and demand.
UC postdocs are unionized… but it's not a great union. The higher-paid postdocs saw their pay and benefits go down under the union.
You keep bringing up one state in the union as if the whole system is flawed because of this one state.
Picking a state that's not California nor New York, Massachusetts, etc.: https://www.indeed.com/career/assistant-professor/salaries/A...
Says $60,000 for the University of Arkansas, or about two-thirds of what was listed for California.
I'm not an academic, nor do I live in California (or Arkansas), but $90,000/year after 6 years working below minimum wage and 6 more barely above it doesn't sound that great in economic terms. Hopefully people are getting other benefits from teaching or research.
$60K is very hard to believe. I wouldn't trust indeed.com for faculty salaries.
For one thing, the variance is high between departments. Engineering gets paid a lot more than history, for example.
According to https://www.univstats.com/salary/university-of-arkansas/facu..., assistant professor is $91K. The caveats still apply - some likely get a lot more, and some a lot less.
Fortunately, it's a public university, so we can see actual salaries:
https://app.powerbi.com/view?r=eyJrIjoiZGM3Yzg2YzMtNDY3YS00N...
Looking at some random assistant professors in relevant departments, I see:
One who earns $113K
Another who earns $94K
Another who earns $96K.
These are regular departments (biochemistry, biology, etc.), not medical departments. I'm sure those get paid more (e.g. one I personally knew in Houston got $180K at a public university in Dallas).
So mid-$90Ks would be my guess.
Then note that these are 9-month salaries, and the typical deal is they get up to a third more from grants (their "summer" salary). So total compensation for assistant professors would be about $120K.
Faculty salaries are not that bad, as a rule. What's really bad is the number of years they spend as postdocs trying to get there.
For biomedical, it is what your grant stipulates. I.e., I think way back when, K23s paid $98k for 75% time (or maybe 70%) and your institution agreed to pay the rest. Sometimes they would actually pay you more to try to get close to fair market value, but that is if the department is generous. For famous institutions, like UCLA or Brigham and Women's, the law of supply and demand is not on your side, because if you don't like the low salary, there's a giant line of wannabes waiting to take your spot.
Are you sure that those are 9 month salaries?
Published historical pay will be 12-month salaries. Most non-tenured professors will refuse the optional summer salary and work all summer for free, because they have to pay for it from their own grants: it means hiring one fewer student, and less chance of getting tenure.
Fairly certain. It's also in line with salaries I know from other departments. This is the "fixed" amount of the salary. The grant portion is variable, and also not paid by the state, so it's usually not required by the law to disclose.
You mean hire one or get one?
Except that is where the majority of research is done, where the prestigious schools and students are, and where those people want to live. Plenty of good research gets done in Arkansas, and Texas, and North Carolina, but not in the cheap parts. It happens in the Research Triangle, or Austin, or Fayetteville. It doesn't happen at Ouachita Baptist University in Arkadelphia.
The somewhat good news is that people get into science and medicine because they believe in them, and they're often willing to work for peanuts so that big pharma can take their work and charge Medicare $80k/yr for a new drug that might work.
There are huge problems in academia and its incentive structure, but I don't think they're related to being in urban vs. rural America (they exist just as badly in Europe, China, and India).
Those places you list have gone up painfully in a relative sense (like everywhere lately) but are nothing like the absurdity of California. You don't need two highly-paid professional incomes to afford a house with a long commute. There's also Atlanta, Houston, Dallas, Chicago, central Florida, many others.
it also depends on the lab.
It's a very small world for various reasons and sometimes, there's a good combination between a PI and a hosting institution. Sometimes there's not. If the guy who's doing what you want to do has his lab at UAB, you go to UAB. That being said, once you get your K23 or R01, because of NIH matching funds, you have more of a choice of where you go.
That one state is where the apex of the system is. It's where a lot of, maybe most, research happens, it's where perhaps most tech development happens, it's even where a lot of our popular culture is determined. It's where ~everyone is aiming for, even if only a fraction of them will make it there, so it affects the whole system.
You sincerely believe this?
All that given, most people still live on the East Coast; as in, roughly 80% live east of Nevada. Culture is arguable.
For Americans, there is a clear difference between what behavior might be normal from the Bay Area/Silicon Valley all the way down to LA and what's normal in NY, Boston, Houston, Miami, Detroit, etc.
I'd even assert "most tech development" is just plain wrong. It's certainly where many companies are HQ'd, but those same companies have offices all over the states, and each one offers/specializes in different products.
It also depends on what you mean by "tech development" of course. R&D projects and new developments, maybe there's an edge. I have a much stronger feeling that more research is performed in the Boston -> DC metropolis than the equivalent (as in distance) metropolis spanning from LA -> Silicon Valley.
I was contextualizing my response because cost of living is higher in California, and some of those numbers may seem more reasonable if it were in Arkansas, for example.
I think NIH does a cost of living adjustment.
Devil's advocate: is that not the very definition of scaling?
It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.
I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.
I also can't tell whether Stanford is joking, but the notion that he's a good fit for the job of biology professor is definitely funny!
They are joking.
I'm no expert, but I suspect it is a longer process to remove someone from a tenured professor position than to remove them as president. We don't know that it won't eventually happen.
There are betrayals so severe that a grindingly slow due process is itself an additional betrayal. Not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.
Those who would condemn "science" need to explain why their concept is different from listening to the weather report and doing the exact opposite -- that's not a recipe for success, and "science" has gotten far more right than wrong over the years.
Now journalists dunk on science over publication fraud, claiming it's more widespread than it could possibly be! It will certainly get engagement!
More burn-the-bridges science journalism: "the worst that can happen is somehow the commonplace"; erode trust in institutions some more, and so on and so forth.
We are currently plummeting into nihilism, see you at the bottom, hope the clicks were worth it.
Similar well-founded concerns were raised before:
"How an Alzheimer’s ‘cabal’ thwarted progress toward a cure" (2019) https://news.ycombinator.com/item?id=21911225
The article starts with a non-sequitur -- "we have tackled some disease, so we should be able to tackle alzheimer's, but we haven't"
It would be great if opinion articles never made it to the front page, on any media aggregator, anywhere.
They don't make it to the front page of a newspaper.
HN is not a newspaper. Opinion pieces fit perfectly well within the guidelines as long as they are factual and interesting.
... sure?
There is a trend of flagging many other categories of media on this site, even if they're "factual and interesting". I was making a plea to the void of this community to stop eating this category of bullshit.
The big problem is not science per se but capitalist pharma. Science eventually self-corrects, but capitalism creates a huge inertia, driven by the fact that people do not want to lose money and will do everything possible to push things as far as they can. So much investment went into the amyloid hypothesis.
The story of $SAVA is paradigmatic. Every neuroscientist knew that stuff was based 100% on fraudulent results, but it nevertheless reached a market capitalization in the billions.
Cassava Sciences is mentioned in the article, and their problem is that the drug they backed was based on fraudulent academic research. The professor has been indicted by the DOJ for defrauding the NIH of $16M in grants. This isn't an indictment of capitalist pharma, because the original fraud wasn't done in a capitalist system.
related: https://www.statnews.com/2019/06/25/alzheimers-cabal-thwarte... (2019)
Based on the scale and impact of fraudulent results, I wonder if some form of LLM based approach with supervised fine-tuning could help highlight the actually useful research.
Papers are typically weighted by citations, but in the case of fraud, citations can be misleading. Perhaps there's a way to embed all the known Alzheimer's research, then fine-tune the embeddings using negative labels for known fraudulent studies.
The resulting embedding space (depending on how it's constructed; perhaps with a citation graph under the hood?) might be one way to reweight the existing literature.
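One hedged sketch of the citation-graph half of this (illustrative only: the paper IDs, the downweight scaling, and the choice of personalized PageRank are all my assumptions, not anything from the thread):

    import networkx as nx

    G = nx.DiGraph()
    # An edge A -> B means "paper A cites paper B" (toy data).
    G.add_edges_from([("p1", "fraud1"), ("p2", "p1"),
                      ("p3", "p2"), ("p4", "clean1")])

    fraudulent = {"fraud1"}  # known-bad papers = negative labels
    seed = {n: (1.0 if n in fraudulent else 0.0) for n in G}

    # Distrust flows from a fraudulent paper to the papers citing it,
    # i.e. along reversed citation edges; personalized PageRank spreads
    # it through the graph with damping.
    distrust = nx.pagerank(G.reverse(), alpha=0.85, personalization=seed)

    # Downweight citation-derived scores by how much distrust reaches
    # each paper (the 5x scaling is arbitrary, for illustration).
    weights = {n: 1.0 - min(5 * distrust[n], 1.0) for n in G}

The embedding fine-tuning half would then sit on top, e.g. using these weights (or the negative labels directly) in the loss.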
Highlighting research that's useful is probably too difficult, but highlighting research that definitely isn't should be well within the bounds of the best existing reasoning LLMs.
There are a lot of common patterns in papers that you learn to look for after a while, and it's now absolutely within reach of automation. For example, papers whose conclusions section doesn't match the conclusions in the abstract are a common problem, likewise papers that contain fake citations (the cited document doesn't support the claim, or has nothing to do with the claim, or sometimes is even retracted).
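Both of those checks are straightforward to sketch. Here's a rough outline, with the LLM call stubbed out as a hypothetical ask_llm() function (not a real API) and a placeholder retraction set where a real retraction database would go:

    RETRACTED_DOIS = {"10.1234/placeholder.2019.001"}  # placeholder data

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in whatever LLM client you use")

    def conclusions_match(abstract: str, conclusions: str) -> bool:
        """Check whether the conclusions section matches the abstract."""
        answer = ask_llm(
            "Do these two passages draw the same conclusions? Answer YES or NO.\n\n"
            "ABSTRACT:\n" + abstract + "\n\nCONCLUSIONS:\n" + conclusions
        )
        return answer.strip().upper().startswith("YES")

    def citation_flags(claim: str, cited_text: str, cited_doi: str) -> list:
        """Flag citations that are retracted or don't support the claim."""
        flags = []
        if cited_doi in RETRACTED_DOIS:
            flags.append("cites retracted work")
        answer = ask_llm(
            "Does the SOURCE support the CLAIM? Answer YES or NO.\n\n"
            "CLAIM:\n" + claim + "\n\nSOURCE:\n" + cited_text
        )
        if not answer.strip().upper().startswith("YES"):
            flags.append("citation does not support the claim")
        return flags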
For that matter you can get a long way with pre-neural NLP. This database tracks suspicious papers using hand-crafted heuristics:
https://dbrech.irit.fr/pls/apex/f?p=9999:1::::::
What you'll quickly realize though is that none of this matters. Detection happens every day, the problem is that the people who run grant-funded science itself just don't care. At all. Not even a little bit. You can literally send them 100% bulletproof evidence of scientific fraud and they'll just ignore it unless it goes viral and the sort of media they like to read begins asking questions. If it merely goes viral on social media, or if it gets picked up by FOX News or whatever, then they'll not only ignore it but double down and defend or even reward the perpetrators.
I honestly cannot tell if you're being serious or sarcastic.
Me neither. But it’s very much in keeping with other seriously-intended suggestions I’ve heard. Optimism is fine until it becomes just dreaming and wishing out loud.
Can you explain to me what about the idea I presented is flawed or infeasible?
Say it again, and loud: no null publications leads to a reproducibility crisis.
Especially in an academic discipline that is fundamentally bowing to its industry counterparts for scraps.
This coming to you from a field which, in modern times, reinvented the trapezium rule...
The amyloid hypothesis, as described to me by someone working in the field, is not only wrong but harmful to the patients. His research suggests it is more probable that the plaques are actually protective and do not directly cause the memory loss and other disease symptoms. This idea was pushed aside and ridiculed for years, all because of the greed and lies of people like Eliezer Masliah.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2907530/
https://journals.sagepub.com/doi/abs/10.3233/JAD-2009-1151
https://www.utmb.edu/mdnews/podcast/episode/alzheimer's---co...
Yes! This plays into my favorite pet theory: that herpesviruses (and/or other microbes) are the cause of Alzheimer's:
https://pmc.ncbi.nlm.nih.gov/articles/PMC5457904/pdf/nihms85...
I am going to point out how much the vibes have shifted here. When earthquake scientists gave false reassurances in the run-up to the 2009 earthquake at L'Aquila, likely contributing to the many deaths there, Italian criminal proceedings resulted. At that time, though, the usual suspects decided that this was science under attack, and there was high-level protest and indignation that scientists might face legal consequences for their statements. https://en.wikipedia.org/wiki/2009_L%27Aquila_earthquake#Pro...
I think that's an apples-to-oranges comparison.
First, keep in mind that while those scientists were originally convicted, they were exonerated on appeal. The appellate court's opinion was that the responsibility was on the public official who announced that the area was safe without first consulting the scientists.
Secondly, this case isn't clear cut. Some still fault the scientists for not correctly interpreting the seismology data and not speaking up against the public official who was supposedly to blame. There's a big question around whether these scientists were correct or not in their judgement.
At any rate, this is pretty far from scientific fraud. Seismology (as far as I, a layman, understand it) is not a science where you can make exact predictions. Being wrong about the future isn't fraud; it's just unlucky.
You're conflating outright fraud with a difference of scientific opinion?
To be fair, it looks like there was fraud in the Italy case, just by the builders, not the scientists.
The issue isn't whether someone else had a contrary opinion: the issue is that (just going by the linked reports) Bernardo De Bernardinis came out from the meeting with the scientists and informed the public that there was "no danger". Now, either the scientists felt that this was a reasonable summary of what they had said, or they didn't: either of those is bad, in different ways.
Okay, honest question: in what world is a magnitude 5.9 a "major" earthquake? 5.x earthquakes happen multiple times per day somewhere, and the same area had known much worse earthquakes in the preceding century.
How bad do your buildings have to be to get a 3-digit death toll from something that weak? I'd expect a 3-digit toll for "number of fine-china plates broken".
Don't be a dick.
Christchurch was absolutely fucked by a 6.1 Mw (USGS) earthquake in 2011: shallow depth and close to the city, so we had severe outcomes. We have reasonably good earthquake building codes in New Zealand.
Poorer countries with more fragile infrastructure could be devastated by a smaller quake - depending on local circumstances.
The only thing that saved Christchurch from far far worse outcomes was that the most dangerous buildings were already empty because of damage from a larger earthquake in 2010.
I admit the log scale means that 6.1 is far bigger than 5.9. However, you handwaved about "5.x" quakes, which shows your ignorance of log scales.
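For anyone who wants the numbers: radiated seismic energy scales roughly as 10^(1.5*M), so (a quick sketch, nothing specific to these particular quakes):

    # Energy ratio between two moment magnitudes: E1/E2 = 10^(1.5*(m1-m2))
    def energy_ratio(m1, m2):
        return 10 ** (1.5 * (m1 - m2))

    print(energy_ratio(6.1, 5.9))  # ~2.0x -- even 0.2 magnitude is double the energy
    print(energy_ratio(5.9, 5.0))  # ~22x  -- the "5.x" bucket spans a huge range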
We had plenty of aftershocks below 6.1 - they are extremely unpleasant and have awful psychological effects on some people.
And we are hundreds of kilometers away from the major Southern Alps faultline - the equivalent to San Andreas. So we were blessed with some newly discovered minor faultlines.
https://en.m.wikipedia.org/wiki/2011_Christchurch_earthquake
https://en.m.wikipedia.org/wiki/2010_Canterbury_earthquake
Why should scientists be held to higher account than, say, politicians?