I don’t even trust AI scribes at my doctors office to transcribe my appointment due to errors. There is no way in hell I would ever use something like this that could just straight up lie about something that kills me if I get it wrong.
I've done this with the Libre 2 sensor. I added Gemini to it. It gets about two weeks of readings at once, and the user can "chat with their data". I added a meals tool as well, where the user can photograph their meal and the AI estimates the impact on the readings.
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, every day. A person doesn't have their own blood glucose data in their head.
So, I'm in the medical field building an EMR, and LLMs have obviously been a really important topic in the industry for the last few years. We're still not even sure that giving LLM-assisted suggestions TO ACTUAL DOCTORS AND CLINICIANS will be helpful, let alone to the patients themselves.
It's breaking the golden rule of these tools, which is to have someone with enough knowledge to verify the accuracy of what they spit out. Patients famously don't. Hell, even the actual staff don't really understand how these tools work (or the ways in which you can/can't trust them).
The risk to benefits ratio of introducing a language model to interpret so clear signals is nowhere near justified.
Monitoring and analytics is important, but it is a solved problem. A language model will only be able to hallucinate about the relationship between meals and glycemic response. At best it does no harm, at worst it can directly misinform.
Yep. The oref1 algorithm is amazing and proven to make diabetics' quality of life better, AND it's SAFE. I don't understand why you would need to add AI to that mix.
But I will check this algo out. Maybe it has some interesting bits.
Thanks for calling this out!
We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being.
That's just risk/benefit to the user. As the developer, I'd be concerned that publicly distributing and marketing this, even with a GPL "no warranty" license and even free to the user, is illegal.
You can distribute the source code for research purposes legally at least here in Germany. That's how AndroidAPS is even possible.
My experience is completely the opposite, of using LLMs to pattern match and cast diagnostic nets.
Is your perspective based on, say, opinionated principle?, or experience?
The benefits are enormous.
The risks; What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
I think you're being too optimistic about your fellow humans' judgement. "Death by GPS" is a quite common occurrence: https://www.sciencedirect.com/science/article/abs/pii/S13550...
Type 1 diabetes, with the sensor and pump technology that the software presented here plugs into, is not Everyman Joe stuff. Someone who can set this up and get it going is already burdened with the kind of analysis that the app can assist with.
> No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
If you can't trust this thing, then what is it doing? The implication that people who trust this software lack adult competency is also confusing.
> Is your perspective based on, say, opinionated principle?, or experience?
your perspective is solely based on recent trauma so I don't know if it is more reliable in any capacity
Don't trust the thing. That's not what it's for.
Don't do as I say. I'm just a rando from the Internet.
Don't do as the author of the posted software does. Don't do what the software tells you either. But the software can certainly build an informative perspective and suggest patterns and movements in an exquisitely complex disease. Managing T1D with a pump is exhausting.
Second, re. "your perspective is solely based on recent trauma so I don't know if it is more reliable in any capacity"
This kind of statement is far beyond anything bounded by the self-respect of a balanced adult. What the fuck, and who are you?
My ex-fiancée almost died in 2020. We lost an unborn child in IVF due to grave neglect on the part of healthcare providers who missed the glaringly obvious Type 1 diabetes she had; they never once checked her blood sugar. You know what I did? I read the literature. I read medicine, I read molecular biology, I read neuroimmunobiology, I read about the placenta and fetal development.
I stood by my fiancée and carried her by hand back to health. She recovered faster than the endocrinologists expected. Her pregnancy was exemplary, with fully intact placental vitals out to 38.5 weeks. Healthcare is in such a bad state that I was forced to interject and argue coolly and adamantly with doctors on several occasions about potentially severe mistakes they were about to make. EVERY SINGLE TIME when I interceded, it was confirmed correct by a second opinion from a senior doctor.
I don't come here speaking from trauma. I come here speaking from grim and serious and confirmed lived experience of stepping in and caring, without any margin for error. Know how you do that? With extreme humility and the utmost care.
Who are you to speak to me like that; I can tell that you know not at all who I am or what I have been tasked with in this life, because then you would not. talk. to me. this way. Okay?
> The risks; What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
My local physician says otherwise, with respect to Facebook posts about dosages. I'm convinced the same applies to LLM-generated content with respect to people blindly following the computer.
I ask for your understanding: I chose to make "baseline adult competence" highly load-bearing in my comment. It does not include people whose judgement is so poor that they use Facebook posts as input to managing their T1D pump/sensor-based management.
It is entirely possible to beneficially and safely use software like that which is the topic of the post.
Risks:
- Changing parameters on the insulin pump because the LLM said so
- Neglecting to seek actual medical advice, believing an LLM replaces it
- Misunderstanding medical complexity (e.g. a prescription due to medical history not available to the LLM)
> No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
You 1000% don't work with the general public in a tech way.
"Baseline adult competence" was load-bearing there.
This is not an app for the general public.
> This is not an app for the general public.
Can you point me to where it explicitly says that this app is not to be consumed by the general public? Or explain how that could even be enforced?
And my entire point was that "baseline adult competence" means very little. Competent in what? Technology? Insulin administration? Both? If they're competent in technology but not insulin administration, then this is obviously a bad idea. If they're competent in insulin administration but not technology, then why would they use it?
We're not even at the point where we can definitively say it's a good idea to surface this information to actual professionals, let alone someone with no clinical experience.
It's a bad idea, period. I work with both clinicians and the general public, and the idea that this can be responsibly used by either is a pipe dream that only people who work with neither can believe in.
I truly appreciate your work (and I’ll absolutely take a look at it).
But let me say one thing: I’ve been diabetic for more than 20 years. Ten years of management with finger pricks, three measurements a day, and insulin pens (thinking about it now, it feels completely insane that anyone could imagine managing this madness in such a primitive way). Then came years of CGM systems (I’m on my third one now, with different types of sensors, but that’s not the point).
I tinkered, automated, hacked things. But in the end I came to one conclusion: you need a competent specialist (someone who also understands that we tend to be a bit tech-obsessed) who, besides listening to you, actually imposes a strategy. We are the ones who need to adapt to the mainstream approach so we can speak the same language and have methods that are compatible with “everyone else.” No doctor will ever fully understand your custom system, and meanwhile the key to proper management is not in what you built.
Precise carb counting (without cheating yourself), correct boluses given at the right times, marking exercise, boredom and repetition, and being lucid about the effects of changes (agreed upon beforehand!) to CGM settings — changes that should only be made when you’re certain you’ve been a “good and precise patient.”
I’m saying this from the perspective of my own devastated situation. I now have an HbA1c of 5.8, but only after 20 years of smashing my head against the wall (and suffering incredible damage, many mistakes, and the classic “I’ve figured it all out myself” approach).
Stay strong.
Interesting. I can see the utility if you're going to see a nurse practitioner. But if your physician doesn't pull the actual charts for your device and visually inspect them... try finding someone else.
I'm a T1D who has an insulin pump looping with AndroidAPS and NightScout, what does this give you that Nightscout and Autotune doesn't give you?
And how do you deal with AI hallucinations?
I think the only thing that could be made better is tuning the I:C/ISF/Basal values automatically. And ISF is already handled by DynamicISF, while not perfect it reduces the variables you have to tweak.
Otherwise, when tuned correctly, oref1 et al. provide amazing results and are safe. Hard to understand where I would use LLMs in this.
You sort of have that, though not automatically: you can run autotune against Nightscout and get a report of where things need to be adjusted. I run oref1 with DynamicISF and run autotune every few months to tweak values.
I genuinely don't see where I would use an LLM in this process.
I agree. For those of us who are good with software and all that, AndroidAPS is really good once you learn how to use it.
Not that great for non-technical people. But it saved my life at least.
Edit: thanks for reminding me about Autotune. I used the Azure app ages ago, but it's now integrated into AndroidAPS behind a secret setting.
Does it prompt logging? For example, when I was trying to monitor my BG after diagnosis, I tried to log my meals to correlate later, but 1) I would forget and 2) I wouldn't have the energy to time-align the stats. So a tool that saw changes in BG, shot me a text or message ("did you eat/exercise/do something @ [time]?"), and used the LLM or something else to capture and enrich the metadata would be useful. Paired with boring things like med reminders (I just realized I forgot my metformin while typing this) and an easy visualizer for these meta points, especially if I'm tracking sleep on a device, etc.
As others have said, the analysis might be risky. I don't want to trust interpretation to anyone but myself (I bear my own risk) or my clinician. But just remembering to capture the data and making it easily time-alignable, and possibly augmentable in the future, would be useful.
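The "prompt me to log" idea above could start as something as simple as watching for unexplained rises in consecutive CGM readings. A minimal sketch; the thresholds, function names, and messaging are all hypothetical, not part of the posted app:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds: a rise of >= 30 mg/dL within 30 minutes is a
# decent hint that something (a meal, exercise, a missed dose) happened
# and is worth asking the user about.
RISE_MG_DL = 30
WINDOW = timedelta(minutes=30)

def detect_unlogged_events(readings, logged_times):
    """readings: list of (datetime, mg_dl) sorted by time.
    logged_times: datetimes at which the user already logged something.
    Returns start times worth prompting: "did you eat/exercise at ...?"."""
    prompts = []
    for i, (t0, g0) in enumerate(readings):
        # Avoid spamming: at most one prompt per hour.
        if prompts and t0 - prompts[-1] < timedelta(hours=1):
            continue
        for t1, g1 in readings[i + 1:]:
            if t1 - t0 > WINDOW:
                break
            if g1 - g0 >= RISE_MG_DL:
                # Skip rises the user already annotated within the hour.
                if not any(abs((t0 - lt).total_seconds()) < 3600
                           for lt in logged_times):
                    prompts.append(t0)
                break
    return prompts
```

The same detector could feed a notification channel; the point is only that the trigger is simple time-series arithmetic, with the LLM (if any) used afterwards to enrich whatever the user replies.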
I don't think that LLMs are trustworthy companions in managing a complex metabolic disease like diabetes - especially if you deviate (ever so slightly) from the norm (very lean, very active, strict diet, etc.)!
I'm a T1D myself and like to experiment with ChatGPT (or Opus). My experiences are mixed:
LLMs are overly cautious when it comes to correcting with insulin. They regularly advise against correcting before going to bed, even if this means that my blood glucose remains above 140 mg/dl for the whole night.
I am following a low to medium carb diet (<100g a day). ChatGPT always nudges me to consume more carbohydrates, even though I have a TIR of 90% (70-150 mg/dl). Why would I change my diet if it currently works very well for me? Still, most LLMs seem to favor carbs for some reason.
I am using Fiasp as my fast acting insulin. Typically, I inject around 1 to 4 IUs of Fiasp. Its glucose-lowering effect typically lasts for roughly 2-3 hours. Therefore, I know that it is safe to re-inject after three hours without risking insulin stacking. But ChatGPT regularly advises against that and wants me to wait another 1-2 hours.
I am not against automating diabetes management. In fact, I really appreciate projects that help with that. But I don't consider LLMs to be helpful in this regard. Their combination of training data bias, liability aversion, lack of context, and one-size-fits-all thinking disqualifies them from such tasks.
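The three-hour arithmetic in the Fiasp example can be made concrete with a toy insulin-on-board calculation. This assumes a crude linear decay over the duration of insulin action (DIA); real IOB curves (e.g. oref1's exponential model) are shaped differently, and nothing here is dosing advice:

```python
def insulin_on_board(dose_units, minutes_since_dose, dia_minutes=180):
    """Crude linear-decay insulin-on-board (IOB): remaining activity
    falls off linearly over the duration of insulin action (DIA).
    Illustrative only; real IOB curves are exponential or bilinear."""
    if minutes_since_dose >= dia_minutes:
        return 0.0
    return dose_units * (1 - minutes_since_dose / dia_minutes)

# With a ~3 h DIA, a 4 IU bolus still has 1 IU "on board" at the
# 2 h 15 min mark, and nothing after 3 h -- which is why re-dosing
# before the DIA elapses risks stacking:
print(insulin_on_board(4, 135))  # 1.0
print(insulin_on_board(4, 180))  # 0.0
```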
I understand this instinct, but I can see the appeal of capabilities that are well within the limits of a well-designed agentic system.
Imagine asking such a system, "look at my postprandial response to dosing for the past week and make ratio suggestions for breakfast, lunch, and dinner." This is genuinely helpful, saves time, and is well within the reasoning limits. You could spot-check if you like.
Is it worth setting up such an assistant for the value you'd get out of it? I guess that's on the user and how many similar use cases exist.
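For a sense of what "make ratio suggestions" would actually involve, here is a hedged sketch of the arithmetic such an agent might perform. The target rise, ISF, reference meal size, and function name are all invented for illustration; any real ratio change belongs in a conversation with a clinician:

```python
def suggest_carb_ratio(current_ratio_g_per_u, avg_2h_rise_mg_dl,
                       target_rise_mg_dl=40, isf_mg_dl_per_u=50,
                       meal_carbs_g=60):
    """Toy carb-ratio tuner: if the average 2 h postprandial rise
    overshoots the target, estimate the missing insulin via the ISF
    and strengthen the ratio for a reference-sized meal.
    Illustrative numbers only; not medical advice."""
    excess = avg_2h_rise_mg_dl - target_rise_mg_dl
    if abs(excess) < 15:  # within CGM noise: leave the ratio alone
        return current_ratio_g_per_u
    extra_units = excess / isf_mg_dl_per_u
    units_now = meal_carbs_g / current_ratio_g_per_u
    return round(meal_carbs_g / (units_now + extra_units), 1)

# Breakfast running 90 mg/dL high on average at 1:10 suggests tightening
# toward roughly 1:8.6 under these (made-up) assumptions:
print(suggest_carb_ratio(10, 90))  # 8.6
```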
> look at my postprandial response to dosing for the past week and make ratio suggestions for breakfast, lunch, and dinner
I'm not so sure about that. A patient absolutely must critically evaluate the LLM's suggestions. A naive user risks severe complications. A user with that kind of competence, however, doesn't need an LLM for such trivial adjustments - they're obvious.
I'm a T1D and tbh it's not that hard to manage, I just wouldn't need that. But for kids or the elderly, I see a use case.
The hardest thing to learn was that an unhealthy lifestyle resulted in diabetes that was harder to manage. Too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and significantly improved their HbA1c after accounting for that.
Maybe that information is useful to you.
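The CGM-vs-fingerstick discrepancy described above is usually quantified as MARD (mean absolute relative difference). A minimal sketch over paired readings:

```python
def mard_percent(paired_readings):
    """Mean Absolute Relative Difference between CGM values and paired
    fingerstick references -- the usual headline accuracy metric for
    CGMs (modern sensors typically advertise roughly 8-10%)."""
    rel_errors = [abs(cgm - ref) / ref for cgm, ref in paired_readings]
    return 100 * sum(rel_errors) / len(rel_errors)

pairs = [(110, 100), (95, 100), (160, 150)]  # (CGM, fingerstick), mg/dL
print(round(mard_percent(pairs), 1))  # 7.2
```

Running this over your own paired logs is one way to decide how much to trust a given sensor before acting on its numbers.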
I don't know; the Dexcom G7 is accurate enough. I do random finger tests, and it's about there: a bit off, but close. The previous versions were wildly off if you were having hyperglycemia, and the G7 also doesn't peak as high as a finger measurement. But at that level you already have ketones in your blood and insulin resistance, and a bit of extra insulin from the pump will not drop you into hypo.
> But for kids or the elderly, I see a use case.
These are exactly the kinds of people who should NOT base their diabetes management on the "suggestions" of LLMs! There is the real risk that such users lack the competence or judgement to critically assess the convincing-sounding output of an LLM.
Look into the Eversense 365 (implantable); it has much better accuracy for my wife than the Libre and Dexcom CGMs she has tried.
This is quite possibly a horrible idea. Personal anecdote: ChatGPT once read a blood work report value as 40, when the actual report said 4.
As a urologist who built and runs his own clinic management software, I'd encourage thinking about this question early: what does the system do when the LLM refuses to answer, returns malformed JSON, or hallucinates a glycemic value? In medical contexts, a 'silent failure' (system continues despite bad data) is much worse than a noisy failure (system stops and asks the user). The 'happy path' for an LLM-powered medical tool is usually well-designed. The failure paths are where the project lives or dies. Curious how you handle that.
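One concrete shape for the noisy-failure path in the question above, sketched with a hypothetical schema and plausibility bounds (consumer CGMs themselves only report roughly 40-400 mg/dL): validate structure and physiology, and raise rather than carry bad data forward.

```python
import json

class LLMOutputError(Exception):
    """Raised so the pipeline stops and asks the user (a noisy failure)
    instead of silently continuing with bad data."""

# Hypothetical plausibility bounds for a mg/dL glucose field.
GLUCOSE_MIN_MG_DL, GLUCOSE_MAX_MG_DL = 40, 400

def parse_llm_summary(raw: str) -> dict:
    """Validate a (hypothetical) LLM JSON summary before anything
    downstream is allowed to see it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise LLMOutputError(f"malformed JSON from model: {exc}") from exc
    if "avg_glucose_mg_dl" not in data:
        raise LLMOutputError("model omitted avg_glucose_mg_dl")
    value = data["avg_glucose_mg_dl"]
    if (not isinstance(value, (int, float)) or isinstance(value, bool)
            or not GLUCOSE_MIN_MG_DL <= value <= GLUCOSE_MAX_MG_DL):
        raise LLMOutputError(f"implausible glucose value: {value!r}")
    return data
```

The same pattern extends to refusals and timeouts: every failure mode maps to an exception that surfaces to the user, never to a default value.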
Really nice of you to share this, well done!
About the risks: managing type 1 diabetes is exhausting, and most people will still sanity-check the output alongside the hundreds of treatment decisions they make every day. That doesn't change the fact that tools like this can nudge you to notice and look into patterns or things that need attention.
Tools like this can also display false low glucose figures, leading you to reduce your slow-acting insulin and skip fast-acting insulin. A day later, you start feeling nauseous (ketoacidosis), and you're in danger of death.
I’ll keep using the manual glucose meters (like you advised), and would personally stay the fuck away from any transformer-based LLM to report medical data.
You know that current AI systems are not reliable and produce errors?
How do you protect your life and the life of others using your software against potential lethal errors?
"This will all end in tears, I just know it"
Marvin
The alerts system and sharing with caregivers is a solved problem already (e.g. Dexcom's Follow, Abbot's LibreLinkUp).
Do you find the analytics actually helps? I.e. a lot of this will depend on what you ate and whether or not you logged it?
Looks interesting. As a Whoop user for the last few years, I have seen for myself that their AI Coach / AI-based suggestions miss about 3 times out of 10, so I'm slightly concerned about how accurate this will be. Not a diabetic patient, but I do monitor my levels with a CGM from time to time. Will definitely check it out!
The issue with Whoop’s AI is that there isn’t much data, and the data doesn’t have much prescriptive power, so it can’t really suggest anything useful. Recovery and Strain scores are made up, and even resting heart rate doesn’t tell you anything prescriptive for the day.
The data available to the LLM in OP’s app is the polar opposite. It’s all actionable and real, so I bet it can draw more useful insights than Whoop reminding you that you didn’t exercise all week.
What’s the limit on badges in a README?
Went through pregnancy with the mother having recently-diagnosed T1 diabetes – just barely not killed by grave neglect on the part of the healthcare system, given how badly they missed the diagnosis to begin with.
On your work:
this is legit
it is appreciated
Hats off, I salute this, thank you
I mean this in the nicest way possible.
But if someone dies because this thing hallucinates their reporting - would you feel any sense of culpability?
“GPL says no warranty”
“People need to double check LLM output”
“You’re holding it wrong”
I really don’t know if we, collectively as a civilization, should be willing to accept this kind of hand-waving when it comes to creating things like this. Sure, tools make mistakes, and people misinterpret reports without the help of LLMs, but LLMs are on a whole other level: the mistakes are part of how these things work at a fundamental level.
I don’t even trust the AI scribes at my doctor’s office to transcribe my appointment, due to errors. There is no way in hell I would ever use something like this that could just straight up lie about something that kills me if I get it wrong.
This is THE ONE domain where you would want to use classical machine learning and not unreliable LLMs. Unless you want to kill yourself, that is.
Yes, language has nothing to do with it and is complete overkill.
Probably something like SVM for warnings.
Unless the whole purpose is just daily reports.
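To make the suggestion above concrete, here is a toy sketch of the classical-ML alternative: a tiny linear SVM trained by hinge-loss subgradient descent on two hand-picked features (glucose level and its rate of change), flagging a "warning" state. Every number, feature, and threshold below is invented for illustration; a real system would use a vetted library and clinically validated data.

```python
# Toy linear SVM (hinge loss, subgradient descent) for a "warning" flag.
# Features: glucose in mg/dL and rate of change in mg/dL per minute.
# All training data below is hand-made for illustration only.

def featurize(glucose, rate):
    # Crude scaling so both features sit in a similar numeric range.
    return ((glucose - 100.0) / 50.0, rate)

def train(samples, lr=0.1, lam=0.01, epochs=300):
    """samples: list of ((glucose, rate), label) with label +1 = warning."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (g, r), y in samples:
            x = featurize(g, r)
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1.0:  # inside the margin: hinge-loss subgradient step
                w[0] += lr * (y * x[0] - lam * w[0])
                w[1] += lr * (y * x[1] - lam * w[1])
                b += lr * y
            else:             # correctly classified: only shrink the weights
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, glucose, rate):
    x = featurize(glucose, rate)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Hand-made, linearly separable set: low or fast-dropping glucose => warning.
data = [
    ((55, -2.0), 1), ((65, -1.0), 1), ((80, -3.0), 1), ((60, 0.0), 1),
    ((110, 0.0), -1), ((140, 1.0), -1), ((100, 0.5), -1), ((120, -0.5), -1),
]
```

The point of the sketch is that a model like this is deterministic and auditable: the learned weights are two numbers you can inspect, and the same input always yields the same warning.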
I've done this with the Libre 2 sensor. I added Gemini to it: it gets about two weeks of readings at once, and the user can "chat to their data". I added a meals tool as well, where the user can photograph their meal and the AI estimates its impact on the readings.
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, every day. A person doesn't have their own blood glucose data in their head.
Life imitates comedy...
I'm just happy to see a GPL project.
So, I'm in the medical field building an EMR and LLMs have obviously been a really important topic in the industry the last few years. We're still not even sure that giving LLM-assisted suggestions TO ACTUAL DOCTORS AND CLINICIANS will be helpful let alone to the patient themselves.
It's breaking the golden rule of these tools, which is to have someone with enough knowledge to verify the accuracy of the data they spit out. Patients famously don't. Hell, even the actual staff don't really understand how these tools work (or the ways in which you can and can't trust them).
FDA approved?