10 points | by nadchif 3 days ago
8 comments
What happens after you close the browser? Is the model still stored locally?
The model remains cached in the OPFS (Origin Private File System), so it survives closing the browser.
If you're curious about the caching code, it can be found here: https://github.com/ngxson/wllama/blob/c267097dc79df8d23df8bc...
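For a rough idea of how that kind of caching works, here's a minimal sketch (this is not wllama's actual code; the cache-key scheme is made up for illustration):

    // Fetch a model once, then serve it from OPFS on later visits.
    async function getCachedModel(url: string): Promise<ArrayBuffer> {
      const root = await navigator.storage.getDirectory();
      const name = encodeURIComponent(url); // hypothetical cache key
      try {
        // Already cached: read it straight from OPFS.
        const handle = await root.getFileHandle(name);
        const file = await handle.getFile();
        return file.arrayBuffer();
      } catch {
        // Not cached yet: download, persist to OPFS, then return.
        const buf = await (await fetch(url)).arrayBuffer();
        const handle = await root.getFileHandle(name, { create: true });
        const writable = await handle.createWritable();
        await writable.write(buf);
        await writable.close();
        return buf;
      }
    }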
Source code: https://github.com/nadchif/in-browser-llm-inference
Fantastic tool! Is the limit on the prompt text length model-dependent, or can it be tweaked in the GitHub repo?
It turns out to be a limit in the HTML: you can change the maxlength="512" attribute from the Chrome console to fit your text.
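For example, in the Chrome DevTools console (assuming the prompt box is a textarea; adjust the selector for the actual element):

    // Raise the prompt length cap from 512 to 4096 characters
    document.querySelector('textarea')?.setAttribute('maxlength', '4096');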
Nice tool. I can see this actually being useful, especially at times when sites go down for no reason.
Very nice
Thanks :)