Do you think LLMs in the future will still be on the server? I think as the hardware improves, there will be a time when we can install the LLM on mobile phones or portable devices. This will make it faster and cheaper to maintain since you don't need a server anymore. Or maybe I'm wrong?
That's already the case. I run a quantized 70-billion-parameter Llama 3.1 model on my Framework 13 laptop. It only cost ~$300 to get the 96GB of RAM (which I purchased for unrelated non-AI reasons before the AI boom). It certainly isn't fast, but it is fast enough. I run it via Vulkan compute using llama.cpp, with an AnythingLLM web interface in front of it.
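For anyone curious what a comparable local setup looks like in code, here is a minimal sketch using the llama-cpp-python bindings for llama.cpp. This is not the poster's exact configuration (they run llama.cpp directly behind an AnythingLLM web UI); the model filename, context size, and GPU-offload setting below are placeholders you would adjust for your own hardware.

  # Minimal sketch: load a quantized Llama 3.1 70B GGUF locally with llama-cpp-python.
  # The model path, context size, and n_gpu_layers value are assumptions, not the
  # poster's settings. GPU offload only applies if your llama.cpp build has a
  # compatible backend (e.g. Vulkan); otherwise it falls back to CPU.
  from llama_cpp import Llama

  llm = Llama(
      model_path="Meta-Llama-3.1-70B-Instruct-Q4_K_M.gguf",  # placeholder filename
      n_ctx=8192,        # context window; larger values need more RAM
      n_gpu_layers=-1,   # offload all layers if a GPU backend is available
      verbose=False,
  )

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Summarize the benefits of on-device LLMs."}],
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])

A quantized 70B model at ~4 bits still needs on the order of 40GB+ of memory just for the weights, which is why the 96GB of RAM matters here.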
so, exactly how much code/content is new/borrowed?