I wrote a tutorial for invoking the Mistral 7B model directly from SQL using Python UDFs in Exasol and Ollama. This demonstrates a fully self-hosted AI pipeline where data never leaves your infrastructure—no API fees, no vendor lock-in. Takes ~15 minutes to set up with Docker.
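Roughly, the core of it looks like this: an Exasol Python UDF that POSTs each row's prompt to Ollama's `/api/generate` endpoint and returns the completion as a column value. A minimal sketch (the script and column names are illustrative, and the Ollama address assumes the default port 11434 with `host.docker.internal` for container-to-host networking — adapt to your setup):

```sql
CREATE OR REPLACE PYTHON3 SCALAR SCRIPT ask_mistral(prompt VARCHAR(2000))
RETURNS VARCHAR(2000000) AS
import json
import urllib.request

def run(ctx):
    # Build the request payload for Ollama's /api/generate endpoint;
    # stream=False returns the full completion in a single JSON response.
    payload = json.dumps({
        "model": "mistral",
        "prompt": ctx.prompt,
        "stream": False
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://host.docker.internal:11434/api/generate",  # assumed Ollama host
        data=payload,
        headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama puts the generated text under the "response" key
        return json.loads(resp.read())["response"]
/

-- Then the model is just a function call away in plain SQL:
SELECT ask_mistral('Summarize this review in one sentence: ' || review_text)
FROM reviews;
```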
Purely curious, but why did you go with Ollama instead of the built-in LLM runner in Docker, since you are also using Docker?
Great idea! I went with Ollama because I found the setup to be slightly easier, but technically both should offer the same experience, and hosting everything in Docker is very logical. That will be the next iteration of my write-up!