Self-hosted AI data workflow: DB, Ollama, and SQL

6 points, posted 6 hours ago
by exasol_nerd

3 Comments

exasol_nerd

6 hours ago

I wrote a tutorial on invoking the Mistral 7B model directly from SQL, using Python UDFs in Exasol with Ollama serving the model. It demonstrates a fully self-hosted AI pipeline where data never leaves your infrastructure: no API fees, no vendor lock-in. Setup takes about 15 minutes with Docker.
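For anyone wondering what the SQL side looks like, here is a minimal sketch of the idea (not the tutorial's exact code): an Exasol Python3 scalar UDF that POSTs each row's prompt to Ollama's /api/generate endpoint and returns the generated text. The "ollama" hostname, the model name, and the product_reviews example table are assumptions for illustration; adjust them to your setup.

    -- Python3 scalar UDF: forward a prompt to a local Ollama instance
    -- and return the model's reply as a string.
    CREATE OR REPLACE PYTHON3 SCALAR SCRIPT ask_mistral(prompt VARCHAR(2000000))
    RETURNS VARCHAR(2000000) AS
    import json
    import urllib.request

    # Assumes the Ollama container is reachable under this hostname from the
    # database (e.g. via a shared Docker network); change host/port as needed.
    OLLAMA_URL = "http://ollama:11434/api/generate"

    def run(ctx):
        # Non-streaming request, so Ollama returns a single JSON object.
        payload = json.dumps({
            "model": "mistral",   # Mistral 7B, pulled beforehand with `ollama pull mistral`
            "prompt": ctx.prompt,
            "stream": False,
        }).encode("utf-8")
        req = urllib.request.Request(
            OLLAMA_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
    /

    -- Example: run the model over a table column, entirely from SQL.
    SELECT id,
           ask_mistral('Summarize in one sentence: ' || review_text) AS summary
    FROM product_reviews;

The nice part is that the LLM call is just another scalar function, so it composes with WHERE clauses, joins, and GROUP BY like any other expression.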

pploug

6 hours ago

Purely curious, but why did you go with Ollama instead of the built-in LLM runner in Docker, since you're already using Docker?

exasol_nerd

5 hours ago

Great idea! I went with Ollama because I found the setup slightly easier, but technically both should offer the same experience, and all in all, hosting both in Docker makes a lot of sense. That will be the next iteration of my write-up!
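For reference, a minimal docker-compose sketch of that setup could look like the following. This is not the tutorial's actual file; the image tags, ports, and the privileged flag are assumptions based on the public exasol/docker-db and ollama/ollama images.

    services:
      exasol:
        image: exasol/docker-db:latest
        privileged: true                 # the Exasol docker-db image expects privileged mode
        ports:
          - "8563:8563"                  # Exasol SQL/driver port
      ollama:
        image: ollama/ollama:latest
        ports:
          - "11434:11434"                # Ollama HTTP API
        volumes:
          - ollama_models:/root/.ollama  # keep pulled models across restarts
    volumes:
      ollama_models:

With both services on the same compose network, the UDF can reach Ollama at http://ollama:11434, which is the hostname assumed in the UDF sketch in the first comment.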