Self-hosted AI data workflow: Exasol, Ollama, and SQL

11 points, posted 17 days ago
by exasol_nerd

3 Comments

exasol_nerd

17 days ago

I wrote a tutorial on invoking the Mistral 7B model directly from SQL using Python UDFs in Exasol and Ollama. It demonstrates a fully self-hosted AI pipeline where data never leaves your infrastructure: no API fees, no vendor lock-in. Setup takes ~15 minutes with Docker.
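
To give a flavor of the pattern, here is a minimal sketch of such a UDF (the schema, script, table, and column names are illustrative, and the tutorial's actual code may differ). It registers a Python script in Exasol that POSTs each prompt to Ollama's local HTTP API and returns the completion:

    CREATE OR REPLACE PYTHON3 SCALAR SCRIPT ai_lab.ask_mistral(prompt VARCHAR(2000))
    RETURNS VARCHAR(2000000) AS
    import json
    import urllib.request

    # Ollama's default endpoint; adjust the host so the UDF container
    # can actually reach Ollama (e.g. a Docker service name or host IP)
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def run(ctx):
        # Called once per input row; ctx exposes the SQL arguments by name
        payload = json.dumps({
            "model": "mistral",   # pulled beforehand with `ollama pull mistral`
            "prompt": ctx.prompt,
            "stream": False       # request the full completion in one response
        }).encode("utf-8")
        req = urllib.request.Request(OLLAMA_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=120) as resp:
            return json.loads(resp.read().decode("utf-8"))["response"]
    /

Once created, it can be invoked like any scalar function, straight from SQL:

    SELECT ai_lab.ask_mistral('Summarize in one line: ' || review_text)
    FROM reviews;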

pploug

17 days ago

Purely curious, but why did you go with Ollama instead of the built-in LLM runner in Docker, since you're also using Docker?

exasol_nerd

17 days ago

Great idea! I went with Ollama because I found the setup to be slightly easier, but technically both should offer the same experience, and hosting both in Docker is very logical. That will be the next iteration of my write-up!
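
For reference, a minimal docker-compose sketch of that all-in-Docker setup might look like the following (image tags, ports, and service names are assumptions based on the public exasol/docker-db and ollama/ollama images, not taken from the tutorial):

    services:
      exasol:
        image: exasol/docker-db:latest
        privileged: true            # docker-db requires privileged mode
        ports:
          - "8563:8563"             # Exasol's default SQL client port
      ollama:
        image: ollama/ollama:latest
        ports:
          - "11434:11434"           # Ollama's HTTP API
        volumes:
          - ollama_models:/root/.ollama   # persist pulled models across restarts
    volumes:
      ollama_models:

With both services on one compose network, the UDF would reach Ollama at http://ollama:11434 rather than localhost.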