Cannoli can also use local LLMs through Ollama. To use Ollama, switch the "AI provider" dropdown to Ollama and make sure the Ollama URL reflects your setup (the default usually works).
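If you're unsure whether the server is reachable, a quick check from a terminal (assuming Ollama's standard port 11434; adjust if you've changed `OLLAMA_HOST`):

```bash
# Check that the Ollama server is reachable at its default address.
curl http://localhost:11434
# Expected response: "Ollama is running"
```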
You also need to set the `OLLAMA_ORIGINS` environment variable to `"*"` so that requests from the Obsidian desktop app can reach the Ollama server. See [this document](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) for how to set this environment variable on each operating system; on macOS, for example, run `launchctl setenv OLLAMA_ORIGINS "*"` in your terminal and restart Ollama.
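As a rough sketch (the macOS command is from above; the Linux and Windows steps follow the linked Ollama FAQ, so double-check them against that document for your setup):

```bash
# macOS: set the variable for launchd-managed apps, then restart Ollama.
launchctl setenv OLLAMA_ORIGINS "*"

# Linux (systemd): edit the service and add, under [Service]:
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl edit ollama.service
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Windows: add OLLAMA_ORIGINS with the value * in the system environment
# variables, then quit and relaunch Ollama.
```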
You can change the default model in the settings and override it per node within a cannoli using [[Config arrows]] as usual. Note that Ollama has to load a model into memory each time you switch to it, so a cannoli that uses several different models will take longer to run.
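Whichever model you reference must already be available locally. You can check and download models with the Ollama CLI (the model name below is just an example):

```bash
# List models already downloaded to this machine.
ollama list

# Download a model before referencing it in a cannoli.
ollama pull llama3
```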