AI [[Nodes]] are how cannolis interact with LLMs.
![[Pasted image 20240718221341.png]]
You can reference variables from incoming arrows, floating nodes, and your vault; their values are injected into the node's text before the message is sent to the LLM. You can also reference images using standard markdown image embed syntax, and they will be included in the request for vision-enabled models.
AI nodes also send the previous messages in a conversation chain when they receive incoming arrows from other AI nodes or content nodes.
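As a sketch, an AI node's text might reference both an injected variable and an embedded image (this assumes an incoming variable arrow labeled `topic` and an image in your vault named `diagram.png` — both names are just illustrations):

```
Describe what this diagram shows, and how it relates to {{topic}}.

![[diagram.png]]
```

Before the request is sent, `{{topic}}` is replaced with the content carried by the incoming arrow, and the embedded image is attached for vision-enabled models.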
## Incoming arrows
| Arrow type | Label | What happens |
| ------------------------------------------------------ | --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [[Basic arrows\|Basic]] | none | Coming from AI node: Chat history will be passed along<br><br>Coming from Content node: Content will be added to the beginning of chat history as a system message |
| [[Variable arrows\|Variable]], [[Field arrows\|Field]] | any | The content of the arrow can be referenced and injected as a variable using curly braces around the arrow label: "{{arrow label}}" |
| [[Choice arrows\|Choice]] | any | Chat history will be passed, but the content of the arrow cannot be accessed as a variable |
| [[Config arrows\|Config]] | See table below | Content will override the default of the setting defined by the arrow's label |
## Outgoing arrows
| Arrow type | Label | What happens |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------- |
| [[Basic arrows\|Basic]] | none | To AI node: Chat history will be passed along<br><br>To content node: Response will be written to the node |
| [[Variable arrows\|Variable]] | any (see [[Arrow modifiers]] for special prefix and suffix characters) | Response will be passed as content |
| [[Field arrows\|Field]] | Name of the field that arrow represents | See [[Field arrows]] |
| [[List arrows\|List]] | any | The response will be parsed for list content, which can be used to load parallel groups (see [[List arrows]] for more info) |
| Chat | see [Chat arrows](obsidian://open?vault=cannoli-test&file=Cannoli%20College%2F2.%20Special%20arrows%2F5.%20Chat%20arrows.cno.canvas) | Chat arrows allow streaming to files when they point to a [[Reference nodes\|reference node]] |
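For example, an outgoing [[List arrows|list]] arrow expects the response to contain a markdown list, something like:

```
- First idea
- Second idea
- Third idea
```

Each parsed item can then load its own parallel copy of the target group (see [[List arrows]] for details).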
## Config
Using [[Config arrows]], you can override the defaults for a variety of LLM settings, including:
| Setting | Description | Values |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
| provider | LLM provider the request will be made to | openai, anthropic, gemini, ollama, groq |
| apiKey | API key to be used for the request | |
| model | LLM model to be used | Valid model string as defined by the provider you're using (e.g. gpt-4o) |
| temperature | Temperature setting for LLM (affects variability of response) | Number (usually between 0 and 1, but this can depend on the provider and model) |
| role | Role of the message this node is sending/appending to the message array (default: user) | Generally: user, system, and assistant (this can vary with providers, check the spec of the provider you're using) |
| baseURL | URL the request will be sent to (can be used to request from alternate providers or from local LLMs like [[Ollama]] as long as they use the OpenAI API spec) | |
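For example, to run a node against a local [[Ollama]] model with less variable output, you could attach config arrows like the following sketch (each line is one arrow: the label on the left is the setting, and the content node it comes from holds the value on the right — the model name is just an illustration):

```
provider    →  ollama
model       →  llama3
temperature →  0.2
```

Each config arrow overrides only the setting named by its label; everything else keeps its default.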
These are the most common config settings. For the full list, see [[Full list of LLM config settings|here]].