A bit of background: for quite some time I’ve been sending push and timeline notifications around to let my household know that the dishwasher salt needs to be refilled or that the dryer is done. So I thought it was time to make that more fun with a locally running LLM.
First we need to install an LLM. I decided to run Ollama in Docker:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
If you have an Nvidia GPU, you first need to install the NVIDIA Container Toolkit, and then use the following command instead:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
But you can also run it natively on Mac/PC/Linux; see this page for more information.
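Whichever way you run it, a quick check that the API is reachable saves debugging later. Listing the installed models should return valid JSON (the list will still be empty at this point); replace localhost with the host’s IP if you run this from another machine:
curl http://localhost:11434/api/tags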
Then you need to install a model. I decided to use Meta’s llama3.2; see this page for all available models.
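Pulling the model first is optional, since ollama run will download it on first use, but doing it explicitly makes the download step visible:
docker exec -it ollama ollama pull llama3.2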
Time to run the model:
docker exec -it ollama ollama run llama3.2
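Before wiring this into Homey, I’d do a quick smoke test of the HTTP API (dockerIP is a placeholder for the address of the machine running the container):
curl http://dockerIP:11434/api/generate -d '{"model": "llama3.2", "prompt": "Tell me in one short sentence that dinner is ready, with some humor", "stream": false}'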
And now we can create an advanced Homey flow that starts with a text tag; I called mine LLM.
The HomeyScript that I use looks something like this:
// The text tag passed in from the calling flow, with a fallback for testing
const input = args[0] !== undefined ? args[0] : "Dinner is ready";
const prompt = `Write a short one-liner letting everyone know that '${input}'. Use a generous dose of humor.`;

// Call the Ollama generate API (replace dockerIP with the address of your Ollama host)
const response = await fetch("http://dockerIP:11434/api/generate", {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    model: "llama3.2",
    prompt: prompt,
    stream: false
  })
});

const json = await response.json();
const output = json.response;

// Strip the quotation marks the model tends to wrap around the one-liner
return output.substring(0, output.length - 1).substring(1);
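Stripping the first and last character assumes the model always wraps its answer in quotes. If you want to be a bit more defensive, you could replace the last lines with something like this small sketch, which only strips quotes when they are actually there:
// Only strip surrounding quotes when the model actually added them
let text = json.response.trim();
if (text.startsWith('"') && text.endsWith('"')) {
  text = text.slice(1, -1);
}
return text;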
And now you can trigger the LLM flow from all your other flows, and you will receive a unique notification every time.
Docs: