I made an AI assistant app that is able to control devices. The app has an API that other app developers can use to hand off commands, like the UC remote does with its voice command feature. Multiple AI models are supported, but currently only Gemini is fully tested. Please let me know what does and doesn’t work for you.
@Jorg_Wissemann I’ve updated the app. The API is rewritten. I mainly used the OpenAI API, as it is supposed to be the “same”, except it doesn’t support model listing and tools are handled differently, so those are the two errors. Just pushed a new test version. Let me know if and what errors you get. If possible, please copy-paste the text, that way I don’t have to retype it.
@Niels, can you please help me understand the whole concept of this assistant?
Let me explain: I have been playing for some time with MCP settings and AI. My main issue is the AI’s understanding of my home. I do not have every device named in natural language, as that is impossible when you have a lot of devices.
So if you say “please turn on the light in the kitchen”, it ends up with the AI asking which light I had in mind.
With MCP via Claude I found a solution with an .md file where I can specify what I mean when I ask for something.
But with AI Assistant (which I think is a great app, and I really appreciate all your effort with it) I don’t think this is possible at the moment.
How do you teach the AI which devices/zones/flows, etc. Homey has?
In the AI Assistant app you can select the devices you want to expose to the AI. My code has a system prompt that explains to the AI how it should behave, plus a list of the devices including the zones they are in; it’s basically a tool. Currently it only offers device access (and the zones), not flows.
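To illustrate the idea, here is a minimal sketch of how a system prompt could be assembled from the selected devices and their zones. The function name, device shape, and wording are my own assumptions, not the app’s actual code:

```javascript
// Hypothetical sketch: building a system prompt from the devices the
// user selected, so the AI knows what it may control and where it is.
function buildSystemPrompt(devices, instructions) {
  const deviceLines = devices
    .map(d => `- ${d.name} (zone: ${d.zone}, capabilities: ${d.capabilities.join(', ')})`)
    .join('\n');
  return `${instructions}\n\nAvailable devices:\n${deviceLines}`;
}

const prompt = buildSystemPrompt(
  [
    { name: 'Ceiling Spot', zone: 'Kitchen', capabilities: ['onoff', 'dim'] },
    { name: 'Geiger Counter', zone: 'Hallway', capabilities: ['measure_radiation'] },
  ],
  'You are a home assistant. Only control the devices listed below.'
);
console.log(prompt);
```

The point is that the zone travels with each device, so “turn on the light in the kitchen” can be resolved without every device having a natural-language name.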
What I want to include in the AI Assistant app (and in the UC app as well) is a custom user prompt for additional information. You could then tell it, for instance, that zone and device names are a mix of German and English.
In the current setup as-is, I can ask in English: “what is the current radioactivity at home” and it understands to use a Geiger counter I built myself, which isn’t a usual thing to have, so it reports the value in µSv and tells me it’s basically background noise. Directly after that I can ask in Dutch to turn on certain lights, and it will do that. A custom prompt could help the AI with extra details like mixed language usage.
So yes, this should very well be possible, and I’m nearly there; in my next coding session it will most likely be working.
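The planned custom user prompt could be as simple as appending the user’s notes to the generated system prompt. This is only a sketch of the concept; the function and field names are assumptions:

```javascript
// Hypothetical sketch: merging a user-supplied note (e.g. about mixed
// German/English device names) into the generated system prompt.
function withCustomNotes(systemPrompt, userNotes) {
  if (!userNotes || !userNotes.trim()) return systemPrompt;
  return `${systemPrompt}\n\nAdditional context from the user:\n${userNotes.trim()}`;
}

const merged = withCustomNotes(
  'You are a home assistant. Only control the listed devices.',
  'Zone and device names are a mix of German and English.'
);
console.log(merged);
```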
Another note: the app is written so that other app developers can use it. Let’s say a developer works with a speaker that has a microphone; it can pass the transcribed text to my app, and the AI Assistant app will handle the AI part for it.
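Conceptually, the handoff could look like the sketch below: the speaker app sends the transcribed text and gets the AI’s reply back. The endpoint name, payload shape, and the `assistantApi` object are all assumptions for illustration, not the app’s real API (a mock stands in for the real inter-app call so the sketch runs on its own):

```javascript
// Hypothetical handoff flow: another app passes text to the AI
// Assistant app and receives the AI's answer.
async function handoffCommand(assistantApi, text) {
  // assistantApi stands in for whatever inter-app API is exposed
  const res = await assistantApi.post('/command', { text });
  return res.reply;
}

// Mock of the assistant app, so this example is self-contained:
const mockAssistant = {
  async post(path, body) {
    return { reply: `Handled "${body.text}" via ${path}` };
  },
};

handoffCommand(mockAssistant, 'turn on the kitchen lights')
  .then(reply => console.log(reply));
```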
I just pushed 1.3.0. I’ve tested OpenAI, Gemini, and Anthropic, and they work perfectly.
I’m not happy with the settings yet, but it is a lot better than it was.
I hope Perplexity works now. If not, I’m afraid I need to let it go (either way, please let me know). They ask $20 for the subscription and a minimum of another $50 for me to be able to use the API. It’s the only AI provider that does not list its models via an API, so supporting Perplexity would keep me busy every time a model changes, making it the most time-consuming and, for me, the most expensive provider to support.
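For context on why model listing matters: OpenAI-compatible providers expose a `GET /models` endpoint, so an app can discover the available models at runtime instead of hard-coding them. A rough sketch of that discovery step, with an illustrative base URL (the response-parsing helper is my own naming):

```javascript
// Sketch of runtime model discovery against an OpenAI-compatible API.
// A provider without this endpoint forces a hand-maintained model list.
async function listModels(baseUrl, apiKey) {
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Model listing failed: HTTP ${res.status}`);
  return extractModelIds(await res.json());
}

// OpenAI-style response shape: { data: [{ id: 'model-name', ... }, ...] }
function extractModelIds(body) {
  return body.data.map(m => m.id);
}
```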
Perplexity/sonar seems to work, but sonar-pro and the other sonar models give error 400. Also, sonar gives exactly the same answer as Gemini-flash, which does not seem plausible to me.
This is definitely only partly true. The AI looks at Homey devices first, and if it finds a device that might lead to a result, it doesn’t look any further; see the example below. If the topic is totally unrelated to Homey, it seems to work (I asked for a recipe and got an answer).