ChatGPT, my personal assistant

I’m having trouble understanding some lines of JavaScript.

I looked for a site that could help me decipher lines that seemed particularly obscure to me.

I couldn’t find anything!

Finally I had the idea to try ChatGPT.

And there, not only did I get an explanation of the lines that were causing me problems, but I also found a real personal assistant for writing code, fixing errors…

It even gave very insightful advice on writing a driver for Homey.

Of course, you don’t necessarily have to follow every avenue it proposes, but it is a very valuable aid.

ChatGPT can offer you a complete driver from a description of the sensor in a matter of seconds…

There will be some work left, but what a time saver! I would never have imagined such efficiency before.
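
To give an idea of what such generated code looks like, here is a minimal sketch of a Homey SDK v3 driver skeleton; the device name, IDs, and pairing list are hypothetical placeholders, and the actual sensor communication is precisely the part you still have to fill in yourself:

```js
'use strict';

const Homey = require('homey');

// Hypothetical driver for an imaginary "My Sensor" device.
class MySensorDriver extends Homey.Driver {

  async onInit() {
    this.log('MySensorDriver has been initialized');
  }

  // Called during pairing: returns the devices the user can add.
  // A real driver would discover the actual hardware here instead
  // of returning this hard-coded placeholder.
  async onPairListDevices() {
    return [
      {
        name: 'My Sensor',
        data: { id: 'my-sensor-001' }, // placeholder unique ID
      },
    ];
  }

}

module.exports = MySensorDriver;
```

The remaining work is exactly what that comment says: replacing the placeholders with real device discovery and communication.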

It works great, until it doesn’t. Although it will keep trying to convince you it should work. Then you post something on a forum, fully convinced that your code is correct (because $LLM said so), so clearly Homey (or whatever product you’re using) is doing something wrong. And then others will spend an inordinate amount of time trying to help you, only to find out that $LLM was wrong all along.

Welcome to the world of modern software development :man_shrugging:t3:


I understand your reaction very well.

I was in no way trying to pit ChatGPT against the forum.

I learned a lot on this forum and you often helped me.

But there were questions for which I had found no solutions, and ChatGPT was able to provide them.

Of course, to use this tool, you have to know how to evaluate the relevance of what it offers.

It seems out of the question to me to ask it for code and use the result raw, without analyzing it…

In short, it seems to me an excellent complement to the forum, whose contributors cannot keep up with all the requests, despite their dedication.

I would be sorry if my words hurt you…

start sarcasm

When ChatGPT writes code but makes some errors and does not understand everything, ChatGPT should ask a question on this forum.

end sarcasm

Don’t worry, my response wasn’t specifically targeted at you. It’s just that I’ve noticed (not just here on the forum, but on other platforms as well) that a lot of people think ChatGPT is always right, even though it often makes up non-functioning or subtly flawed code, and it’s often left to real people to spend a lot of time fixing it.

Personally, I don’t mind helping out when people have issues with their own code, but when it turns out the code was LLM-generated (which, for some reason, is often not mentioned at the start), it becomes a different matter (I’m not going to waste my time fixing code that a glorified autocomplete generated).


Personally, when I get stuck on code from one LLM, I try debugging it with another, like using Claude or Gemini for code originally from GPT. That usually helps. Sometimes, providing extra context from the Homey SDK documentation or other resources is also effective. I only turn to this forum as a last resort :slight_smile:

Mostly using Claude myself (through Kagi Assistant).

I have a tendency to dislike most LLMs because of how easily you can convince them of your view.
Claude, meanwhile, has at times told me during coding, “You say this is refreshing the entire webpage? That is simply not what is happening,” which leads to far better follow-up results, while ChatGPT tends to respond with “oh sorry, I must have done something wrong… let me write down an alternative route.”

To a certain extent I don’t mind LLMs. When I have a concept I like, I sometimes simply ask something like, “I want to do X, Y, and Z… give me all the scripts and files I need to have software that can do it for me.”
Do I expect it to have something 100% perfect ready, fully secure and logical? No, not at all.
But what I’m noticing is that, compared to how AI used to be, it now gets me almost 85% of the way to wherever I want.
Then, if you have enough knowledge, you can get another 5–10% by asking very specific follow-up questions.
And then you have to do the rest yourself.

So does it fully replace human logic? Not yet… you already need to know quite a bit about what exactly you want in order to get the best results.
But it nonetheless works so much better than even a year ago, and it just keeps evolving so quickly.

Fantastic experience with both ChatGPT and Gemini. Yes, sometimes they do something different from what you wanted, but you can easily point that out. The two don’t code the same way, and if one has produced something you’re unsure about, you can copy and paste it into the other and ask for its opinion. Code can easily be expanded with added pieces or improvements. The ideas came from me; the coding came from Gemini and ChatGPT. Gemini tends to give you pieces to add or change, but if you ask for the whole code, you’ll get that too.

I think the danger of LLMs lies in the solutions they provide to people who have no coding skills to understand the generated code (and the same goes for the other things they do with LLMs) and who just copy/paste the output.

I use it in VS Code to write boring code, to reuse parts I wrote earlier and reimplement them in a new project, to beautify my code, to add more extensive documentation, etc., but the core logic I write myself so I understand what it will do. Writing your own code will also let you debug it more quickly, in my opinion. I use it almost daily for my PowerShell commands/scripts, just boring routine stuff like finding all VMs with older snapshots and deleting those. It’s also very handy when you get an unsorted pile of data from an end user and need to convert it to a specific data type; that’s where LLMs shine.
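
For instance, here is a minimal sketch (with entirely made-up input data and field names) of the kind of routine conversion script an LLM produces in seconds:

```js
// Hypothetical example: normalize a messy, inconsistently separated
// list of "name / email / age" rows into typed objects.
const raw = `
Alice Smith; alice@example.com; 34
Bob Jones ,bob@example.com, 41
`;

const users = raw
  .split('\n')                        // one row per line
  .map((line) => line.trim())
  .filter((line) => line.length > 0)  // drop empty lines
  .map((line) => {
    const [name, email, age] = line.split(/[;,]/).map((s) => s.trim());
    return { name, email, age: Number(age) }; // cast age to a number
  });

console.log(users);
// [ { name: 'Alice Smith', email: 'alice@example.com', age: 34 }, … ]
```

Nothing clever, just the boring glue code that used to eat half an hour.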