Issue with Matter on self-hosted Homey

After testing Homey for about a week now, I ran into exactly this issue. Matter-over-WiFi devices (Shelly smart plugs and Tink Basic E27 bulbs) pair with the Matter network, but then, when handing over to Homey SHS, fail with error code 0x00ac. I tried it both with devices previously paired with Home Assistant and with new devices. My current workaround is to keep Home Assistant running, pair Matter devices to it (which works without a problem) instead of Homey, and then use the Home Assistant add-on to make them accessible in Homey SHS.
Other than that I have no issues whatsoever with Homey SHS. Even migrating my z2m setup over by running z2m in a Docker container in the same compose stack works perfectly.

Logs from Homey for the pairing attempt of a Shelly Plug S Gen3 via Matter:

Homey [log][ManagerVirtualDeviceLocal][VirtualDriverMatterLocal:driver] Get learnmode null
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 200
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 204
Server HTTP GET /manager/thread/active-dataset
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingHeartbeat → Success 204
Homey [log][ManagerApiLocal] IO homey:manager:energy:getLiveReport
Homey [log][ManagerApiLocal] IO homey:manager:energy:getLiveReport → Success 200
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent
Homey [log][ManagerMatterLocal] [Pairing:0x1d048836] Start
Homey [log][ManagerMatterLocal] [Pairing:0x1d048836] Got status update: SEARCHING (-0x01, Unknown)
Homey [log][ManagerMatterLocal] Updating PAA certs
Homey [log][ManagerMatterLocal] DCLs: https://on.dcl.csa-iot.org. Git: false
Homey [log][ManagerApiLocal] IO homey:manager:drivers:emitPairingEvent → Success 204
Homey [err][ManagerMatterLocal] [Pairing:0x1d048836] Finished with error: Unknown error (Matter code: INTERNAL (0x00ac))
Homey [err][ManagerMatterLocal] [Pairing:0x1d048836] Could not cleanup pairing: Unknown error (Matter code: INTERNAL (0x00ac))
Homey [err][ManagerMatterLocal] [Pairing:0x1d048836] Could not stop pairing: Could not stop pairing (Matter code: INCORRECT_STATE (0x0003))
Homey [err][ManagerMatterLocal] [Pairing:0x1d048836] Failed: Unknown error (Matter code: INTERNAL (0x00ac)) (at stage: unknown)

And the logs of the same attempt from the Shelly Plug:

{
  "deviceInfo": {
    "name": null,
    "id": "shellyplugsg3-e4b063e55ce4",
    "mac": "E4B063E55CE4",
    "slot": 0,
    "key": "eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3NDc4MTU2ODQsIm1hYyI6IkU0QjA2M0U1NUNFNCIsIm0iOiJTM1BMLTAwMTEyRVUiLCJiIjoiMjQ1MS1Ccm9hZHdlbGwiLCJmcCI6IjA3NjNkZGEyIn0.s_V2vhxAYLjZLR0QsXrNP06dLtKwhzNqiJY6_fExk3FpJCXFRwnIOxxEfR6WTiNfeV_Dh1YHy3SFLOfXv7w4coUui3_BpO0Xb1V8DRpisdsO3uu8PPbEAqNSHadHMxQE",
    "batch": "2451-Broadwell",
    "fw_sbits": "00",
    "model": "S3PL-00112EU",
    "gen": 3,
    "fw_id": "20250924-062730/1.7.1-gd336f31",
    "ver": "1.7.1",
    "app": "PlugSG3",
    "auth_en": false,
    "auth_domain": null,
    "matter": true
  },
  "logs": [
    {
      "data": "Connected.",
      "ts": 1767448557.822,
      "level": 2,
      "fd": 1
    },
    {
      "seq": 178,
      "ts": 1767448557.828,
      "level": 2,
      "data": "shelly_debug.cpp:236    Streaming logs to 192.168.0.20:40826",
      "fd": 2
    },
    {
      "seq": 179,
      "ts": 1767448560.036,
      "level": 2,
      "data": "shelly_notification:164 Status change of switch:0: {\"aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448560,\"total\":18753.142},\"apower\":0.0,\"current\":0.000,\"freq\":49.95,\"ret_aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448560,\"total\":0.000},\"voltage\":233.6}",
      "fd": 2
    },
    {
      "seq": 180,
      "ts": 1767448565.015,
      "level": 1,
      "data": "shelly_persistent_c:391 Writing persistent counters, but interval is shorter than 60 sec.",
      "fd": 2
    },
    {
      "seq": 181,
      "ts": 1767448566.084,
      "level": 2,
      "data": "shos_http_client.cp:321 0x3fccdedc: WS ws://192.168.0.236:4:6113/",
      "fd": 2
    },
    {
      "seq": 182,
      "ts": 1767448572.623,
      "level": 2,
      "data": "shos_http_client.cp:656 0x3fccdedc: Finished; bytes 0, code 0, redir 0/3, auth 0, status -10: Host not found",
      "fd": 2
    },
    {
      "seq": 183,
      "ts": 1767448578.525,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 607 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 184,
      "ts": 1767448579.147,
      "level": 2,
      "data": "shos_matter_server.:297 Commissioning started",
      "fd": 2
    },
    {
      "seq": 185,
      "ts": 1767448579.148,
      "level": 2,
      "data": "shos_matter_server.:333 Commissioning window closed",
      "fd": 2
    },
    {
      "seq": 186,
      "ts": 1767448579.149,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 541 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 187,
      "ts": 1767448580.614,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 503 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 188,
      "ts": 1767448580.835,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 138 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 189,
      "ts": 1767448581.346,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 403 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 190,
      "ts": 1767448582.051,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 255 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 191,
      "ts": 1767448582.824,
      "level": 2,
      "data": "shos_matter_server.:352 Fabric updated",
      "fd": 2
    },
    {
      "seq": 192,
      "ts": 1767448582.826,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (192.168.0.47)",
      "fd": 2
    },
    {
      "seq": 193,
      "ts": 1767448582.827,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (fe80::e6b0:63ff:fee5:5ce4)",
      "fd": 2
    },
    {
      "seq": 194,
      "ts": 1767448582.828,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 631 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 195,
      "ts": 1767448583.584,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 370 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 196,
      "ts": 1767448584.131,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 490 ms, for event type 2",
      "fd": 2
    },
    {
      "seq": 197,
      "ts": 1767448584.316,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (192.168.0.47)",
      "fd": 2
    },
    {
      "seq": 198,
      "ts": 1767448584.318,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (fe80::e6b0:63ff:fee5:5ce4)",
      "fd": 2
    },
    {
      "seq": 199,
      "ts": 1767448584.319,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 172 ms, for event type 2",
      "fd": 2
    },
    {
      "seq": 200,
      "ts": 1767448585.038,
      "level": 2,
      "data": "shos_matter_server.:348 Fabric committed",
      "fd": 2
    },
    {
      "seq": 201,
      "ts": 1767448585.039,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 527 ms, for event type 3",
      "fd": 2
    },
    {
      "seq": 202,
      "ts": 1767448585.222,
      "level": 2,
      "data": "shos_matter_server.:333 Commissioning window closed",
      "fd": 2
    },
    {
      "seq": 203,
      "ts": 1767448585.223,
      "level": 2,
      "data": "shos_matter_server.:257 === Commissioning complete",
      "fd": 2
    },
    {
      "seq": 204,
      "ts": 1767448585.224,
      "level": 0,
      "data": "Matter-DL:1             Long dispatch time: 166 ms, for event type 32781",
      "fd": 2
    },
    {
      "seq": 205,
      "ts": 1767448585.372,
      "level": 2,
      "data": "shos_matter_server.:329 Commissioning window opened",
      "fd": 2
    },
    {
      "seq": 206,
      "ts": 1767448588.007,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (fe80::e6b0:63ff:fee5:5ce4)",
      "fd": 2
    },
    {
      "seq": 207,
      "ts": 1767448588.085,
      "level": 2,
      "data": "shos_dns_sd_respond:236 ws(0x3fcc330c): Announced E4B063E55CE4 any@any (192.168.0.47)",
      "fd": 2
    },
    {
      "seq": 208,
      "ts": 1767448594.019,
      "level": 2,
      "data": "shos_http_client.cp:321 0x3fcce8c8: WS ws://192.168.0.236:4:6113/",
      "fd": 2
    },
    {
      "seq": 209,
      "ts": 1767448600.623,
      "level": 2,
      "data": "shos_http_client.cp:656 0x3fcce8c8: Finished; bytes 0, code 0, redir 0/3, auth 0, status -10: Host not found",
      "fd": 2
    },
    {
      "seq": 210,
      "ts": 1767448611.786,
      "level": 2,
      "data": "shos_rpc_inst.c:243     Shelly.GetStatus [1138011471@] via HTTP_in GET 192.168.0.236:45164",
      "fd": 2
    },
    {
      "seq": 211,
      "ts": 1767448615.092,
      "level": 0,
      "data": "Matter-SWU:1            No suitable OTA Provider candidate found",
      "fd": 2
    },
    {
      "seq": 212,
      "ts": 1767448620.036,
      "level": 2,
      "data": "shelly_notification:164 Status change of switch:0: {\"aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448620,\"total\":18753.142},\"apower\":0.0,\"current\":0.000,\"freq\":49.95,\"ret_aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448620,\"total\":0.000},\"voltage\":233.4}",
      "fd": 2
    },
    {
      "seq": 213,
      "ts": 1767448639.561,
      "level": 2,
      "data": "shos_http_client.cp:321 0x3fccd6c0: WS ws://192.168.0.236:4:6113/",
      "fd": 2
    },
    {
      "seq": 214,
      "ts": 1767448645.623,
      "level": 2,
      "data": "shos_http_client.cp:656 0x3fccd6c0: Finished; bytes 0, code 0, redir 0/3, auth 0, status -10: Host not found",
      "fd": 2
    },
    {
      "seq": 215,
      "ts": 1767448675.864,
      "level": 2,
      "data": "shos_rpc_inst.c:243     Shelly.GetStatus [1152496476@] via HTTP_in GET 192.168.0.236:58168",
      "fd": 2
    },
    {
      "seq": 216,
      "ts": 1767448680.036,
      "level": 2,
      "data": "shelly_notification:164 Status change of switch:0: {\"aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448680,\"total\":18753.142},\"apower\":0.0,\"current\":0.000,\"freq\":49.95,\"ret_aenergy\":{\"by_minute\":[0.000,0.000,0.000],\"minute_ts\":1767448680,\"total\":0.000},\"voltage\":233.6}",
      "fd": 2
    }
  ]
}

Unfortunately I don’t really see anything here that would explain the error. But maybe someone else spots something.

Thank god. I was really starting to doubt myself.

  • Does it matter (pun intended) that devices are added via this workaround? Are devices now working with Homey SHS, or working on Homey SHS via HA?
  • Should it work directly with Homey SHS, or is there a limitation that I am not aware of?
  • Do we know anyone who was able to add Matter devices to Homey SHS?
  • The devices only work in HA, not in Homey SHS. I use Homey’s Home Assistant add-on to get them integrated into Homey SHS for now.
  • I think Matter devices should work in Homey SHS, but I currently don’t know if they do at all.

Currently I think the only course of action is to wait and provide as much information as is requested, unfortunately.

1 Like

You shouldn’t; did you see the post I shared? There are two more users with other Matter devices, including me, who can’t get this to work. It seems to be a problem in Homey. By now everyone is using the Home Assistant workaround, I guess. Issue with Matter on self hosted Homey - #17 by Bram_H

1 Like

Did you see this?

1 Like

Yes, although that only explains it for the Shelly app? Not for Homey in general, I think.

Yeah… but what if more apps assume the port is two digits?

Nobody does that (famous last words).

Also, this was a WiFi issue; Matter works differently.

1 Like

Damn…

Finally had some time to dig a bit further.
To test whether my IPv6 setup, firewall, or mDNS (and whatnot; Matter is far more demanding to set up than e.g. Zigbee) works as intended, I grabbed the Home Assistant Docker container and the Python Matter Server required to run Matter in HA (NOT via the add-on, but as a standalone container).
Also I grabbed a smartphone that had never been connected to any smart home system at all (a new iPhone 16e, which I just got from work) and logged into the dockerized Home Assistant instance. There I paired the same Shelly Plug (after resetting the Matter fabric in its web UI), the same Tink Basic Matter-over-WiFi bulb (reset by toggling it on and off five times) AND a new Tink Basic Matter-over-WiFi bulb. (I currently don’t care about Thread setup, which is another, bigger topic for some point.)
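For reference, a minimal sketch of how such a standalone python-matter-server container can be started (image name from the project’s GitHub; the container name, restart policy, and volume path are my own choices, not anything Homey- or HA-specific):

```shell
# Sketch: python-matter-server as a standalone container (not the HA add-on).
# Host networking is needed so the server can do mDNS discovery and reach
# devices over IPv6; /data persists the commissioned fabric credentials.
docker run -d --name matter-server \
  --network host \
  --restart unless-stopped \
  -v "$PWD/matter-data:/data" \
  ghcr.io/home-assistant-libs/python-matter-server:stable
```

HA’s Matter integration then connects to the server’s WebSocket (port 5580 by default).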

The hope was that I could see more detailed logs within HA if my network setup did not work.

ALL of the devices worked without any issue whatsoever in Home Assistant (the dockerized instance).

I’m still quite sure that in some way my setup is broken and not Homey SHS, but I can’t really see how to debug this further. I couldn’t find anything in syslog on the Docker host or in the dmesg logs (at least nothing I could link to this issue), and now I’m quite sure that my IPv6 setup works.

I’m also aware that Athom can’t easily guarantee a working Matter setup, as running Homey on your own hardware is completely different than running it on verified hardware.

If anyone has any hint on what I could investigate further, I would appreciate it; otherwise I will quite likely wait for a Homey SHS update and check whether that brings some pointers. (Or I will buy more Zigbee devices, which seem to work very well with z2m.)

1 Like

Just to follow your train of thought; why do you now assume that your setup is broken and not Homey-SHS?

From my experience, Matter (over Thread or WiFi) is really tricky if your setup is not correctly configured. Containers make network config quite tricky, as it’s the one part a container can’t fully configure in the image and which needs configuration from the host OS.

Even when the Home Assistant container works, I currently think that Homey and Home Assistant (or rather python-matter-server, as HA only communicates with it via WebSocket) use different approaches to handle the network traffic for Matter devices.

Also, the biggest reason I believe this: I own a Matter hub (my LG G4 OLED) which should be able to pair with Matter devices. I can’t pair any device to LG ThinQ either, and I assume the underlying reason is my network setup. Unfortunately I can’t figure out what exactly is not working.

✅ Solved: Homey SHS (Synology/Docker host mode) can’t add Matter-over-Thread devices (stuck on “Searching…”)

Hi all, I had the same issue, and it was very annoying after I’d spent 100+ EUR on IKEA sensors :) It took me a day, but I managed to find the solution, and now it’s working. Here are the details:


Symptoms

  • Homey Self-Hosted Server (SHS) runs on Synology DSM / Container Manager (Docker).

  • Matter-over-Thread devices:

    • Pair fine in Apple Home (HomePod minis as Thread border routers).

    • Pair fine when running SHS on a MacBook.

    • Fail only on Synology SHS: QR/code is accepted, then Homey keeps “searching” and eventually times out.

  • Matter-over-Wi-Fi devices work normally.

Root cause (what it turned out to be)

Homey SHS on the Synology could discover Matter devices via mDNS, but it could not reach the Thread devices’ IPv6 mesh addresses (ULA prefix like fd29:...) on TCP 5540 (Matter).
The Synology host was missing an IPv6 route to the Thread mesh prefix that macOS had automatically.


Analysis / How to confirm it

1) Check that Synology sees Matter services (discovery works)

SSH into Synology and run:

avahi-browse -rt _matter._tcp

You should see _matter._tcp endpoints and IPv6 addresses like fd29:... (Thread mesh ULA).

2) Compare IPv6 routes: MacBook vs Synology

On a Mac (where SHS worked), check IPv6 routes:

netstat -rn -f inet6 | egrep 'fd29|default'

In my case, macOS had a route like:

fd29:.../64  ->  fe80::xxxx%enX

Meaning: “send Thread mesh prefix traffic to this link-local next hop” (HomePod mini border router).

On Synology, check routes:

ip -6 route | egrep 'fd29|default'

No fd29:.../64 route existed, only a default route.

3) Verify Homey container inherits the same routing (host mode)

My Homey container was running in host networking.
To inspect from inside Homey’s namespace, I used a netshoot container:

sudo docker run --rm -it --network container:<HOMEY_CONTAINER_ID> nicolaka/netshoot sh
ip -6 route

Then test connectivity to a Thread device:

nc -6 -vz -w 3 <fd29:...device-ipv6> 5540

Before the fix, it always timed out:

failed: Operation timed out

That’s why Homey pairing hung on “searching”.


✅ Solution: Add the missing IPv6 route for the Thread mesh prefix

1) Ensure RA acceptance is enabled (good practice)

On Synology:

sudo sysctl net.ipv6.conf.eth0.accept_ra

Set to 2 (accept RAs even if forwarding is enabled):

sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2
sudo sysctl -w net.ipv6.conf.all.accept_ra=2
sudo sysctl -w net.ipv6.conf.default.accept_ra=2

2) Add an IPv6 route for the Thread prefix via the border router (HomePod mini)

From the Mac route table (netstat -rn -f inet6) I found the next hop (link-local) used for fd29:.../64, e.g.:

fd29:.../64 -> fe80:...

Then on Synology I added the same route (without the %iface part):

sudo ip -6 route add fd29:.../64 via fe80:... dev eth0 metric 50

Confirm it exists:

ip -6 route | grep fd29

3) Re-test Matter reachability

Inside netshoot / Homey namespace:

nc -6 -vz -w 3 <fd29:...device-ipv6> 5540

After adding the route, Homey SHS could finally reach the device and pairing succeeded.

4) Pair again in Homey

After the route was added, I could successfully add Matter-over-Thread devices in Homey SHS.


Making it persistent (important!)

The route will disappear after reboot. Create a DSM boot-up task:

DSM → Control Panel → Task Scheduler → Create → Triggered Task → User-defined script
Event: Boot-up
User: root
Script example:

/sbin/sysctl -w net.ipv6.conf.all.accept_ra=2
/sbin/sysctl -w net.ipv6.conf.default.accept_ra=2
/sbin/sysctl -w net.ipv6.conf.eth0.accept_ra=2

/sbin/ip -6 route add fd29:.../64 via fe80:... dev eth0 metric 50 2>/dev/null || true

⚠️ Note: if Apple switches the active Thread border router to another HomePod, the link-local next hop may change. For stability, consider:

  • disabling “Automatic Selection” for Home Hubs in Apple Home, or

  • updating the route if the active border router changes.


Summary

If SHS on Mac works but SHS on Synology hangs on “searching”, check IPv6 routes.
Missing fd29:.../64 route on Synology was the cause. Adding the route fixed Matter-over-Thread pairing in Homey SHS.


If it is too technical, I recommend pasting this solution into ChatGPT and asking it to walk you through the process step by step. Hope it helps. Cheers, Pepe

4 Likes

Thanks for the in-depth explanation. Unfortunately, I don’t have that problem. For me, Matter-over-WiFi devices do not work (I have not yet checked Matter over Thread, since my only border router is an OTBR within HA, which I want to migrate, but I need more knowledge of how that works).

I still checked my routing setup (IPv4 and IPv6) and it seems there is no issue here. At least the route handing traffic over to my router is correct for both IPv4 and IPv6. I can also ping6 the device in question, so I’m quite sure that routing is not the issue.
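For anyone wanting to repeat the same checks, this is roughly what I mean (the device address is a placeholder; substitute the plug’s IPv6 address as announced via mDNS):

```shell
# <device-ip6> is a placeholder for the device's IPv6 address.
ip -6 route get <device-ip6>       # which route and interface would be used
ping -6 -c 3 <device-ip6>          # basic reachability (what ping6 does)
nc -6 -vz -w 3 <device-ip6> 5540   # can the Matter port be opened at all?
```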

Still thanks for the hint.

After more investigation:

I think I might have found a potential culprit.

On reboot of Homey I found the following log (I don’t know how I missed it in the first place).

Homey [err][ManagerMatterLocal] Could not initialize matter daemon: MatterError: Could not init Matter daemon (Matter code: ENDPOINT_POOL_FULL (0x00c1))
    at wrapMatterError (file:///app/packages/homey-local/node_modules/@athombv/homey-matter/dist/util/Util.js:209:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async HomeyMatter.init (file:///app/packages/homey-local/node_modules/@athombv/homey-matter/dist/HomeyMatter.js:94:26)
    at async MatterDaemon.initializeMatter (file:///app/packages/homey-local/lib/MatterDaemon.mts:206:36)
    at async ManagerMatterLocal.onMatterDaemonConnected (file:///app/packages/homey-local/lib/ManagerMatterLocal.mts:570:5) {
  code: 193,
  original: RPCError: Error response for conversation 0. Domain: 2, code: 193
      at RPCManager.handleIncomingResponse (file:///app/node_modules/@athombv/homey-rpc/dist/RPCManager.js:355:27)
      at RPCManager.parseIncomingData (file:///app/node_modules/@athombv/homey-rpc/dist/RPCManager.js:329:25)
      at RPCManager.onIncomingData (file:///app/node_modules/@athombv/homey-rpc/dist/RPCManager.js:260:36)
      at Socket.<anonymous> (file:///app/node_modules/@athombv/homey-rpc/dist/RPCManager.js:59:18)
      at Socket.emit (node:events:508:28)
      at addChunk (node:internal/streams/readable:559:12)
      at readableAddChunkPushByteMode (node:internal/streams/readable:510:3)
      at Readable.push (node:internal/streams/readable:390:5)
      at Pipe.onStreamRead (node:internal/stream_base_commons:189:23) {
    code: 193,
    serviceId: 2
  },
  serviceId: 2
}

I grabbed the /app directory out of the container to investigate, but tbh my TypeScript knowledge is VERY LIMITED.

From this I found the promise which throws the error:

const response = await wrapMatterError(this.daemon.sendRpc.initDeviceController(parameters), 'Could not init Matter daemon');

This call seems to throw, which is then wrapped as a MatterError and returned, resulting in the log message.

The daemon is initialized in the same class as follows:

this.daemon = this.manager.registerService(DaemonService, undefined);

Unfortunately, here the trail seems to end (at least with my limited knowledge and time), as the manager is initialized as follows:

this.manager = parameters.args;

And since parameters could be anything, I’m a little bit lost (also, I can’t find the call to the constructor this is all part of). Knowledge of containers and Java development only got me so far… for the moment. If I find some time I might dig deeper, but that’s more of a last-hope approach, as I doubt I will fully understand the source code of Homey SHS without working at Athom.

Also I tried to run the homey-shs container on my main workstation, and while the container starts without an issue, the activation process seems to break for some reason (maybe because my internet connection uses CGNAT, meaning I don’t really have a public IP that is routable to my individual PC, or maybe because I made a mistake when pausing the firewall, who knows). So no progress on this path.

The first error is: Matter code: ENDPOINT_POOL_FULL (0x00c1)

So it cannot add Matter devices because the device registration pool is already full?

It’s not really “the first error”, as stack traces are read from the bottom up (read it as: this is the issue, which was caused by this, which was caused by that, and so on). So the issue comes from the RPCError: Error response for conversation 0. Domain: 2, code: 193.

Unfortunately I could not find the meaning of code: 193 or anything related to it.

Also, I’m unfortunately not really knowledgeable about the Matter protocol and don’t know what the ENDPOINT_POOL_FULL error means, especially since I’m not trying to pair a device but simply starting homey-shs. I hope someone with more insight sees this and knows the answer.

Also I tried Homey 12.11 beta (the container image is published, but not yet released) and had the same error when starting the pod there.

Maybe, just maybe, it has something to do with too many network adapters, as I host ~20 Compose projects on the server and each has its own Docker network with virtual network adapters. I disabled IPv6 for all of them (since Matter is an IPv6 protocol), but with no success.
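To get a feel for how many interfaces a Matter daemon would have to enumerate on such a host, a quick count (Linux only; nothing Homey-specific, just /sys):

```shell
# Count network interfaces on the host; each compose project typically adds
# one br-* bridge plus one veth pair end per running container.
total=$(ls /sys/class/net | wc -l)
veth=$(ls /sys/class/net | grep -c '^veth' || true)
echo "interfaces: $total (veth: $veth)"
```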

On the Home Assistant forum they had the same error some time ago; as a workaround they switched Docker networking from host to bridge mode, and Matter over WiFi worked again. It was then fixed in a major HA OS upgrade. I do not know what the default network mode with SHS is, or how to test this.

network mode = Host in general

Synology config:

source

1 Like

I quickly changed my homey-shs compose to bridge network mode.

The Matter daemon now starts successfully, but the connection still does not work. Instead of immediately getting error 0x00ac, I now get a timeout when pairing the device (Shelly Plug) with Homey, quite likely because either the IPv6 routing or mDNS is not working correctly. Unfortunately I don’t have the time to check this today; I might have a look tomorrow.
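To narrow down whether it’s IPv6 or mDNS inside the bridge network, the netshoot trick from the Synology post above can be reused (the container name is a placeholder):

```shell
# Join the homey-shs container's network namespace with debugging tools.
# <homey-container> is a placeholder for your container name or ID.
docker run --rm -it --network container:<homey-container> nicolaka/netshoot sh

# Then, inside the namespace:
ip -6 addr    # does the container have a routable IPv6 address at all?
ip -6 route   # is there more than a link-local and default route?
```

In bridge mode, mDNS traffic does not cross the Docker bridge by itself, so a missing IPv6 address or missing multicast reachability here would explain a pairing timeout.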

EDIT: I also found this GitHub issue which indicates that having too many network adapters on a host might cause issues when initializing the Matter daemon, and since I have a bunch of IP links due to running multiple Compose stacks, this might be my problem. It would also explain why bridge mode works, as inside the container I would only see a small number of network adapters (one from the host, the loopback device, and the one veth interface from the container network). I might move my homey-shs to another host (a VM on Proxmox) and check if that works, or even move it out of VMs entirely and see if I have success then.