00:59:36 bamahat: IDK if you have done any of the work already to replace stud in the portal, but I updated the portal-orchestration code to replace stud with haproxy. But I no longer have access to the repo to commit my changes. If you'd like the changes, let me know.
16:36:23 I've got a Triton compute node on PI 20221215T000744Z. For some reason, when I rebooted into the newer version, the VM hosted on that box stopped being accessible on the network. I determined there were two underlay vnics, 0 and 1, with two different IP addresses at that. I do not think it was like that before the update.
16:36:59 There is only one underlay nictag, however.
16:38:14 I deleted one of the underlays from NAPI, removed the VM's fabric nic, rebooted the compute node, and it went back to only underlay0. Then I added a new NIC to the VM, but it refuses to work. I am at a loss.
16:40:20 When I run `dladm show-overlay` I see the VM nic set up correctly, and `vmadm get VM | json nics` looks correct, but you cannot ping the VM. I went in under single-user mode in the guest and enabled networking (Ubuntu 22.04); the nic looks right, but I cannot ping anything on the fabric from within the guest.
16:40:49 I can ping other hosts from the GZ on the underlay VLAN, so it can't be an MTU/VLAN issue.
18:11:45 Okay, I found the problem with the overlay in portolan. Now to figure out how to fix it.
18:51:22 Anyone have any idea how to reassociate an underlay nic with a compute node uuid in portolan?
18:51:50 Everything looks to match other working compute nodes in sdc-napi.
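The triage steps described above can be sketched as a short checklist run from the compute node's global zone. This is a hedged outline, not a transcript: `VM_UUID` and the peer underlay address are hypothetical placeholders, and these commands only make sense on a SmartOS/Triton compute node.

```shell
# Sketch of the diagnostics from the log, run in the compute node's
# global zone. VM_UUID and <other-underlay-ip> are placeholders.
VM_UUID=<vm-uuid>

# 1. Check that the overlay device backing the VM's fabric nic exists
#    and is in a healthy state.
dladm show-overlay

# 2. Check the VM's nic config as vmadm sees it (fabric IP, vnet, tag).
vmadm get "$VM_UUID" | json nics

# 3. Ping another host's underlay address from the GZ to rule out
#    plain MTU/VLAN problems on the underlay network.
ping <other-underlay-ip>
```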
19:38:15 For future reference: `portolan add-underlay --cn <cn-uuid> --ip <underlay-ip>`
19:38:22 Boom, back in business.
19:42:43 nice
19:43:33 Odd that portolan's PATH doesn't include the correct node path, though.
19:43:52 I slapped a symlink in /opt/smartdc/portolan to /opt/smartdc/portolan/build/node.
19:44:09 Otherwise, the "portolan" command died with "/usr/bin/env: no such node".
19:44:47 Too bad there's no "portolan list-underlay"
19:44:59 or help lmao
19:45:07 barfield: No idea what you're talking about there. If portolan didn't have the right path to node, then the service would be completely broken.
19:45:22 I think that smf calls the correct path
19:45:29 but bash login doesn't
19:45:38 well
19:45:41 let me rephrase that
19:45:42 It's not supposed to.
19:45:54 /opt/smartdc/portolan/node just doesn't exist
19:46:08 but I found it in /opt/smartdc/portolan/build
19:46:22 so `cd /opt/smartdc/portolan; ln -s build/node`
19:46:33 makes `portolan` and `portolanadm` work
19:47:14 Ah.
19:47:35 Here is the PATH:
19:47:36 /usr/local/sbin:/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/sbin:/usr/bin:/sbin:/opt/smartdc/portolan/node/bin:/opt/smartdc/portolan/node_modules/.bin:/opt/smartdc/portolan/bin
19:48:31 Good thing I've spent some time playing with sdcnode recently lol
19:49:55 And here it is:
19:49:56 /usr/bin/ctrun -l child -o noorphan /opt/smartdc/portolan/build/node/bin/node --abort_on_uncaught_exception /opt/smartdc/portolan/server.js &
19:50:07 /opt/smartdc/portolan/smf/method/portolan is what starts the SMF service.
19:53:26 Created TRITON-2359.
19:53:43 That was fast!
20:11:23 sdc-portolan#9
20:13:09 TritonDataCenter/sdc-portolan#9
20:13:09 https://github.com/TritonDataCenter/sdc-portolan/issues/9
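The symlink workaround above can be demonstrated generically. In this sketch a scratch directory stands in for /opt/smartdc/portolan (which only exists inside the portolan zone), and a stub script stands in for the sdcnode runtime that ships under build/node.

```shell
# Generic demo of the symlink fix from the log. $root stands in for
# /opt/smartdc/portolan; the stub "node" script stands in for the
# sdcnode runtime that actually lives under build/node.
root=$(mktemp -d)
mkdir -p "$root/build/node/bin"
printf '#!/bin/sh\necho ok\n' > "$root/build/node/bin/node"
chmod +x "$root/build/node/bin/node"

# The fix: link build/node up one level, so the $root/node/bin entry
# that portolan's PATH expects resolves to the real runtime.
(cd "$root" && ln -s build/node)

"$root/node/bin/node"   # now found via the symlink; prints "ok"
```

In the zone itself the equivalent is `cd /opt/smartdc/portolan; ln -s build/node`, after which the `portolan` and `portolanadm` wrappers can find `node` on the PATH quoted in the log.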