14:10:17 looking for a NUC to install SmartOS, is the 10th gen still not compatible? haven't been able to find any new info regarding this since 2020
15:01:35 they should boot now
15:02:42 thank you! is there any config needed on install, or does it work out of the box?
15:03:38 should just be out of the box... though some bits like the onboard wireless may not work (wired ethernet should be fine though)
15:04:02 good to know. thanks again
17:28:47 hi there, i am looking to upgrade a SmartOS server and researching an Intel® Xeon® Silver 4310 setup, but see there are some issues with KVM VMs, specifically:
17:28:47 https://www.illumos.org/issues/14862
17:28:48 We have a few KVM instances and I was wondering if there is any official documentation on converting these to bhyve, or if a rebuild is recommended? My searches didn't turn up much, appreciate any thoughts
18:03:12 Yeah... I wish I had some Ice Lake HW to reproduce & fix this bug.
18:04:53 Could possibly get you access to it if you're interested
18:23:03 I'm imagining I'd need kmdb at the host AND the guest. (Assuming I couldn't find a possible fix via source-diving... in THAT case such a remote-access scenario becomes more plausible.)
18:27:07 I figured you'd need host access. I'll hit you up directly. Thanks
19:26:56 Hi all. I am in deep trouble. We are updating Triton from version 20190912T054018Z to the latest version 20220825T001415Z. We have installed the latest platform image on the headnode and restarted the server. Then we updated sdcadm itself, the GZ tools, and the agents. Everything went smoothly, but then we hit a roadblock. We are trying to update manatee, but it failed, and we cannot continue the update process.
19:27:53 Manatee primary and sync are ok, but the async is in a failed state. On the async zone we have run "svcadm disable manatee-sitter" and "manatee-adm rebuild", but without any success.
19:30:53 tealirc: What error are you getting?
19:31:16 Is the async the only issue you have, or are there other things wrong?
19:33:32 bahamat: We do not get any errors, but on the other hand nothing happens(?).
19:34:01 The rebuild process is asynchronous, so maybe it's running already.
19:37:48 I'm not sure. The async zone is still in a failed state. We ran the command "manatee-adm rebuild" five hours ago.
19:38:03 check your manatee-sitter log
19:39:46 If the only thing wrong is that your async is failed, then you don't have anything to worry about. The system works fine without an async; you just need to make sure it's fixed before you attempt any upgrades of manatee, or before other manatee instances fail.
19:46:37 Indeed. The system works fine, but we cannot update the Triton core components and platform images, because the manatee async zone is in a failed state.
19:47:52 OK, well the first thing is that you'll need to figure out what state the rebuild process is in. To do that, you need to check the sitter log file.
20:05:51 tealirc: what was the result of "manatee-adm rebuild"? did it finish?
20:08:07 neuroserve: No, it is still in an "infinite" loop.
20:08:47 I'm just collecting logs.
20:19:20 Logs: https://pastebin.com/mkcNzqAa
20:22:29 You should definitely not have that many zfs recv processes running.
20:30:20 I am not sure if this matters, but the manatee primary version is 20220825T001415Z, and sync/async are version 20190912T054018Z.
20:31:34 No, that shouldn't particularly matter.
20:32:29 You'll need to figure out what's going on with zfs there. The sitter is unable to get the status of the recv, so it never sees it as complete.
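A rough sketch of how one might check what the rebuild is actually doing, run from inside the failed async manatee zone. The exact subcommand and whether bunyan is on the PATH depend on the manatee image version, so treat these as illustrative rather than exact:

    # Show the cluster topology and each peer's replication state
    manatee-adm show            # older manatee images use: manatee-adm status

    # Follow the sitter's bunyan-formatted log to see where the rebuild is stuck
    # (bunyan may live under /opt/smartdc/manatee/node_modules/.bin if not on PATH)
    tail -f "$(svcs -L manatee-sitter)" | bunyan

    # Count running zfs receives; there should be at most one
    pgrep -fl 'zfs recv'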
20:33:09 One possible reason that it can't get the status is because you have so many zfs recv processes running. There should be exactly 1.
20:34:07 If you ran `manatee-adm rebuild` 34 times, that wasn't the best choice.
21:22:43 Could this log point to a problem? https://pastebin.com/JXDgJq9Y
22:03:03 bahamat: did you find the same issue with piranha? Or perhaps I was on a wrong version?
22:09:52 Smithx10: travisp was working on that.
22:10:02 tealirc: That doesn't provide any additional information.
22:25:37 did a zone stop/start and it seems to automatically begin the zfs receive operation without "manatee-adm rebuild". And it seems that it keeps starting new zfs recvs, leaving the previous ones still running
22:41:50 there are also a lot of ECONNREFUSED errors all over the log
22:42:18 That's because the zfs dataset hasn't been fully restored, so postgres is not running yet.
22:43:41 It will automatically start a recv if there's no data.
22:43:50 ah
22:44:17 from where is the send supposed to connect?
22:45:15 It will connect to the manatee-backupserver service on the next upstream peer. In this case, that will be the sync.
22:46:01 There's another underlying problem though. In the sitter log it says the progress is null. That should never be the case.
22:46:19 So you've probably got some other underlying condition that's breaking zfs.
22:46:52 that might also explain why there are so many zfs recvs running.
22:47:02 Some possible places to look:
22:47:08 * disk full?
22:47:16 * delegated dataset exists?
22:47:31 * dataset is properly delegated to the zone?
22:53:29 should /data/manatee exist before the receive?
22:53:41 /data is there
22:54:01 Yes, the dataset needs to exist.
22:54:30 ah... there's our problem, I presume
22:54:33 Use zfs list, don't just look for /data/manatee
22:54:48 It's mounted at /manatee/pg, not at /data/manatee
22:55:04 yeah, I see the uuid and uuid/data sets
22:55:06 nothing more
22:55:56 Use zpool history in the global zone to see if you can figure out who deleted it
22:57:05 I see it was destroyed with -r
22:57:15 I wonder if the upgrade script could've done it?
22:57:27 No, that's not how it works.
22:58:41 I would stop the zone and use sapiadm from the headnode to create a 4th manatee zone, and just Let It Be.
22:58:48 It should come up on its own automatically.
22:59:07 In any case, the sitter should be able to track progress.
22:59:26 If it works, then use sapi to destroy the defunct zone.
22:59:35 If it does not work, then you've got a bigger problem on your hands.
23:03:22 oki... we shall try that
23:03:32 thanks for the help :)
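A sketch of the checks and the replacement provisioning suggested above. The UUID placeholders and the sapiadm invocation are assumptions to adapt, not a recorded session; verify the provisioning syntax with sapiadm's help output on the headnode before running anything:

    # In the global zone on the compute node: does the delegated dataset still exist?
    zfs list -r zones/<async-zone-uuid>

    # The pool on SmartOS is named "zones"; look for who destroyed the dataset
    zpool history zones | grep <async-zone-uuid>

    # On the headnode: find the manatee service UUID in SAPI...
    sdc-sapi "/services?name=manatee" | json -Ha uuid
    # ...then provision a new instance of that service (syntax assumed; check sapiadm help)
    sapiadm provision <manatee-service-uuid>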