15:56:38 Hmm, booting up bhyve takes some time. Should we use the bhyve memory reservoir?
16:15:39 Good idea. We'd have to change a few things, however. Enough for a PI change at least, and ideally we should plumb it all the way up to Triton.
16:17:05 I guess using the reservoir would require a CN to be marked as an HVM ... not sure we can have zones + bhyve safely on that box...
16:17:22 or... would you specify like 50% for VMs or something
16:19:10 Smithx10: There's some design space to play with there. I can't remember off the top of my head how static (or not) bhyve's use of the reservoir is.
16:19:27 e.g. if I ask for 16GB, but only 8's available, do I get 8, or do I get 0?
16:21:24 I think it's time we plumbed it at least through SmartOS.
16:21:43 (obvs not for this week's/tonight's-branch release. :) )
16:22:21 lol, nooooo... I just bounced a machine and waited... honestly it doesn't really make life too bad
16:22:46 more just annoying lol
16:24:01 Still... I think being able to have a BHYVE zone request reservoir usage, esp. if the behavior is best-effort rather than all-or-nothing, might be useful.
16:25:14 One thing that COULD be done is to set the reservoir on a CN to sizeof(BHYVE-VMs-on-CN), with the understanding that a new BHYVE VM might not succeed in using the reservoir until the next CN reboot.
16:26:13 I RTI-reviewed it, but it was 1.5 years ago.
16:30:04 ahhh nice
17:59:17 lol someone gave in and used packer on illumos :P
17:59:17 https://github.com/jperkin/packer-plugin-qemu/tree/illumos-binary
18:02:03 Yeah, we're shifting our HVM image generation to use packer rather than a completely bespoke system that few people understand.
18:03:06 It's basically going to be packer+ansible, two systems that are well documented outside of Triton. So that should be much more approachable for anybody wanting to either contribute or run their own builds.
18:04:38 ...if I can actually get it working ;) for some reason it's not opening plugins even when building them manually, still debugging...
19:49:50 danmcd: if you set an instance to allocate from the reservoir, it will fail (immediately) if there is not enough space available
19:50:01 there is no fallback behavior today
19:50:36 the assumption being that if you choose to use the reservoir, you're already making placement decisions which you believe should be successfully fulfilled
19:51:28 Yep... post-1125amET inspection of bhyve, vmm, and even propolis has educated me. :)
19:51:44 and any memory held by the reservoir is wholly unavailable to the rest of the system
19:52:29 while there is a tentative guard rail for its upper size limit, it makes no guarantees about preventing one from wedging the system by requesting an excessively large reservoir size
19:52:49 like if you have a bunch of pages tied up in i40e or whatever, it has no visibility into that fact
19:53:27 one upside, though, is that it's fine to resize the reservoir at runtime
19:59:56 80% is the absolute top-off AIUI.
20:00:06 Funny you mention i40e there. :) :) :)
20:00:26 heh
20:01:40 yes, but 80% will absolutely wedge some (perhaps even many) systems
20:02:20 it was hard to strike a balance: ensuring safety without being overly constraining on large-memory systems
20:03:58 Yeah... understood.
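[The resize-at-runtime point above refers to the vmm reservoir control utility on illumos, `rsrvrctl`. A minimal sketch of querying and resizing it from the global zone follows; the flag names here are from memory and may vary across platform images, so check the utility's usage output on your PI first:

    # Query current reservoir state (size, free space, transient allocations).
    /usr/lib/rsrvrctl -q
    # Grow the reservoir by 8192 MiB; flags from memory, subject to the
    # ~80%-of-physmem guard rail discussed above.
    /usr/lib/rsrvrctl -a 8192
    # Shrink it by 8192 MiB; memory held by running VMs stays held.
    /usr/lib/rsrvrctl -r 8192

Per the discussion, an instance's allocation from the reservoir is all-or-nothing: asking for 16GB when only 8GB is free fails immediately rather than returning 8GB.]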
20:48:47 danmcd: do you have the gist for converting the HN to a bootable pool?
20:48:55 Got 3 HNs I gotta get off USB lol
20:56:50 Are your USB keys loader?
20:57:22 If so it should be easy. If not, your first step is to convert the USB key to loader. There is no straight-from-GRUB-to-pool-boot path that doesn't involve surgery.
20:58:37 I think I already did the conversion of the USB
20:58:57 Just required a more recent platform to boot in order to convert the pool
20:59:06 Finally got the HN on that platform
20:59:35 Are you gonna boot off `zones` or boot off of a dedicated zpool?
20:59:55 zones
21:00:20 And zones is not multi-vdev? (single raidz is a single vdev, single mirror is a single vdev...)
21:00:41 (And note, I disappear in 30 mins.)
21:00:53 `zpool list -v` is your friend here
21:02:17 https://gist.github.com/Smithx10/b7e8dbcbc68a61895ba5b6e9f37018a5
21:02:34 actually looks like they are all bootable already... might have already done this lol
21:03:03 think I just need to add the latest platform to them and remove the USBs
21:03:36 `piadm bootable` is your best diagnostic here.
21:04:11 `piadm list` is not necessarily reliable on HNs.
21:04:19 https://gist.github.com/Smithx10/b7e8dbcbc68a61895ba5b6e9f37018a5
21:05:23 Okay, as long as your CNs are EFI-booting, it looks like you just need to:
21:06:00 1.) Boot off the disk.
21:06:12 2.) Update the PI using `sdcadm platform assign ...`
21:06:28 YOU DO NOT USE piadm(8) save for `bootable -e` on HNs.
21:08:50 3.) Reboot the HN to the newly-assigned PI.
21:09:02 Missing 0: "remove USB key"
21:10:31 yea
21:10:44 I think tho if I can't get someone in there... and I change the boot order
21:10:58 I could use cfgadm to disconnect the USB devices
21:11:09 so sdcadm platform doesn't see them*
21:11:31 If you boot off the disk, `sdc-usbkey` will treat $BOOTPOOL/boot as the USB key.
21:11:57 even if there are USBs plugged in?
21:12:13 https://gist.github.com/danmcd/e8cc9f5416f40a9f418d299bcab829b7
21:12:18 nice
21:12:20 You must boot off the disk.
21:12:35 It sets an extra variable in bootparams that lets sdc-usbkey know WTF it's doing.
21:13:06 Kebecloud's HN has a dedicated boot pool SSD, as you can see from the gist.
21:14:03 If you boot from disk AND the USB key is available, you must invoke with `-u` to have sdc-usbkey try it.
21:14:33 e.g. `sdc-usbkey -u mount` will not mount $BOOTPOOL/boot, it'll actually seek out a USB key.
21:14:41 If it can't find one (like on my Kebecloud HN) it'll spew:
21:14:42 sdc-usbkey mount: error: no pcfs devices found
21:15:43 `bootparams | grep triton_bootpool` ==> No output means you booted from the USB key OR SOMETHING IS REALLY FUCKING WRONG WITH YOUR BOOT POOL. :)
21:15:54 Disk-booting output will look like:
21:16:00 triton_bootpool=bootpool
21:16:13 (or in your case: triton_bootpool=zones)
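[Pulled together, the HN steps above amount to a short checklist. This is a sketch assuming an EFI-booting HN whose system pool is `zones`; the `sdcadm platform assign` argument order and the <PI-VERSION>/<HN-UUID> placeholders are illustrative, so verify against `sdcadm platform help assign` and piadm(8) on your platform:

    # 0. Remove the USB key (or cfgadm-disconnect it so nothing finds it).
    # Confirm the pool is flagged bootable; enable it if not. `bootable -e`
    # is the one piadm operation used on HNs.
    piadm bootable
    piadm bootable -e zones
    # 1. Boot off the disk, then confirm; no output here means USB-key boot.
    bootparams | grep triton_bootpool   # expect: triton_bootpool=zones
    # 2. Assign the new platform image with sdcadm, NOT piadm.
    sdcadm platform assign <PI-VERSION> <HN-UUID>
    # 3. Reboot the HN into the newly assigned PI.
    reboot

Once booted from the disk, `sdc-usbkey` operates on $BOOTPOOL/boot unless invoked with `-u`, as described above.]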