10:04:53 Smithx10: I have a few questions! ;)
10:05:27 it builds fine for us and is available in the trunk package repo, is there any particular option or change to the build that you need that we could integrate, rather than you having to build your own?
10:06:51 are you using the pkgbuild image and 'run-sandbox' to configure the environment first?
13:26:15 jperkin: nope.... I just cloned it into a zone. Where are those instructions again for the pkgbuild image? I just pulled the latest version*
13:27:18 jperkin: I just noticed in the bind logs https://kb.isc.org/docs/aa-00508
13:44:29 Smithx10: the tl;dr is to provision the latest pkgbuild, 'run-sandbox trunk-x86_64', cd /data/pkgsrc, git pull (probably --force as the image is a bit behind at this point), then build as usual - docs at https://github.com/TritonDataCenter/pkgsrc/wiki/pkgdev:setup
13:45:28 Smithx10: it looks like there's a 'tuning' option that passes '--with-tuning=large' and increases (amongst other things) maxevents to 1024, so I'll just enable that option for the next build
13:48:11 bahama-: do we run bind in any tiny zones which https://kb.isc.org/docs/aa-01314 might negatively affect?
13:49:29 can avoid the RCVBUFSIZE and RESOLVER_NTASKS changes if necessary
14:10:23 thanks jperkin
14:10:49 maybe I'll wait then.... I'm not dropping requests that I know of, so it might just be an annoying message
15:50:17 jperkin: I don't think *we* do, but Joyent might.
15:51:22 But I think in general, tuning=large is probably a better option for the build.
15:53:12 I think that in the case of "smaller and low-end BIND servers", performance isn't going to be such an issue.
15:54:15 The minimum zone size is about 128MB (not a hard limit, but the OS in general gets cramped below that), which would be way more than a small BIND server needs.
16:52:46 Any avid SMF users around who could teach me how to make caddy reload its config with a refresh method?
16:52:53 https://bin.disroot.org/?e744e5684280917e#7C5nNEwhy5QaCLRF5FxGEHCi22W5i792rbJoz6CQq31o
16:53:15 This is what I have so far and it works, but when doing 'svcadm refresh caddy' it tries to start a new caddy process somehow
16:56:57 And it doesn't seem to clean up the old one, so it complains:
16:56:58 Error: loading initial config: loading new config: starting caddy administration endpoint: listen tcp 127.0.0.1:2019: bind: address already in use
16:58:11 could that be a bug in your caddy config file?
16:58:18 because the commands look right to me.
16:58:37 it seems like it's having trouble finding the admin endpoint.
16:58:46 https://caddyserver.com/docs/command-line#caddy-reload
17:01:00 > Because this command uses the API, the admin endpoint must not be disabled.
17:01:34 though it does look like your new config has the admin endpoint defined...
17:02:21 nahamu (LIBERA-IRC): thanks for taking a look
17:02:22 https://bin.disroot.org/?90404222a676dd63#9a3kxsebhQKwJtvRQkYrxPf7D5Khi9hWki3smhXgpgty
17:02:33 These are the caddy logs for more context
17:03:19 When checking running processes after trying the refresh method I can see two caddy processes, so it seems that SMF / caddy has trouble cleaning up / reloading the config
17:04:45 oh!
17:05:12 It seems like the original caddy process might be having trouble re-binding to that same admin address.
17:05:37 I bet the error message is coming from the running server process, not the reload process.
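(A minimal sketch to make the failure mode above concrete: a C program that binds the caddy admin address twice, the way a reloading caddy plus the still-running server would. The 127.0.0.1:2019 address comes from the error in the log; the program itself is illustrative, not anything caddy or SMF actually runs.)

    /* double-bind demo: second bind of 127.0.0.1:2019 fails with EADDRINUSE */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int
    bind_admin(void)
    {
        struct sockaddr_in sin;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return (-1);

        memset(&sin, 0, sizeof (sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(2019);                 /* caddy admin port */
        sin.sin_addr.s_addr = inet_addr("127.0.0.1");

        if (bind(fd, (struct sockaddr *)&sin, sizeof (sin)) != 0) {
            int save = errno;           /* keep bind's errno past close() */
            (void) close(fd);
            errno = save;
            return (-1);
        }
        return (fd);
    }

    int
    main(void)
    {
        int first = bind_admin();       /* succeeds: prints "ok" */
        int second = bind_admin();      /* fails: "Address already in use" */

        printf("first:  %s\n", first < 0 ? strerror(errno) : "ok");
        printf("second: %s\n", second < 0 ? strerror(errno) : "ok");
        return (0);
    }

(Run as-is, the second bind reports "Address already in use" -- the same message as in the caddy error: the old server process still owns the socket when the new one starts up.)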
17:06:18 You could test my theory by running the reload command manually and seeing that the logs still show the error message, as opposed to that error showing up in the shell where you run the reload.
17:06:48 which means it's an issue with how caddy is trying to perform the reload. It might be making some sort of Linux-y assumption about being able to share the port binding.
17:07:39 something like SO_REUSEPORT
17:08:48 which it does look like illumos defines.
17:11:22 SO_REUSEPORT had to be patched out as it's hidden behind _KERNEL
17:11:28 so it's probably that
17:12:13 Ah, someone who knows what they are talking about! ;)
17:12:33 Instead of me just stabbing in the dark...
17:16:19 Thank you for the insights jperkin (LIBERA-IRC) and nahamu (LIBERA-IRC)
17:16:47 So for the moment, caddy reload as it's implemented doesn't work on illumos and only a restart would work, correct?
17:17:01 https://www.illumos.org/issues/12455 looks stuck
17:17:07 That would be my guess. :(
17:18:10 Would have to find the relevant golang code to confirm the culprit.
17:19:54 https://github.com/caddyserver/caddy/blob/master/listen_unix.go#L104-L105 looks pretty guilty. I'm surprised you don't get a log message about it.
17:21:29 https://github.com/NetBSD/pkgsrc/blob/trunk/www/caddy/patches/patch-listen__illumos.go
17:22:51 haha, there it is.
17:53:11 Hello folks. We in illumos are getting close to switching default compilers. One of the switching components is moving from gcc10.3 (which we've built with DEBUG on every SmartOS release since early 2022) to gcc10.4.
17:53:57 This PI:
17:54:00 https://kebe.com/~danmcd/webrevs/platform-20230227T180744Z.tgz
17:54:00 MD5 == 70c2c165c9e4b66b649335423ba24f41
17:55:24 was built with gcc10.4, non-DEBUG. I've been testing a slightly older version (missing some small upstream merges) and it passed the tests the same as a stock build of that same slightly older version.
17:55:58 This one is going on my Kebecloud CNs and the piadm(8)-testing long-lived SmartOS VMware VM on my workstation.
17:56:44 I can burn an ISO or USB image for those so inclined. piadm(8) users can use the .tgz PI, as there are no impactful loader changes between that and the last release.
18:31:30 nahamu (LIBERA-IRC): jperkin (LIBERA-IRC) sent a kind mail to Hokuto asking if I could be of help
19:51:44 @danmcd https://gist.github.com/Smithx10/9694a1369e7b26d425aa8263b7baa2d3 hung dladm add-aggr
19:57:51 ahh
19:58:06 can you do 'findstack -v' instead?
19:58:14 (mostly want to see the args to ddi_dma_mem_alloc())
19:58:36 i'm working on a fix for this that should hopefully make its way upstream
19:59:01 just multiple fires have prevented me from finishing the work to test it
20:04:16 https://gist.github.com/Smithx10/989c23629ea174fc21254f8bc637b540
20:04:50 @jbk you run into this at your current job?
20:05:44 yes
20:12:46 yeah -- basically the driver's trying to allocate a bunch of 9216-byte chunks of DMA memory, which, unlike regular kernel or application memory that has the benefit of the CPU's MMU,
20:13:07 needs to be physically contiguous.. which on a system that's been running for a while can take some time
20:15:10 my proposed fix is to just do chunks of 2k, since the card can split an incoming packet across multiple buffers (and 2k would still be able to handle a 9216 packet w/o hitting NIC limits on # of segments)
20:16:20 just the driver needs to be able to handle it
20:16:31 I don't think this system ever finds that lol
20:16:35 because it never gets added
20:16:49 it can take quite a while..
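(For context on what that ddi_dma_mem_alloc() call is fighting with, and what the 2k-chunk idea changes: a rough sketch using the standard illumos DDI allocation calls. alloc_rx_chunk(), RX_CHUNK_SIZE, and the elided attribute structures are hypothetical illustrations, not the actual driver patch.)

    #include <sys/types.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    #define RX_CHUNK_SIZE       2048    /* per-buffer size */
    #define RX_CHUNKS_PER_PKT   5       /* 5 * 2048 >= 9216 bytes */

    /*
     * A 2k buffer fits within a single page, so the allocator never
     * has to hunt for runs of physically contiguous pages the way a
     * 9216-byte request does on a fragmented system.  The NIC then
     * chains up to RX_CHUNKS_PER_PKT of these per jumbo frame.
     * The DMA/access attributes would come from the driver proper.
     */
    static int
    alloc_rx_chunk(dev_info_t *dip, ddi_dma_attr_t *attrp,
        ddi_device_acc_attr_t *accp, ddi_dma_handle_t *hdlp,
        ddi_acc_handle_t *acchdlp, caddr_t *vap)
    {
        size_t real_len;

        if (ddi_dma_alloc_handle(dip, attrp, DDI_DMA_DONTWAIT, NULL,
            hdlp) != DDI_SUCCESS)
            return (DDI_FAILURE);

        if (ddi_dma_mem_alloc(*hdlp, RX_CHUNK_SIZE, accp,
            DDI_DMA_STREAMING, DDI_DMA_DONTWAIT, NULL, vap,
            &real_len, acchdlp) != DDI_SUCCESS) {
            ddi_dma_free_handle(hdlp);
            return (DDI_FAILURE);
        }
        return (DDI_SUCCESS);
    }

(The "driver needs to be able to handle it" part is everything around this: ring setup and completion handling have to cope with one packet spanning up to RX_CHUNKS_PER_PKT descriptors instead of exactly one.)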
20:18:06 I'm guessing if there are VMs on the system, it probably makes it even worse, since all of the VM memory is effectively off limits (IIRC)
20:18:26 so the system has to effectively 'defrag' the memory that's left
20:19:12 I suppose if our IOMMU support were better, that might avoid this as well (though I have no idea what the scope of that work would entail)
21:55:56 bahamat: where is a good place to see the documentation about creating triton rbac v2 cross-account manta roles / policies?
21:57:58 https://github.com/TritonDataCenter/sdc-cloudapi/blob/master/docs/index.md#rbac-users-roles--policies ?
22:07:33 Smithx10: Yeah, that's it. You just do type=account instead of subuser.
22:07:43 yea
22:07:59 And you can leave out the ID, triton will figure that out for you.
22:08:01 triton rbac apply kinda forced me to figure out the objects, glad it was in cloudapi