00:49:21 AmyMalik, We all are! Some are just luckier than others.
00:50:55 I mean, Xfwm4 and Fluxbox cause a memory leak on my Xorg, Openbox instead stands on the CPU core, and I just had to patch lighttpd so it would support `SO_SETFIB`.
00:52:45 Interesting that you mention a memory leak since I am hitting one in a tool (which I mentioned above and don't want to trigger the port maintainer by mentioning it again until I can spend some time looking into it). It does make me wonder now if something deeper is happening regarding memory.
00:53:48 Why does lighttpd need SO_SETFIB? That doesn't sound like the result of a curse though. No plague of locusts. No rain of toads.
00:59:01 rwp, It doesn't. I do, because I have two default routes.
01:01:19 rwp, I was offline for 15 minutes until 09:31 UTC due to network problems (self-caused); how "above" did you mention? Feel free to DM me if needed.
01:22:32 do you really need to set the fib on the socket level, or is process level via setfib(1) enough?
02:09:31 nimaje, Essentially it was easier, for me, to do it this way, than to try to run two copies of lighty, which would've needed me to maintain two entirely separate config files.
05:01:20 AmyMalik, Re: how "above" did you mention? "15:50 -0700 I think that I am cursed." But I didn't see anything cursed before that in the scrollback.
05:02:14 I have successfully avoided needing to deal with multiple upstream routes. I think if I did need to deal with it I would deal with it at the router level. Maybe use BGP for it.
05:08:04 rwp, I can't find out which tool you're hitting a memory leak in.
05:09:17 multitail on 14.3R tailing about 5 very active log files. Check out the swap use graphs: https://www.proulx.com/tmp/multitail-memory-leak.png
05:10:15 The saw-tooth is when I quit and restart it. In between if I leave it running then over the course of a week the program just eats up memory. Nom, nom, nom.
05:12:37 that's not ideal...
05:13:30 It does not eat memory on other systems. Easy workaround is to 'q' and restart it. So it is no big deal. And when I get some free time maybe this next week it should be an easy debug.
05:14:20 my first instinct is to think that it's either something about scrollback (if it stores that in memory) or something about how it accesses the files
05:15:06 It was really only a problem before I was aware of it happening. The system ground itself into a completely out of memory state and then had some failing processes. Now that I am aware of it I make sure I don't put it into that out of memory state. multitail is how I monitor the AI Scrapers and other botnets hammering away.
05:23:08 confession: I just use base tail(1) -F pointed at my logs
05:23:31 i liked splunk.
05:27:15 i mean, centralized logging in general
05:28:21 makes sense
05:29:00 my only "centralized logging" is non-real-time reports (/etc/periodic)
09:15:19 pkubaj, hi! can you test build on ppc* sysutils/mods? https://bugs.freebsd.org/292305
09:16:05 Thanks!
09:21:27 VVD: go is currently not supported at all on freebsd/powerpc
09:21:42 there's a wip for that
09:21:51 ok, thanks
09:22:26 if go ends up supported, it will be only 64-bit anyway on power
11:46:44 any guides on configuring a freebsd laptop for use in dual-stack, ipv6-mostly and ipv6-only networks with clat? so if the network has some NAT64 I want to use it, but if there is none then no client side translation should take place
13:45:03 nimaje: it just works
13:46:04 I mean FreeBSD in a NAT64 + DNS64 environment
13:52:02 i think nimaje is asking about doing CLAT based on the presence of a PREF64 advertisement, which isn't (as far as i know) currently supported
13:53:57 i'm not sure we support CLAT in that way at all, actually... ipfw has some support for it, but from reading the manpage, i had the impression it was intended for use on a CPE router, not a single end node
13:56:17 it might be interesting to look at how Darwin implements this, perhaps there's some code we could steal
13:58:59 isn't darwin closed?
14:01:22 <[tj]> https://github.com/apple-oss-distributions/xnu
14:05:25 okay alright thx for the tip
14:25:18 mzar: exactly as ivy says, I want dynamic support for NAT64 based on what dhcp tells my system. As I understand the ipfw man page I could configure it statically to just always do CLAT, but then what if I'm in a dual-stack network that has no NAT64? (Or in an IPv4-only network, but I guess that is rare)
14:48:05 nimaje: OK, I am not using it; for me IPv6-only networks just work when I am using a laptop with FreeBSD
14:51:45 I am implementing them with dns/unbound using its dns64 module and NAT64 on the gateway
14:51:48 probably configured with DNS64? and what about IPv4 literals? (that's why you want CLAT, and as a bonus you don't need DNS64 any more) and what about IPv6-mostly networks?
14:52:33 has anyone tried the rtl8221ce driver on freebsd15?
15:59:30 You taking a poll, or did you have a question?
16:32:45 nimaje: please let me know if you try ipfw CLAT and get it working for local applications, because i was very confused about how you would do that based on the manpage
16:33:47 nimaje: i was thinking about adding the nat64 prefix as an interface attribute (accessible via netlink/ifconfig, like on macOS) in the hope someone might write the necessary userland parts to make it work
16:43:14 ivy: My bad.
16:43:30 I wish there were an easy IRC-undo button sometimes
16:46:03 I think I'm in an ipv6-mostly network here, so I should try to statically configure CLAT for tests. The nat64 prefix as an interface attribute seems like a good idea; I'm definitely on board for testing, not so sure about writing code
16:56:14 if ipfw can do this, we could potentially even add a new syntax to use the auto-configured nat64 prefix for it
17:02:44 wavefunction, IRC and e-mail have that advantage: you cannot unwrite your words.
17:04:18 nimaje: why do you want CLAT ?
17:05:34 AFAIR CLAT is implemented only in Android
17:06:33 it is supported by Android, iOS, macOS, and Windows (the latter only on LTE for now, but they're testing support on Ethernet)
17:06:42 Windows ?
17:07:35 https://techcommunity.microsoft.com/blog/networkingblog/windows-clat-enters-private-preview-a-milestone-for-ipv6-adoption/4459534
17:08:12 Preview, thanks for sharing ivy
17:08:51 it's been supported on LTE for years since a lot of mobile carriers require it, the new feature is supporting it on Ethernet
17:10:03 ha.. first you need to find a mobile carrier with ip6 support
17:10:37 and in my country only one of four seems to be competent enough
17:21:02 v6-only-preferred aka DHCP option 108 seems to be respected by Android phones for quite a while now
17:24:01 we are sending it to wireless clients, and Android phones after receiving it don't negotiate a DHCP lease further
17:26:48 mzar: because clat makes sure that stuff works, no matter where the ip addresses come from. DNS64 only works if DNS is used and you have to make sure that DNS64 is used
18:09:01 nimaje: CLAT will only work if the router, the ip6 gateway, implements NAT64, so yes, you can probably CLAT by hand, using either IPFW or PF
18:10:15 let me check
18:29:01 hm, with my test ipfw nat64clat test create clat_prefix 64:ff9b::/96 I get "ipfw: ipfw_ctl3 invalid option 160v0" and "ipfw: nat64clat instance creation failed: Invalid argument"
18:31:26 hello, does rtw88 still not work properly in freebsd 15?
18:44:04 ah, you have to load the ipfw_nat64 kernel module, but still no idea how that whole thing should work from the architecture view point, where would I add routing rules to route ipv4 traffic via that clat instance?
18:47:53 nimaje: you have confused the clat_prefix and the plat_prefix (64:ff9b::/96 is the well-known plat prefix)
18:58:54 they have to differ? can I just use anything as clat_prefix? or do I have to figure out some specific value?
19:01:00 you can theoretically use anything as long as your plat provider is able to route this address to your host
19:18:54 ok, seems like ipfw nat64clat is only meant for 464xlat, not to have a host-level clat to have only ipv6 on the host with a nat64 in the network to make stuff like ping 9.9.9.9 still work
19:33:12 hey is anyone available to help with a ZFS question I have?
19:33:48 just ask the question directly and we may find out
19:37:16 long story short, I scraped together a server at work for a pet project and I just accepted the defaults for the ZFS install for a single drive. So the ZROOT pool is a single drive. I just got two SSDs that I want to transfer my existing pool onto. My current plan of action is to simply create a new pool with a mirror vdev and zfs send to this new pool (i plan on retiring the single drive aka the current pool after this is finished)
19:38:01 what do i need to do to properly transfer the boot info/partitions to this new pool?
19:39:59 TLDR i basically want to completely transfer the system's pool to these new drives (single mirror vdev) - is ZFS send the right tool for this?
19:41:02 nimaje: with PF it almost works (packets are translated, and replies are coming) but the software is dumb
19:43:10 so either this self-hosted CLAT NAT46 doesn't work or we need something more to convince the OS that this returned ip6 has to be translated back to ip4
19:44:35 if you're doing "pass out ... af-to inet6", it may be that doing af-to on out rules is less tested than on in rules (i reported one issue with this using af-to inet which i think was fixed)
19:47:10 ant-x: Yes, "cannot undo" is a feature, I agree. Maybe what I dislike is per-line history in my chat client :-D
21:04:07 thermos, zfs send would do the trick, but if you wouldn't mind still using the old drive + one of the new, it would be a lot simpler to add a new drive to the current single drive vdev, so you reuse the same pool
21:04:12 easier and probably faster
21:04:44 also remember that if you boot from there, you have to install the bootcode to the new drive too so it can boot even if the original drive is dead
21:08:20 thermos, also, there is one footgun: use "zpool attach", do *not* use "zpool add"
21:08:36 that mistake used to be non-recoverable; these days it is recoverable, but a headache
21:09:45 actually even if you want to use only the two new, the smoothest pipeline is 1) zpool attach newdrive0 2) zpool replace olddrive0 newdrive1
21:16:46 Does anyone here use Podman?
21:17:12 I'm wondering, for Linux stuff, should I do BSD -> bhyve (for Linux) -> Podman -> *, or BSD -> Podman -> bhyve -> *
21:17:30 The former is kind of what Windows / Podman Machine does, but AFAIK, podman machine isn't supported yet in BSD (unless that changed recently)
21:22:32 never heard of Podman Machine
21:22:34 funtune: thanks for the reply - yes i figured simply adding a drive to the existing pool would be easier since i could then make it a mirror, but the current drive is 500GB while the 2 new ones are 1.92TB
21:22:42 looking at using podman soon, though, to run some Docker Compose garbage
21:22:47 hodapp: https://docs.podman.io/en/latest/markdown/podman-machine.1.html
21:22:53 Same as Docker Machine, just podman.
21:23:03 TL;DR it's host -> VM -> Podman instead of host -> Podman
21:23:22 Podman (and Docker, via Docker Machine) use it on Windows, for example --- Docker & Podman there use wsl2.
21:23:32 So it's basically Windows -> Hyper-V -> Linux -> Podman|Docker
21:23:37 (Windows just hides most of that)
21:23:43 EXCEPT FOR WHEN IT DOESN'T
21:23:46 * hodapp kicks Windows in the face
21:23:59 True.
21:24:06 reading through your messages, you're suggesting i make the existing pool a mirror by adding one of the drives and then simply replace the old drive and let zfs do the heavy lifting?
21:24:59 funtune: can i do this given that the original and new drives are different sizes? for some reason I have it in my head that mirrors have to be drives of the same size
21:27:08 Is this how i transfer the bootcode: gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2? This is from the Handbook section 22.3.5 - if so, do i simply replace 'ada2' with both of the drives I would be adding?
21:30:53 thermos, as long as the old drive is smaller, the size mismatch is not a problem
21:31:27 you can tell zfs to expand to the full extent of the drives afterwards
21:34:10 for the bootcode, that looks correct to me yeah. Yeah you'll have to run the bootcode command twice - attach newdrive0 - bootcode newdrive0 - replace olddrive0 newdrive1 - bootcode newdrive1
21:35:35 and zpool set autoexpand=on poolname *should* expand the drives to the full extent
21:36:15 wavefunction, you dislike per-line history? You can avoid using that feature, can't you?
21:38:06 thermos, do you use swap? You may want to set up mirroring for the swap partitions too, but using GEOM raid instead of zfs, since swapping on zfs is strongly discouraged
21:38:07 okay thank you much funtune, I'll give it a shot. Much appreciated
21:39:28 if so you'll probably want to format the drives before attaching the partitions to zfs, so that you get whatever size swap partition you wanted
21:39:40 gpart show shows 4 partitions on the current disk: 1- efi 2- freebsd-boot 3- freebsd-swap 4- freebsd-zfs
21:40:22 right. So just redo the same scheme on the new drives, and zpool attach the partition corresponding to freebsd-zfs to the pool
21:40:38 if you want to keep having swap, I mean
21:41:10 you can sort out the GEOM raid mirror for the swap partitions later, it can wait
21:42:14 okay so i should create a partition table on the new drive before zfs attach, where 1,2,3 are the same size as the partitions on the current drive and the 4th partition is the remainder of the drive to attach via zfs
21:42:30 precisely so, thermos
21:43:17 if you want to be really cool and safe you can also give the partitions gpt labels and use attach gpt/label1 instead of attach newdisk0 etc
21:43:31 do i need to dd the partition's contents from the OG drive to the new one? or does something else take care of that like gpart bootcode - or does gpart bootcode only update the 'freebsd-boot' partition?
21:43:59 thermos, is this a bios system or efi?
21:44:23 Dell r640 - pretty sure I have it using efi
21:44:57 okay - I have no first hand experience with that, the server I did this to was bios, and I think it might be different
21:45:04 better check the handbook about that
21:45:36 I mean, just the bootcode part
21:46:47 Okay will do. Do you think I would need to make this partition table on both of the new drives as I attach/replace them?
21:47:06 yeah you should make partition tables on both new drives before you attach/replace
21:48:10 Okay, so really the only question I think left is: what exactly does gpart bootcode do for me and what do i need to manually do - and I should probably see what i can find in the handbook/man pages
21:48:52 thermos, I'm not sure what gpart bootcode does on efi systems :/
21:50:28 funtune: thanks again. I suppose I could force the r640 to use BIOS - it seems like you had something you were going to mention if I was on a BIOS system, anything in particular that you had to deal with on BIOS?
21:51:27 The desired configuration is that the system should be able to boot even if you yank a drive. That works fine with bios if both drives have bootcode. But with EFI, it's kind of weird since the EFI system partition needs to be FAT, and I'm not sure if any given machine supports giving it raid
21:52:17 maybe it works fine? But if only one drive has an EFI system partition, certainly the system can't boot without it...
21:54:04 thermos, it's all good on a bios system, nothing more to say :)
21:55:53 perhaps somebody else knows if EFI implementations are usually able to deal with GEOM raid under the EFI system partition... or similar?
22:06:59 okay thanks funtune, I'm gonna read up on the gpart bootcode - if you're interested I can let you know what I end up finding out
22:07:56 great thermos! If you want to call my attention to it, make sure to spell my name without the extra n :)
22:08:11 I'd love to know how it turns out
22:08:44 ohhh I was wondering what was going wrong in my mentions, LOL I scanned your name multiple times and never realized. Will do
22:09:00 you're not the first, no worries :D
22:50:26 futune: the disks I am adding are newer (relative to the current disk) and they report 4096 block alignment - whereas the OG drive does not. I would like to respect this and set 'ashift=12' for ZFS but the docs say this can only be done at pool creation or when adding a vdev
22:51:07 so if I attach/replace I assume that I won't be able to change this since as far as ZFS is aware, i'm not actually adding a vdev right
22:59:47 thermos, I was right there with futune and thinking that replacing the new drives into the system was the best way, but with both UEFI and AF partitions perhaps it is best to install fresh on the new and zfs send|recv from old to new. Mason has some nice documentation on this area, let me point you to it.
22:59:56 thermos, https://wiki.freebsd.org/MasonLoringBliss/ZFSandGELIbyHAND
23:00:27 If nothing else it is good general documentation on this topic and good to have available.
23:02:19 Okay thanks rwp, I'll take a look now. After reading the 'gpart' manpage I found an easy way to transfer the partition tables is via 'gpart backup da2 | gpart restore -F da0 da1'.
23:10:14 If the drives being backed up are 512 byte and the new drives being created are 4K AF then that won't work, unfortunately.
23:14:53 thermos, I'm afraid that's accurate - you can not change ashift on the vdev if you use attach/replace :/
23:15:34 but do check if the vdev was actually provisioned with ashift=9, it might have been done with 12 for future-proofing (precisely because of this scenario)
23:16:33 Good point futune!
23:16:46 thermos, You can somewhat reuse a gpart backup of a 512-byte disk for a 4K disk by dividing each of the numbers by 8 and then using the modified backup.
23:17:49 yeah the results after backup/restore had some '- free -' spaces between partitions, I assumed that 'gpart' was just making stuff work behind the scenes
23:18:44 What are the correct keys to use for pkg update on a 15.0 pkgbase installation? I've got some using /usr/share/keys/pkgbase-15 and ports /usr/share/keys/pkg
23:19:22 the current zroot pool is encrypted with GELI encryption - if i understand correctly, this is simply a layer on top of the underlying ZFS filesystem?
23:20:27 thermos, almost certainly *under*, by which I mean that the pool is created on the GELI plaintext abstraction, i.e. zpool -> geli -> physical drive
23:21:50 Since "zpool get ashift zroot" just says "default" here I suggest "zdb -C zroot | grep ashift" to get the current ashift value for the pool.
23:25:45 thanks for the clarification futune, and rwp I was just trying to figure this out as mine says '0 default' so thanks! for some reason 'zdb' says "can't open 'zroot': no such file or dir" so I'm looking into this..
23:28:22 ah found it at '/boot/zfs/zpool.cache' vs the expected '/etc/zfs/zpool.cache'. Turns out I'm in luck! The pool is already using ashift=12. Now I just need to get this partitioning funny-business sorted.
23:31:08 thermos, just to have it in the toolkit: I guess I found the same invocation as you by searching, but the one rwp gave is nicer because it doesn't refer to any cache file!
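The divide-by-8 trick mentioned above (converting a gpart backup from 512-byte sectors to 4K sectors) can be sketched with a little awk. The partition table below is a made-up example, not thermos's actual layout; fields 3 and 4 of a `gpart backup` line are the start offset and size in sectors.

```shell
# Hypothetical example of rewriting a gpart(8) backup taken from a
# 512-byte-sector disk so offsets and sizes are in 4096-byte sectors.
backup_512='GPT 4
1          efi       40     409600  efiboot0
2 freebsd-boot   409640       1024  gptboot0
3 freebsd-swap   410664    4194304  swap0
4  freebsd-zfs  4604968  972572672  zfs0'

# Leave the header line alone; divide start (field 3) and size (field 4) by 8.
backup_4k=$(printf '%s\n' "$backup_512" | awk 'NR == 1 { print; next }
    { $3 = int($3 / 8); $4 = int($4 / 8); print }')
printf '%s\n' "$backup_4k"
```

The result could then be fed to `gpart restore`, but note this only shows the arithmetic: starts that are not multiples of 8, or that land inside the 4Kn GPT metadata area, would still need manual realignment.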
23:34:24 Agreed, however simply copy-pasting rwp's suggestion I got this error: "zdb: can't open 'zroot': No such file or directory". I looked at the docs for 'zdb' and found that it looks for a .cache file in /etc/zfs/ yet mine was in /boot/zfs (couldn't tell you why)
23:35:01 so i just had to add a '-U /boot/zfs/zpool.cache' at the end ;)
23:36:56 futune: since these disks are much larger than the original disk, do you think I should zfs-attach the partition of the new disk at the same size as the original one? Then after attach/replacing and phasing the original one out, i 'gpart resize' the zfs partition on both of the new disks?
23:38:47 thermos, the partition should be the final size you want from the start
23:38:57 just skip the gpart resize step
23:39:06 zfs doesn't mind
23:39:16 10-4
23:40:29 I guess you have to stick the geli layer on there before you attach though
23:41:44 thanks for reminding me, was a few keystrokes from attaching haha
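Pulling the whole attach/replace thread together, the pipeline discussed above might look roughly like the sketch below. All device names (ada0 for the old 500GB disk, nda0/nda1 for the new 1.92TB disks), the partition indices, and the bare geli invocations are assumptions for illustration; this is a summary of the steps named in the conversation, not a tested procedure.

```shell
# Assumed layout per the discussion: p1 efi, p2 freebsd-boot, p3 freebsd-swap,
# p4 freebsd-zfs (sized to its final extent from the start), pool name zroot.
# Partition both new disks first, then layer geli under the pool:
geli init nda0p4 && geli attach nda0p4      # repeat for nda1p4
zpool attach zroot ada0p4.eli nda0p4.eli    # turns the single-drive vdev into a mirror
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 nda0   # BIOS boot case
# wait for the resilver to finish (zpool status), then swap out the old disk:
zpool replace zroot ada0p4.eli nda1p4.eli
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 nda1
# finally let the pool grow into the larger partitions:
zpool set autoexpand=on zroot
```

On an EFI machine (the r640 as discussed), the gptzfsboot step would instead involve populating each disk's FAT efi partition with the loader, a point the conversation left unresolved.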