01:06:21 Tenkawa: /38
01:06:26 oops
01:06:35 multitasking :(
01:06:42 Haahaa np..
01:07:26 I'm trying to track down a mystery kerberos library hiccup
02:13:54 hi all i am God Husband and The Eathquake guy Nice to Meet you all ... . your president and prime minister too , How Life been citizen ... .
02:14:58 ai slop in irc now?
02:26:03 How would I use EXTRACT_ONLY (https://docs.freebsd.org/en/books/porters-handbook/book/#makefile-extract_only) for DISTFILES that are autogenerated?
02:38:43 tuaris: That's a fine question. You'll likely get a more informative answer from the #FreeBSD-Ports channel.
08:18:46 Hello. I have a question about setting up X with an Intel 945GM integrated graphics card. In particular, am I correct in the understanding that it requires the "legacy" Intel graphics driver?
08:19:54 Attempting to follow the instructions provided in the handbook yields a working X, but with graphical errors upon loading.
08:21:11 Should I use this https://www.freshports.org/x11-drivers/xf86-video-intel instead?
08:32:04 I'm getting essentially https://www.freebsd.org/security/advisories/FreeBSD-EN-24:09.zfs.asc on 14.3-p1...
08:32:40 0 root -8 - 0B 3056K CPU5 5 87.8H 96.21% kernel{arc_prune}
08:38:46 anyone have a rough idea of how much resource overhead jails add? like i'm wondering if 5k tiny server processes would take much more if each server process was in its own jail with access to its own port of the main system's ip
08:41:01 jails are funky chroots
08:41:32 if you are not doing VNET there's almost no overhead
08:48:37 Also, is MATE (first time using it, decided to check it out) supposed to take a while to actually load my background image?
08:49:28 I'd guess that using an almost 20-year-old system could be a factor, but it may also be normal. Wouldn't know.
08:54:24 duskgale: with an SSD it *should* be fine? SATA is older than 20 years after all
08:54:47 >With an SSD
08:54:47 No SSD in sight.
08:54:56 how large is the image then
08:55:23 I've a 120GB HDD running amd64-stable.
08:55:50 120 GB is closer to 25 years old than 20
08:56:06 it probably cannot do more than 100-120 MB/s linear
08:56:21 So, it's just my drive being slow?
08:56:36 if your background image is 10 MB+ and fragmented, yeah
08:56:47 you'd hit the IOPS limit
08:57:08 especially as you are starting your DE, which means lots of other stuff doing disk reads
08:57:52 It happens even with the default ones. I'll check I/O stats later, but the computer does start lagging tremendously under heavy disk writes.
08:57:52 duskgale: remember that with spinning rust and random read/write operations your IOPS drops down to double digits
08:58:12 compared to thousands for SSDs
08:58:49 When I was building large programs from source, a few hours in my WM would be barely responsive
08:58:54 also check smartctl output because I would not trust 120 GB spinning rust in 2025
08:59:11 especially if it is Seagate, WD, or Samsung
08:59:15 Hitachi should be fine
08:59:35 though I guess it was still IBM then?
08:59:38 do vnet jails add enough overhead that running a network-intensive server (nginx) in a jail is prohibitive?
08:59:48 kerneldove: not in my experience
09:00:08 know how much cpu overhead it would add?
09:00:16 is it like .1% or like 3% or?
09:00:26 kerneldove: for reference I am running Apache in one jail and PHP-FPM in another, and requests go haproxy -> varnish jail -> apache jail -> php jail
09:00:58 I do not really see much overhead but I did not compare it to not running in jails
09:01:21 would be great to find some comparison benchmarks
09:01:29 epair_task stays under 2%
09:01:50 It's the HDD that was presumably originally shipped with the machine. I got it with Windows, and I'm fairly sure I ran CDI to check how it's doing.
09:02:12 Unless I'm misremembering, it came out with good results.
09:02:45 I'll run some tests later. You're probably right, though.
09:05:16 duskgale: does `egrep '^(ad|ada).:' /var/run/dmesg.boot` give you a model?
09:07:53 Hold on.
09:09:30 It's a WD drive.
09:11:08 https://harddiskdirect.com/wd1200bevs-22ust0-wd-laptop-hard-drive.html
09:11:20 This one.
09:40:01 is there a command that returns how many subdirs a dir has in it, or 0 if none?
09:40:10 subdirs a dir has
09:57:56 oh no a netsplat
10:09:48 kerneldove: `find -type d | wc -l` is an option
10:10:43 i went with ls -l . | grep ^d | wc -l, is either better than the other?
10:10:56 that's wow
10:11:05 I wouldn't haha
10:11:07 ?
10:12:25 using grep for this is weird to me
10:12:44 the one downside of find in my example is it will list the path itself too
10:13:14 i just want the count
10:14:20 not sure what you mean
10:14:32 you said it lists the path itself
10:14:37 yes, the number is +1
10:14:49 so `find /tmp -mindepth 1 -maxdepth 1 -type d | wc -l` would give the number of subdirectories without recursion
10:14:59 and without counting itself
10:15:49 mindepth 1 removes that self entry
10:16:05 ah ok that 1 will work ty
10:16:39 it should also be faster than using grep
10:21:03 ok what about this, how can i count up the number of files in any dir named "foo" under "somepath"?
10:25:58 kerneldove: have you tried using find
10:31:15 i tried 2 times. 1. find . -type d -name foo -exec find {} -type f | wc -l. 2. find . -type d -name foo -exec sh -c 'find "$0" -maxdepth 1 -type f | wc -l' {}. but neither worked
10:34:01 Why do you have "-name foo" there?
10:36:15 i only wanna count files in dirs named foo
10:37:15 Makes sense.
10:38:54 you could chain with xargs probably
10:39:51 you can pass multiple paths to find
10:48:02 duckworld: actually maybe something like... find `find path -type d -name foo` -type f -maxdepth 1
11:03:55 (why use the deprecated, barely noticeable `…` syntax instead of $(…)?)
11:04:40 i got something working but not sure if it's totally right: find . -path '*/foo/*' -type f | wc -l
11:22:41 find . -path '*/foo/*' -type f -printf . | wc -c works on linux but on freebsd it says find: -printf: unknown primary or operator \n 0
11:25:42 yay! find . -path '*/foo/*' -type f -exec printf %.s. {} + | wc -c works on linux AND freebsd
11:30:38 instead of spawning many printf processes, find . -path '*/foo/*' -type f -print0 | tr -dc '\0' | wc -c is probably better (use NUL as the terminator, delete anything else, and then count the number of NUL bytes)
11:34:55 that seems even better nimaje. it works on freebsd and linux too
11:35:09 tyvm
11:38:33 comment from #bash: it won't spawn printf for every single pathname traversed, but that method might end up being faster. however, you must specify LC_ALL=C tr -dc '\0', otherwise some implementations of tr(1) will try to decode according to the effective ctype (probably UTF-8) and fail with EILSEQ.
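Pulling that thread together, a minimal sketch of the robust version (the starting directory . and the name foo stand in for "somepath" and the directory name in question):

  # Count regular files living anywhere below a directory named "foo".
  # -print0 terminates each path with NUL, so arbitrary bytes in names are safe;
  # LC_ALL=C keeps tr(1) from trying to decode the bytes as UTF-8;
  # wc -c then counts the surviving NULs, one per file.
  find . -path '*/foo/*' -type f -print0 | LC_ALL=C tr -dc '\0' | wc -c

This is the same pipeline nimaje suggested, with the locale fix from #bash folded in.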
11:39:06 pathname components may contain arbitrary bytes (other than NUL and /), so one must allow for it.
11:40:29 what do you think nimaje?
11:42:09 fwiw it works on freebsd and linux too
11:57:43 yeah, that LC_ALL=C tr … should make it more robust, and yes, the -exec printf %.s. {} + will not spawn printf for every path, but it will spawn an unknown number (bounded by the number of paths) of printf processes, so 'many', while the tr way spawns a statically knowable three processes (and that was my point)
12:05:01 ok ty!
12:17:42 nimaje: because it is hard to teach new tricks to an old pony
12:18:14 I always go like 'but $() won't work in my IRIX machine's default shell'
13:19:20 anyone have recent benchmarkings/numbers on how much overhead jails add? less than 1% or?
13:23:59 that is a weird question
13:24:28 and any overhead is going to be really specific to operations, it's not just gonna be a flat "< 1%"
13:24:40 do other container implementations add overhead?
13:25:19 e.g. if you're using Docker with a lot of layers, I think that can add overhead
13:26:09 so usual linux being crap stuff
13:27:33 more Docker being crap. if the same concept were implemented anyplace else, it'd be similarly problematic
13:28:27 or the position could be.. is it more efficient to "virtualize" an environment or buy a whole new set of equipment?
13:31:05 for the sorts of things containers are used for, it's almost always where new equipment would just be silly
13:45:43 yes, it is a valid question to ask.. what is the "overhead", with a follow-up of.. what are you trying to accomplish? and does the virtualization of your environments warrant going forward with analysis. but to each their own.
15:13:45 kevans: what do you think about an IFF_L2ONLY flag (or maybe a cap makes more sense) that prevents L3 addresses being assigned to an interface? i'd mostly like this for bridge but i wonder if there are other places it's useful
15:24:24 Anyone run into this before?
15:24:27 ld-elf.so.1: Shared object "libprivateheimipcc.so.11" not found, required by "libkrb5.so.11"
15:24:54 I can't find any references to libprivateheimipcc out there..
15:25:26 Tenkawa: are you on main? if you updated past c7da9fb90b0b you must do a clean build (delete objdir)
15:26:04 I "think" I did.. but I'll wipe it and try again
15:26:52 Thanks for the pointer
15:27:08 you also need to rebuild all ports, in case that error came from a port, i'm not sure if the builders have updated yet or not
15:27:19 (well not all ports, all ports that use base kerberos)
15:27:34 It was from a pkg... I have no ports on this box
15:28:13 well same thing, if you're installing ports from packages the packages need to be rebuilt
15:28:15 I wonder if I might have to switch them to ports due to that
15:28:19 yeah
15:29:12 alternatively you can build src with WITHOUT_MITKRB5, but if you're using pkg.f.o packages, that will break again once the builders update
15:29:18 Honestly I'd prefer not to have KRB at all...
15:29:46 yeah... I was worried about that..
15:30:23 I just found an arm64 board that works great and I'm trying to tune it.... that's when I started discovering these things
15:43:40 ivy: it seems like a flag would be a better fit, unless you're suggesting an (inverted from your flag sense) L3 capability that you have to add on, probably in a few places?
15:45:09 kevans: an L2ONLY capability might make more sense but also feels more invasive.. the idea was the bridge would set L2ONLY when you add a member interface
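For reference, the WITHOUT_MITKRB5 route ivy mentions would look roughly like this; a sketch only, assuming a self-built world from /usr/src on main, and the exact effect of the knob depends on the branch:

  # /etc/src.conf
  # Build world without the base MIT Kerberos (the workaround mentioned above).
  WITHOUT_MITKRB5=yes

followed by a clean buildworld/installworld. As noted above, packages from pkg.freebsd.org that link against base Kerberos will still expect those libraries until the package builders catch up.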
15:47:08 right, an L2ONLY capability would be kind of weird since you can traditionally disable caps via ioctl, but you'd want this to be immutable as long as it's still in
15:47:47 you can change flags, too, but we already have the notion of IFF_CANTCHANGE
15:49:50 an alternative would be to have per-AF caps (or some similar system) so an interface has to indicate it supports inet and/or inet6... for example that means you could prevent wg interfaces from having OSI addresses configured on them (not that we support OSI, but...)
15:51:44 i'd bring it up to -network@ folks
15:52:01 sometimes they respond if you write something egregiously bad enough, which may be your indicator
15:52:11 (sometimes they don't respond at all)
15:52:41 er, -net, sorry
18:05:55 Hi. I just had this peculiar error message from pkg: `pkg: An error occurred while fetching package: No error`
18:06:09 The easy guess is that time was out of sync, given that it is a VM and it was suspended
18:06:18 so I actually fixed it with `ntpd -qg`
18:07:09 Anyway, the error message is somewhat misleading. Should I file a bug report?
18:22:00 ivy: interesting.. after rebuilding and still having the problem I ended up verbose truss tracing the problem and curl ended up being the root issue..
18:22:08 It doesn't hurt to file a bug. . . though I'm not sure it's right to say the error message was misleading. . .
18:22:22 I mean, did it explicitly tell you your system time was off? No.
18:22:49 No, it did not. I guessed it
18:23:04 Well, not misleading, but incorrect
18:23:05 For all it knew, the server's time was off - point was, there was an unacceptable delta between them.
18:23:14 CrtxReavr: since there was an error, it should obviously not say "No error"... i believe there's already an open bug about this, it seems to happen with any TLS failure
18:23:31 ivy, that I'd agree with.
18:23:33 Something like "SSL kaboom" would have put me in the right direction, let's say
18:23:33 (basically, it's not reporting the error from openssl properly)
18:24:02 Oh, if there's already some bug report, I guess it's OK :)
18:24:21 dacav, what does this return?: openssl s_client -connect :
18:24:30 Oh, late, sorry
18:24:35 Well. . . now you've corrected your time.
18:24:42 well, I can try to restore the snapshot, and the time will be off again!
18:24:44 Hold on
18:24:55 I used to run labs used to develop and test NAS devices. . .
18:25:17 I got a request from the testers for an NTP server that would be very wrong.
18:25:31 It took more effort to set up than I would have guessed.
18:25:46 haha, yes, i suppose it would
18:25:47 ah, the existing issue is for pkg-static: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286532
18:25:49 how did you end up doing that
18:25:56 but i've definitely seen it with non-static pkg as well
18:26:08 verify error:num=9:certificate is not yet valid
18:26:46 that's great, ivy. Thanks
18:27:05 interesting, libcurl is where my break was with that unrelated other library problem I was just working on...
18:27:07 i think i saw that 'no error' when dns was working but my default router was not
18:27:19 rtprio, it's been. . . a scary long time ago. . . but I deployed a FreeBSD VM. . . and I think it involved both setting the time arbitrarily wrong, and also setting a runtime option for ntpd to not sanity check itself on launch.
18:28:01 It did work, though. . . the testers could ssh in, set an arbitrary system time and restart ntpd, and it would serve the wrongly set time.
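For the record, the clock-skew diagnosis above can be reproduced with the two commands that came up in the conversation (the repository hostname here is only an example):

  # From the skewed VM, ask OpenSSL what it thinks of the server certificate;
  # with the guest clock in the past the verify step reports
  # "certificate is not yet valid" (verify error 9), as seen above.
  openssl s_client -connect pkg.freebsd.org:443 </dev/null 2>&1 | grep -i verify

  # Step the clock once and exit, which is how dacav cleared the pkg error:
  ntpd -qg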
18:28:12 CrtxReavr: i had a system where freebsd picked the wrong timecounter, and ended up gaining a minute for every minute
18:28:19 So they were able to test the ntp client on our NAS devices.
18:28:39 I did a lot of crazy shit with FreeBSD in that job.
18:28:58 One of the cooler things was doing "WAN emulation."
18:29:44 I used a FreeBSD box as a router, and the testers were able to introduce arbitrary levels of bandwidth, latency, and a percentage of packet loss.
18:30:23 So they could test remote filesystem mirroring on a "WAN" connection that was literally in the same cabinet.
18:31:00 Used ipfw & dummynet for that trick.
18:32:20 The "magic sauce" on that config was that the settings for the queues were applied both inbound and outbound, so you had to actually cut the desired values in half, since they'd be applied twice.
18:43:13 arc_prune eating 100% of a core was unexpected
18:43:55 wish I was on 13.something so it could be fixed with an update but no, 14.3, and the code matches the patch
18:46:45 * CrtxReavr happily runs 13.x.
18:47:22 I encounter enough whacky issues with apps, networks, and my own sketchy code. . . I don't need issues with my OS as well.
18:47:59 It's been so rare that I've been excited about a new OS feature on FreeBSD.
18:52:39 thank you for your input
18:55:43 i wrote a bit about the new bridge stuff in 15.0: https://people.freebsd.org/~ivy/bridge_vlan_filtering.txt (mostly while we're waiting for the manpage to be updated...)
19:00:25 Is there a good tutorial out there for migrating a zfs os drive to another drive? Now that I am more confident this system is going to run well, I want to move it to a better performing drive.
19:01:13 I have a second NVMe drive already connected via PCIe, if it is possible via the online mirror/detach method.
19:13:36 Tenkawa: man zpool-add
19:13:46 Thanks
19:13:55 if it's the same size
19:14:40 unfortunately it's not.. it's a few GB smaller... making this a bit ... problematic
19:16:15 no, don't use zpool add for this! you want zpool *attach*
19:17:01 ... ope
19:17:06 That does look more adaptable
19:17:32 (well, i suppose you could zpool add the new device then zpool remove the old one, if you're on a recent enough version, but this leaves a bunch of bookkeeping data around)
21:51:05 llua: i didn't ask if jails added overhead, i asked how much. so no, it wasn't a weird question, you just misconstrued it as weird
21:52:08 and for type of workload, lots of network traffic. so imagine a webserver in a jail
21:54:23 it wasn't misconstrued
21:54:54 i just asked a question in response
22:19:41 anyone here by chance play around with jitsi, that is available in ports?
22:20:31 kerneldove: maybe i could ask in a different way.. do you have a cohort (user group/count) that is expected to use the service? say 100 concurrent users? 1000? 2000?
22:23:55 a couple thousand active udp peers
22:30:12 using nginx or apache?
22:32:03 ya that's the kinda example i use
22:47:26 voy4g3r2 ^
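A rough sketch of the ipfw + dummynet setup CrtxReavr describes (the pipe and rule numbers and the link parameters are made-up examples):

  # Emulate a constrained WAN link: 2 Mbit/s, 50 ms added latency, 1% packet loss.
  ipfw pipe 1 config bw 2Mbit/s delay 50ms plr 0.01
  # Shape everything the box forwards through that pipe.
  ipfw add 100 pipe 1 ip from any to any

A forwarded packet passes the ruleset on both its inbound and its outbound trip, so a catch-all rule like this pushes it through the pipe twice, which is the halving caveat mentioned above.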
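And a sketch of the zpool attach/detach route for the ZFS migration question (the pool name zroot and the partition names are placeholders; the new disk still needs its own boot/EFI and swap partitions created first, which this does not cover):

  # Mirror the running root pool onto a partition on the new disk.
  zpool attach zroot nda0p4 nda1p4
  # Wait for the resilver to complete.
  zpool status zroot
  # Once resilvered and the new disk is bootable, drop the old device.
  zpool detach zroot nda0p4

One caveat relevant to the size mismatch mentioned above: zpool attach refuses a device smaller than the one it is mirroring, so with a slightly smaller target disk this only works if the original ZFS partition fits on it; otherwise a fresh pool plus zfs send/receive is the usual fallback.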