00:45:22 Just a bump to see if my bug report can get any more attention: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289220 01:13:47 SponiX: Hrm. That's pretty strange. I don't have any clue what would be going on. However, I can attest my E5-2690 v4s work perfectly fine. :( 01:20:53 SponiX: the fact you can get it to boot after powering off from linux seems insane 01:21:21 that makes no sense :) 01:23:34 CPU: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz (2933.51-MHz K8-class CPU) <- my xeons totally crush it :) lol 01:23:55 they're from the before time 01:24:06 hi hi hows it hanging 01:41:03 Hello, RosieMonad. 01:45:04 that's a stupid bug eh SponiX 01:50:04 i'll watch it :) 01:50:10 just too strange 01:51:28 it's crazy eh 01:54:28 ssssssssssssssssssssssssssssssss 01:54:28 ssssssssssssssssssssssssssssssss\\\\\\\\ 01:54:43 ah, computer eh? 01:55:21 excuse me 01:55:45 cat? 01:58:34 no, did something stupid while moving the laptop 02:06:17 thought it was a snake 02:17:11 That's on vicious looking snake! 
02:17:26 s/on /one / 02:18:23 in migrating to a new machine, in theory, I can just zfs send zroot/home | ssh zfs receive -f zroot/home and it will just replicate everything? that's not a real ssh command, but just for illustrative purposes 02:19:01 usually, i rsync to a local drive on source, move drive to target, rsync again 02:19:18 i'm trying to avoid the 8h rsync to portable drive 02:22:10 could compress it between systems 02:22:13 zfs send pool/dataset@snapshot | zstd -3 | ssh remote-host "zstd -d | zfs receive pool/dataset" 02:22:33 if network is the limiting factor 02:24:00 deimosBSD: As long as you're only trying to copy the /home dataset, that'll work. 02:24:00 it's either usb4 or 25gb fiber network 02:24:14 yeah, i only have enough space for one fs at a time 02:24:40 well, the portable drive only has enough space for one at a time 02:24:55 deimosBSD: I thought you were trying to avoid that? 02:25:14 yes, i am 02:26:00 Then a zfs send/recv from local source to remote target would be best. 02:26:28 25Gbps would be way quicker than USB. 02:26:35 my plan was basically: for fs in (zfs list); do zfs send $fs | ssh new-server "zfs receive -f $fs"; end 02:26:46 in theory, usb4 is 40gbps 02:27:25 Yeah. But, it requires manual intervention and additional possible failure points. 02:28:07 Direct with 25Gbps fiber would be quick. 02:28:29 we'll see 02:28:37 thanks for confirming my thoughts 02:29:11 zfs send|recv could be dangerous 02:29:21 Also, be careful what you zfs send/recv. Unless you're mirroring from one system to another (full zpool/dataset) you could overwrite some important stuff. 02:29:48 it's a new server and i'll put a fresh fbsd install, setup the filesystems the same way 02:30:10 If you're just doing zroot/home or whatever, that should be fine. 
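The migration approach discussed above, including the zstd-compressed variant, could be sketched roughly like this; the dataset names, snapshot names, and hostname are illustrative assumptions, not real commands from the channel:

```shell
# Sketch only -- dataset/host names and the zstd level are assumptions.
# 1. Snapshot first so the send stream is consistent:
zfs snapshot zroot/home@migrate1

# 2. Full send, compressed in transit (helps if the network is the bottleneck):
zfs send zroot/home@migrate1 | zstd -3 | \
    ssh new-server "zstd -d | zfs receive -F zroot/home"

# 3. Later, send only the changes since the first snapshot:
zfs snapshot zroot/home@migrate2
zfs send -i zroot/home@migrate1 zroot/home@migrate2 | \
    ssh new-server "zfs receive -F zroot/home"
```

Note that `-F` on the receive side rolls the target back to the most recent matching snapshot, which ties into the warning above about being careful what you send/recv.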
02:30:42 yeah 02:30:49 i thought about trying to do the whole pool at once, but zfs seems to really like operating at the individual fs level 02:31:05 i still rsync even when both are zfs and even local 02:31:19 that was my backup plan 02:31:26 portable drive was tertiary plan 02:31:31 yeah rsync can help you here 02:32:21 could also tar|tar 02:32:25 deimosBSD: You can do full pool-to-pool with zfs send/recv. Just gotta setup the partitions the same and boot live on the target. 02:32:36 or with cpdup i found 02:32:56 you can do recursive full send too 02:33:33 * ketas bites ek 02:33:37 :p 02:33:47 deimosBSD: Very simple for both BIOS or UEFI: https://it-notes.dragas.net/2024/09/16/moving-freebsd-installation-new-host-vm/ 02:34:10 ketas and his rsync! ;P 02:35:26 this is bare metal, but i see the point 02:36:00 deimosBSD: It works for bare metal or VM. Doesn't matter. 02:37:32 To/From either. It's just partition setup and zpool snapshot send/recv. OS doesn't care what it is. 02:38:43 indeed, thanks 02:39:08 i wanted to do a clean install on the new system, since the source is at least 8 years old 02:50:37 daemon: Yep. Totally doable. 02:51:18 Just send/recv or rsync or whatever you want. I'd do it directly before I did USB, though (if possible). 02:51:23 Less steps. 02:52:54 why age matters 02:52:59 it's not windows 02:53:01 :p 02:53:27 registry isn't getting full 02:56:55 Could be a lot of leftovers from upgrades and such, though. Nothing wrong with starting from scratch. I do it from time-to-time. 03:18:44 ek Macer : Yes, it is a very odd issue. The only way I can for sure boot normally is after a clean boot and "poweroff" from Linux. Even a clean shutdown from FreeBSD normally hangs the system on the next boot 03:20:33 SponiX: Yep. Very, very strange. I've never even heard of that before. 03:20:33 ketas: Yeah, it really doesn't make any sense. Why would a FreeBSD boot up fine after a Linux shutdown, but not do the same with a FreeBSD shutdown? 
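The "recursive full send" mentioned above could look something like the following; the pool names and snapshot name are assumptions for illustration:

```shell
# Sketch: replicate a whole dataset tree in one stream (names are assumptions).
zfs snapshot -r zroot@full      # recursive snapshot of every dataset in the pool

# -R sends all descendant datasets, snapshots, and properties;
# -u on the receive side avoids mounting the received filesystems over the live system.
zfs send -R zroot@full | \
    ssh new-server "zfs receive -Fu newpool"
```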
03:21:05 ek: well, you can see there was a similar issue for someone way back on like FreeBSD 10 03:21:13 I linked to that bug report 03:23:02 I have another X99 system up with an ASRock Motherboard instead of this Asus X99 Sabertooth. I'm debating on seeing if it will boot up consistently. And if so, I might swap my machines around OS wise 03:24:01 SponiX: Yeah. I saw that bug report. You can boot no matter how you shut down if you disable HT? 03:24:24 ek: at one point I thought that, and then it started to hang that way also 03:25:26 someone thought it could be memory related. But I swapped in a 128G kit and it still did the same thing 03:25:48 I haven't swapped processors between my systems yet. But that is another thing I plan to try when time permits 03:28:59 SponiX: Did you apply Jordan's patch before reporting the acpidump commands? 03:29:28 I can't tell by the replies. If so, great! Hopefully, they'll get back to you soon. If not, I would certainly suggest doing so. 03:30:07 yeah you can reinstall 03:30:20 and yes such hw bugs do happen 03:30:29 just very rarely 03:31:06 it's totally ridiculous if hw gets into that state 03:31:19 wait, is it cold boot as well? 03:31:41 or that worked? 03:32:08 that said, i don 03:32:12 Neither work. 03:32:23 don't know any specifics about it 03:32:51 machine gets into weird state even after a cold boot? 03:33:22 ketas: From the looks of the PR, yes. 03:33:37 :/ 03:33:59 what battery does i wonder :p 03:34:06 it's weird 03:34:20 but then, nothing new eh in this world 03:38:38 Very strange issue. 03:38:42 tho it would be tempting to do what someone i heard did after he found that flow control was to blame why his usb-rs232 didn't work with that particular factory equipment... told that anvil is still intact 03:38:46 :) 03:39:05 Yessir! 03:42:56 SponiX: you don't see any issues at all running linux on it? 
03:43:41 more nonprofessional stuff, some dell optiplex sff machines could drain cmos battery while on battery, and hp machines could just lose their internal nic if their battery runs out... those are things you don't even expect in humble desktop 03:43:48 that sounds like a hardware issue. the whole poweroff from linux then working for a little while then stopping on reboot or not booting on a cold boot. it seems rather apples and oranges 03:44:56 Macer: Nope, Fedora Linux runs perfectly on it, and so does FreeBSD inside a virt-manager VM from Linux 03:44:59 i had an avoton board go bad and it would constantly reboot if the ipmi nic was connected to a switch lol 03:45:09 but worked just fine without it connected 03:45:34 were you able to connect it later? 03:45:48 no. once it was plugged in it would start rebooting 03:45:55 but yeah that's shot 03:46:08 it was one of those supermicro avoton boards with the soldering thing 03:46:26 but i'm not sure if the two issues were related.. but it was well past warranty when i even realized that was a thing 03:47:09 which is a shame because it would make for an awesome router board now if it worked. now i'm using some celeron nuc as my router for opnsense 03:47:51 which i guess nowadays is probably better than the avoton heh 03:48:39 my nas is running fbsd and is an ancient isilon that i yanked the jet engines out of and replaced with human hearing fans 03:49:24 i had to put cpu fans in it too since it used the jet engines and only had cpu heatsinks 03:50:12 reminds me how i was offered a 48p switch for free, and i declined. partially why he got rid of it: only web ui, and to configure vlans you needed to click a matrix of checkboxes 03:50:16 :p 03:50:54 that's most of them nowadays 03:51:16 my ubiquity sfp+ switch does that... 
but i guess it does have ssh if you don't want to do that 03:51:36 yeah but you needed to click individual checkboxes with mouse 03:51:43 yup lol 03:51:48 can't recall if 48*48 03:51:57 that would be one bingo 03:52:37 i have to be picky on my vlans since my router is only 2x1gbit 03:53:20 vlan switch is godsend even home i quickly realized 03:54:21 SponiX: i wonder if linux added a hack 03:55:11 they do and sometimes they do it even worse than hw would require 03:55:13 i hear 03:55:20 but issue is in hw 03:56:14 with periph, one might not get things all up on boot 03:56:21 hw is fun 03:56:31 and i don't even understand all of it 04:07:32 i had to do some tomfoolery to get my old amd A10 5800K using debian .. where it would work fine with ubuntu 04:07:50 i guess some amdgpu kernel flags 04:08:15 somehow ubuntu would just boot fine though.. but with debian i HAD to add the flags 04:51:43 Disk IO: 2359.9% read: 1.58GiB/s write: 172MiB/s 04:52:02 well.. i guess i should probably set up a cron job to scrub pools once a month... wonder how necessary that even is 04:52:16 i mean zfs does on the fly chksum checking doesn't it? 06:59:33 Macer, Turn on zfs scrubs in periodic.conf file: sysrc -f /etc/periodic.conf daily_scrub_zfs_enable=YES 07:00:17 It's a builtin capability in FreeBSD base but it does need to be enabled. 11:21:27 rwp: ah ok. thanks. 11:21:48 although making it daily seems a bit overzealous .. especially for the larger pools.. i'm scrubbing one now and it's taking 15 hours 11:22:11 maybe weekly? 11:22:34 gpt/R02-04_Seagate_BarraCuda_Zxxxxxx ONLINE 0 0 1 (repairing) 11:22:45 it did find that though 11:42:12 i want to pin a jail to only using 1 and only 1 core. anyone done that before? 
i read https://man.freebsd.org/cgi/man.cgi?query=cpuset&sektion=1&manpath=freebsd-release-ports but i still don't get it 11:42:35 i found cpuset.id on https://man.freebsd.org/cgi/man.cgi?query=jail&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports but that's readonly 11:50:23 kerneldove: you need to set it from the parent / host, and it needs to be enabled first in loader.conf 11:50:34 https://klarasystems.com/articles/controlling-resource-limits-with-rctl-in-freebsd/ has a good overview 11:51:14 cpuset isn't on that page dch? 11:52:49 aah ok 11:54:35 so try this `exec.created = "cpuset -l 1 -j ${name}";` in the appropriate jail.conf 11:55:00 or just from the parent with `cpuset -l 1 -j ` 11:56:24 ohhhh putting it in the jail.conf exec.created hook! is that documented anywhere or just expected ppl know that? 11:58:00 erm "just know" ? 11:58:13 most of the jail.conf things like rctl etc would go there 11:58:25 so all processes in the jail inherit it 11:59:14 buuut if you feel like adding an example to jail.conf and/or cpuset pages that would be an awesome contribution 12:05:42 omg that worked dch! i restarted jail then ssh into it, run sudo top, and i see every "C" is 1 now! 12:06:08 so then for next jail if i want to give it only 1 core too, i'd make its exec.created command be cpuset -l 2 -j ${name} ? 12:07:46 Anyone has got FreeBSD booting and working from iPXE and a iSCSI volume? Or what's the best way to create a netboot freebsd installation what will work from iPXE? 12:09:03 in my head, cpuset requires rctl, but apparently that is not the case. it took me a while to find a box where its not enabled... 
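The exec.created approach suggested above could look like this in a per-jail config; the jail name "web" and the core number are illustrative assumptions:

```
# /etc/jail.conf.d/web.conf -- hypothetical jail, pinned to core 1
web {
    # cpuset runs on the host right after the jail is created, so every
    # process started inside the jail inherits the restricted set
    exec.created = "cpuset -l 1 -j ${name}";
    # ... rest of the jail configuration ...
}
```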
12:09:51 dch ty for reminding me i need rctl because i wanna limit ram too 12:10:16 sakura1312: I haven't done this but have 2 links https://gist.github.com/dch/9811e8ac2748d4b7bab875c3e6a74543 & https://antranigv.am/posts/2024/05/freebsd-vultr-ipxe-root-on-zfs/ should see you right 12:15:11 dch if i want each jail to have only 1 core, there any way to set it in jail.conf so it's only written once per jail? 12:15:26 right now i have to put the exec.created line in each jail's specific conf :/ 12:17:36 IIRC you can just put it in /etc/jail.conf and it will apply to all of them 12:18:00 it won't assign core 1 to every jail? 12:21:36 hmm 12:22:17 so if im doing cpu pinning I'm going to have to have a manual map of jail <> cores, to avoid over-subscription 12:22:43 ? 12:26:18 kerneldove: lets say you have 4 cores, 3 jails 12:26:35 you restrict jid 1,2,3 to cores 1,2,3 12:27:00 leaving core 0 entirely free for host, and host can also use all cores freely 12:27:16 now you add a 4th jail, what do you do? 12:27:53 i don't know but that won't happen on this box. i have 64 cores and i'm only creating 20 jails. 
i need each to have its own 1 core 12:30:01 for 20 jails I would use automation and let ansible or whatever do the math to pin jails->cores 12:30:13 dch: thanks :3 12:30:21 my point is, eventually you need to choose between either 12:30:38 - manually assigning cores & jails in your /etc/jail.conf.d/thing.conf 12:30:59 - using a generic policy to allow each jail to use max 1/64 of cpu resources 12:31:24 I'm not an expert in cpusets but look for policies like ft & rr in the manpage 12:31:36 they should allow you to do that 12:32:19 not having done this personally, I'd probably try first to limit *all* jails to 20/64 specific cores with a cpu-list 12:33:44 and then within that, use pcpu from rctl to constrain them, but not have to make a specific per-jail policy 12:34:02 I'll ask this in our next jails call and see if anybody else actually does this 12:34:12 I'd like to have a jailed bhyve for instance doing this 12:40:18 kerneldove: you could use a (ucl) variable in that command and set that variable for each jail instead, but if that is the only use for the variable it wouldn't save you much 12:43:02 dch nice, pls lemme know what they say 12:43:23 nimaje, ya that's what i was thinking 12:43:56 $cpucore = "1"; in jail1.conf, 2 in jail2.conf... 12:46:31 kerneldove: https://git.sr.ht/~dch/ansible-jails may be of use 12:46:56 (shameless self-promotion) 12:48:14 ty 13:53:39 hi. does the MAC address persist for tap(4) ? if the freebsd host reboots? Asking here if anyone might know because rebooting the server is non-trivial 13:58:01 Hi 13:59:18 f451: it will persist, unless you change hostuuid 14:01:31 I think I broke my Pulseaudio :P 14:05:56 mzar: hostuuid of the server? 14:06:16 sorry if this is a dumb q 14:17:23 np 14:30:31 heh i seem to have got layer2 filtering working in pf :D 14:44:13 and what is going on with AI 14:44:23 crawl bots 14:45:19 Macer: the periodic script runs daily, but it only checks if scrub is needed. 
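The per-jail variable idea floated around 12:40-12:43 could be written as a shared rule in /etc/jail.conf with each jail overriding the variable; jail names and core numbers here are assumptions:

```
# Sketch: one shared exec.created line, per-jail core assignment via a ucl variable
exec.created = "cpuset -l ${cpucore} -j ${name}";

jail1 { $cpucore = "1"; }
jail2 { $cpucore = "2"; }
jail3 { $cpucore = "3"; }
```

As noted in the discussion, this still requires a manual jail-to-core map, so it mostly saves repeating the cpuset invocation rather than the bookkeeping.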
I think that by default it scrubs the pool every 35 days. And iirc this period is configurable. 14:45:22 My website got attacked by them this month with 1,000,000,000+ requests 14:46:09 and the phpbb forum is dead now.. 14:47:46 There's a post out there from years ago; last month it got just 100 views, and now 363789 14:49:59 My hosting gives free 1TB bandwidth every month, and my website size is just under 50KB.. but now I pay them more for an extra one :P 14:50:26 main page is just 2Kb 14:51:54 I also read that Freshports website is also under attack, It will be really really sad if it closes :( 14:52:57 mosaid: isn't there an addon thing for the webserver that quenches ai scrapers 14:53:21 It uses JS 14:53:33 Very heavily, really 14:53:46 anubis it's in the ports 14:54:03 And my website is for non-js and old machines 14:55:10 f451: anubis, I can bypass it easily, but I don't want to post that trick online; those spammers will surely use it 14:55:32 And that trick will no longer work 14:55:43 I use it for my non-js browsing 14:56:02 there's others in ports 14:56:19 AI is pushing the web to use js and other bloat more and more 14:56:46 * mosaid hates AI more than ever now 14:58:05 f451: they rely on other modern stuff 14:58:20 dch: possibly relevant to your interests as you know you have something similar for OCI: https://reviews.freebsd.org/D52412 14:58:57 ivy: this looks like a fantastic feature 14:59:17 I just gave up on having publicly visible servers when I wasn't able to make pf happy with large tables, even having 64GB of RAM. 14:59:21 we'd definitely want to use this instead of maintaining a manual set of lists 14:59:36 I never found out the magic combination of sysctl flags or values to make it stop complaining. 14:59:55 kwak kwak 15:00:20 Is there any world where "pkg: An error occurred while fetching package: No error" does not poison what remains of my week-end? 
15:00:32 I opened https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=289357 15:00:40 but really it's just depressing 15:01:05 dch: that's what i was thinking :-) but we should see if my list of sets/packages works for you or what we want to change (for example i'm wondering if we want a version of "minimal" that doesn't include hardware/networking stuff not needed in jails/containers...) 15:01:25 i actually would have done that already except i'm not sure what to call it 15:03:53 dch: this should also simplify the installer a lot, but since i don't know anything about that i punted it to isaac :-d 15:04:02 ivy: :-) 15:04:22 ivy: I still can't find a good name for everything-except-the-compiler 15:21:25 dango: in pf, what did you have 'set limit table-entries' set to? 15:25:40 by default it's ISTR 65536 but for a few things that's too small. on a busy firewall with huge tables i had mine set to 400000. in /etc/sysctl.conf there's net.pf.request_maxcount=400000 15:26:54 Hecate: what's pkg version? 15:27:30 pkg -v 15:28:29 f451: It was a while ago so unfortunately I don't remember what things I tried. I might as well try again and ask here with specific questions and error messages if I run into issues again. 15:29:23 for huge tables, you def need to increase it. i was using blocklists 15:45:49 mosaid: this is the one i was thinking of: www/iocaine - not installed it/tried it though 15:47:26 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=287944 15:47:40 doesn't appear to be in the tree yet though 15:50:38 Nice 15:55:00 f451: 2.2.2 15:59:20 Macer, +1 what divlamir said. It enables periodic to run /etc/periodic/daily/800.scrub-zfs daily but the zfs scrub runs every daily_scrub_zfs_default_threshold="35" days. You can read about periodic in the Handbook: https://docs.freebsd.org/en/books/handbook/config/#cron-periodic 16:06:33 Hecate: hmm. i've seen this in a poudriere context but not your exact one. 
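The table-entries advice above, pulled together in one place (the 400000 figure is the value quoted in the discussion, not a recommendation for every setup):

```
# /etc/pf.conf -- raise the table entry limit (default is 65536)
set limit table-entries 400000

# /etc/sysctl.conf -- let userland (pfctl) push that many entries in one request
net.pf.request_maxcount=400000
```

Both limits need raising together: a large blocklist can fit under the pf table limit yet still fail to load because pfctl's request exceeds net.pf.request_maxcount.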
The problem went away after replacing my poudriere-devel with https://github.com/dsh2dsh/poudriere.git. But poudriere-dsh2dsh is in the ports tree now 16:24:06 rwp: yeah i'll have to look at that.. i haven't run it in months :) 16:24:28 but again i'm still curious if scrub does things the on the fly chksum doesn't do 16:25:09 so for instance i have a drive with a chksum error using scrub that i didn't see prior to scrub. so if it tried to access the data wouldn't the chksum come up and be corrected? 16:27:24 If you only have ONE device and it fails a checksum then that data is lost. If you have REDUNDANT devices then the redundant storage with correct checksums will be used and the blocks with failing checksums will be "healed". 16:28:29 Macer: normal disk access will repair checksum errors, the purpose of scrub is to detect checksum errors in infrequently-accessed files. that avoids, say, both copies of a data block on a mirror going bad without anyone noticing 16:29:06 yeah. i was disconnected from libera :/ i meant to say raidz2 16:29:32 is that possible? 16:29:36 well.. i guess for sure it is. heh 16:30:57 well, it also avoids say, a block going bad on one side of a mirror, then the other side of the mirror failing completely, then you replace the disk but you can't recover the data 16:31:02 i guess it makes more sense using it for mirrors or maybe even raidz1 but when you have raidz2/3 then don't you already have 2 disks of redundant data to chksum during normal operations? 16:31:12 along with plenty of other bitrot situations which definitely happen in real life 16:31:19 ah ok 16:31:49 ok. that makes a lot more sense. i always just looked at it as zfs will fix everything on the fly. but the mirroring example is probably the best one with regard to scrubbing. 
16:32:11 it can also matter for raidz2, let's say you have a failing cable or controller that causes some proportion of newly written data to be corrupted, eventually, you'll corrupt all redundant data of the same block 16:32:12 i guess you want to make sure that data is repaired prior to the 2nd disk dying 16:32:37 and you might not notice that if you aren't reading the data that often after writing it (think archive, logs, backups...) 16:33:46 it's true that the chance of scrub avoiding data loss goes down as redundancy increases, but it's always still helpful, i would never not scrub zfs pools 16:34:31 yeah it sounds like it's the thing to do 16:34:48 i'll set it up when i have a chance today and just let it do its thing once a week or something 16:35:24 the default of every 35 days is probably fine, especially on raidz2. although it doesn't hurt to run it more often if you want 16:35:31 one of my pools takes 12 hours to scrub though 16:36:02 speaking of which.. that's about to finish. and it did find 1 bad chksum on a disk 16:36:28 12K repaired, 93.33% done, 00:50:50 to go 16:36:40 smartmontools is your friend ;) 16:37:07 i've been slowly but surely replacing cheap barracudas i bought years ago ... i think i'm down to 6 of 12 of them lol 16:37:45 ah. i guess 7 of them left. 5 died so far. although i think that may have been due to heat which was sort of my fault. 16:39:53 8x4tb seagate constellations in raidz2 here 16:40:46 scrub can take 6-12 hrs. frequency is 7 days 16:41:31 meanwhile... my hgst drives... 16:41:34 9 Power_On_Hours 0x0012 087 087 000 Old_age Always - 91024 16:41:38 those things are such troopers 16:42:10 yeah? not used those before 16:42:18 too bad WD owns them now :( 16:42:44 it's like when oracle bought sun 16:45:17 ive not had good experiences with wd either 16:53:20 anyone played with the podman work from dch et al? 
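The scrub scheduling discussed above boils down to two periodic.conf knobs; the threshold value of 7 here reflects the "once a week" preference in the conversation (the default is 35):

```
# /etc/periodic.conf -- the daily periodic script only *checks* whether a
# scrub is due; an actual scrub starts once the threshold (days since the
# last scrub) has elapsed for a pool
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="7"   # e.g. weekly; default is 35 days
```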
I feel like I'm doing something wrong in my pf.conf, because when I'm trying to use it the jails can't successfully make outbound network calls 17:05:07 josephholsten: heya, try it in a minimal pf.conf, with `block log ..` everywhere, run `service pflog onestart`, and see what `tcpdump -vvveni pflog0` tells you is blocking it 17:23:08 dch: I'm just using the minimal nat pf from /usr/local/etc/containers/pf.conf.sample and vtnet0, https://pastebin.com/gWHWkfDP 17:27:29 I'd like to find some time to play with podman on FreeBSD too, this autumn.. 17:27:58 Tables should be declared after the macros, non ? 17:28:37 It doesn't have any blocks, so tcpdumping the pflog isn't helping. But I hadn't tried just tcpdumping the whole vtnet, lets see 17:28:44 I thought order of statements matter 17:31:05 Ah, I am wrong: "With the exception of macros and tables" says the man page 17:33:27 f451: in my case I think I'm just shit out of luck because pkg does not want to reveal its secrets 17:38:49 hrm, I'm seeing domain packets, but not what I'd expect. I wonder if the podman jail doesn't have a sane resolver 17:41:35 (and the base image doesn't have host or drill, and needs a shared lib libprivateldns.so.5 if I just mount in the host's /usr/bin) 17:42:36 it's not just using its /etc/resolv.conf like a regular jail? 17:49:45 I was wondering if the /etc/resolv.conf was broken, but no it's fine. And after fixing the /usr/lib volume for the needful .so I got drill working. 17:50:16 but it's now saying it got 0 bytes rcvd, and "error sending query: Could not send or receive, because of network error" 17:52:35 So a NAT issue? Who populates this cni-nat table? 17:52:56 Should it be persistent prolly? 17:53:19 Hecate: what i'd do in your position is to manually build and install pkg from a new ports tree. I dunno if it would fix the problem though. But it's what i'd do. I saw that error in pkg v2.2.0 & 2.2.1. but it's 2.2.2 here and no error 17:55:49 Hecate: what do you have in /etc/pkg? 
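For reference, the shape of container NAT being debugged above might look roughly like this; this is a sketch, not the actual contents of pf.conf.sample, and the interface name and the assumption that the container backend populates <cni-nat> come from the conversation:

```
# Sketch of a minimal container NAT setup (assumptions: vtnet0 is the
# external NIC; the <cni-nat> table is filled in by the container backend)
table <cni-nat> persist     # persist keeps the table defined even when empty
ext_if = "vtnet0"
nat on $ext_if inet from <cni-nat> to any -> ($ext_if)
```

If the table is empty (e.g. nothing ever populates it, or it is flushed on a rule reload without persist), outbound traffic from the containers would not be translated, which matches the "0 bytes rcvd" symptom described above.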
17:56:36 pkg repos -le 17:57:19 might be relevant: i'm running 14-stable here 17:59:20 Hecate: also see https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286532 18:03:46 divlamir: that's a good question, I'll dig into that 18:16:39 f451: I have FreeBSD 18:16:39 FreeBSD-kmods 18:17:16 f451: that TLS thing is peculiar because `fetch` is more than happy to work on https://pkg.freebsd.org/FreeBSD:14:amd64/latest/data.pkg 18:17:41 that jail is 14.3-RELEASE 18:18:47 f451: also, pardon my ignorance but I have not used port trees in 10 years. What's the current accepted way to do such a thing? 18:27:23 You can get away with an old school `make install` in the case of pkg. No dependencies to build 18:27:38 josephholsten: here's what I would try 18:28:15 make the container, try the following in it 18:28:50 `fetch -v http://1.1.1.1/` 18:29:17 given you're using a fresh vm afaict and a vanilla pf it will probably work 18:29:40 then check what's in /etc/resolv.conf and I'm guessing it will either be missing or not appropriate for the container 18:30:03 e.g. on my prod systems, raw dns is only allowed from jail -> locked down local resolver on the jail host 18:30:33 `fetch: http://1.1.1.1: Address family for host not supported` 18:30:55 oho that's weird 18:30:56 https://docs.skunkwerks.at/LqHthEkTSeGDwV0PDUQSyg#faq-reported-user-issues-amp-solutions 18:31:28 it's making me want to just run a debugger to see what call is failing. 18:43:01 Nice, those meta-package sets of ivy! I can make use of some very minimal jails 18:47:13 Where can I read more about these upcoming pkg groups? In a mailing list archive maybe, which one? 
18:48:20 i don't know if that is documented anywhere other than bapt's head :-) 18:48:53 Patience then, the time will come :) 18:49:24 package sets might not be what you want for a minimal jail, the "minimal" is already quite large, you can create minimal jails by just installing some packages by hand 18:49:50 although as i was saying earlier, it would be nice to have some sort of minimal-jail set, i'm just not sure what to call it or how to organise it... 18:51:04 Some "minimal" default is already nice to have. As you said in the review, those who need more, I mean less, can do it pkg by pkg 18:51:58 No default suits everyone, but as long as it's a sensible one, it works 20:15:06 so, first pkgbase tests have been completed 20:15:13 that's a nice way 20:17:32 are you switching to pkgbase ketas ? 20:19:01 not yet 20:20:27 OK 20:21:20 i'll do some embedded image generation via that at first 20:24:31 i could run all that via poudriere too after it would actually run like i want 20:46:29 is there no way to set a jail to use 1 core, rather than how cpuset works which requires that i set a jail to use 1 SPECIFIC core? the difference is how cpuset works requires me to add a unique command for each jail cpusetting it a specific core, instead of just telling all jails to take 1 core each 20:46:57 kerneldove: rctl can do that, but don't ask me how, i've never actually used it 20:47:18 ya it seems the rctl method isn't as smooth as handing out dedicated cores 20:50:21 with rctl you set the percentage, so set it to 100% for each jail you want to limit, which is exactly what you ask for 20:51:14 ya true but like i said it seems the cpu throttling isn't smooth from what i've read 20:51:16 I noticed 15 aplhpa dropped. 20:51:42 why? it gives you what you ask for 20:51:52 alpha ;/ coffee 20:52:12 it's not throttling, it cuts you off at the limit you give 20:52:44 divlamir, do you know how much drift there is between what it can burst to and the limit set? 20:53:41 how much is it? 
20:54:00 ya like could the jail use 125% cpu before the limiter kicks in? 20:54:00 you mean it doesn't kick in fast enough? 20:54:16 kinda but more how persistent is the limit 20:55:11 idk the implementation details, but it's the closest thing to what you describe 20:57:45 according to the docs, the system would just deny more cpu time than permitted 21:03:05 interesting discovery: buildah/podman-build jails seem to have no ip address according to `jls -h` and so apparently I don't truly know anything about jails 21:03:33 jail(8) says ips are mandatory, ocijail laughs in my face 21:04:34 kerneldove: see here: https://github.com/freebsd/freebsd-src/blob/main/sys/kern/kern_rctl.c#L418-L445 21:04:59 "This makes %cpu throttling more aggressive and lets us act sooner than the limits are already exceeded." 21:05:19 There is some sort of heuristic, to act before the limit is actually reached 21:07:52 hahaHAHAhahehe oh. $ ifconfig vnet38718f1c; description: associated with jail: buildah-buildah2100252023 as nic: eth0 21:08:42 oh podman, I don't have an eth0, buddy. This is a vm. 21:13:10 All a jail needs is an ip and a hostname. 21:16:54 hrm, cigarettes too 21:17:22 i don't think a jail actually requires an ip address 21:18:05 although now i think about it, i've never tested that 21:18:42 You can use a "shared" ip from the host. It still counts as one. 21:19:28 i know, but i mean it shouldn't require an IP address at all... perhaps this is enforced for historical reasons though 21:19:37 divlamir, ok i'll try pcpu 21:19:48 what about the new service jails, why would they require an ip? 21:20:39 svcjs require an ip address because (iirc) there's no way to configure them not to have one 21:20:46 ivy: I was thinking perhaps the same thing. I'm no expert far from it. I was looking at jail.c. 
21:21:02 i forget though, perhaps some combination of svcj_options would provide an svcj without an ip address 21:24:01 kerneldove: my .conf for service jail has ip4 = inherit 21:24:39 I don't have a single jail without some kind of networked service, so I've never asked myself if a jail w/o an ip is a possibility 21:25:12 rj1: i believe kerneldove is talking about the svcj feature in rc(8) which runs a service in a jail automatically (without using jail.conf) 21:25:23 (this is new in 15.0) 21:25:25 ya 21:26:07 Oh ok my bad guys. I have not tested 15 any or kept up. Thanks for info sounds cool! 21:26:30 I was looking at a service jail on 14.3 21:26:39 it's still a little half baked, so the answer to "why can't i do X with svcj?" is probably that no one implemented it yet 21:27:11 * rtj is half baked too 21:27:52 aren't we all 21:28:36 * divlamir feels more like burned out 21:33:04 oooh, magic service jails sounds fancy. 21:33:06 i got racct enabled in loader.conf and rebooted, and now i see i can set limits in /etc/rctl.conf. wish we could set them right in the jail's .conf file no? 21:34:07 oh no. the buildah gives the right interface when I RUN /sbin/route get 1.1.1.1; but it still refuses RUN /usr/bin/fetch http://1.1.1.1 21:34:26 or maybe i can? like in exec.created = "rctl -a jail:${name}:pcpu:deny=80"; ? 21:38:07 that should work too 21:41:09 ill try it now 21:41:50 and maybe rctl -r ... when you stop it 21:42:35 ya i was wondering that. gonna test and see if it's needed 
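Pulling together the exec.created idea above, a jail.conf sketch might look like this; the jail name "testjail" and the 80% figure come from the conversation, and the exec.release cleanup is the "rctl -r when you stop it" suggestion:

```
# Hypothetical jail; requires kern.racct.enable=1 in /boot/loader.conf
testjail {
    # add the pcpu cap when the jail is created (deny = hard limit at 80%)
    exec.created = "rctl -a jail:${name}:pcpu:deny=80";
    # remove the rule again once the jail is torn down
    exec.release = "rctl -r jail:${name}";
    # ... rest of the jail configuration ...
}
```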
you either want/need it or not 22:19:40 if you want to unload all rules at runtime: `rctl -r :` 22:22:55 can't use sysctl kern.racct.enabled=1 at runtime it seems 22:23:20 seems to require being in loader.conf so it's there at boot, damn 22:23:42 why is it a problem? 22:23:56 well i like when i can enable stuff at runtime 22:24:41 don't enable the service then, it won't load any rules 22:25:33 no dude what i'm saying is i was hoping i could sysctl kern.racct.enable=1 then service jail start testjail with rctl -a commands in its conf file 22:26:49 you can do just that, no need for the sysctl call 22:27:17 nah i just tested it 22:27:21 try it yourself 22:28:14 with kern.racct.enable=1 in loader.conf, it should work 22:28:22 and is that what i said? no 22:30:34 whatever, i used the handbook when i played with it, so loader.conf it is 22:32:27 what i was saying is, it's not doing anything until you load some rules i.e. enable it as you say 23:26:32 where's userlanddove 23:33:53 and other system animals 23:52:09 be the userland dove you want to see in the world