12:52:41 I vaguely remember there being a way to blacklist processes from getting OOM-killed – anyone know what I have to search for?
12:53:39 building electron and friends keeps OOM-killing central parts of my home infrastructure and it's really annoying.
12:56:06 <[tj]> protect(1)
13:13:48 oh, FreeBSD does Linux's memory overcommitment strategy too? Dang, that's annoying
13:15:52 <[tj]> I didn't think we did out of the box
13:16:16 <[tj]> $ sysctl vm.overcommit
13:16:17 <[tj]> vm.overcommit: 0
13:18:25 yeah, def not enabled by default
13:21:42 hmmm, I thought the OOM killer was an artifact of memory overcommitment; I guess it would also make sense if the kernel itself needs more memory just to operate and has to reclaim it from running processes
13:23:01 <[tj]> I think swap is memory
15:05:26 Swap is memory, yes... ish.
15:07:24 still looking for a COW buffer option ;]
15:07:47 Don't make me cowsay in here!
15:08:15 moooo!
15:08:45 over the weekend i was chatting about an alternative to using a ramdisk, looking for an option to buffer writes against ZFS or another FS until a manual commit
15:09:27 <[tj]> Demosthenex: I've never heard of someone doing this, I think you are in new territory
15:09:43 <[tj]> you could format your memory disk as zfs and stream diffs to a file
15:13:07 /ex figlet mmmmm, Buffers | cowsay -n
15:19:20 [tj]: today i host a large minecraft server on a ramdisk. it does a nonstop stream of tiny writes that is bad for ssds; i've made several ssds go read-only after running out of writes in the past. i use a ramdisk and rsync the data down to my zpool every 15 minutes (save pause, full save, rsync, save resume). it's just that 90% of the ramdisk is idle data / wasted space
15:19:40 i was just looking for a way to minimize the ram footprint while aggregating 15 minutes of writes at a time
15:20:30 i considered a COW buffer that is flushed at a scheduled interval...
15:25:26 <[tj]> I think it is a great idea, I'm just not sure you are going to find what you need
15:25:30 <[tj]> zfs is an option
15:25:53 <[tj]> I'm not really a filesystem person so I'm not sure if there is something slimmer you could try
15:26:06 <[tj]> sounds fun, and if I had the time for distractions...
15:26:22 i dream of one day understanding vfs
15:33:33 surely those huge minecraft server providers have some solution to that issue
15:43:20 Demosthenex, maybe spinning rust in a raid10 (or raid3) config would be better for that implementation.
15:45:27 CrtxReavr: i have hdd (spinning rust) in my zraid. the server data lives there, with zfs snapshots too. but in the past hdd was too slow for minecraft. the ramdisk is fast, aggregates the writes, and if it crashes i can discard the ramdisk and load from the zraid
15:46:26 if i could tell ZFS to cache writes in ram forever for a specific filesystem, and flush on command, that might work
15:48:02 while ssds have a specified limit for writes before they break, i'm not sure i want the write load going direct to the hdds either :P
15:48:19 but going from 20G to 25G of tmpfs in ram is awkward
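A minimal sketch of the protect(1) suggestion above, assuming the process to shield is already running; PID 1234 and the daemon path are placeholders, and -i extends the protection to future children:

    $ protect -i -p 1234                           # exempt PID 1234 and its future children from the OOM killer
    $ protect -i /usr/local/bin/critical-daemon    # or start a command with the protection already applied

And a rough sketch of the "memory disk as a compressed zpool" idea for the ramdisk discussion; the size, md unit, and pool name are placeholders, and a swap-backed md lets idle pages be paged out instead of pinning the whole 25G in RAM:

    $ mdconfig -a -t swap -s 25g -u 1              # swap-backed memory disk /dev/md1
    $ zpool create scratch md1
    $ zfs set compression=lz4 scratch
    $ zfs set atime=off scratch
    $ zfs set sync=disabled scratch                # throwaway copy; reload from the real zpool after a crash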
15:51:39 Ok, I am really confused... I am like 1-2 weeks late on doing this, I have been busy with personal issues. So pflog has shown some interesting things: the NAT works when there is no wireguard iface, the packet is passed out and comes back and is statefully passed to the epair whose other iface is within the jail.
15:51:41 Now "default" in the routing table is wlan0, and there is a 0.0.0.0/1 route created by wg-quick(8) for wg0. What I presume is that the state somewhere is being broken: my laptop has a default drop-all for inbound connections for obvious reasons, so no state means the packet passes through pf and, as expected by my ruleset, matches "block in on wg0" and is dropped. This does not happen when wlan0 is used without the wireguard tunnel. wg0 is not explicitly named, it is expanded
15:51:43 from "block log all" (technically the "all" isn't required), which means there is also an expanded rule for wlan0. But a stateful packet doesn't pass through the ruleset, and thus non-SYN TCP packets should ALWAYS pass...
15:51:56 sorry to interfere with others' issues :P
15:53:05 never, it's irc. jump in!
15:53:53 if you find the issue, please submit a bug report, but only if you can provide the audience with more details and show a willingness to help with troubleshooting
15:54:54 Here is the simplified pf ruleset I am using for debugging: https://bpa.st
15:54:56 hrm, i guess i could create a ram device, make it a zpool, make a compressed zfs filesystem....
15:55:06 https://bpa.st/TH3A
15:55:17 sorry, didn't copy it properly haha
15:55:49 Back in the DOS game days, I had a box with 64 MB of RAM (back when most people only had 4-16 MB).
15:56:43 sure, Demosthenex, please read about makefs(8), you will be able to do the above in batch mode
15:56:43 I could allocate a 32 MB ramdrive and xcopy the game directory over... drastically cut down on game/level loading/saving time.
16:03:42 hehe
16:12:00 so, is there an iostat for tmpfs?
16:28:50 hmm, anyone here know how to change the user-id of the From header in alpine? I read there should be a config option to change it but I do not see one; currently it uses the name I set, and then for the email my unix username with the domain I set... my unix user is different from my email user-id and I can't seem to change it :/
16:53:00 (It's unix, the user licenses are free.)
16:57:52 https://www.reddit.com/r/Crostini/comments/wufzch/alpine_email_client/
18:58:31 i could only comment on mutt
19:01:44 you need to look up what MTA alpine is using and look up that documentation
21:18:43 is there a way to recursively unmount filesystems with FreeBSD umount? Linux has --recursive but I can't seem to find a similar feature in BSD umount nor zfs-unmount
21:29:29 zfs-unmount has -R
21:29:48 er, dangit, that's just mount
21:31:22 exactly, rtprio
21:31:57 I'm trying to cleanly unmount zfs delegated datasets within a jail upon shutdown
21:33:39 so umount --recursive /tank/delegated/ would be great to unmount all child datasets a jail user may have created
21:40:18 time for a quick shell script it seems :)
21:49:59 how often is this something that you do?
21:51:35 zfs list -Ho name -r red | sort -r | xargs -n1 echo zfs umount
22:34:45 i think i did a booboo
22:35:26 i created a zfs pool using an entire nda disk, rather than creating a GPT and then putting the pool on a partition on that
22:35:54 now, that's fine by itself, but it seems the bootloader can't do anything with that.
23:12:55 yeah, you can't boot with that
23:15:57 right
23:19:58 would i be right in assuming that i also can't boot if the bootloader and the pool are on different disks?
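For the whole-disk pool problem at the end, a hedged sketch of one GPT layout the loader can boot from, assuming a UEFI machine; nda0, the sizes, and the labels are placeholders, and repartitioning the disk destroys the existing pool:

    $ gpart create -s gpt nda0
    $ gpart add -t efi -s 260m -l efiboot0 nda0    # ESP to hold the UEFI loader
    $ newfs_msdos /dev/nda0p1
    $ mount_msdosfs /dev/nda0p1 /mnt
    $ mkdir -p /mnt/efi/boot
    $ cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
    $ umount /mnt
    $ gpart add -t freebsd-zfs -l zfs0 nda0        # rest of the disk for the pool
    $ zpool create tank /dev/gpt/zfs0

For legacy BIOS boot the ESP would instead be a small freebsd-boot partition written with gptzfsboot; either way the loader needs a partition of its own, which a pool created on the bare disk doesn't leave room for.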