-
phryk
I vaguely remember there being a way to blacklist processes from getting oomkilled – anyone know what I have to search for?
-
phryk
building electron and friends keeps oomkilling central parts of my home infrastructure and it's really annoying.
-
[tj]
protect(1)
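[protect(1) exempts processes from the OOM killer. A hedged sketch (needs root; the PID and command are placeholders):

```
$ protect -p 1234      # mark an already-running process as exempt from the OOM killer
$ protect -i make      # start a command protected; -i lets its children inherit the flag
```

For services started through rc, there is also (if memory serves) a per-service `<name>_oomprotect` knob in rc.conf(5) that applies protect(1) automatically.]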
-
Dooshki
oh, freebsd does Linux's memory overcommitment strategy too? Dang, that's annoying
-
[tj]
I didn't think we did out of the box
-
[tj]
$ sysctl vm.overcommit
-
[tj]
vm.overcommit: 0
-
rtprio
yeah, def not enabled by default
-
Dooshki
hmmm, I thought the OOM Killer was an artifact of memory overcommitment; I guess it would also make sense if the kernel itself needs more memory just to operate and needs to reclaim it from running processes
-
[tj]
I think swap is memory
-
CrtxReavr
Swap is memory, yes. . . ish.
-
Demosthenex
still looking for a COW buffer option ;]
-
CrtxReavr
Don't make me cowsay in here!
-
Demosthenex
moooo!
-
Demosthenex
over the weekend i was chatting about an alternative to using a ramdisk, looking for an option to buffer writes against ZFS or another FS until a manual commit
-
[tj]
Demosthenex: I've never heard of someone doing this, I think you are in new territory
-
[tj]
you could format your memory disk as zfs and stream diffs to a file
-
CrtxReavr
/ex figlet mmmmm, Buffers | cowsay -n
-
Demosthenex
[tj]: today i host a large minecraft server on a ramdisk. it does a nonstop stream of tiny writes that is bad for ssds. i've made several ssds go read only after running out of writes in the past. i use a ramdisk and rsync the data down to my zpool every 15 minutes (save pause, full save, rsync, save resume). it's just that 90% of the ramdisk is idle data / wasted space
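[The 15-minute cycle described above could be a single crontab(5) entry; a sketch only — `mc-cmd` is a hypothetical wrapper that sends console commands to the server, and the paths are placeholders:

```
*/15 * * * * mc-cmd save-off && mc-cmd save-all && rsync -a /ram/world/ /tank/mc/world/ && mc-cmd save-on
```
]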
-
Demosthenex
i was just looking for a way to minimize the ram footprint while aggregating 15 minutes of writes at a time
-
Demosthenex
i considered a COW buffer that is flushed at a scheduled interval...
-
[tj]
I think it is a great idea, I'm not sure if you are going to find what you need
-
[tj]
zfs is an option
-
[tj]
I'm not really a filesystem person so I'm not sure if there is something slimmer you could try
-
[tj]
sounds fun and if I had the time for distractions...
-
ivy
i dream of one day understanding vfs
-
mfro
surely those huge minecraft server providers have some solution to that issue
-
CrtxReavr
Demosthenex, maybe spinning rust in a raid10 (or raid3) config would be better for that implementation.
-
Demosthenex
CrtxReavr: i have hdd (spinning rust) on my zraid. the server data lives there, with zfs snapshots too. but in the past hdd was slow for minecraft. ramdisk is fast, aggregates the writes, and if it crashes i can discard the ramdisk and load from zraid
-
Demosthenex
if i could tell ZFS to cache writes in ram forever for a specific filesystem, and flush on command, that may work
-
Demosthenex
while ssd's have a specified limit for writes before they break, i'm not sure i want the write load direct to the hdd's either :P
-
Demosthenex
but going from 20G to 25G of tmpfs in ram is awkward
-
polarian
Ok I am really confused... I am like 1-2 weeks late on doing this, I have been busy with personal issues. So pflog has shown some interesting things: the NAT works when there is no wireguard iface, the packet is passed out, comes back, and is statefully passed to the epair whose other iface is within the jail. Now "default" in the routing table is wlan0, but there is a 0.0.0.0/1 rule created
-
polarian
by wg-quick(8) for wg0. What I presume is that the state is being broken somewhere: my laptop drops all inbound connections by default for obvious reasons, so with no state the packet passes through pf, matches "block in on wg0" as my ruleset expects, and is dropped. This does not happen when wlan0 is used without the wireguard tunnel. wg0 is not explicitly named; it is expanded out
-
polarian
from "block log all" (technically the all isn't required), which means there is also an expanded rule for wlan0, but a stateful packet doesn't pass through pf and thus non-syn tcp packets should ALWAYS pass...
-
polarian
sorry to interfere with others' issues :P
-
Demosthenex
never, it's irc. jump in!
-
mzar
if you find the issue, please submit a bug report, but only if you can provide more details and are willing to help with troubleshooting
-
polarian
Here is the simplified pf rules for debugging which I am using
bpa.st
-
Demosthenex
hrm, i guess i could create a ram device, make it a zpool, make a compressed zfs filesystem....
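[That sequence, sketched as a transcript (needs root; `ramtank`/`mc` and the size are placeholders, and `-t swap` makes the ram device pageable rather than wired):

```
$ mdconfig -a -t swap -s 25g                   # swap-backed ram device, prints e.g. md0
$ zpool create ramtank md0
$ zfs create -o compression=zstd ramtank/mc    # compression shrinks the ram footprint
```
]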
-
polarian
-
polarian
sorry didn't copy it properly haha
-
CrtxReavr
Back in the DOS game days, I had a box with 64 MB of RAM (back when most people only had 4-16 MB). . .
-
mzar
sure, Demosthenex please read about makefs(8), you will be able to do the above in batch mode
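[makefs(8) builds a filesystem image from a directory tree offline, which is what makes the batch mode possible; a hedged sketch (filenames are placeholders, and recent FreeBSD also grew `-t zfs` support):

```
$ makefs -t ffs minecraft.img /staging/dir    # build the image in one shot, no mount needed
$ mdconfig -f minecraft.img                   # attach it, prints e.g. md0
$ mount /dev/md0 /mnt
```
]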
-
CrtxReavr
I could allocate a 32 MB ramdrive and xcopy the game directory over. . . drastically cut down on game/level loading/saving time.
-
Demosthenex
hehe
-
Demosthenex
so, is there an iostat for tmpfs?
-
polarian
hmm, anyone here know how to change the user-id of the from header in alpine? I read there should be a config option to change it but I do not see one. currently it uses the name I set, and then for the email my unix username with the domain I set... my unix user is different from my email user-id and I can't seem to change it :/
-
CrtxReavr
(It's unix, the user licenses are free.)
-
Demosthenex
i could only comment on mutt
-
rtprio
you need to look up what mta alpine is using and look up that documentation
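[If it helps: alpine can usually override the From header in its own config rather than at the MTA. A ~/.pinerc sketch from memory — verify the option names in Setup > Config, and the address is a placeholder:

```
customized-hdrs=From: Your Name <you@example.com>
feature-list=allow-changing-from
```
]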
-
veg
is there a way to recursively unmount filesystems with FreeBSD umount? Linux has --recursive but I can't seem to find a similar feature in BSD umount nor zfs-unmount
-
rtprio
zfs-unmount has -R
-
rtprio
er, dangit that's just mount
-
veg
exactly, rtprio
-
veg
I'm trying to cleanly unmount zfs delegated datasets within a jail upon shutdown
-
veg
so umount --recursive /tank/delegated/ would be great to unmount all child datasets a jail user may have created
-
thedaemonAtWork
time for a quick shell script it seems :)
-
rtprio
how often is this something that you do?
-
rtprio
zfs list -Ho name -r red |sort -r |xargs -n1 echo zfs umount
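[The reverse sort is what makes that one-liner safe: a child dataset's name is always a lexical extension of its parent's, so `sort -r` yields the deepest datasets first, which is the order unmounting needs. A portable demonstration (`tank` names are placeholders):

```shell
# Deepest-first ordering falls out of a plain reverse sort.
printf '%s\n' tank tank/a tank/a/b | sort -r
# prints: tank/a/b, tank/a, tank
```

Drop the `echo` from the original one-liner once the dry run looks right.]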
-
MelMalik
i think i did a booboo
-
MelMalik
i created a zfs pool using an entire nda disk, rather than creating a GPT and then putting the pool on a partition on that
-
MelMalik
now, that's fine by itself, but it seems the bootloader can't do anything with that.
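[For reference, the layout the bootloader can work with is roughly this (hedged sketch; device name, size, and labels are placeholders, and the EFI partition still needs the loader copied onto it):

```
$ gpart create -s gpt nda0
$ gpart add -t efi -s 260m -l efiboot nda0     # EFI system partition for the loader
$ gpart add -t freebsd-zfs -l zroot nda0       # the pool goes on this partition
```
]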
-
rtprio
yeah, you can't boot with that
-
MelMalik
right
-
MelMalik
would i be right in assuming that i also can't boot if the bootloader and the pool are on different disks?