-
mason
Macer: Unless something changes, I'm guessing stable-14 is where we'd see it, and that doesn't show 2.3:
cgit.freebsd.org/src/tree/sys/contrib/openzfs/META?h=stable/14
-
nimaje
anyone else having problems with the atom feed for the ports repo? rssguard only says parsing error
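a quick way to check whether the feed itself is malformed is to run it through an XML well-formedness check; the cgit atom URL below is an assumption based on the repo paths mentioned above:

    # fetch the ports repo feed and validate it (xmllint comes with textproc/libxml2)
    fetch -o - https://cgit.freebsd.org/ports/atom/ | xmllint --noout -
    # a non-zero exit status (plus an error message) means the feed really is broken XML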
-
Demosthenex
so, i am curious if i can use a COW buffer on freebsd for my use case. i have a modest minecraft server which i run out of a 16GB ramdisk. i have a cron job that pauses saving, does a full save, and then rsyncs the data from ram to my ZFS raid, and resumes normal saving. minecraft is notorious for nonstop writes and having corrupt saves, so ramdisk has been a critical strategy. however the difference
-
Demosthenex
in size over 15 minutes is quite small. i was considering whether i could set up a COW buffer and only save the changes, and once every interval commit those changes to disk
-
Demosthenex
the goal being not to dedicate 16G of ram all the time, just the COW buffer. anything like that on freebsd?
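for context, a minimal sketch of the pause/flush/rsync cron job described above, assuming console access via rcon (mcrcon) and hypothetical paths and credentials:

    #!/bin/sh
    # pause minecraft's own saving, force a full save, copy the ramdisk down, resume
    # (/mnt/mcram, /tank/minecraft and $RCON_PASS are placeholders)
    mcrcon -H 127.0.0.1 -P 25575 -p "$RCON_PASS" "save-off" "save-all flush"
    sync
    rsync -a --delete /mnt/mcram/ /tank/minecraft/
    mcrcon -H 127.0.0.1 -P 25575 -p "$RCON_PASS" "save-on"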
-
Demosthenex
i kind of thought maybe some zfs filesystem with a no-write or write-to-ram flag, that only saves to disk when instructed
-
jmnbtslsQE
i'm no expert in zfs but i think it already writes to memory and then commits to disk periodically (unless SYNC is used in the write)
-
jmnbtslsQE
and one way to control it is vfs.zfs.txg.timeout
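for reference, that tunable defaults to 5 seconds; raising it just lets more dirty data sit in RAM between transaction group commits, at the cost of losing more on a crash:

    # current transaction group commit interval, in seconds
    sysctl vfs.zfs.txg.timeout
    # raise it at runtime (example value)
    sysctl vfs.zfs.txg.timeout=30
    # or persist it across reboots
    echo 'vfs.zfs.txg.timeout=30' >> /etc/sysctl.conf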
-
jmnbtslsQE
are you certain you need this separate memory disk? why not just use zfs on an SSD?
-
jmnbtslsQE
when you say 'corrupt saves', what does this mean for your approach / how does this require the memory disk?
-
jmnbtslsQE
but answering your question, i think zfs does that already as said above, but i could be wrong..
-
Demosthenex
two reasons for the memory disk. modded minecraft sometimes corrupts the save on a crash, so it's safer to have a good rollback copy that's 15 minutes old. by pausing the nonstop saving, flushing the save to the ramdisk, rsyncing that down to a real drive, and resuming saving to the ramdisk, you get a good clean copy
-
Demosthenex
when the server reloads, it'll automatically sync the real disk copy over the ramdisk, restoring to before the crash
-
jmnbtslsQE
hmm, can you do that with a zfs snapshot and rolling back to the snapshot?
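a sketch of that idea, using a hypothetical dataset name tank/minecraft:

    # take a snapshot of a known-good save every interval
    zfs snapshot tank/minecraft@good-$(date +%Y%m%d-%H%M)
    # after a crash with a corrupt save, roll the dataset back
    # (-r also destroys any snapshots newer than the one rolled back to)
    zfs rollback -r tank/minecraft@good-20240101-1200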
-
Demosthenex
second is that minecraft is latency sensitive on reads, and does a nonstop high volume of write IO. it has a reputation for destroying ssd's, and i've lost one before exactly that way
-
jmnbtslsQE
OK
-
Demosthenex
i'd rather queue up 15 minutes of writes in RAM, and save those changes to disk at the scheduled interval
-
Demosthenex
of course, with the option to discard the writes in ram ;]
-
Demosthenex
i do use zfs snapshots via zrepl to capture the save routinely
-
jmnbtslsQE
hmm, not sure then
-
Demosthenex
so, i'm not as concerned about impacting the longevity of the HDD array with ZRAID
-
Demosthenex
i could try moving it to just local on the filesystem, and zfs snapshot
-
jmnbtslsQE
i would definitely recommend that, except it doesn't sound like a spinning disk will handle your workload, or if it does, it sounds like you will lose more money in hdd wear than in ssd wear
-
jmnbtslsQE
you can also put your ssd into a raidz if your concern is the same as for hdd, i.e. availability of your ssd storage (rather than destroying the ssd per se)
-
jmnbtslsQE
(mirror would be better for this though)
-
jmnbtslsQE
disadvantage with multi-drive vdevs is then you are wearing out multiple drives at once
-
jmnbtslsQE
but at some point, for a demanding workload, resources need to be allocated/spent
-
Demosthenex
so again, that's an opportunity for write aggregation in ram to do its job
-
jmnbtslsQE
i think zfs already does this aggregation, but i don't have the details, and i admit you won't be able to control it with as much granularity
-
Demosthenex
ssd's have a limited write capacity. hdd's do have mechanical wear but not the same way. they will last much longer
-
jmnbtslsQE
well it will surely depend at least partly on the pattern of writes and throughput vs writes per second
-
jmnbtslsQE
but maybe HDD is acceptable, especially if zfs can do the in-memory buffering that you need
-
break19
Demosthenex: You're aware that the typical failure mode of an SSD is to become read-only, right? If a mechanical disk fails, your likelihood of recovering data from it is slim, by comparison
-
Demosthenex
break19: i am. i've had that happen repeatedly in the past. part of why i use a ramdisk now
-
Demosthenex
and for my zraid i have spares available
-
jmnbtslsQE
are these disks 7200 or 10k?
-
break19
Demosthenex: understood. I was just making sure you were aware. It's obvious when you think about it, but those of us who grew up with the old school spinnyspins don't "know it" instinctively, if you get my drift.
-
break19
In fact, I have an old "used up" SSD at work that, while we can't write to it or use it in a production system, is perfectly useful as a fast, cheap restore point. lol
-
Demosthenex
yeah. i've done raid since the 90's and play with flash arrays on real san's these days. this is my home hosting, so just WD red's
-
Demosthenex
the key here is i know i can cause too much drive stress, so i'm using a ramdisk to cope
-
Demosthenex
and given most of the data is idle, i was thinking maybe a form of COW could handle it
-
Demosthenex
or long delayed writes
-
nwe
good evening, what are ppl running for DE/WM?
-
ivy
nwe: i don't currently use freebsd on desktop but when i did i liked sway
-
nwe
ivy: will take a look at that :)
-
ivy
hikari also seems somewhat popular, i personally didn't like it. and a lot of people use KDE
-
ivy
(unless i am mistaken, i feel like KDE is better maintained on freebsd than GNOME)
-
nwe
ivy: why did you move away from FreeBSD as a desktop OS?
-
ivy
nwe: i repurposed my freebsd desktop into something else and my other desktop is an M1 Mac which freebsd doesn't support well, so i went back to macOS
-
nwe
ah I see!
-
nwe
ivy: thanks for the response, I will take a look at sway.
-
Demosthenex
i run stumpwm and lots of terminals.
-
Demosthenex
i run freebsd as desktop and server, because i expect a unix workstation, not a wintendo.
-
MelMalik
forgive me for being an eternal fence rider
-
Demosthenex
emacs or vi. you have 30 seconds to comply.
-
Demosthenex
;]
-
MelMalik
Demosthenex, Nano.
-
nimaje
for some reason, whenever I end up in vi, I try to use some vim commands or movements
-
MelMalik
Sorry, I was alt-tabbed.
-
break19
vim. It's like emacs, but without the operating system.
-
nimaje
but the operating system part of emacs is the good part, you should replace the editor part, maybe with evil
-
MelMalik
emacs: the editor so bad they wrote an editor for it
-
cybercrypto
hey, honest question: why are people always so picky about using raidz1 (single parity, risk of losing all data during a resilver), yet i never see a single site recommending a mirror with more than 2 disks? (a 2-disk mirror has the same single-parity risk during a resilver).
-
ivy
cybercrypto: i don't know about sites' recommendations, but a 3-disk mirror has always been very common in enterprise environments
-
mason
cybercrypto: It's all two-way mirrors or raidz2 here. I think I did a three-way mirror once on a box without four disks.
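for completeness, a three-way mirror is just one more disk in the vdev (device names below are placeholders):

    # two-way mirror: survives one failure, resilvers with no redundancy left
    zpool create tank mirror da0 da1
    # three-way mirror: survives two failures, so a resilver still runs with redundancy intact
    zpool create tank mirror da0 da1 da2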
-
ivy
if you care that much about uptime though, draid is also worth a look (i think we have that now via openzfs)
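a minimal draid sketch, assuming 11 placeholder disks; draid2:1s here means double parity with one distributed hot spare (data group width left at its default), and that distributed spare is what makes post-failure rebuilds much faster:

    # double-parity draid with one distributed spare spread across 11 disks
    zpool create tank draid2:1s da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10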
-
cybercrypto
ivy: I agree.
-
break19
cybercrypto: You have to measure data security vs data availability. If I had more, and bigger drives, I would be using raidz2. Right now I'm running 3x1TB in a raidz1, with it soon to be swapped to raidz2 with 4x3TB drives
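a sketch of those two layouts with placeholder device and dataset names; the parity level of an existing raidz vdev can't be changed in place, so the usual path is to build the new pool and replicate the data over:

    # current: 3x 1TB, single parity
    zpool create tank raidz1 da0 da1 da2
    # planned: 4x 3TB, double parity, built as a separate pool
    zpool create newtank raidz2 da3 da4 da5 da6
    # replicate the data across (tank/media is a placeholder dataset)
    zfs snapshot -r tank/media@migrate
    zfs send -R tank/media@migrate | zfs recv -u newtank/media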
-
cybercrypto
draid sounds promising, it's quite a great idea/concept.
-
break19
Yep, and if I had 10 drives (and a box for said 10 drives) I'd use it. As it stands, my data storage desires far outweigh my data storage budget. :)
-
cybercrypto
break19: 8-D
-
break19
I currently have 3 of the 4 3TiB drives sitting in boxes awaiting the final drive and the controller and mini-sas cables.
-
break19
under $20 USD each for the drives, all new. all are HGST SAS drives, but sourced from a couple of different places (because that's what I was always taught to do when building a new raid)
-
break19
they're older 6gbps SAS, rather than the newer 12G standard, but for my needs, that's plenty. It's a Plex server, and there's only three of us in the household, and it's rare that we have more than one stream going at a time, and more than two will probably never happen. About the only time the 17yr old watches movies, she's watching whatever the wife is watching.
-
duncan
the older LSI controllers were massive power hogs and would increase power consumption further by not allowing the CPU to enter low C-states. the hard drives consume power themselves, but the controller is one more thing to watch out for