05:20:59 Macer: Unless something changes, I'm guessing stable-14 is where we'd see it, and that doesn't show 2.3: https://cgit.freebsd.org/src/tree/sys/contrib/openzfs/META?h=stable/14
12:27:17 anyone else having problems with the atom feed for the ports repo? rssguard only says parsing error
16:49:04 so, i am curious if i can use a COW buffer on freebsd for my use case. i have a modest minecraft server which i run out of a 16GB ramdisk. i have a cron job that pauses saving, does a full save, and then rsyncs the data from ram to my ZFS raid, and resumes normal saving. minecraft is notorious for nonstop writes and having corrupt saves, so the ramdisk has been a critical strategy. however the difference
16:49:10 in size over 15 minutes is quite small. i was considering whether i could set up a COW buffer and only save the changes, and once every interval commit those changes to disk
16:51:17 the goal being not to dedicate 16G of ram all the time, just the COW buffer. anything like that on freebsd?
16:52:36 i kind of thought maybe some zfs filesystem with a no-write or write-to-ram flag, that only saves to disk when instructed
16:53:36 i'm not an expert in zfs but i think it already writes to memory and then commits to disk periodically (unless SYNC is used in the write)
16:53:47 and one way to control it is vfs.zfs.txg.timeout
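As a point of reference for the tunable just mentioned: on FreeBSD with OpenZFS, vfs.zfs.txg.timeout is the maximum number of seconds dirty data sits in memory before a transaction group is committed to disk (the default is 5). A minimal sketch of inspecting and raising it; the value 30 is purely illustrative and not something suggested in the conversation:

    # Show the current transaction-group commit interval (default is 5 seconds).
    sysctl vfs.zfs.txg.timeout

    # Hold dirty data in memory longer between commits.
    # 30 is an arbitrary illustrative value, not a recommendation.
    sysctl vfs.zfs.txg.timeout=30

    # Persist the setting across reboots.
    echo 'vfs.zfs.txg.timeout=30' >> /etc/sysctl.conf

A longer interval means more data is lost on a crash or power failure, which is the trade-off the rest of the thread is weighing.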
16:54:07 are you certain you need this separate memory disk? why not just use zfs on an SSD?
16:55:00 when you say 'corrupt saves', what does this mean for your approach / how does this require the memory disk?
16:56:47 but answering your question, i think zfs does that already as said above, but i could be wrong..
16:58:29 two reasons for memory. modded minecraft sometimes corrupts the save on crash, so it's safer to have a good rollback copy 15 minutes old. by pausing the nonstop saving, flushing the save to the ramdisk, rsyncing that down to a real drive, and resuming saving to the ramdisk, you get a good clean copy
16:58:48 when the server reloads, it'll automatically sync the real disk copy over the ramdisk, restoring to before the crash
16:59:06 hmm, can you do that with a zfs snapshot and rolling back to the snapshot?
16:59:21 second is that minecraft is latency sensitive on reads, and does a nonstop high volume of write IO. it has a reputation for destroying SSDs, and i've lost one before exactly that way
16:59:35 OK
16:59:53 i'd rather queue up 15 minutes of writes in RAM, and save those changes to disk at the scheduled interval
17:00:03 of course, with the option to discard the writes in ram ;]
17:00:43 i do use zfs snapshots via zrepl to capture the save routinely
17:02:14 hmm, not sure then
17:05:28 so, i'm not as concerned about impacting the longevity of the HDD array with ZRAID
17:05:52 i could try moving it to just local on the filesystem, and zfs snapshot
17:07:54 i would definitely recommend that, except it doesn't sound like a spinning disk will handle your workload, or if it does, it sounds like you will lose more money in hdd wear than in ssd wear
17:08:55 you can also put your ssd into a raidz if your concern is the same as for hdd, i.e. availability of your ssd storage (rather than destroying the ssd per se)
17:09:11 (mirror would be better for this though)
17:09:58 disadvantage with multi-drive vdevs is then you are wearing out multiple drives at once
17:10:18 but at some point, for a demanding workload, resources need to be allocated/spent
17:13:49 so again, that's an opportunity for write aggregation in ram to do its job
17:14:50 i think zfs already does this aggregation, but i don't have the details, and i admit you won't be able to control it with as much granularity
17:15:11 SSDs have a limited write capacity. HDDs do have mechanical wear, but not in the same way, and they will last much longer
17:17:49 well it will surely depend at least partly on the pattern of writes and throughput vs writes per second
17:18:17 but maybe HDD is acceptable, especially if zfs can do the in-memory buffering that you need
17:22:45 Demosthenex: You're aware that the typical failure mode of an SSD is to become read-only, right? If a mechanical disk fails, your likelihood of recovering data from it is slim, by comparison
17:26:28 break19: i am. i've had that happen repeatedly in the past. part of why i use a ramdisk now
17:26:43 and for my zraid i have spares available
17:28:29 are these disks 7200 or 10k?
17:28:55 Demosthenex: understood. I was just making sure you were aware. It's obvious when you think about it, but those of us who grew up with the old school spinnyspins don't "know it" instinctively, if you get my drift.
17:31:19 In fact, I have an old "used up" SSD at work that, while we can't write to it or use it in a production system, is perfectly useful as a fast, cheap restore point. lol
17:54:48 yeah. i've done raid since the 90's and play with flash arrays on real SANs these days. this is my home hosting, so just WD Reds
17:55:13 the key here is i know i can cause too much drive stress, so i'm using a ramdisk to cope
17:55:29 and given most of the data is idle, i was thinking maybe a form of COW could handle it
17:55:41 or long delayed writes
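The pause/flush/rsync/resume routine described above could look roughly like the sketch below. The paths, the tmux session name ("mc"), and the dataset name ("tank/minecraft") are placeholders rather than details taken from the conversation; it also assumes the server console is reachable through tmux and that the stock Minecraft save-off/save-all/save-on console commands are available.

    #!/bin/sh
    # Rough sketch of the periodic flush from the ramdisk to the ZFS pool.
    # All names below are hypothetical placeholders.
    RAMDISK=/mnt/mc-ram            # the 16GB ramdisk holding the live world
    ARCHIVE=/tank/minecraft/world  # on-disk copy on the ZFS raid

    # Pause Minecraft's continuous autosaving and force a full, consistent save.
    tmux send-keys -t mc 'save-off' Enter
    tmux send-keys -t mc 'save-all flush' Enter
    sleep 10                       # give the server time to finish flushing

    # Copy only what changed during the last interval down to the pool.
    rsync -a --delete "$RAMDISK/" "$ARCHIVE/"

    # Optionally snapshot the on-disk copy so there is a known-good rollback point.
    zfs snapshot "tank/minecraft@$(date +%Y%m%d-%H%M)"

    # Resume normal autosaving to the ramdisk.
    tmux send-keys -t mc 'save-on' Enter

With a snapshot taken at each interval, recovering from a corrupted save is a matter of rolling back (or rsyncing the on-disk copy back over the ramdisk, as the existing setup already does on restart).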
19:54:42 good evening, what do people run for a DE/WM?
20:07:02 nwe: i don't currently use freebsd on desktop but when i did i liked sway
20:09:54 ivy: will take a look at that :)
20:10:48 hikari also seems somewhat popular, i personally didn't like it. and a lot of people use KDE
20:11:11 (unless i am mistaken, i feel like KDE is better maintained on freebsd than GNOME)
20:11:13 ivy: why did you move away from FreeBSD as a desktop OS?
20:12:01 nwe: i repurposed my freebsd desktop into something else and my other desktop is an M1 Mac which freebsd doesn't support well, so i went back to macOS
20:12:22 ah I see!
20:12:42 ivy: thanks for the response, I will take a look at sway.
20:20:29 i run stumpwm and lots of terminals.
20:20:49 i run freebsd as desktop and server, because i expect a unix workstation, not a wintendo.
20:29:30 forgive me for being an eternal fence rider
20:36:11 emacs or vi. you have 30 seconds to comply.
20:36:20 ;]
20:38:50 Demosthenex, Nano.
20:38:53 for some reason, whenever I end up in vi, I try to use vim commands or movements
20:38:56 Sorry, I was alt-tabbed.
21:46:40 vim. It's like emacs, but without the operating system.
21:47:46 but the operating system part of emacs is the good part, you should replace the editor part, maybe with evil
21:53:16 emacs: the editor so bad they wrote an editor for it
21:56:06 hey, honest question: why are people always so picky about using raidz1 (single parity, with the risk of losing all data during a resilver), yet I never see a single site recommending a mirror with more than 2 disks? (a 2-disk mirror carries the same single-redundancy risk during a resilver).
21:58:04 cybercrypto: i don't know about 'sites recommending' but a 3-disk mirror has always been very common in enterprise environments
21:58:08 cybercrypto: It's all two-way mirrors or raidz2 here. I think I did a three-way mirror once on a box without four disks.
21:58:32 if you care that much about uptime though, draid is also worth a lot (i think we have that now via openzfs)
21:58:49 s/a lot/a look
22:04:35 ivy: I agree.
22:05:30 cybercrypto: You have to weigh data security against data availability. If I had more, and bigger, drives, I would be using raidz2. Right now I'm running 3x1TB in a raidz1, soon to be swapped to raidz2 with 4x3TB drives
22:05:56 draid sounds promising, it is quite a great idea/concept.
22:07:50 Yep, and if I had 10 drives (and a box for said 10 drives) I'd use it. As it stands, my data storage desires far outweigh my data storage budget. :)
22:09:56 break19: 8-D
22:11:27 I currently have 3 of the 4 3TiB drives sitting in boxes awaiting the final drive, the controller, and mini-SAS cables.
22:13:29 under $20 USD each for the drives, all new. all are HGST SAS drives, but sourced from a couple of different places (because that's what I was always taught to do when building a new raid)
22:16:08 they're older 6Gbps SAS, rather than the newer 12G standard, but for my needs that's plenty. It's a Plex server, and there are only three of us in the household; it's rare that we have more than one stream going at a time, and more than two will probably never happen. About the only time the 17yr old watches movies, she's watching whatever the wife is watching.
23:41:59 the older LSI controllers were massive power hogs and would increase power consumption further by not allowing the CPU to enter low C-states. even though hard drives consume power, it is one thing to watch out for
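For concreteness, the pool layouts being compared in the raidz discussion above could be created roughly as follows. The pool name "tank" and the da0..da7 device names are hypothetical, and each command is an alternative layout for the same pool, not a sequence to run together:

    # Single-parity raidz1 (the layout being questioned): survives one disk failure.
    zpool create tank raidz1 da0 da1 da2

    # Double-parity raidz2: survives any two failures, at the cost of one more disk.
    zpool create tank raidz2 da0 da1 da2 da3

    # Three-way mirror: also survives two failures, and resilvers are plain copies.
    zpool create tank mirror da0 da1 da2

    # dRAID (available in OpenZFS 2.x): parity and data are distributed across all
    # children, and distributed spare capacity can be added for fast sequential
    # rebuilds. This is just the minimal draid2 syntax, not a tuned layout.
    zpool create tank draid2 da0 da1 da2 da3 da4 da5 da6 da7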