00:26:52 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=255271 is polyex's bug
00:26:54 Title: 255271 – Cosmetic: service -e lists sendmail if sendmail_enable="NONE"
02:04:39 Hey... quick and stupid question:
02:05:31 Let's say I create a zfs vdev with a single disk. Can I later grow this vdev into a mirror by adding a second disk?
02:06:31 and even further: can I then grow the vdev mirror (with 2 disks) into a 3-disk raidz vdev?
02:09:52 you can turn a non-redundant vdev into a mirror with zpool-attach(8)
02:10:32 you can't do the latter
02:12:50 llua: thanks for the prompt response.
02:13:47 llua: Can I turn a non-redundant vdev into a raidz (without the mirror step)?
02:14:04 no
02:15:29 I see. Let's say I do the first step (non-redundant into a mirror vdev) in a pool. Can I then add a new mirror vdev to that same pool (so it has 2 mirror vdevs using 2 disks each) and expand capacity?
02:18:20 with zpool-add(8) you can do that, only newly written data would be striped between the two vdevs tho
02:18:42 there is a #zfs channel too btw
02:19:55 good stuff, thanks.
03:12:25 install_from_index() in freebsd-update has been so slow, looping over install(1) for hours.
09:21:01 There is a way, using the geom_zero loadable module, to set up a raidz using only two disks - but I don't believe it's possible to do with only one.
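For reference, a minimal sketch of the attach/add steps discussed above; the pool and disk names (tank, da0-da3) are placeholders, not taken from the conversation:

    # attach a second disk to a single-disk (non-redundant) vdev, turning it into a mirror
    zpool attach tank da0 da1

    # later, grow the pool by adding a second two-disk mirror vdev;
    # only newly written data is striped across the two vdevs
    zpool add tank mirror da2 da3

    # verify the resulting layout and watch the resilver
    zpool status tank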
09:26:41 yay, i'm on 14.1, and discovered my hosting provider changed plans, and I now have double the CPUs at less cost :)
10:18:06 any idea why CPU and memory use of processes running in the linuxulator layer are not visible in htop?
10:18:15 I can see the process, and see its CPU load
10:18:22 but not in the bars at the top
10:18:38 all the CPUs there are at like 1%, while the process says 40% cpu load
10:42:02 yourfate: see if they appear in `top(1)` first, I'm guessing that htop is missing some logic to read the linux-flavoured processes
10:42:17 htop shows the process
10:42:24 but doesn't add it to the totals at the top
10:45:57 https://i.imgur.com/fcW6Qyr.png
10:46:26 or maybe I don't fully understand it
10:47:14 aaaah I think WCPU means "this process uses 40% of the cpu time being used rn"
10:47:19 not 40% of available CPU time
10:48:01 yourfate: WCPU is cumulative usage of all cpu cores
10:48:19 IIRC but man.freebsd.org/top will give a proper answer
10:48:32 soo, in theory, if there was only 1 task, which used, say, 20% of one CPU core, but b/c it's the only task running, WCPU would be 100%?
10:48:42 no
10:48:57 if you have 1 process running at 100% on 2 cores then:
10:49:09 top will show CPU0 100% user, CPU1 100% user
10:49:24 and CPU2-7 will be 0% user
10:49:34 and WCPU should show 200% for the gameserver
10:50:43 yourfate: there's a more accurate description in the manpage above, but my explanation is useful enough
10:50:47 WCPU = Weighted CPU
10:51:00 right, I just read that
10:51:23 ok, it seems the gameserver is just a lot more threaded than I had assumed, using all the cores roughly equally
10:52:12 the OS will try to spread the load across all cpus unless a program binds itself to specific cores intentionally
10:52:26 ** not entirely accurate, again, but reasonably useful
10:53:38 another thing, it says size ~7.8GB in there
10:53:44 which I assume is memory usage
11:20:56 size is the total size of the virtual memory mapped for the process
11:22:41 that also counts mmap()ed files, shared libraries etc
11:25:04 oooh that would explain it, I think this process uses mono :x
12:33:53 is raidz expansion something that is supposed to work with 14.1?
12:56:45 ridcully: no, i think this is only in 15.0
12:57:52 lw: thanks
16:05:23 so, looking at the output of `zfs list`: https://gitlab.com/-/snippets/3717175 I don't understand it. why are there so many different datasets mounted on `/`?
16:05:25 Title: zfs filesystem list ($3717175) · Snippets · GitLab
16:05:52 there are also a bunch of old snapshots I'm trying to get rid of
16:05:56 yourfate: they're not actually mounted - if you check the 'canmount' property it will be (i believe) 'noauto'. those are your different root filesystems created by bectl
16:05:57 but they say they are referenced by those
16:06:12 if you don't want them, use bectl to delete your old BEs, that should also delete the zfs filesystem
16:06:31 so, I guess those are snapshots created when system upgrades happened?
16:06:33 (if you use freebsd-update, i believe it creates a new BE each time you run it, so you can easily roll back if something goes wrong)
16:06:38 yes
16:06:44 nice feature
16:07:07 so 'bectl list' and 'bectl delete <name>' for the ones that aren't active, if you don't want them anymore
16:07:25 er, destroy, not delete
16:07:55 ye, I'll keep the one before now
16:07:59 and destroy the others I guess
16:08:07 actually to be pedantic, those aren't snapshots, they're complete filesystems (cloned from a snapshot), which is why they show in 'zfs list' by default
16:08:57 * kevans ponders a delete synonym for destroy
16:09:22 the BSD tools seem to like destroy
16:09:50 in this case i assume it's bectl destroy to match the zfs destroy command
16:10:04 I get really annoyed when I guess the wrong verb for commands. I haven't had a problem with bectl because I've worked on it, but I do wonder if others trip over it
16:10:27 yeah
16:10:29 bectl doesn't seem to provide autocomplete info :/
16:10:32 kevans: i think in this case i'm probably confused because it was "ludelete" on Solaris...
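A minimal sketch of that boot-environment cleanup; the BE name below is a placeholder, check your own `bectl list` output first:

    # list boot environments; the active one is flagged N (now) / R (on reboot)
    bectl list

    # destroy an old, inactive BE; -o also destroys its origin snapshot,
    # which is where most of the reclaimed space usually comes from
    bectl destroy -o 13.2-RELEASE-p3_2024-01-15_120000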
16:16:19 woah, deleting those freed up around 40 GB
16:16:31 quite a bit on a 160 GB server
16:16:50 that's a surprisingly large amount, do you store application data on / rather than on zfs datasets?
16:17:39 I guess /compat is the biggest thing
16:17:48 with the compatibility layer stuff / linuxulator
16:17:58 home is in its own dataset
16:18:27 i'd probably put compat in its own dataset too so that doesn't happen again, usually these BE filesystems are pretty small
16:18:37 I want to set up a whole-machine backup. if I create a snapshot of the zroot pool, and then back that up using restic backup, then destroy the snapshot, that seems decent?
16:18:38 like upgrading from pX to pY is going to be a few MB at most
16:21:10 here is the output of `bectl list`: https://gitlab.com/-/snippets/3717180
16:21:12 Title: bectl ist ($3717180) · Snippets · GitLab
16:21:23 that doesn't accumulate to 40GB I think
16:21:48 (output from before I deleted them)
16:22:24 i usually just do zfs destroy -R blah, where blah is the snapshot taken during upgrades (and -R being needed because it has a clone)
16:22:35 yourfate: it might do if you had included the snapshots in the list
16:22:48 hm?
16:23:13 'zfs list' only shows proper datasets by default - 'zfs list -t all' will also include snapshots, which can consume space
16:23:15 .oO(have we really been doing auto-BE since 13.1?)
16:23:24 time flies
16:23:53 kevans: 13.1 wasn't that long ago! lots of releases nowadays... feels like 14.0 only just came out and now there's 14.1
16:23:56 lw: I thought the bectl list would include the snapshot sizes.
16:24:17 I added the output of `zfs list -t snapshots` before the deletion to the last snippet.
16:24:33 those sizes match what bectl reports
16:25:28 back to what I originally planned: is doing a ZFS snapshot, then backing that up to a remote machine using restic backup a valid strategy?
16:37:28 oh damn, the snapshot contains nothing, I need to snapshot all the datasets individually? :/
16:46:38 yourfate: Or recursively.
16:48:52 wait, I can't find the snapshot I created
16:49:14 ah, b/c the dataset is not mounted anywhere?
16:52:29 but those I then have to find in all the different `.zfs` directories
16:52:34 I can't just back them up as one
16:52:44 I think?
16:55:16 you can recursively create snapshots, and you can use zfs send to stream them either to another zfs system or dump them to a file vs trying to rsync a bunch of .zfs/snapshot directories.
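A minimal sketch of that recursive snapshot / send workflow; the pool, snapshot name, remote host and target dataset are all placeholders:

    # snapshot every dataset in the pool in one go
    zfs snapshot -r zroot@backup-2024-06-01

    # snapshots don't show up in a plain `zfs list`
    zfs list -t snapshot

    # replicate the whole tree to another ZFS system (-R sends all descendants,
    # -u leaves the received datasets unmounted, -d keeps the dataset layout)
    zfs send -R zroot@backup-2024-06-01 | ssh backuphost zfs recv -u -d backup/zroot

    # drop the local snapshots when done
    zfs destroy -r zroot@backup-2024-06-01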
17:09:41 isley: but i'd still have to call zfs send on each one of the recursive snapshots
17:10:10 I can't just do a recursive zfs send for all the snapshots, can I?
17:25:28 yourfate: i have found tools like syncoid and sanoid to be quite helpful for zfs backups.. as they use zfs send and zfs recv
17:26:02 https://github.com/jimsalterjrs/sanoid
17:26:03 Title: GitHub - jimsalterjrs/sanoid: These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)
17:26:04 I can zfs-send right into restic, it can backup from stdin
17:26:14 yes it can
17:26:35 I generally like it and use it for all my other machines, so I thought I'd use it on the bsd server too
17:27:00 the backup target is a hetzner storage box, which I can ssh/sftp to.
17:27:36 yes
17:27:56 you can even tunnel through ssh and have compression and encryption on the fly
17:28:34 good things to learn, zfs send | zfs recv is the foundation.. after learning that, i went to sanoid/syncoid as i could have it scheduled and ssh tunnel to remote locations, that is all
17:29:51 I have zfs send/recv'd before
17:29:58 when I moved a zpool to a new drive
17:30:20 but I still have to zfs send every dataset snapshot manually
17:30:58 also, if I had the snapshot as an FS, I could then use restic mount to browse the old snapshots; if I use zfs send I cannot browse the data of old snapshots
17:40:46 yourfate: you can use -R
17:41:21 not sure if it's exactly what you're looking for though
17:42:24 that sounds nice
17:42:33 still can't browse the files then tho
17:42:57 I'd have to zfs receive it fully, can't just browse the mounted remote backup repo
17:44:00 you can browse the remote one just as you browse the local one
17:44:25 I wouldn't do zfs receive on the other end
17:44:53 I'd just `zfs send -R zroot@snap | restic backup` basically
17:45:18 so your snapshot ultimately is backed up as a file?
17:45:54 ye
17:46:06 but restic dedupes the file
17:46:12 as in, the content of the file
17:46:55 i think it's discouraged to back up zfs send streams as files over the long term because of possible changes in zfs that could render your file backup unusable or difficult to use, though these days i guess that's not as likely
17:47:51 if that dedup just deduplicates on a fixed block boundary, then it may not actually succeed in deduplicating at the logical level (since identical content across snapshots is not necessarily aligned that way)
17:48:49 but, backing up like this for a disaster recovery scenario, where you do it regularly and don't expect to ever use the data (and if you do, it's not too many months after you created it), is a good idea
17:49:16 i think the consensus is that you want to do regular backups into an actual zpool
17:50:02 but that does make it more difficult, so it's just a judgement to make. backing up to a file is not unacceptable
17:52:28 (also, in the case of incremental snapshots, if you back up to a file, any corruption or failure in just one incremental snapshot could make it impossible to use any of the ones which came after)
17:53:10 ye, that's why what I initially wanted to do was to back up the `.zfs/snapshot/...` directory
17:53:31 but those are strewn all over the FS when I do a recursive snapshot
17:53:39 I can't conveniently get those from one location
17:53:48 hmm, i haven't used .zfs much, but i don't think it changes the issue, which is what you are doing on the receiving side
17:54:07 it does, in there I just see the data as regular files
17:54:21 and can back them up like regular data, not some zfs send file
17:54:42 ah, you mean to back up the files on the filesystem themselves
17:54:52 ye
17:55:21 well i haven't used .zfs much, but another option would be to clone the snapshots in question, not sure if that is preferred for you
17:55:42 clone?
17:55:42 could surely be automated
17:55:51 clone so that you essentially mount the snapshot
17:56:00 ye, which is what the .zfs does
17:56:04 they are basically all mounted in there
17:56:08 but not recursively
17:58:10 looking at my system, it seems like you would need to go into the .zfs for each child dataset
17:58:17 that's the issue right?
17:58:41 yes!
17:58:53 b/c for example if I back up the root dataset, it won't include the homes
17:59:05 then I'd have to back up each individual home dataset
17:59:18 yeah, but you could automate this by getting the mountpoint property of each child
17:59:27 yes
17:59:41 that seems like it would make the backup script complex
18:02:13 i don't think it'd be too bad. in the case of a child dataset snapshot, just add the name to the name of the file you send to your backup service. the parent and children are all treated the same and you iterate based on the output of zfs list -o name,mountpoint -r -H or similar
18:02:53 you'd just need to figure out how to systematically identify your dataset name in the backup file. maybe replace / with something else depending on what your service accepts
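A minimal sketch of that per-dataset iteration, assuming a recursive snapshot already exists and a restic repository reachable over sftp; the pool, snapshot name and repository URL are placeholders:

    #!/bin/sh
    # walk every dataset under zroot and back up its .zfs/snapshot/<name> directory
    SNAP=backup-2024-06-01
    zfs list -H -o name,mountpoint -r zroot | while read -r name mnt; do
        # skip datasets without a usable mountpoint
        case "$mnt" in none|legacy|-) continue ;; esac
        [ -d "$mnt/.zfs/snapshot/$SNAP" ] || continue
        # tag each restic snapshot with the dataset name so restores stay unambiguous
        restic -r sftp:user@storagebox:backups backup \
            --tag "$name" "$mnt/.zfs/snapshot/$SNAP"
    done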
18:02:56 right. if there was a simple way to recursively mount a snapshot I'd just not have to parse anything
18:03:19 I found a tool called `zfsnapr` that does that
18:03:21 idk how good it is
18:03:43 yourfate: This discussion has gone on for long enough that I'd be grateful if you'd restate your goal.
18:05:00 mason: the plan is to back up a recursive snapshot of an entire zpool using a non-zfs-aware backup tool, in my case restic backup. For this it would be cool if I could recursively mount the snapshot as one filesystem, so I don't have to collect the data from the individual child snapshots from their respective `.zfs` directories
18:05:15 use sanoid/syncoid, and it'll take care of determining the most recent common snapshot and incrementally handle all the rest
18:05:33 ah.. non zfs-aware...
18:05:56 ye, I'm backing up to a storage service that is not zfs
18:06:07 otherwise it would be easy
18:07:11 to be exact, the target is a hetzner storage box, which I access over ssh/sftp
18:09:58 yourfate: I'm not sure I'm going to recommend this, but what about trying as a POC at least something like a file on the remote side, accessed via sshfs or something similar, used as backing storage for a pool to which you back things up with native ZFS tools?
18:10:16 The ZFS operations are all local then.
18:10:33 There are other ways you might make the file available, but sshfs comes to mind as arguably the simplest.
18:10:59 I already mount part of that box using sshfs, so this is an interesting idea. but idk how well that would handle connection issues etc
18:11:17 Well. ZFS should at least keep things sane if the connection breaks.
18:11:30 iscsi would be another option, albeit with somewhat more complexity.
18:11:55 Anyway, it's what I'd try first.
18:15:14 agreed
18:35:39 yourfate: Oh, forgot to mention, mdconfig is probably your friend here, in case it wasn't clear.
18:36:11 I'll look into it, but probably not now :D
20:00:37 Hey guys, can I use NULLFS to share the poudriere data directory with an apache jail?
22:56:22 devnull: I think so
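A minimal sketch of that nullfs idea; the poudriere data path, jail root and jail name are placeholders:

    # one-off, read-only nullfs mount of the package directory into the jail's tree
    mount -t nullfs -o ro /usr/local/poudriere/data/packages /usr/local/jails/www/usr/local/www/packages

    # or persistently via the jail's fstab (referenced by mount.fstab in jail.conf),
    # e.g. in /etc/fstab.www:
    # /usr/local/poudriere/data/packages  /usr/local/jails/www/usr/local/www/packages  nullfs  ro  0  0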