-
saper
-
VimDiesel
Title: 255271 – Cosmetic: service -e lists sendmail if sendmail_enable="NONE"
-
cybercrypto
Hey... quick and stupid question:
-
cybercrypto
Let's say I create a zfs vdev with a single disk. Can I later grow this vdev into a mirror by adding a second disk?
-
cybercrypto
and even further: can I then grow the mirror vdev (with 2 disks) into a 3-disk raidz vdev?
-
llua
you can turn a non-redundant vdev into a mirror with zpool-attach(8)
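-
[A minimal sketch of that zpool-attach(8) step, assuming a hypothetical pool "tank" whose only vdev is the single disk da0, with da1 as the newly added disk:]
    zpool attach tank da0 da1    # da0/da1 become a two-way mirror; existing data resilvers onto da1
    zpool status tank            # watch the resilver progress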
-
llua
you can't do the latter
-
cybercrypto
llua: thanks for the prompt response.
-
cybercrypto
llua: Can I turn a non-redundant vdev into a raidz (without the mirror step)?
-
llua
no
-
cybercrypto
I see. Let's say I do the first step (non-redundant into a mirror vdev) in a pool. Can I then add a new mirror vdev to that same pool (2 mirror vdevs using 2 disks each) and expand capacity?
-
llua
with zpool-add(8) you can do that, only newly written data would be striped between the two vdevs tho
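-
[A sketch of that zpool-add(8) step, reusing the hypothetical pool "tank" with two more disks da2/da3; existing data stays on the first vdev, only new writes stripe across both:]
    zpool add tank mirror da2 da3    # pool now has two 2-disk mirror vdevs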
-
llua
there is a #zfs channel too btw
-
cybercrypto
good stuff, thanks.
-
llua
install_from_index() in freebsd-update has been so slow, looping over install(1) for hours.
-
debdrup
There is a way, using the geom_zero loadable module, to set up a raidz using only two disks - but I don't believe it's possible to do with only one.
-
yourfate
yay, i'm on 14.1, and discovered my hosting provider changed plans, and I now have double the CPUs at less cost :)
-
yourfate
any idea why CPU and memory use of processes running in the linuxulator layer are not visible in htop?
-
yourfate
I can see the process, and see its CPU load
-
yourfate
but not in the bars at the top
-
yourfate
all the CPUs there are at like 1%, while the process says 40% cpu load
-
dch
yourfate: see if they appear in `top(1)` first, I'm guessing that htop is missing some logic to read the linux flavoured processes
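-
[FreeBSD's top(1) can also show one usage line per core, which makes it easier to compare a single process against the machine as a whole:]
    top -P    # per-CPU statistics instead of the single aggregate CPU line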
-
yourfate
htop shows the process
-
yourfate
but doesn't add it to the totals at the top
-
yourfate
or maybe I don't fully understand it
-
yourfate
aaaah I think WCPU means "this process uses 40% of the cpu time being used rn"
-
yourfate
not 40% of available CPU time
-
dch
yourfate: WCPU is cumulative usage of all cpu cores
-
dch
IIRC but man.freebsd.org/top will give a proper answer
-
yourfate
so, in theory, if there was only 1 task, which used, say, 20% of one CPU core, but b/c it's the only task running, WCPU would be 100%?
-
dch
no
-
dch
if you have 1 process running at 100% on 2 cores then:
-
dch
top will show CPU0 100% user, CPU1 100% user
-
dch
and CPU2-7 will be 0% user
-
dch
and WCPU should show 200% for gameserver
-
dch
yourfate: there's a more accurate description in the manpage above, but my explanation is useful enough
-
dch
WCPU = Weighted CPU
-
yourfate
right, I just read that
-
yourfate
ok, it seems the gameserver is just a lot more threaded than I had assumed, using all the cores roughly equally
-
dch
the OS will try to spread the load across all cpus unless a program binds itself to specific cores intentionally
-
dch
** not entirely accurate, again, but reasonably useful
-
yourfate
another thing, it says size ~7.8GB in there
-
yourfate
which I assume is memory usage
-
babz
size is the total size of the virtual memory mapped for the process
-
babz
that also counts mmap()ed files, shared libraries etc
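-
[To see the distinction babz describes for one process, ps(1) can print both numbers; 1234 is a hypothetical PID:]
    ps -o pid,vsz,rss,comm -p 1234    # vsz = total mapped virtual memory, rss = pages actually resident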
-
yourfate
oooh that would explain it, I think this process uses mono :x
-
ridcully
is raidz expansion something that is supposed to work with 14.1?
-
lw
ridcully: no, i think this is only in 15.0
-
ridcully
lw: thanks
-
yourfate
so, looking at the output of `zfs list`:
gitlab.com/-/snippets/3717175 I don't understand it. why are there so many different datasets mounted on `/`?
-
VimDiesel
Title: zfs filesystem list ($3717175) · Snippets · GitLab
-
yourfate
there are also a bunch of old snapshots I'm trying to get rid of
-
lw
yourfate: they're not actually mounted - if you check the 'canmount' property it will be (i believe) 'noauto'. those are your different root filesystems created by bectl
-
yourfate
but they say they are referenced by those
-
lw
if you don't want them, use bectl to delete your old BEs, that should also delete the zfs filesystem
-
yourfate
so, I guess those are snapshots created when system upgrades happened?
-
lw
(if you use freebsd-update, i believe it creates a new BE each time you run it, so you can easily roll back if something goes wrong)
-
lw
yes
-
yourfate
nice feature
-
lw
so 'bectl list' and 'bectl delete <name>' for the ones that aren't active, if you don't want them anymore
-
lw
er, destroy, not delete
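-
[A sketch of that cleanup; the BE name here is hypothetical and would be taken from the list output:]
    bectl list                                      # shows boot environments; the active one is flagged
    bectl destroy 14.0-RELEASE_2024-05-01_120000    # remove an old, inactive BE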
-
yourfate
ye, I'll keep the one from before for now
-
yourfate
and destroy the others I guess
-
lw
actually to be pedantic, those aren't snapshots, they're complete filesystems (cloned from a snapshot) which is why they show in 'zfs list' by default
-
» kevans ponders a delete synonym for destroy
-
yourfate
the BSD tools seem to like destroy
-
lw
in this case i assume it's bectl destroy to match the zfs destroy command
-
kevans
I get really annoyed when I guess the wrong verb for commands. I haven't had a problem with bectl because I've worked on it, but I do wonder if others trip over it
-
kevans
yeah
-
yourfate
bectl doesn't seem to provide autocomplete info :/
-
lw
kevans: i think in this case i'm probably confused because it was "ludelete" on Solaris...
-
yourfate
woah, deleting those freed up around 40 GB
-
yourfate
quite a bit on a 160 gb server
-
lw
that's a surprisingly large amount, do you store application data on / rather than on zfs datasets?
-
yourfate
I guess /compat is the biggest thing
-
yourfate
with the compatibility layer stuff / linuxlator
-
yourfate
home is in its own dataset
-
lw
i'd probably put compat in its own dataset too so that doesn't happen again, usually these BE filesystems are pretty small
-
yourfate
I want to set up a whole-machine backup. if I create a snapshot of the zroot pool, and then back that up using restic backup, then destroy the snapshot, that seems decent?
-
lw
like upgrading from pX to pY is going to be a few MB at most
-
yourfate
here is the output of `bectl list`:
gitlab.com/-/snippets/3717180
-
VimDiesel
Title: bectl ist ($3717180) · Snippets · GitLab
-
yourfate
that doesn't accumulate to 40GB I think
-
yourfate
(output from before I deleted them)
-
scoobybejesus
i usually just do zfs destroy -R blah, where blah is the snapshot taken during upgrades (and -R being needed because it has a clone)
-
lw
yourfate: it might do if you had included the snapshots in the list
-
yourfate
hm?
-
lw
'zfs list' only shows proper datasets by default - 'zfs list -t all' will also include snapshots, which can consume space
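-
[For example, to see which snapshots are holding space, sorted by usage:]
    zfs list -t snapshot -o name,used,refer -s used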
-
kevans
.oO(have we really been doing auto-BE since 13.1?)
-
kevans
time flies
-
lw
kevans: 13.1 wasn't that long ago! lots of releases nowadays... feels like 14.0 only just came out and now there's 14.1
-
yourfate
lw: I thought the bectl list would include the snapshot sizes.
-
yourfate
I added the output of `zfs list -t snapshots` before the deletion to the last snippet.
-
yourfate
those sizes match what bectl reports
-
yourfate
back to what I originally planned: is doing a ZFS snapshot, then backing that up to a remote machine using restic backup a valid strategy?
-
yourfate
oh damn, the snapshot contains nothing, I need to snapshot all the datasets individually? :/
-
ek
yourfate: Or recursively.
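-
[For instance, one recursive command snapshots zroot and every dataset under it; the snapshot name is hypothetical:]
    zfs snapshot -r zroot@backup-2024-06-18
    zfs list -t snapshot    # every child dataset now has its own @backup-2024-06-18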
-
yourfate
wait, I can't find the snapshot I created
-
yourfate
ah b/c the dataset is not mounted anywhere?
-
yourfate
but those I then have to find in all the different `.zfs` directories
-
yourfate
I can't just back them up as one
-
yourfate
I think?
-
isley
you can recursively create snapshots, and you can use zfs send to stream them either to another zfs system or dump them to a file vs trying to rsync a bunch of .zfs/snapshot directories.
-
yourfate
isley: but i'd still have to call the zfs send on each one of the recursive snapshots
-
yourfate
I can't just do a recursive zfs send for all the snapshots, can I?
-
voy4g3r2
yourfate: i have found tools like sanoid and syncoid to be quite helpful for zfs backups.. as they use zfs send and zfs recv
-
voy4g3r2
-
VimDiesel
Title: GitHub - jimsalterjrs/sanoid: These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)
-
yourfate
I can zfs-send right into restic, it can back up from stdin
-
voy4g3r2
yes it can
-
yourfate
I generally like it and use it for all my other machines, so I thought I'd use it on the bsd server too
-
yourfate
the backup target is a hetzner storage box, which I can ssh/sftp to.
-
voy4g3r2
yes
-
voy4g3r2
you can even tunnel through ssh and have compression on the fly, and it's encrypted
-
voy4g3r2
good things to learn, zfs send | zfs recv is the foundation.. after learning that, i went to sanoid/syncoid as i could have it scheduled and ssh tunnel to remote locations, that is all
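-
[A rough sketch of that foundation over ssh, with hypothetical snapshot, host and target-pool names:]
    zfs send -R zroot@backup-2024-06-18 | ssh backup-host zfs recv -Fdu tank/zroot-backup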
-
yourfate
I have zfs send/recv'd before
-
yourfate
when I moved a zpool to a new drive
-
yourfate
but I still have to zfs send every dataset snapshot manually
-
yourfate
also, if I had the snapshot as an FS, I can then use the restic mount to browse the old snapshots, if I use zfs send I cannot browse the data of old snapshots
-
jmnbtslsQE
yourfate: you can use -R
-
jmnbtslsQE
not sure if it's exactly what you're looking for though
-
yourfate
that sounds nice
-
yourfate
still can't browse the files then tho
-
yourfate
I'd have to zfs receive it fully, can't just browse the mounted remote backup repo
-
jmnbtslsQE
you can browse the remote one just as you browse the local one
-
yourfate
I wouldn't do zfs receive on the other end
-
yourfate
I'd just `zfs send -R zroot@snap | restic backup` basically
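-
[Spelled out with a hypothetical sftp repository and snapshot name, using restic's stdin options:]
    zfs send -R zroot@backup-2024-06-18 | \
        restic -r sftp:user@storagebox:/backups backup --stdin --stdin-filename zroot.zfs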
-
jmnbtslsQE
so your snapshot ultimately is backed up as a file?
-
yourfate
ye
-
yourfate
but restic dedupes the file
-
yourfate
as in, the content of the file
-
jmnbtslsQE
i think it's discouraged to back up zfs send streams as files over the long term because of possible changes in zfs that could render your file backup unusable or difficult to use, though these days i guess that's not as likely
-
jmnbtslsQE
if that dedup just deduplicates on a fixed block boundary, then it may not actually succeed in deduplicating at the logical level (since identical content across snapshots is not necessarily aligned that way)
-
jmnbtslsQE
but, backing up like this for a disaster recovery scenario, where you do it regularly and don't expect to ever use the data (and if you do it's not too many months after you created it), is a good idea
-
jmnbtslsQE
i think the consensus is that you want to do regular backups into an actual zpool
-
jmnbtslsQE
but that does make it more difficult, so it's just a judgement to make. backing up to a file is not unacceptable
-
jmnbtslsQE
(also, in the case of incremental snapshots, if you backup to file, any corruption or failure in just one incremental snapshot could make it impossible to use any of the ones which came after)
-
yourfate
ye, that's why what I initially wanted to do was to backup the `.zfs/snapshot/...` directory
-
yourfate
but those are strewn all over the FS when I do a recursive snapshot
-
yourfate
I can't conveniently get those from one location
-
jmnbtslsQE
hmm, i haven't used .zfs much, but i don't think it changes the issue, which is what you are doing on the receiving side
-
yourfate
it does, in there I just see the data as regular files
-
yourfate
and can back them up like regular data, not some zfs send file
-
jmnbtslsQE
ah, you mean to backup the files on the filesystem themselves
-
yourfate
ye
-
jmnbtslsQE
well i haven't used .zfs much, but another option would be to clone the snapshots in question, not sure if that is preferred for you
-
yourfate
clone?
-
jmnbtslsQE
could surely be automated
-
jmnbtslsQE
clone so that you essentially mount the snapshot
-
yourfate
ye, which is what the .zfs does
-
yourfate
they are basically all mounted in there
-
yourfate
but not recursively
-
jmnbtslsQE
looking at my system, it seems like you would need to go into the .zfs for each child dataset
-
jmnbtslsQE
that's the issue right?
-
yourfate
yes!
-
yourfate
b/c for example if I backup the root dataset, it won't include the homes
-
yourfate
then I'd have to back up each individual home dataset
-
jmnbtslsQE
yeah, but you could automate this by getting the mountpoint property of each child
-
yourfate
yes
-
yourfate
that seems like it would make the backup script complex
-
jmnbtslsQE
i don't think it'd be too bad. in the case of a child dataset snapshot, just add the dataset name to the name of the file you send to your backup service. the parent and children are all treated the same and you iterate based on the output of zfs list -o name,mountpoint -r -H or similar
-
jmnbtslsQE
you'd just need to figure out how to systematically identify your dataset name in the backup file. maybe replace / with something else depending on what your service accepts
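-
[A rough sketch of iterating over the child datasets as suggested, but backing up each one's `.zfs` snapshot directory with restic (yourfate's file-level approach) rather than send streams; it assumes a recursive snapshot named "backup" already exists and that the restic repository is configured elsewhere, and all names are hypothetical:]
    #!/bin/sh
    SNAP=backup
    zfs list -H -r -o name,mountpoint zroot | while read ds mnt; do
        case "$mnt" in
        none|legacy|-) continue ;;                  # skip datasets without a usable mountpoint
        esac
        dir="$mnt/.zfs/snapshot/$SNAP"
        [ -d "$dir" ] || continue
        # tag each run with the dataset name, / replaced so it stays one token
        restic backup --tag "$(echo "$ds" | tr '/' '_')" "$dir"
    done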
-
yourfate
right. if there was a simple way to recursively mount a snapshot I'd just not have to parse anything
-
yourfate
I found a tool called `zfsnapr` that does that
-
yourfate
idk how good it is
-
mason
yourfate: This discussion has gone on for long enough that I'd be grateful if you'd restate your goal.
-
yourfate
mason: the plan is to back up a recursive snapshot of an entire zpool using a non-zfs-aware backup tool, in my case restic backup. For this it would be cool if I could recursively mount the snapshot as one filesystem, so I don't have to collect the data from the individual child snapshots from their respective `.zfs` directories
-
scoobybejesus
use sanoid/syncoid, and it'll take care of determining the most recently common snapshot and incrementally handle all the rest
-
scoobybejesus
ah.. non zfs-aware...
-
yourfate
ye, I'm backing up to a storage service that is not zfs
-
yourfate
otherwise it would be easy
-
yourfate
to be exact, the target is a hetzner storage box, which I access over ssh/sftp
-
mason
yourfate: I'm not sure I'm going to recommend this, but what about trying as a POC at least something like a file on the remote side, accessed via sshfs or something similar, used as backing storage for a pool to which you back things up with native ZFS tools?
-
mason
The ZFS operations are all local then.
-
mason
There are other ways you might make the file available, but sshfs comes to mind as arguably the simplest.
-
yourfate
I already mount part of that box using sshfs, so this is an interesting idea. but idk how well that would handle connection issues etc
-
mason
Well. ZFS should at least keep things sane if the connection breaks.
-
mason
iscsi would be another option, albeit with somewhat more complexity.
-
mason
Anyway, it's what I'd try first.
-
jmnbtslsQE
agreed
-
mason
yourfate: Oh, forgot to mention, mdconfig is probably your friend here, in case it wasn't clear.
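-
[A rough sketch of mason's idea, with hypothetical paths, sizes and host names; how well a file-backed pool behaves over sshfs is not something this sketch claims to answer:]
    # sysutils/fusefs-sshfs provides sshfs; mount the storage box
    sshfs user@storagebox:/backups /mnt/storagebox
    # create a sparse backing file and turn it into a memory disk
    truncate -s 100G /mnt/storagebox/backup-pool.img
    mdconfig -a -t vnode -f /mnt/storagebox/backup-pool.img -u 9
    # build a pool on it and receive into it with native ZFS tools
    zpool create backup /dev/md9
    zfs send -R zroot@backup-2024-06-18 | zfs recv -Fdu backup/zroot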
-
yourfate
I'll look into it, but probably not now :D
-
devnull
Hey guys, can I use NULLFS to share the poudriere data directory with an apache jail?
-
saper
devnull: I think so
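-
[A sketch of such a nullfs mount, with hypothetical paths for the poudriere package directory and the web jail's root; read-only keeps the jail from touching the build output:]
    mount -t nullfs -o ro /usr/local/poudriere/data/packages /usr/jails/www/usr/local/www/packages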