10:23:38 long time no see ... what's the status of PKGBASE?
10:24:14 every time I have to upgrade dozens of FreeBSD jails with freebsd-update I'm wondering this :)
10:24:28 freebsd-update is painfully slow
10:38:38 mage: pkgbase is progressing, IIRC GhostBSD fully switched to pkgbase
11:13:51 mage: i've been using pkgbase on 15.0 for months and had no problems at all. the initial switch is still a bit of a faff, but if you're installing a jail from scratch with pkgbase that's not a problem
11:14:03 mage: at this point i'd *never* go back to any other method and i highly recommend pkgbase
13:01:18 huh, it seems that after installing FreeBSD onto a new, third hard drive, it automatically pulled in my old 2 HDDs with their zpool
13:34:35 jemius: i would expect it to be in the `zpool import` list, but not automatically imported
13:37:59 I reinstalled with a new zpool name and now it seems fine
13:38:24 Anyways. Now I'd like to repartition the unused two drives, but gpart tells me they're "busy". Hmm.
13:50:47 the ones with the zpool on them?
13:50:51 is it active and mounted?
13:50:57 `zpool list`
13:52:19 Fortunately I only had that happen when I tried to "install" a new build, and it just warned me before I overwrote my old pool, so I was able to rename it then
13:53:24 It left them alone for me... but I was on different controllers, though (if that matters)
14:02:22 ah, got it – you have to delete all partitions on a device first
14:13:20 how would I put a user's home directory into another zpool?
14:20:19 zfs create otherpool/home
14:20:40 zfs unmount zroot/usr/home (or whatever it is)
14:20:51 zfs set mountpoint=/home otherpool/home
14:23:08 Hm. Thx
14:23:20 so is the `mount` command line tool ever used for anything on freebsd?
14:25:11 if it's zfs, no, not really
14:28:34 So if I create a new zpool, that already contains a ZFS file system – what happens now if I do zfs create within that? Is it a filesystem within a filesystem?
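A minimal sketch of the home-migration steps discussed above, assuming the pool and dataset names used in the conversation (otherpool, zroot/usr/home); adjust to your own layout:

```shell
# Create the new home dataset on the other pool.
zfs create otherpool/home

# Detach the old home dataset (copy files across first if needed).
zfs unmount zroot/usr/home

# Mount the new dataset at /home; ZFS remounts it there on every boot.
zfs set mountpoint=/home otherpool/home
```

Note that, as comes up later in the log, unmounting alone is not persistent: the old dataset will mount again at boot unless its canmount or mountpoint property is also changed.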
Seems more like a new mountpoint / directory to me
14:30:22 it won't let you create a zpool if it's already in use by another pool
14:31:05 oh, but if you have your old pool, and zfs create oldpool/blah, yes, you'll get /oldpool/blah _probably_
14:31:41 * you can 'regraft' the tree and you can choose to unmount branches also
14:32:13 that's why i set the mountpoint, otherwise it would have been /oldpool/home rather than /home above
14:41:56 rtprio, I was asking because your command from above basically is zfs create within a zpool, right?
14:42:35 yeah, that's right
14:43:33 jemius: what zfs create makes is called a dataset
14:43:49 it's worth playing with zpool/zfs before it's your OS drive(s)
14:45:09 https://docs.freebsd.org/en/books/handbook/zfs/
15:59:58 how can you make such a new mountpoint on a different pool permanent? A reboot resets everything…
16:05:31 it shouldn't
16:06:58 Seems I've messed up then and have two /home
16:06:59 https://paste.debian.net/1339516/
16:07:43 oh, that was my bad, i just unmounted hauptsystem/home, but didn't tell it not to mount at next boot
16:08:13 zfs set mounted=no hauptsystem/home
16:09:17 alternately boot into single user, move the files around if you need to, and `zfs destroy -r hauptsystem/home`
16:09:23 "mounted is readonly"
16:10:20 er, shit
16:10:43 should be canmount or something similar; alternatively unset the mountpoint property
16:11:03 thanks nimaje, i was drawing a blank
16:13:12 alright, that did the trick. Thx guys
16:15:27 if you ever migrate a pool to a new / different system you have to mount everything manually again, right?
16:17:37 you have to zpool import the pool; what gets mounted is decided by the properties of the datasets
16:18:07 "properties of the datasets" – what does that mean?
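The persistence problem above (a plain unmount is forgotten at reboot) is what the canmount property addresses; mounted is a read-only status property, which is why setting it failed. A sketch using the dataset name from the paste above:

```shell
# canmount=noauto keeps the dataset and its data, but stops ZFS from
# mounting it automatically at boot or on `zfs mount -a`.
zfs set canmount=noauto hauptsystem/home

# It can still be mounted explicitly when wanted:
#   zfs mount hauptsystem/home

# Or, once any files are salvaged, remove the dataset entirely:
#   zfs destroy -r hauptsystem/home
```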
16:23:14 stuff you can read via zfs get and sometimes write via zfs set on the dataset, like mounted, mountpoint, … zfsprops(7) should list them all, I think (except that you can also create arbitrary user-defined properties, which just get saved for you but which zfs won't use itself)
16:29:03 OK, thx
16:39:01 I'm writing large amounts of data to a pool, and zfs get sometimes reports that the amount of data on the FS is actually decreasing
16:39:02 https://paste.debian.net/1339522/
16:39:05 can that make sense?
16:39:51 are you writing it to the correct pool?
16:40:12 yup
16:40:22 Also, several calls show that it is increasing, overall
16:40:28 just sometimes, sometimes it decreases for a minute
16:40:59 There are simultaneous writes ongoing. Maybe it sometimes defragments or re-compresses stuff?
16:41:21 there is some buffering or caching
17:23:43 it seems FreeBSD calculates file sizes in gibibytes, and Linux in gigabytes?
17:23:51 At least `du` gives different sizes
17:24:06 What does `du` return on a compressed file system?
17:24:50 you can choose what output you get
17:25:01 in both zfs/zpool and du/df
17:27:59 something here is very awkward: my folder of movies has an apparent size of 102GB (as on Linux), but the (compressed?) size is 92GB – that's 10GB less. `zfs` however shows that only 7GB are saved by compression
17:28:02 https://paste.debian.net/1339533/
17:28:04 Oò
17:28:37 are you familiar with how 'blocks' work on a file system?
17:29:28 obviously if you have a file 1 byte long, it takes more than 1 byte to store – up to 4096 bytes, because that's the block size of the file system
17:30:20 i'm sure you can find a more eloquent stack overflow answer about it
17:30:32 hm
19:36:02 Movies tend to be a lot larger though, so there aren't that many of them in 102GB. Certainly not enough to explain the "missing" 3GB.
19:36:20 But one possible explanation, or part of one, can be that the reported savings here do not include savings in metadata compression (I think..?)
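Part of the gap discussed above may simply be units: 102 GB (decimal, 10^9 bytes) and 102 GiB (binary, 2^30 bytes) differ by roughly 7%. A quick shell check of what 102 decimal gigabytes comes to in gibibytes:

```shell
# 102 * 10^9 bytes, divided by 2^30 bytes per GiB (integer division).
echo $(( 102 * 1000 * 1000 * 1000 / 1073741824 ))   # prints 94
```

So a tool reporting in GiB would show about 94 for the same data a GB-reporting tool calls 102, which is in the same ballpark as the discrepancy in the paste.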
19:36:32 Still much, much more than I would have expected, though.
19:49:03 me too. The video data itself is incompressible
19:49:14 many contain subtitles… but still…
20:04:40 du -h gives actual on-disk use. Add -A to get "apparent" use.
20:05:09 ZFS is pretty good at not wasting cycles on data that can't be compressed anyway. So there's that, at least.
20:05:49 (I add -h out of habit since I don't deal well with 512-byte blocks)
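The on-disk vs "apparent" distinction that du's -A flag draws can be seen with a sparse file; `wc -c` reports the apparent size on any system, while plain du reports allocated blocks (on FreeBSD, `du -A` makes du itself report the apparent size). A small portable sketch:

```shell
# A sparse file: 10 MiB of apparent size, almost nothing allocated.
f=$(mktemp)
truncate -s 10M "$f"

wc -c < "$f"    # apparent size: 10485760 bytes
du -k "$f"      # on-disk size in KiB: far smaller for a sparse file

rm "$f"
```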