05:52:32 I haven't been able to find a clear answer on this, but is there a way to zfs send (or something else) a whole pool to make a full backup, which can be zfs received later? I have a broken configuration on my vdev (ashift=9 instead of 12) so I want to dump it somewhere, recreate the vdev and then send it back, maintaining all the datasets and snapshots
05:53:08 Axman6: You can capture the stream to a file, although this is more fragile than having it temporarily live in another pool.
05:58:51 hmm, maybe I missed this, but sending the pool using zfs send tank would actually work, and retain all the snapshots etc?
06:00:06 Axman6: You'd want to send a replication stream, -R, but it's just a stream of data at that point, while it's in-flight. But you'd be far, far better off having it live inside another pool for the purpose, rather than just sitting as a bare file somewhere.
06:01:12 yeah that's probably the way I'll need to go. Sadly to do that I'll need to buy a lot of new hardware
06:04:45 Axman6: Look on the bright side. That extra hardware can then turn into a dedicated backup pool.
07:48:41 Hello, when I build curl 7.88.1 from ports I still get a security report about vulnerabilities, but when I look on the official website I don't see any threat in the last 2 versions.
07:49:49 Maybe it takes time for the site to update, but I would like to know the exact threat. How can we get this information?
08:09:58 Ok, I have found the reference thanks to vuxml
08:11:01 and what was it?
08:12:33 Indeed these threats are resolved in the latest versions and are not referenced in vuxml anymore.
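The full-pool move discussed at 05:52–06:04 above could be sketched roughly like this; the pool names "tank" and "backup" and the snapshot name "migrate" are assumptions, not anything from the channel:

```shell
# take a recursive snapshot so every dataset is captured
zfs snapshot -r tank@migrate

# -R sends a replication stream: child datasets, snapshots and properties.
# Receiving into a scratch pool is safer than capturing to a bare file.
zfs send -R tank@migrate | zfs receive -Fu backup/tank

# after destroying and recreating tank with ashift=12, send it back
zfs send -R backup/tank@migrate | zfs receive -Fu tank
```

-F rolls the target back to allow the receive, and -u avoids mounting the received datasets while the migration is in flight.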
08:13:51 meena, they are here https://curl.se/docs/security.html
08:13:52 Title: curl - Security
08:15:19 https://vuxml.freebsd.org/freebsd/pkg-curl.html
08:15:20 Title: FreeBSD VuXML - curl
08:15:39 maybe there is something which is not up to date on my system
08:20:31 pkg audit doesn't show any threat either
08:20:39 So I'm not sure whether to rely on the security report provided by the ports system
08:40:50 are you on quarterly?
08:41:37 yes
08:42:44 no sorry, I am on main
08:45:44 main on both kernel and ports. Is that bad?
09:16:47 Midjak: mind showing the exact security report itself?
10:02:14 there are 3 fixed by 7.88.0 https://cve.mitre.org/cgi-bin/cvename.cgi?name=2023-23914 https://cve.mitre.org/cgi-bin/cvename.cgi?name=2023-23915 and https://cve.mitre.org/cgi-bin/cvename.cgi?name=2023-23916
10:02:15 Title: CVE - CVE-2023-23914
10:02:47 yuripv
10:04:39 Well I think it's not a big deal as pkg audit doesn't show any threat on 7.88
10:05:33 maybe the ports tree will be updated later
10:06:31 i just had a curl update ( in ports )
10:07:10 oh I think I understood, sorry
10:07:36 this report only informs that curl can run as a server and be subject to threats
10:08:32 but doesn't indicate there actually is a threat
10:21:20 Is thunderbolt working on freebsd?
10:36:10 Midjak: that's not the "exact security report"
10:36:48 i'm just wondering *why* it is shown to you, so wanted to see the exact text the ports show
10:37:07 oh yes, sorry
10:39:53 servers and may therefore pose a remote security risk to the system.
10:40:21 This port has installed the following files which may act as network servers and may therefore pose a remote security risk to the system.
10:40:32 it's just a warning, not a big deal
10:41:53 I was cheated by the extreme capitalization :-))
10:42:03 sorry for that
10:44:47 :)
17:40:38 Hi
17:41:37 which snapshot path in zroot refers to OS settings and installed apps? I installed a wacom pen driver that I think made my pen not work.
I only want to revert the software settings part, not the database etc part
17:58:04 Can someone share /dev/MAKEDEV with me?
17:59:48 it seems netflix and amazon prime won't run on freebsd?
18:08:12 ox1eef_: isn't that a linux thing?
18:12:37 Indeed, looks like 'mknod' is the fix in my case (deleted large parts of /dev by mistake). OpenBSD has /dev/MAKEDEV - not sure about Linux.
18:14:01 Letiute: Yeah, I think "widevine" is the issue there.
18:14:23 mason and there is no clean way out?
18:15:06 Letiute: Conceivably you could run Chrome in a Linuxulator, but that has reports of success that vary over time. A VM might be another option.
18:15:38 ok
18:16:50 my wacom pen/tablet stopped working since I followed this: https://www.freshports.org/x11-drivers/xf86-input-wacom
18:16:51 Title: FreshPorts -- x11-drivers/xf86-input-wacom: X.Org legacy Wacom tablet driver
18:17:34 xinput does not show the wacom pen
18:19:37 uhid2: on usbus0
18:19:37  in dmesg
18:20:10 i never had wacom devices but i think the modern way to deal with them is with libinput, not legacy xorg drivers
18:20:57 pkg remove xf86-input-wacom
18:23:58 will reboot
18:24:28 or how can i revert to a snapshot for this issue and not revert / change database files (postgres)?
18:32:02 Letiute: you can also copy the file change out of the snapshot
18:34:04 rtprio "copy the file change"?
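The single-file restore rtprio is hinting at could look like the sketch below; the snapshot name "pre-wacom" and the dataset name are made up for illustration:

```shell
# list the snapshots of the dataset that holds the file
zfs list -t snapshot zroot/ROOT/default

# snapshots are exposed read-only under .zfs/snapshot at the dataset mountpoint
ls /.zfs/snapshot/

# copy back only the file you need, instead of rolling back the whole dataset
cp /.zfs/snapshot/pre-wacom/etc/rc.conf /etc/rc.conf
```

This leaves everything else (including the postgres data) untouched, which is exactly the point of preferring a copy over zfs-rollback.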
18:36:12 yeah, the snapshot lives in .zfs/snapshot/ and you can treat it like a normal file
18:37:51 or read about zfs-rollback
18:39:43 rtprio ok will read
18:39:43  which of the following do I need to revert to: zroot, zroot/ROOT, zroot/tmp, zroot/usr, zroot/usr/home, zroot/usr/ports, zroot/usr/src, zroot/var, there are many
18:40:18 rtprio I thought those were metadata and pointers :)
18:40:39 and real files can't be extracted from snapshots unless restored
18:40:51 I didn't send/recv yet
18:40:55 they are just on my system
18:43:28 which one depends on what file you're trying to recover
18:44:30 no idea
18:44:42 if the file is in /var it will probably reset your postgres
18:45:51 if it's the package you're trying to revert, why not just uninstall the package
18:46:10 rtprio I think things got messed up when I installed xf86-input-wacom
18:46:39 then uninstall that package, or did you already?
18:46:40 and did kldload cuse
18:46:54 and did pw groupmod webcamd -m user1
18:47:13 also did webcamd -d ugenx.y
18:47:39 I uninstalled it a few moments ago. Do I need to reboot rtprio?
18:48:11 wacom was not smooth before. Now it's not working.
18:48:14 anyway, rebooting
18:48:23 if the module doesn't unload, perhaps so. it would be easiest to reboot
18:48:33 how to unload?
18:48:37 via terminal
18:48:42 kldunload
18:48:46 ok
18:49:26 rebooting
18:49:31 futile
18:50:37 as in the module is still loaded?
18:50:42 and can it not be unloaded?
18:50:56 no idea
18:50:59 how can I check
18:51:03 kldstat
18:51:06 ok
18:51:42 don't see it though
18:51:56 that was the point of rebooting, yeah
18:52:04 still best to reboot
18:52:07 let's see
18:54:59 I am trying to unlock my account: when I lock the screen with xscreensaver and type my password it doesn't unlock. anybody know what's wrong?
18:56:33 capslock?
18:57:01 its off
18:57:50 rtprio works like a charm after reboot.
no idea what changed though
18:58:20 Letiute: if the module was not loaded, it sounds like perhaps that was it
18:58:48 sn00p: there should be a log for what's wrong
19:00:35 rtprio speaking of snapshots, I guess the postgres data is in /var/db/postgres/... so while making snapshots, I see zroot/var also. so if I restore it, pg will be affected? is anything OS related also there in that dir?
19:01:01 rtprio can you also paste the path to view/recover a file from snapshots (that was new for me)
19:01:57 when things like this happen, how can I treat the OS files snapshot and the postgres files snapshot differently and separately?
19:02:31 yes, you're correct about the /var snapshot
19:03:01 if you want it to be ... detached from /var you'd want to put /var/db/postgres on its own dataset
19:04:21 ( stop postgres; mv /var/db/postgres to a different name; zfs create -o mountpoint=/var/db/postgres zroot/postgres; chown, copy data back in, restart postgres )
19:04:54 Letiute: os snapshots would primarily be ROOT and perhaps usr, not much else
19:06:41 or set postgres_data to a different path in a different dataset
19:07:24 and again, if i, say, botched a file in /etc, i'd probably cp /.zfs/snapshot/mysnapshot/etc/rc.conf /etc/rc.conf and not restore the whole snapshot
19:14:26 rtprio otis I see
19:14:47 what's better, the postgres_data var change or mount -o?
19:17:04 by the way, rare question: does anyone here happen to change keymaps of the keyboard in freebsd? I used to have capslock and escape swapped and left alt & ctrl swapped
19:17:04 i don't understand
19:17:33 mount -o? you don't use /sbin/mount with zfs
19:17:49 rtprio I meant, what's the better way of achieving the same thing: the postgres_data var change in pg settings or mount -o?
19:18:22 by `mount -o` i meant zfs create -o mountpoint=/var/db/postgres zroot/postgres;
19:19:17 you can create another file system and configure postgres to it, if you want.
the way i posted preserves the default config which can be easier
19:19:42 last question: I am doing nvme raid. I have postgres and the os. what compression level with zstd is ok if capacity is a concern, but not to the level where it gives huge performance issues?
19:19:59 rtprio understood!
19:20:42 you might have to test some things, but i'd probably not use compression;
19:20:50 rtprio another filesystem? like it would be treated as a partition/drive by zfs?
19:20:59 rtprio ok
19:21:06 any idea who to contact to remove a bug report from freebsd bugzilla?
19:21:07 remember, only new files are compressed when written if you change the compression setting
19:21:47 without -o mountpoint=/var/db/postgres it would appear as /postgres, which i suppose you could configure in postgres, but it doesn't follow the common pattern
19:23:36 yes, I was wondering if there would be a way to recompress them
19:25:23 that might be worth some of your own investigation
19:26:19 ok
19:29:53 untitled, I think the most you can do is have it marked as spam, if it is spam. Otherwise, the bug reporter can close it.
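rtprio's one-line recipe at 19:04:21 expands to something like the sketch below; dataset and service names follow FreeBSD defaults but are assumptions for this box:

```shell
# move PostgreSQL data onto its own dataset (sketch; run as root)
service postgresql stop
mv /var/db/postgres /var/db/postgres.old
zfs create -o mountpoint=/var/db/postgres zroot/postgres
cp -Rp /var/db/postgres.old/. /var/db/postgres/
chown -R postgres:postgres /var/db/postgres
service postgresql start
```

Because ZFS compresses blocks as they are written, the copy step also answers the "recompress" question above: rewriting files, as cp does here, stores them with the dataset's current compression setting.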
19:36:31 https://freshbsd.org/freebsd/src/commit/1dc1f6bd313876 probably about time
19:36:32 Title: FreeBSD / src / 1dc1f6bd313876 - FreshBSD
19:38:11 No commits found in 4 milliseconds
19:38:46 https://cgit.freebsd.org/src/commit/?id=1dc1f6bd3138760a9
19:38:47 Title: src - FreeBSD source tree
19:39:04 not sure why freshbsd failed
19:44:49 debdrup: https://freshbsd.org/freebsd/src/commit/1dc1f6bd3138760a9e96e13017cc3c05e5e1b1e9
19:44:50 Title: FreeBSD / src / 1dc1f6bd3138760a9e96e13017cc3c05e5e1b1e9 - FreshBSD
19:44:58 it wants the full commit hash
19:45:27 i'm pretty sure it didn't used to want the full commit hash
19:46:52 git rev-parse --short produces short hashes that're shorter than what I stripped that hash down to (and it can get even shorter than that, but at that point there's a risk of collision)
19:47:48 debdrup: I think git gives you the shortest possible that won't result in that.
19:47:58 Oh, that's possible, yeah.
19:48:05 Currently it looks like it can make do with only 13 bytes.
19:48:53 I can assume an init repo with one commit isn't the shortest possible, but for something like src or ports it's at the minimum limit and you're getting the smallest without issues.
19:49:36 debdrup: oh look, Freaky just joined
19:49:43 meena: I asked him to :3
19:49:54 .o/
19:49:55 But if you're saving commits outside of git then you might have issues in the future. So maybe you want something longer.
19:50:10 Freaky: meena noticed that freshbsd needs full hashes now, is that correct?
19:50:30 I linked to https://freshbsd.org/freebsd/src/commit/1dc1f6bd313876 which doesn't work, but meena pointed out that https://freshbsd.org/freebsd/src/commit/1dc1f6bd3138760a9e96e13017cc3c05e5e1b1e9 does.
19:50:32 Title: FreeBSD / src / 1dc1f6bd313876 - FreshBSD
19:50:45 Also, thank you for taking time for this silly nonsense. :P
19:50:53 I don't think partial hashes have ever worked
19:51:30 Hm, I thought I remembered having used them, but maybe I'm wrong?
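The abbreviation behaviour being discussed (git picking the shortest unambiguous prefix, longer prefixes for links stored outside git) can be seen in a throwaway repo; names and paths here are arbitrary:

```shell
# throwaway repo to show git's hash abbreviation
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m 'first'

git rev-parse HEAD             # full 40-hex-char SHA-1
git rev-parse --short HEAD     # shortest unambiguous prefix (7+ chars)
git rev-parse --short=13 HEAD  # request a longer prefix, safer for storing outside git
```

In a repo the size of src or ports, --short produces longer prefixes automatically, since more objects means more potential collisions.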
19:51:32 though you can query them: https://freshbsd.org/freebsd/src?q=commit%3A1dc1f6bd313876*
19:51:33 Title: FreeBSD / src - FreshBSD
19:51:53 That makes sense, though, since it's using wildcards.
19:52:00 Well, a wildcard.
19:53:22 I wouldn't dream of asking you to implement git's rev-parse --short functionality in whatever language it's using, so it's probably fine to leave it as is.
19:54:37 Well, I guess it would be the reverse of that functionality, but it's still way too big an ask, I think.
19:58:51 were there some ... accidents in the source tree
19:59:04 git history eventually leads to a commit that looks somewhat out of place
20:02:31 rtprio: I'm not sure what you mean
20:03:11 There are probably unavoidable artifacts from the source having gone from cvs over svn to git. :P
20:04:01 i don't know if i could find it again
20:04:55 debdrup: I don't think it's an unreasonable request, shouldn't be too hard
20:06:05 * Freaky wonders what happened to sha256 commit ids
20:13:51 didn't realize this was your site, Freaky
20:22:50 I think most of the community sites are made by people that're fairly active in the community, which is quite nice.
20:24:15 debdrup: links with short hashes are now converted into prefix commit hash searches
20:24:48 Freaky: that was quick!
20:26:35 only a couple of lines in the router
20:33:44 this power consumption thing I've been seeing whenever I reboot into FreeBSD has had me very confused
20:34:30 although I am also using HardenedBSD which has some dev-friendly (as opposed to power-friendly) options which may impact matters
20:34:46 I pull like 0.7W more than I do under Linux. Is kern.hz still a thing in 2023, and if so, should I be setting it low?
20:35:08 Try it.
20:36:02 Doesn't seem like 0.7W is worth the extra time it makes everything take, but aside from kern.hz, FreeBSD is basically a soft-realtime OS already, so changing it doesn't hurt anything.
20:36:48 0.7W is probably also within the margin of error for any kind of testing you're doing, given how many lines of code are involved.
20:36:51 does kern.hz do anything by default now?
20:37:03 or is it just if kern.eventtimer.periodic=1
20:37:39 Those are two different things.
20:38:39 I'll set kern.hz to 100
20:38:50 (I wonder how it'd go if I set it to 10)
20:39:21 Try it.
20:39:37 You should probably investigate what it actually does.
20:40:47 Freaky: the periodic mode forces eventtimers to only use the hardware clock, which as you probably know can drift.
20:42:28 I thought it switched between ticking at hz and "tickless" mode with dynamic oneshot timers
20:42:51 Right, if it's set to 0, as is the default, FreeBSD uses a dynamic tickless mode.
20:42:58 I think there's varying efficiency for various timer coalescing algorithms in any event.
20:43:59 Freaky: using the periodic mode has it relying on the actual hardware though.
20:44:51 I think even with tickless mode, kern.hz=100 in hypervisor guests still has some effect, due to hypervisors not handling tickless kernels very well.
20:46:17 hz(9) has more information on it all, and I don't remember most of it, I think. :(
20:48:06 There's also some link to timecounter(4), but I can't remember what that's all about.
20:55:02 rtprio: re: commits out of place; wasn't it you bisecting into some openzfs merge commit whose contents looked completely out of place?
22:23:21 i might have asked about it before
22:23:33 but i'm having trouble finding it again
22:31:25 * skered shakes the 8ball...
22:31:42 blah, no xorg update.
22:32:02 haven't you heard, the cool kids are all wayland these days
22:32:31 * skered shakes it for xwayland updates...
22:32:41 doh, none there either.
22:36:24 If I want to treat a `/dir` differently in terms of backups, compression, even mirroring etc, do I need to create a new pool for it?
22:38:06 What do you mean by "mirroring", as in live copy?
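The two knobs discussed in the kern.hz exchange above can be inspected like this on a FreeBSD box (shown for reference; values are the defaults the conversation mentions):

```shell
# read the timer knobs from the kern.hz discussion (FreeBSD)
sysctl kern.hz                   # tick rate; a boot-time tunable
sysctl kern.eventtimer.periodic  # 0 = dynamic/tickless (default), 1 = periodic

# kern.hz can only be changed at boot, e.g. by adding to /boot/loader.conf:
#   kern.hz="100"
```

As noted at 20:37:39, these are independent: kern.hz sets the tick rate, while kern.eventtimer.periodic selects periodic vs. tickless operation; hz(9) documents the details.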
22:38:59 parv `mirror` as in raid.
22:39:24 stripe, mirror, raidz1/2 are some features of zfs
22:40:44 hi folks :) in FreeBSD-13.1 do we have a kernel option to control the buffer size for TTYs? I've already changed consmsgbuf_size but it seems not related. My problem is when in one terminal I do (cu -l /dev/ttyu1) and in another terminal (printf '<\n\n>') I can only get like 382 characters exactly. There must be something which is blocking the next output. I already had the same issue in NetBSD and there I changed (kern.tty.qsize = 4096) in the kernel options and it worked properly. So I'm wondering if we have such an option here in FreeBSD 13.1
22:40:51 parv do I make sense?
22:45:08 Letiute: yes for 'mirroring', no for the rest
22:45:11 Letiute, Backups can be made either via ZFS snapshots or via rsync on a directory of a dataset. Separate ZFS compression does require at least a separate dataset. ZFS mirror would mean starting with a new pool layout. So all the things you mentioned could relate to each other.
22:45:30 you can zfs create zroot/blah and set different compression on /blah
22:45:56 but you can't make /blah a raidz2 when the rest is a mirror
22:46:49 I see
22:47:01 if you have a specific example / configuration it could help us advise you
22:47:07 what is required for /blah to have raidz2?
22:48:37 rtprio I will share a tiny topology diagram soon for the big picture, but for now I was thinking to make /postgresdb1 and /postgresdb2 (2 databases living in different dirs via TABLESPACE), both having different compression levels
22:48:38 Letiute: at least four drives
22:49:12 rtprio the definition of a drive, for zfs, is just a partition or a physical drive?
22:49:33 putting postgres data onto its own zpool is not a bad idea, but i think you might be expecting too much of compression
22:49:53 rtprio you mean its 'own' zpool?
22:50:03 Letiute: yes, own zpool
22:50:28 I am setting zstd-3 for db1 and zstd-7 or 9 for db2
22:50:39 it could be a partition, but if you're going to make multiple pools from partitions you're going to take a performance hit and lose out on some features of zfs
22:50:53 so I "can" make a new zpool for a few directories?
22:51:00 Using 4 partitions of a disk for RAIDZ2 is pointless
22:51:04 Letiute: have you tested that compression with postgres is enough compression to be worth the effort
22:51:22 what features of zfs will be lost in that case?
22:51:46 performance hit due to simultaneous data access on the same physical drive?
22:51:47 Letiute: you have some reading to do, as it seems you might misunderstand the topology of a pool and a filesystem
22:53:38 a pool is a set of disks with a specific configuration (mirror, raidz1, etc) and can have many datasets (from zfs create)
22:53:39 rtprio no I have not tested. This would be the test
22:54:11 so it sounds like you'd need a box with 5 drives, at least one for the os and 4 for the raidz2 for postgres
22:54:35 but database purists would say to put postgres on mirrors instead
22:55:11 for now I have only one disk
22:55:49 rtprio: simple example: https://pastebin.com/aSJmwYfk
22:55:50 Title: titan:yuri:/usr/src$ git branch --contains f04cb31e7c17* maintitan:yuri:/usr - Pastebin.com
22:56:00 (just look for any openzfs commits :)
22:56:19 rtprio as you said "you can zfs create zroot/blah and set different compression on /blah" so I have to create a new pool / dataset for /db1 /db2?
22:56:40 rtprio which reading doc would you recommend for zfs?
22:56:47 Do check the block size used by Postgresql in order to set the same for "recordsize" of the pool, for performance etc
22:57:07 I see
22:57:55 parv the default value is 8192 bytes in pg. what is it in zfs?
22:58:07 128 kiB
22:58:16 oh. so ..
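parv's recordsize advice combined with the per-database compression levels from 22:50:28 could be sketched as below; the dataset names zroot/db1 and zroot/db2 are illustrative, not from the channel:

```shell
# match the dataset recordsize to PostgreSQL's 8 KiB page size,
# with different zstd levels per database dataset
zfs create -o recordsize=8k -o compression=zstd-3 zroot/db1
zfs create -o recordsize=8k -o compression=zstd-7 zroot/db2

# verify the properties took effect
zfs get recordsize,compression zroot/db1 zroot/db2
```

With the default 128 KiB recordsize, an 8 KiB database write can force a much larger read-modify-write; matching recordsize to the page size avoids that.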
22:58:43 8kb vs 128k
22:59:39 yuripv: yep
23:02:50 Letiute: you're free to do what you want, but if you're testing compression on zfs you only need a single disk. testing performance of a single disk vs raidz should probably be a different test
23:03:09 rtprio `zpool create pg1pool /pg1` `zpool set compression=... pg1pool`
23:03:25 rtprio ok I will talk about raidz in a moment
23:07:49 * AmyMalik boggles
23:11:57 no, it's zpool create zroot/pgpool1
23:12:04 no, it's zfs create zroot/pgpool1
23:12:06 damnit
23:12:18 rtprio parv https://jamboard.google.com/d/18hni1wEORDsOwjxoVvgqYZ-p1f8y7Mjur4in1ecJMuM/viewer?pli=1&f=0
23:12:19 Title: Untitled Jam - Google Jamboard
23:12:53 maybe you should reread `man zfs` and `man zpool`
23:13:22 rtprio so I have to create pgpool1 "inside" zroot? I can't create an independent pool of pg1/2?
23:13:33 rtprio ok
23:13:41 what do you think of the drawing?
23:14:48 i don't have a web browser
23:17:22 so you have 2 2tb disks?
23:18:28 this manual* mirroring thing... i don't think you want that
23:18:47 rtprio each nvme is 1tb
23:18:59 and you definitely don't want to mirror it from a striped 1tb
23:19:03 rtprio well i can keep it there plugged in all the time too
23:19:18 mirror the 1tb for what you're doing
23:19:26 why is that?
23:19:55 1tb doesn't have the full data. 1tb raid0 can't live without the other 1tb, making it 2tb total
23:20:08 because raid0 is fragile for starters
23:20:14 it's striped raid0
23:20:17 yes
23:20:19 fragile
23:20:36 and depending on how you're planning mirroring ... i just. it's just gross
23:21:24 could you backup 2tb of snapshots onto that drive? that would be great
23:21:38 you could find a file you deleted six months ago.
23:22:37 rtprio yes but mirroring will be happening online/live
23:22:43 i also see you're planning on a raidz2 of ssds and hard drives... are those all the same size?
i don't think that's going to work, or it's going to work poorly
23:23:04 and I don't have to restore the snapshot if it's already mirrored. I just plug in and boot from hdd if the nvme fails
23:23:16 Letiute: a zpool mirror will happen live
23:23:35 rtprio all raidz2 are 1tb each
23:24:03 the speed of the ssd will just be limited to the speed of the hdd
23:24:05 that's all
23:24:07 right?
23:24:20 rtprio yes
23:24:28 you're paddling upstream of how zfs is generally accepted, and while that might work for you it might not work well
23:24:28 zpool mirror will happen live.
23:24:45 and anyone you ask for help is going to have a wtf moment as you describe your topology
23:24:54 I see
23:25:16 also i don't think you can have a nested mirror like you're describing
23:25:32 which one?
23:26:03 you can't have a raid0 as a member of a zpool mirror
23:26:45 again, you can do it how you want, but if i had this pile of drives on this list, i told you what i'd do
23:28:11 I see.
23:28:24 is #2 a usb or esata or what?
23:28:27 so #1,2 is not doable?
23:29:03 #2,4 are in an external hdd dock
23:29:16 nvme is m.2
23:29:25 other drives are sata
23:29:28 2.5"
23:30:17 you could manually maintain the mirror from 1 to 2 but that seems like a lot of work, and #1 is still raid0 so i wouldn't trust it with any data
23:31:06 manually? rsync?
23:31:18 yes, something like that
23:31:33 so I can"not" make #1 to #2 a mirror?
23:31:37 Letiute, that "mirror" looks more like a backup than a ZFS mirror
23:31:48 parv ya but technically a mirror
23:31:54 can it not happen?
23:32:03 i'll say again, you can copy data to it, but it's not automatic
23:32:12 so parv is right, that's a backup, not a mirror
23:32:26 but technically i cannot make a mirror like that?
23:32:29 or can I?
23:32:34 you can't
23:32:43 mirror nvme1,2 and then you are fast and redundant
23:32:54 why? because the source is raid0?
23:33:03 you could put postgres data on ssd1,2 mirrored, that would also be fast and redundant
23:33:20 Letiute: yes, mirror members must be identical
23:33:44 so the cause of not being able to mirror #1,2 is that #2 is not two drives?
23:33:51 it's one
23:34:17 does it have the exact same disk geometry? no it doesn't, so it can't be mirrored
23:34:27 i suggest we move on so we don't debate this all day
23:34:40 ok..
23:35:33 i dunno parv, how would you set this up
23:36:17 rtprio, Just as you have mentioned
23:37:29 For the need of 2TiB data on ZFS stripe, buy 2x 2 TiB SSD for ZFS mirror
23:38:05 2x 1TB drives you mean?
23:38:12 and at least 1 spare for all the disks
23:38:17 or even 2x 1tb hdd
23:38:36 s/all the disks/all the topologies/
23:38:39 2x 1TB drives you mean? not 2x 2tb
23:40:00 so just for understanding, if I had 1tb + 1tb hdd, I could mirror #1?
23:40:29 the reason I can't is the disk geometry? and that the 2tb is a drive with 2 partitions
23:40:33 correct?
23:42:21 if you had 2 more nvme, made another raid0, maybe
23:42:41 you can only mirror the same thing
23:42:44 Letiute, If your data will fit in less than 1 TiB of space, then 2x 1 TiB for ZFS mirror will be enough
23:43:15 get it out of your head that you can just make up blocks of data like linux fakeraid and everything will be ok
23:43:32 real data on real filesystems needs to be the same
23:44:02 s/1TB/1 TiB/ # for shame!
23:45:35 ok
23:46:11 rtprio why does the nvme raid have to mirror to another nvme? can't they be ssds on one mirror and nvme on the other?
23:47:38 are they the exact same size? sectors?
23:47:58 no, but I thought zfs would just take the smallest one
23:48:22 rtprio if this is the case, then I can't mix ssd and hdd in #3 as well?
23:48:26 i don't understand why you feel the need to do things the hard way
23:48:48 it might work, but no one is going to want to help you with it
23:48:57 I see
23:49:02 you mean #3 might work?
23:49:37 you know you could also try these things and read the messages zpool tells you
23:49:48 ok
23:49:55 it might warn you that they're different sizes and might perform like dogshit
23:50:13 if that's what you want to do, then perhaps i have better things to do now
23:50:14 got it
23:50:15 you mean #3 might work? or might not work?
23:50:32 might work
23:50:37 ok
23:50:45 why not try it and see how it goes
23:50:51 I think then, #2 might not even boot...
23:50:56 ok.
23:51:00 the hard way :)
23:51:04 but I should rethink
23:51:18 about your advice. my topology is off
23:51:23 #4 is ok?
23:51:38 a usb drive that you rsync to; sure, can't see a problem with that
23:52:14 not even with -c in rsync?
23:52:19 checksum
23:52:54 oh. I thought you meant "rsync can't see a problem with data corruption"
23:53:02 ok. so #4 is ok.
23:53:11 thank you rtprio parv
23:53:25 yes, rsync will happily sync your corrupted data to its backup destination
23:53:47 which is why i suggested saving snapshots onto that 2tb drive
23:53:54 am.. ya but the source (zfs) should and would correct it
23:54:08 am ok..
23:54:15 what if you bungle a file? something crashes and eats the file.
23:54:30 rsync would just whisk that along and there's no corruption as far as zfs is concerned
23:54:38 this one, still?
23:54:45 understood. the only problem is restore time of snapshots. e.g. if #1 fails, I have to format, reinstall freebsd, then restore the snapshot from the 2tb
23:55:02 AmyMalik evolving process :)
23:55:03 not the way i'm describing you should do it
23:55:35 rtprio snaps make sense
23:55:38 just boot into the other nvme and update the bios order. order a replacement drive. no downtime
23:55:45 rtprio how are you proposing?
23:55:58 zroot mirror on nvme1,2
23:56:09 postgres mirror on ssd1,2
23:56:25 zfs-send snapshots to the 2tb drive
23:56:27 rtprio wait.
if #1 crashes, I can't make mirrors/plug and play
23:56:45 the only option I would have is to plug in a new nvme, install freebsd, restore the snapshot
23:56:47 why do you think that?
23:56:48 correct?
23:57:09 no, you'd just boot off the other nvme, that's the whole point of a mirror
23:57:31 and then swap it out, and rebuild the mirror, redundant again
23:57:40 which mirror? we are putting snapshots. we didn't make a mirror
23:57:55 always this one, apparently.
23:57:56 two distinct strategies
23:58:14 This is going in circles; meow out
23:58:39 yeah man, have you done any reading about zfs before coming in here with the worst-designed zfs layout?
23:59:23 rtprio yes, since 2 days. Are we on the same page that #1 is raid0 and i just save snapshots to another hdd? If #1 crashes, what will I swap? which mirror?
23:59:51 let it be the last point for today :)
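The layout rtprio proposes at 23:55:58–23:56:25, plus the failure handling from 23:57, could look roughly like this; every device name and snapshot name below is a placeholder, not anything confirmed in the channel:

```shell
# postgres mirror on the two SSDs (device names are placeholders)
zpool create pgdata mirror /dev/ada0 /dev/ada1

# single-disk backup pool on the 2tb external drive
zpool create backup /dev/da0

# periodic backup: replicate zroot's snapshots to the backup pool
zfs snapshot -r zroot@weekly
zfs send -R zroot@weekly | zfs receive -Fu backup/zroot

# if one nvme in the zroot mirror dies: boot from the survivor, then
# swap in the replacement; resilvering restores redundancy
zpool replace zroot /dev/nvd0 /dev/nvd2
```

The point of the mirror is exactly what rtprio says: a disk failure costs redundancy, not uptime, and the backup pool covers the bungled-file case that a live mirror cannot.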