00:01:10 zpool is the volume manager (and software RAID) part of it, and zfs is the filesystem part of it.
00:05:35 V_PauAmma_V: where can I learn it?
00:07:06 can people not google any more ?
00:07:35 man zfsconcepts
00:07:36 https://docs.oracle.com/cd/E19253-01/819-5461/index.html
00:07:41 the whole zfs documentation
00:08:51 you can ask your phone to do it for you heh
00:09:03 hey google what is the difference in zpool and zfs
00:09:44 openzfs docs need a lot of work
00:09:49 cpet: I want to learn, not be converted...
00:10:05 converted to what ?
00:10:17 tell me what is the difference in asking google than asking in here ?
00:10:22 Unlike Google now, I'm in the business of providing accurate (and if at all possible usable) information.
00:10:46 first result was a link to oracle's docs
00:10:53 probably more accurate
00:12:18 See above, "usable".
00:12:45 V_PauAmma_V: thanks for the man zfsconcepts, which I didn't know about: there is no stupid question!
00:13:09 Chip1972: apropos x will give you every man page with x in it
00:13:09 (I don't think snapping at someone for not using Google is productive.)
00:13:13 if you don't want to google
00:14:02 "(I don't think snapping at someone for not using Google is productive.)" I agree
00:14:27 arch users are migrating to fbsd?
00:14:49 why does it matter ?
00:15:20 How are you cpet :)
00:16:47 i have no beer for turkey day
00:17:04 oh, broke?
00:17:14 no, people don't want to work holidays any more
00:17:24 haha!
00:17:27 so I didn't know Rouses closes at 2pm today
00:18:21 Chip1972: where did you hear such a thing ?
00:19:10 since searching google is bad, i do not know as I can't google
00:19:13 :(
00:20:35 i like it when people move from linux to freebsd and then complain that wayland doesn't work right :)
00:21:01 again not googling the fact that wayland support is there, just a bit behind
00:23:14 https://www.phoronix.com/news/FreeBSD-Q2-2025-Status-Report
00:23:26 wonder when that will be available
00:25:11 wow. long smart tests still going on these disks
00:25:29 yeap it takes a while
00:26:10 i think when it started one of them said 1000 minutes
00:27:10 zfs in itself does a pretty good job at detecting issues and fixing them
00:27:30 we used to have a program called badsect which would remap bad sectors
00:28:19 yeah but i haven't added these to the pool yet. i wanted to make sure they'd pass the long smart test before adding the 2nd vdev
00:28:46 https://cgit.freebsd.org/src/tree/sbin/badsect?h=stable/2.2
00:28:48 heh
00:29:23 now the firmware does that
00:29:41 yeah i thought hard drives do that on their own
00:30:42 i would've just added it in :(
00:31:00 lol
00:31:09 if smart says the drive is good
00:31:20 then hell if i'm going to wait 14 hours
00:31:34 well... smart hasn't said it's good yet heh
00:31:59 they're down to 20% left .. so i can wait it out
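To make the zpool/zfs split from the first message concrete, a minimal sketch; the pool name tank and the da0-da3 device names are invented for illustration and do not come from the log:

  # pool-level work goes through zpool(8)
  zpool create tank raidz2 da0 da1 da2 da3   # build a pool from one 4-disk raidz2 vdev
  zpool status tank                          # health, vdev layout, scrub/resilver/expansion state
  # dataset-level work goes through zfs(8)
  zfs create tank/media                      # a filesystem (dataset) inside the pool
  zfs set compression=lz4 tank/media         # properties are set per dataset
  zfs list -r tank                           # datasets and their mountpoints

The long SMART self-test being waited on is a smartmontools operation, run per drive, e.g.:

  smartctl -t long /dev/da0       # start the extended self-test; it runs inside the drive
  smartctl -l selftest /dev/da0   # check progress and the pass/fail result later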
00:32:07 smart does its own small little tests to see if the drive is good or not
00:32:16 i need to yank them to make sure i have the serial numbers and positions right in the spreadsheet anyways
00:32:18 based on what the results are
00:32:39 yeah I wouldn't last that long
00:32:50 add drives, add drives to spreadsheet, go to break room and drink coffee
00:32:51 heh
00:32:54 i'm not really sure what it does for the short testing
00:33:02 but i figured the long test scans every sector
00:33:08 in one sitting end to end
00:33:49 i had a NAS that said a drive was bad, i rebooted it and it then said it passed
00:33:55 so i don't trust that smart stuff much
00:33:58 once i get this vdev going i can start ordering an 8TB drive a month or something and expand it until it gets 12 wide like the other vdev
00:34:12 hopefully without destroying the pool lol
00:34:36 attaching a drive should cause any data loss
00:34:41 most places i'm seeing say 4 disks are required to start a raidz2 vdev.. i always thought it was 3
00:35:02 but i guess that would be awkward
00:35:19 4
00:35:28 i hope you mean shouldn't :)
00:35:45 what makes sense to you ?
00:35:49 should or shouldn't ?
00:36:10 attaching a drive shouldn't cause any data loss
00:36:24 yeap
00:36:58 well for now i'm going to have mismatched vdevs .. not sure if that's a problem with zfs. i'd think not but there may be science in it
00:37:34 nope
00:37:46 1 vdev will be 12x 8TB raidz2 and the other will (for now) be 4x 8TB raidz2
00:38:01 yeap the minimum
00:38:03 so i have something to scale off of .. hopefully larger drive prices come down
00:38:39 i'll have 12 bays to work with so i could just make a 20TB x 12 pool or something and get rid of all these old disks
00:40:29 9 Power_On_Hours 0x0012 087 087 000 Old_age Always - 93576
00:40:40 the 4TB drives i'm running are troopers
00:44:17 is there a ZPOOL and ZFS tutorial for noobs?
00:45:21 https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/
00:45:43 There's a chapter on that topic in the user handbook. https://docs.freebsd.org/en/books/handbook/zfs/
01:13:18 # 1 Extended offline Completed: read failure 30% 1255 -
01:13:20 hah
01:13:22 so it failed
01:13:28 one of them did at least
01:17:05 hmmm I should prolly start replacing my crappy old SAS drives
01:17:14 though it's a RAID-Z2 and has a hot spare so meh
01:18:19 what is the difference between mounting a ZPOOL and mounting a ZFS?
01:20:18 As far as I know, you can't mount a zpool, only a zfs (which can be the root filesystem of a zpool). Why do you ask?
01:25:53 according to 'ZFS for Dummies'
01:25:53 Use the -R flag to mount the pool to an alternate root location
01:25:53 # zpool import -R /mnt/tank2 tank
01:26:34 This is actually mounting the root filesystem of the pool.
01:28:50 (in addition to importing the pool)
01:32:43 what is the difference between mounting and importing?
01:34:38 cpet: you know what people are more inclined to do than searching on google for answers? Asking ChatGPT. :P
01:35:01 Importing (and exporting) is what lets you move pools between computers or VMs. Mounting is what makes a zfs (which again, may be the root zfs of the pool) visible somewhere in the directory tree.
01:37:10 can I mount without importing?
01:38:07 You need to import the zpool in order to mount the zpool's datasets.
01:39:08 (datasets being what I not-quite-correctly called "filesystems" above.)
01:39:20 how does fstab import the zpool?
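A minimal sketch of the import/mount distinction explained above; only the zpool import -R line is quoted from the log, the pool and dataset names are otherwise hypothetical:

  zpool export tank                 # cleanly detach the pool from this machine
  zpool import tank                 # attach it (here or on another machine); its datasets mount automatically
  zpool import -R /mnt/tank2 tank   # same, but mounted under an alternate root, as quoted above
  zfs mount tank/media              # mount one dataset of an already-imported pool
  zfs mount -a                      # mount every dataset that is set to auto-mount

As for the fstab question: on FreeBSD no fstab entries are normally needed; the rc.d scripts re-import and mount pools at boot when /etc/rc.conf contains zfs_enable="YES", which is what the next message describes.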
01:41:22 I think that on multiuser startup, some script in rc.d/ will automatically import all zpools that the computer or VM has access to.
01:41:59 So you wouldn't need to touch fstab, if that automatic importing is what you want.
01:44:22 shame on zfs docs...
01:46:12 That "ZFS for dummies" is 5 years old, judging by the URL. It may have been correct then, but OpenZFS has changed in the meantime.
01:48:00 I'm reading the handbook page, is that up to date?
01:50:20 it's like trying to build a wall, and only finding documentation on how to build a skyscraper
01:53:29 JetpackJackson: I seem to remember that most of that chapter dates from 2022 or 2023. So nearly everything in it should still apply.
01:54:21 Alright cool
01:55:29 Makes me wish I had paid more attention in my vm install with zfs cause I just picked the first one (stripe, no mirror, I think?) lol
01:56:07 Gonna see if I can install on one disk, add more disks, and then change the stripe-y stuff
01:59:03 That's very easy to do in VMs. I've added disks (well, partitions) to a stripe several times when my first disk turned out to be too small.
02:50:15 Ah ok. Want to be able to add disks freely to my upcoming media server setup so I wanna prepare by figuring out how it all works
03:32:32 # 1 Extended offline Completed: read failure 10% 31134 -
03:32:40 hm.... 2 of the 4 failed the smart test
03:32:45 makes me wonder if there is something else wrong
04:24:36 Macer: you do know that zfs will detect those bad sectors and mark them, right ?
04:24:41 zfs is a pretty resilient fs
04:31:20 i've had it break several times, it's quite fragile. but it can detect bad sectors and refuses to knowingly return incorrect data
04:33:05 i guess it depends on the drive as well
04:37:19 Macer: is it procedure to scan the drives first or are you doing it as a good employee ?
04:41:33 expand: expansion of raidz2-0 in progress since Thu Nov 27 22:35:46 2025
04:42:10 ah well.. 2 of the 4 disks failed the long smart test so i'll just pass on those two and just expand the existing vdev by 4 more disks and work on making another 16 disk vdev later
04:42:27 i considered making the vdev 24 disks wide but that is kind of pushing it
04:45:48 LxGHTNxNG: i've done some weird things with zfs and never had it break but then again i install the OS and leave it alone
04:52:17 so cool you can expand them now
05:12:10 rtj: https://freebsdfoundation.org/blog/openzfs-raid-z-expansion-a-new-era-in-storage-flexibility/
05:30:26 took half my turkey meal and froze it for xmas and new years
05:30:37 cook once, eat on 3 occasions
05:46:05 cpet: yes, that's what i was talking about
05:47:11 rtj: the handbook says not to make pools bigger than 9 drives
05:49:32 good to know. i can't afford that many. no need to worry here. ;)
05:50:08 yeap and that is why my zfs pool includes a mix of nvme ssd and HDD
05:50:09 heh
05:50:30 not smart, not efficient, but it works
08:51:36 15R coming soon!
09:16:45 soon !
09:16:54 i keep refreshing https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/15.0/
09:16:57 lo
09:16:59 l
09:17:59 maybe you should go an eep
09:33:16 hernan604: you do realize that the actual release date is set to Dec 2 ?
09:33:31 and that too can change
09:34:00 releng/15.0 was created so you can upgrade now using src to 15.0-release
09:38:50 cpet: yes, but the isos always land there earlier
09:38:59 (than the expected date)
09:44:56 hernan604: if you really wanted it you would just compile the src
09:51:44 4.03T / 69.1T copied at 223M/s, 5.83% done, 3 days 12:59:26 to go
09:51:47 uhm....
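The expansion shown above is the OpenZFS raid-z expansion feature described in the linked blog post. A minimal sketch of how it is kicked off, with pool, vdev and device names invented for illustration:

  # attach one more disk to an existing raidz2 vdev; zfs then reflows the existing
  # data across the now-wider vdev in the background
  zpool attach tank raidz2-0 da16
  # progress appears as an "expand:" line in the status output, as quoted above
  zpool status tank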
09:52:02 so it was going rather well... now it almost seems like it completely stopped its expansion reflow
09:53:56 cpet: nah, i'm just looking for fun
10:13:39 uh oh
10:13:45 this can't be good
10:13:57 it's mounting the local filesystem after i decided to reboot it and now it's just sitting there
10:18:52 ok. it finally started up. that would have sucked
10:19:19 not the end of the world but still sucked.. but yeah i'm guessing because this drive sucks and isn't a nas drive that it's just super slow
10:19:48 it's a HDD, the max it will go is SATA 6Gb/s speeds
10:19:54 which is 600 MB/s
10:37:58 4.03T / 69.1T copied at 223M/s
10:38:08 that's like hdd maximum
10:38:15 pretty much
10:44:05 let's all add in terabytes of storage then complain when it takes too long at the max speed of the sata interface
11:29:01 there is no 70T hdd, so raid i guess
11:29:39 of 2 disks :p
11:29:49 i love how far we have come
12:21:19 bro that's pretty much the upper limit of a new SATA3 disk ketas
12:21:31 cpet: oh. no. it was actually not going that fast.. that speed was slowly declining and zpool iostat was showing very little activity
12:21:38 maybe i should have been more clear
12:21:49 200+MB/s something
12:22:53 the equivalent of 2GbE basically
12:23:49 Macer: what are you transferring?
12:24:11 i'm expanding a raidz2 vdev
12:24:24 yeah it's pretty much sata hdd max
12:24:32 esp if random access too
12:24:57 maybe it was the random access when it really got slow. zpool iostat was showing like 2MB/s for the drives
12:25:05 yeah
12:25:12 after rebooting and having it resume it seems to be going faster again
12:25:15 i bet it's random anyway
12:25:32 that's weird zfs behaviour maybe
12:25:56 not sure... i guess i'll just wait it out
12:26:06 my personal favourite of raid is 0 + 1
12:26:20 but it's not like you keep rebooting during disk ops to get +10mb/s extra oomph
12:26:29 parallels in series basically
12:26:31 yeah lol
12:26:39 that wouldn't be practical. i did it because i figured something was wrong.
12:26:51 unsure if i would even want 0 part of it
12:27:08 scottpedia: that pool is for raw storage. speed isn't that serious.
12:27:13 but in some cases you need it
12:27:42 if you want to write a 100T file it doesn't fit
12:27:44 :p
12:28:33 ideally speaking they should have some kind of a statistical analysis of how often a drive goes haywire
12:29:00 what are the benefits of raidz?
12:29:02 speed?
12:29:08 i'd say space
12:29:10 aka what kind of importance constitutes the need for a raid 1 setup
12:29:18 space+redundancy
12:29:21 oh maybe
12:30:00 read about apple's fusion drive some time ago
12:30:15 here i see myself doing 2 hdd mirrors
12:30:19 they say it's kind of good cause it's a tiered storage system
12:30:43 dunno how well that could perform in practice
12:30:44 managed to crapperize myself a bit with singles
12:30:49 in my case i'm going to expand this vdev to 16x8TB then move onto making another 16x8TB vdev (or larger maybe) later ... i'd be a little nervous making a raidz2 vdev > 16 drives
12:30:52 so don't do that
12:31:22 or, do, if it's ok
12:31:26 lol. at least with singles if you really screw up you can just mirror on the vdev to work your way out of that even though you have to go 1:1
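For watching what the reflow is doing per device, a small sketch with zpool iostat; the pool name is hypothetical. The "2GbE" comparison above is roughly right: 223 MB/s x 8 ≈ 1.8 Gbit/s.

  zpool iostat -v tank 5    # per-vdev and per-disk throughput, refreshed every 5 seconds
  zpool iostat -vl tank 5   # add latency columns, handy for spotting one drive dragging the rest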
12:31:51 i've seen people who accidentally added that single disk vdev to a pool lol
12:32:12 you can mirror it out
12:32:27 actually zfs needs more general features
12:32:41 the only time it probably matters a lot is when people try to add slog or l2arc and do it wrong and add a 128GB nvme or ssd to the pool lol
12:32:47 one dreaded one is replacing a device with a new smaller device
12:33:13 does zfs even let you do that?
12:33:23 it has... hacks
12:33:30 you'd think it would spit out an error if you try to replace with something like half the size of everything else
12:33:31 but no
12:34:16 i think they have that new thing where you can add different sized disks .. not sure if that's in 2.4 or not though
12:35:02 i have my few pools too large, and the sane idea is to replace with larger disks eh
12:35:04 that's probably something i would never attempt. i guess i can see the use case of throwing anything you can at the storage due to financial constraints
12:35:35 different size disks always worked
12:35:43 yeah but you'd have to wait until you replace them all with larger disks for it to expand.. i guess it's easier with mirror vdevs
12:35:56 in mirrors at least
12:36:46 yeah. depending on how you do it you just have to replace 2 at a time to expand the space .. that's a lot of space to give up though. i'd probably take that route for super important production storage
12:36:54 and have 3 way mirror vdevs lol
12:37:14 yeah why not
12:37:22 someone i told about it went wtf but
12:37:26 it's just a disk
12:37:27 lots of read speed at least :)
12:38:14 do zfs mirrors take the speed of the drive into consideration? like if you had a 4TB ssd and a 4TB platter would it just favor the ssd and let the platter trot along?
12:38:36 i never tried
12:38:54 apparently it's mixed results, it seems
12:39:00 i have a 4TB SSD sitting on a table and a 4TB platter .. i should try that
12:39:03 just to see what happens lol
12:39:18 need guard space i guess
12:39:20 i'm curious if zfs adjusts for that
12:39:23 i missed it on disks
12:39:26 damn
12:40:06 i highly doubt a 4T hdd is byte-equal to a 4T ssd
12:40:24 that can be adjusted with partitioning
12:40:29 yeah
12:40:35 but it should be the same (in theory)
12:40:45 i don't think they use a different standard for sizing
12:40:46 never know eh
12:41:07 i'd guess that zfs would adjust to the slightly smaller drive size regardless
12:41:17 i have 4 of the same hdd's here, but also different ones
12:41:35 i had a brainfart and forgot the partition size
12:41:37 :p
12:41:47 ah. that's why i just copy the partition tables from the other disks
12:42:05 so now that pool stays like that
12:42:07 then gpt label them
12:42:59 back in the day when i used freenas figuring out where broken disks were was a mess .. they identified them like 3 different ways .. zfs would add them one way but the web ui would show them differently
12:43:12 the best thing i ever did was just install vanilla fbsd on the nas
12:43:18 unsure why all 4 12T hdd's from 3 makers are byte equal tho
12:43:25 and what's the max diff?
12:43:32 they should be
12:43:38 in old ones i see 200M diffs on 160G
12:44:13 maybe there is some sort of requirement for it nowadays?
12:44:17 could use public smart db's to do stats :p
12:44:23 like it has to be x +/- 1MB
12:44:38 well there are no hdd standards iirc
12:44:45 on size
12:44:54 or maybe everyone moaned
12:45:03 and they were fffs ook
12:45:03 there sort of is... like 1000GB = 1TB :)
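The copy-the-partition-table-then-label approach mentioned above, sketched with FreeBSD's gpart(8); the device names and label text are made up:

  gpart backup da0 | gpart restore -F da1   # clone the layout of an existing pool member onto the new disk
  gpart modify -i 1 -l bay07-8tb da1        # give partition 1 a GPT label, visible as /dev/gpt/bay07-8tb
  # adding disks to the pool by /dev/gpt/ label keeps them identifiable when daX numbers shuffle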
12:45:11 so if they market it wrong then i'm sure that would be a big issue
12:45:23 1000 vs 1024 is another fun one too
12:45:29 kind of like how WD didn't let people know they were pawning off SMR drives
12:45:48 yeah. i was a little ticked off about that change.
12:46:01 change?
12:46:10 disks were always sold by 1000
12:46:14 i mean.. 1024 USED to be the thing.. probably not with TB but earlier disks
12:46:21 it was?
12:46:23 like if it was a 1GB disk then it was 1024MB
12:46:39 as far as i can remember.. yes .. other than what you lose to formatting
12:47:09 at least i don't recall anything like that >=1999
12:47:36 and people have always moaned about how the disk shows up too small in the os
12:47:37 :p
12:48:28 yeah. they started doing that in the 2000s or something when disks started becoming larger because if they change 1024 to 1000 then they can market it with more space
12:48:29 I always get mixed up when different programs show GB vs GiB
12:48:53 ounces make pounds i guess
12:49:45 gb / gib is a fuckup
12:49:59 it's great confusion
12:50:28 a lot of the time it doesn't even matter
12:50:44 remnant of binary
12:51:21 well.. i THINK most filesystems still use the 1024 method whereas disks use the 1000
12:51:24 if 78g didn't fit but 72g did it's out of space anyway
12:51:37 i'd have to check on that though
12:51:40 df -h / df -H examples
12:51:42 :p
12:52:31 and i'm sure changing that would involve a major undertaking because i doubt that filesystems can come away from using binary :)
12:53:17 i'm sure when quantum computers become the norm this will all be moot and filesystems can just be in all states at one time
12:53:51 shrodinger's filesystem
12:54:09 you can cat from that
12:54:12 :p
12:54:13 *schrodinger's
12:55:00 the files can be deleted and exist at the same time!
12:55:11 wait... i think they do that already
12:55:42 Schrödinger's Scheiße
12:57:19 ah well.. let me start doing stuff. i have 2 days and counting until this raidz expansion is done lol .. guess i just need to keep the thing running
12:58:00 on a side note. is RELEASE still slated for today?
12:58:43 oh seems so. the 15.0 release process says they started building RELEASE today
12:59:46 hmm
12:59:53 hopefully it
13:00:01 it's all fixed now
13:00:04 I thought it was for December
13:00:14 i mean lagging a bit behind the release is ok
13:01:04 because apparently dev and testing isn't enough, and going prod is like wait what
13:10:21 Ah
14:34:51 JetpackJackson: i'm just going off the 15.0 release page ... the RC builds were available the same day or a day or so after
14:35:06 Ah
14:35:10 Cool
14:35:11 i think the announcement is for dec 2
14:35:21 but i'm not sure if that's also when they let it out into the wild
14:36:43 9.65T / 69.1T copied at 281M/s, 13.96% done, 2 days 13:40:35 to go
14:36:46 it seems to be picking up speed
14:37:19 you have collected a lot of data
14:41:13 87.2T 69.1T 18.2T - - 3% 79% 1.00x ONLINE - i guess the expansion doesn't show until it's done 'reflowing' the data?
14:44:49 this has to be pretty intense for the disks.. lots of r/w action across the entire pool
17:41:50 oh yeah !!! what's up freebies
18:08:29 Hi
18:10:39 :)
18:13:40 great. had a drive die while expanding lol
18:13:45 go figure
18:16:29 it broke so badly that zfs just kicked it out of the vdev
18:17:49 12.2T / 69.1T copied at 259M/s, 17.65% done, paused for resilver or clear
18:53:11 Macer, I don't know if I should feel sad for you or not. You were just running a testing system exercising the pathways, right? This was a testing system and not an actually-in-use system, right?
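On the GB vs GiB point in numbers: a drive sold as 8 TB holds 8x10^12 bytes, which is 8x10^12 / 1024^4 ≈ 7.28 TiB, so any tool counting in powers of two will report it "smaller". The df -h / df -H example mentioned above shows both views; the mountpoint here is hypothetical:

  df -h /tank   # binary units (GiB/TiB, powers of 1024)
  df -H /tank   # SI units (GB/TB, powers of 1000), matching how the disk was marketed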
20:44:51 sup
20:45:24 the system
22:35:00 15-RELEASE: https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/15.0/FreeBSD-15.0-RELEASE-amd64-memstick.img
22:46:46 so just replace the vdev, what's the big deal?
23:35:45 hernan604: Nothing's official until the announcement comes out!
23:49:27 hernan604: it's not "fully" official because there can be some last minute rebuilds with fixes until the announcement. Think of it as a "beta" to the release until it gets the green flag.
23:49:39 ^
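Dealing with the disk that died mid-expansion usually comes down to zpool replace; a minimal sketch with hypothetical pool and device names, since none appear in the log:

  zpool status -x tank           # show only unhealthy pools; identifies the faulted disk
  zpool replace tank da9 da17    # swap the dead disk for a new one; a resilver starts
  zpool status tank              # watch the resilver; the paused expansion should resume once it completes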