00:14:18 TIL pkg prime-list. Useful.
00:15:38 demido: I trust what that particular ZFS person says. He knows more about it than most of us, FWIW, based on what I've observed.
00:16:41 ok thanks mason. so my external usb drive zfs mirror command sequence ends up as: gpart create -s GPT , gpart add -l mylabel0 -t freebsd-zfs -a 1M , gpart create -s GPT , gpart add -l mylabel1 -t freebsd-zfs -a 1M , zpool create mylabel mirror gpt/mylabel0 gpt/mylabel1 ; zpool import mylabel
00:16:50 sharing that in case anyone else can use it
00:17:42 pretty awesome to be able to get a zfs mirror over 2 external usb drives and get all the benefits of zfs like bit rot healing
00:17:57 prior to zfs it was FS dark ages
00:23:24 demido: The other thing, actually, is to go back and ask him *why* that'd be ignored.
00:23:51 true
00:25:16 ok just asked, i'll report back what i learn
00:27:22 demido++
00:35:55 mason, Dagger said "ZFS splits vdevs up into metaslabs, which are typically 16G. it'll only use the beginning (approximately) n*16G of space"
00:36:07 "I suppose you might get exceptionally lucky and that final <4k of space might be enough to fit an extra metaslab in, but the odds are... what, 1 in 4.8 million at best? for random disk sizes"
00:38:08 so i'm kinda thinking it's still a good idea to set the gpt partition -s to a quantity of whole MB, then let zfs metaslab however it will. thoughts?
00:42:59 demido: His final comment about varying disk sizes is something that comes up a surprising amount.
00:47:03 It does make sense based on the way disks are sized, I suppose. I'd just let ZFS do its thing.
00:47:38 so skip setting -s on a clean 4096 boundary and let zfs figure it out?
00:48:03 mason, which comment? just wanna be sure i'm not mistaken
00:48:17 "I suppose you might get exceptionally lucky and that final <4k of space might be enough to fit an extra metaslab in, but the odds are... what, 1 in 4.8 million at best? for random disk sizes"?
00:48:42 Yes. There are cases for the -s arg, but I don't think it'll make a difference in this one. Give it a shot and see what happens? Or have you already and you're having problems?
00:49:15 haven't run into a prob, was just taking some advice from tsoome
00:49:29 demido: I was thinking specifically about "20:39 < Dagger> but note that this is also your buffer for replacing a disk with a smaller one"
00:49:45 tsoome: Is a very good source. I'd listen to them before I listened to myself. That's for sure.
00:50:32 mason, ah ok, ya i was wondering about that. i'm not quite sure what it means
00:51:46 mason, are you saying that it actually happens quite a bit that the zfs metaslab ends close enough to the end of a partition's size that it can be a problem with a replacement disk being smaller?
03:27:59 demido: Nothing about ZFS. What I'm saying is that unlike spinning rust, SSDs advertised as being the same size often aren't.
03:28:36 So if you have a slightly larger one, then need to mirror it to another disk of the same size, you'll often find that the new disk is actually smaller despite the advertised size.
03:40:02 I thought that was an issue with spinning rust too
04:21:37 hodapp: Not generally. Spinning rust disks tend to be far more uniform, at least in my experience, whereas I've been bitten by mismatched SSDs a few times now.
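For anyone copying the 00:16:41 sequence later, a minimal tidied sketch: it adds the target device to each gpart call (which the paste above omits) and drops the trailing zpool import, since zpool create leaves the pool imported. The device names da0/da1 are placeholders for whatever the two USB disks actually enumerate as.

    # assuming the two USB disks show up as da0 and da1 (verify with: geom disk list)
    gpart create -s GPT da0
    gpart create -s GPT da1
    gpart add -t freebsd-zfs -l mylabel0 -a 1M da0
    gpart add -t freebsd-zfs -l mylabel1 -a 1M da1
    # the GPT labels appear under /dev/gpt/, so the mirror is built on labels, not raw devices
    zpool create mylabel mirror gpt/mylabel0 gpt/mylabel1
    zpool status mylabel    # already imported; no separate zpool import needed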
04:22:27 i remember this being a thing with HDDs in the past, but nowadays the remaining vendors seem to have standardised on how many sectors each size should have
04:22:45 (i still leave a margin though, just in case)
05:08:34 An acceptable plan might be to use the two disks that are currently the same size; if one fails, that will be in the future when larger SSDs are cheaper, so buy two larger ones then and upgrade to the larger size.
05:09:50 Note that if one buys two of anything designed to be identical at the same time, like two SSDs, and one fails, then there is a higher likelihood that the other one will also fail very soon. Twice now I have had two identical spinning disks fail within a week of each other because they were identical and lived identical lives.
05:11:50 i have 6 SSDs and 2 NVMes
05:11:58 one of those drunk purchases :P
05:12:10 lets buy 4 ssds for some random reason
05:12:47 so now the desktop has freebsd installed on a mirrored NVMe pair, with a mirror of 2 x 500GB and a mirror of 2 x 1TB SSDs
05:15:52 i would have lost about 500 gb if i had done a 4-way raidz2
05:40:38 mason, so shouldn't that mean that i DO use -s for a whole MB boundary and a clean 4096 ending?
05:48:33 demido, I would. Have you Read The Fine Mason docs? https://wiki.freebsd.org/MasonLoringBliss/ZFSandGELIbyHAND
05:49:53 cpet, Mirroring is simple and robust and 2-4 devices makes a lot of sense. Using raidz2 on 8-12 devices though also makes a lot of sense. It all depends upon the situation.
05:50:00 the #zfs guy said it wouldn't hurt so i might as well. worst case scenario i lose some disk space, but i don't care about that as much as i care about solid setups. and yea i skimmed the mason doc, looks pretty good
05:50:41 2 usb drives here
05:51:42 Some SSDs that say things like 500GB or 1000GB are not always exactly the same size. If you use partitions then there is always a little space unused at the end for reasons of alignment. It's more likely you can fit two mismatched drives in that case.
05:55:11 and it would be a solution right? to set the size (-s) to the number of whole MBs
05:55:45 or what rule of thumb would be good to use?
06:16:19 I think Mason documented this in his RTFM Read The Fine Mason doc, where it says -a 1m -s 4096m
06:18:11 demido: Could you reiterate the end-of-disk behaviour you were wondering about? Revisiting, I've decided that I don't entirely understand.
06:21:55 I'd tend to pick an -s a bit shorter than the whole disk, based on the notion that a replacement disk might be smaller. I tend to install to mirrors, and find something that will comfortably fit both disks.
06:27:17 mason, basically tsoome was saying that you want your partition to be whole 4096s, so the partition doesn't end partway through a 4096 block
06:27:41 so in order to do that, you set -s to some whole binary amount (whole MiB or GiB) that fits within the disk
06:27:59 so -a 1M to align the start, then -s $x so the end is aligned cleanly
06:28:13 so the whole partition is made up of whole 4096 byte blocks
06:28:19 that make sense?
06:28:23 2025 and we still partition our disks like it was 1993
06:28:24 :D
06:28:31 Ah, ah. I'm not sure how much that will matter in reality but it sounds pleasantly tidy.
06:32:18 demido: surprised mason didn't tell you to join #freebsd-fs
06:35:16 I'd have to know about it before I could recommend it.
06:44:36 GPT defaults to 4096n
09:31:35 how often do you guys have nvme drives fail?
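One way to apply the rule of thumb from 06:21:55 through 06:28:13: read the disk's mediasize with diskinfo, then pick an -s in whole gigabytes a bit below it, so the partition ends on a 4096-byte boundary and leaves headroom for a slightly smaller replacement disk. The 1 TB mediasize and the 930G figure below are illustrative numbers, not from the discussion.

    # hypothetical 1 TB USB disk; the exact figures will differ per drive
    diskinfo -v da0 | grep mediasize
    #       1000204886016       # mediasize in bytes (~931.5 GiB)
    # -a 1M aligns the start; an -s given in whole G keeps the length a multiple of 1 MiB,
    # so the partition is whole 4096-byte blocks, with ~1.5 GiB left as replacement-disk slack
    gpart add -t freebsd-zfs -l mylabel0 -a 1M -s 930G da0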
11:02:11 demido: that's probably more or less random
11:02:35 hi
11:04:04 q: mac_do(4), what do I need besides security.mac.do.rules: uid=1001>uid=0 to have it working for uid 1001?
11:04:18 is mac_do(4) only for jails?
11:06:41 VVD, how are you doing?
11:06:56 it's been a while since we last talked
11:32:47 mason, ivy: oh, fair enough. perhaps this is just because I lived through the era when a megabyte was more of a feeling than a measurement.
11:39:32 OK, mac_do(4) describes security.mac.do.rules in a somewhat modernised way, but it still works
11:44:30 nxjoseph, hello!
11:45:09 VVD, hi! I prepared a meal for myself, see you later!
11:53:14 just realized that the fbsd realtek kernel module seems to kill one of my nics.
11:54:16 i was wondering why opnsense kept just dying on me, and saw the nic dropping out and never returning, with a watchdog error
12:02:15 is anyone getting tons of AI generated emails recently?
12:02:22 it's just gibberish
12:02:32 lol implying I read my emails
12:02:38 xD
12:07:42 Macer, did you mean that your nic died physically because of the module?
13:01:04 nxjoseph: yes
13:01:28 it had a watchdog timeout... then would go up and down over and over. i had to install a vendor module.
13:01:39 opnsense has it in their repo
13:02:05 it was taking down everything because it is my vlan gateway too
13:02:40 known issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=166724
13:05:55 it happened randomly so it took me a little while to catch it so i could find out what it was.
13:06:23 ivy: ah. yes that's it.
13:07:08 oh wow. from 2012 :)
13:11:16 well if it happens again i'll install opnsense on my ancient supermicro 1u that i've been using as a fbsd testbed
13:11:35 which is a little overkill but it will definitely work
13:51:02 1.3.6.1.4.1.1466.115.121.1.7
13:51:07 whoops
17:46:00 demido: I've had two nvme drives I've used seriously, and of those, one is failing. I've got another that came in a retail system that I've started using, and I'll be taking regular back-ups.
18:14:50 Hmm, as usual the freebsd publications aren't very useful https://freebsdfoundation.org/blog/how-to-unlock-high-speed-wi-fi-on-freebsd-14/
18:15:04 from the looks of the video, 802.11ac has been ironed out for 14.3?
18:15:16 so that it will be stable enough to daily drive? (at least on the framework laptop)
18:31:56 polarian: There's been A LOT of work on wi-fi drivers for FBSD recently. I believe you're correct about 14.3. They originally wanted to fast-track it into 14.2 but it just wasn't quite ready.
18:32:11 With any luck, there'll be some awesome advances with wi-fi very soon.
18:33:53 meh
18:33:55 802.11n works fine
18:34:13 if I want fast internet I plug in ethernet
19:08:22 Yep.
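For the mac_do(4) question at 11:04: a minimal sketch of what is usually needed besides the rule string itself, assuming the module isn't already loaded. The uid=1001>uid=0 rule is the one quoted in the question; the rest is a best guess to verify against mac_do(4) and mdo(1).

    kldload mac_do                                   # or mac_do_load="YES" in /boot/loader.conf
    sysctl security.mac.do.rules="uid=1001>uid=0"    # allow uid 1001 to transition to uid 0
    sysctl security.mac.do                           # list the policy's knobs to confirm the rule took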
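And for the Realtek watchdog issue around 11:53-13:02: the vendor driver OPNsense ships is also available in FreeBSD ports as net/realtek-re-kmod, so a plain FreeBSD box can get the same fix. This sketch assumes the loader.conf knobs below match the port's pkg-message; check that message for the authoritative lines.

    pkg install realtek-re-kmod                          # vendor if_re driver from net/realtek-re-kmod
    # per the port's pkg-message, have the loader use the vendor module instead of the base re(4):
    echo 'if_re_load="YES"' >> /boot/loader.conf
    echo 'if_re_name="/boot/modules/if_re.ko"' >> /boot/loader.conf
    # reboot for the change to take effect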