04:25:06 can freebsd/ufs handle bad sectors?
04:45:52 https://man.freebsd.org/cgi/man.cgi?query=badblocks
05:13:26 hmm
05:13:34 I should just say what is happening instead of guessing
05:14:44 my hdd was dirty, I do not know why, I separated /home and made it its own slice, now I reboot and it's dirty
05:43:15 reboot in single user mode and fsck the partition, I would assume.
05:43:34 should have chosen zfs.. no fsck needed..
05:44:49 fsck'ing 100s of drives after a crash makes for a long boot process. the whole point of zfs was to do away with the fsck foobar, and CRC checksums.
05:45:10 bad harddrive bios, bad raid firmware.. yeah.. zfs
05:45:19 why jbod and not hardware raid
05:47:16 it says not to use zfs with less than 8GB of ram, rennj
05:51:44 https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#hardware-requirements
05:51:52 "ECC memory. This isn't really a requirement, but it's highly recommended."
05:51:59 "8GB+ of memory for the best performance. It's perfectly possible to run with 2GB or less (and people do), but you'll need more if using deduplication."
05:52:41 "Without ECC memory rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur OpenZFS (or any other filesystem) will write the damaged data to disk and be unable to automatically detect the corruption."
05:52:57 go up to the mountains of colorado, get more bit flips
05:59:33 the internet is filled with stupid now, best thing to do is test it yourself..
06:01:26 https://en.wikipedia.org/wiki/Sun_Fire_X4500 thumper
06:01:57 you know, back in 2006 i got a thumper vm here in vmware
06:02:16 i also had a solaris 10 beta machine.. what would i know..
06:03:40 https://imgur.com/p95rKOf solaris 10 beta. even had the tv-card working
06:07:37 and the 90's was more compatible.. haha, compiling software was easier then than today... but keep arguing with me, channel.
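For reference, the single-user fsck suggested above looks roughly like this; the device name and mount point are examples, not taken from the log:

```shell
# Boot to single-user mode (from the FreeBSD loader menu), then check the
# dirty filesystem BEFORE mounting it read-write.
fsck -y /dev/ada0p5    # example device for the /home slice; -y answers yes to all repairs
mount /home            # remount once fsck reports the filesystem is clean
```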
06:07:47 ports, no shit
06:07:53 heh
06:08:20 hp-ux irix solaris linux freebsd.. was more compat back then
06:08:34 but keep telling me how things are better today
06:14:23 UFS has worked for a long time for me. it seems I don't have journaled soft-updates on this slice...
06:15:12 maybe that's it
06:15:34 yeah you should enable it
06:15:49 nothing wrong with ufs
06:16:36 if you had a big old disk array then i would say something..
06:16:59 1 hour boot time on mission critical stuff because of fsck'ing all the drives...
06:17:19 255 drives in an fc-al loop
06:17:32 dell emc gear
06:17:52 and linux in 2004 was a joke on that gear compared to sun or hp...
06:18:16 megaraid driver.. or whatever.. damn you to hell
06:19:05 linux is lucky ibm dumped $1 billion into it in 2000 and 2013
06:19:13 to fight off m$ stupid
06:19:39 $2 billion, and now ibm owns redhat
06:32:53 zfs: zil for writes, l2arc for reads
06:34:04 hybrid storage array with ssd and spinning rust and lots of ram...
06:34:16 that was thumper
06:44:48 http://i.imgur.com/fVNpK.jpg http://i.imgur.com/BJGGz.jpg http://i.imgur.com/px0V8.jpg, the thumper vm, which virtualbox could not run, but vmware player no problem
06:45:12 and sun bought virtualbox.. and could not get it to run on it
06:45:14 haha
06:45:24 but vmware could run it
06:45:43 i have the old links to blog posts about that very issue...
06:46:48 2011
06:48:09 core2duo with 8GB of ram
07:03:00 you want compatible? netbsd pkgsrc
07:03:08 that's the best we got right now
07:03:57 let's check https://repology.org/
07:04:23 nix (nixpkgs unstable) - 97110 vs FreeBSD Ports - 30986
07:05:06 "currently containing over 13000 packages." pkgsrc
07:07:00 https://www.pkgsrc.org/ "currently containing over 26,000 packages." yeah i think that's a lie
07:07:38 smartos people need to fix that error
07:21:09 pkgsrc currently contains over 22,000 packages and includes most popular open-source software.
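Enabling journaled soft updates ("SU+J"), as recommended above, is a one-shot tunefs operation; the filesystem has to be unmounted first. Device and mount point are examples:

```shell
# Turn on soft-updates journaling for a UFS filesystem (example device).
# Run from single-user mode, or after unmounting the slice.
umount /home
tunefs -j enable /dev/ada0p5   # -j enable = journaled soft updates (SU+J)
mount /home
```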
It is the native package manager on NetBSD, SmartOS and MINIX 3, and is portable across 23 different operating systems, including AIX, various BSD derivatives, HP-UX, IRIX, Linux,[4] macOS,[5] Solaris, and QNX.
07:21:16 wikipedia
07:22:29 but fanboys, zealots will argue..
07:52:12 uskerine: zfs has many nice features, like checksumming everything, transparent compression, cheap snapshots, ...
07:54:29 checksumming without ECC is cosmic ray foo
07:54:55 better than nothing but still error prone
07:55:58 https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
07:56:05 Reed–Solomon error correction
07:58:04 while having ECC ram would be nice, not having it has the same risks on all filesystems, and if a random bit-flip on read happens, zfs detects that because of the checksum
07:58:35 i posted the openzfs page notes
07:58:50 "Without ECC memory rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur OpenZFS (or any other filesystem) will write the damaged data to disk and be unable to automatically detect the corruption."
07:59:02 see openzfs page
07:59:13 https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#hardware-requirements
08:00:50 and DDR5 jedec has on-die error correction but you still need ECC ram for the parity chip/data lines
08:02:19 perhaps you need to read the openzfs page
08:02:30 cause you are spouting lies
08:03:45 rennj: yeah, that is the write case, where stuff can go unnoticed, but in the read case zfs reads some data and its checksum, and if they don't match the read error counter gets increased
08:04:57 or probably the checksum error counter; read errors are probably disk errors
12:15:42 Holy wall of text.
12:40:06 what's the use case for `reboot -r`?
13:29:57 AllanJude: Hi, I was wondering if there is any work underway to do RAID-Z expansion with more than 1 drive. Can't remember if or what the reason was for only 1 drive at a time.
13:30:41 mostly complexity.
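The read/checksum counters being discussed can be exercised and inspected like this; "tank" is an example pool name:

```shell
# Make zfs re-read and verify every block's checksum, then look at the
# per-device READ / WRITE / CKSUM error counters.
zpool scrub tank
zpool status -v tank   # CKSUM column counts checksum mismatches found on read
zpool clear tank       # reset the counters once the cause has been dealt with
```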
and the use case doesn't usually involve more than 1 drive at a time
13:30:58 like, if you have 4 drives and are moving to 6+, you might be better off making a new pool instead
13:31:19 makr: to change your root filesystem without actually rebooting (because the BIOS etc. part of boot is the slow part)
13:31:44 but no, there are no plans to make RAID-Z expansion support more than 1 disk at a time
13:31:48 yeah, but in my case for example, I have a -Z2 with 6 drives and want to add 2 more.
13:32:17 Just my use case, I know :-)
13:32:43 I use Toshiba N300 drivers. They support 8 max in a bay. So can't even add 6 more in one go.
13:33:24 s/drivers/drives
13:51:01 AllanJude: thanks
13:51:57 weust: that limit is only re: vibration, so if you have good caddies it is likely not an issue
13:52:01 but yeah, you can just expand twice
13:52:41 Oh, storage talk?
13:52:57 weust: I like the M09 Toshibas
13:53:01 SAS
13:53:38 https://www.ebay.com/itm/305571827435
13:53:46 Guess they are actually MG09
14:01:01 AllanJude: They are in a Dell R730xd. It has 12 bays in the front, so I would assume it's OK....
14:01:20 SponiX: I already have 6 N300's so will stick with them for now.
14:01:35 Also, since mine are SATA I can't mix with SAS
14:01:56 weust: Yeah, those are good units (R730XD). And those drives generally do well also
14:02:30 My current drives are 5 to 6 years old now. All fine
14:02:31 weust: double check that, my disk shelves and a lot of machines have SAS that can take both, even mixed together
14:03:09 Not saying you have to go SAS. I just often find them cheaper on eBay than SATA
14:03:20 But I only store remuxed movies and series on them, so write is very low. Read not much either.
14:03:51 Dell specifies with their controller it's one or the other.
14:04:37 And I have two 2.5" SSDs in the back too, for my music and other data. Also SATA, on the same controller.
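"Expand twice" in practice means one `zpool attach` per new disk, waiting for each expansion to complete; this sketch assumes OpenZFS with RAID-Z expansion support, and the pool, vdev, and device names are examples:

```shell
# Growing a 6-disk raidz2 to 8 disks: RAID-Z expansion is one disk per
# operation, so the attach is simply run twice.
zpool attach tank raidz2-0 /dev/da6   # start the first expansion
zpool status tank                     # shows expansion progress; wait for it to finish
zpool attach tank raidz2-0 /dev/da7   # then expand again with the second disk
```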
14:05:22 Sounds like a good setup for what you are doing
14:06:04 Just a home NAS :-) I got the server for free at my last job. It was being decommissioned. Was only used for some light backup for VM transfers between clusters.
14:06:11 So I was lucky there.
14:06:36 cool
14:07:39 The drives are 12TB each, but it's getting full. Those 4K UHD discs eat disk space for breakfast.
14:07:43 I have 3-5 or so N300 drives - They have been doing well for YEARS
14:08:12 I took the 12TB specifically because of the low wattage in idle
14:08:27 and low noise too in idle.
14:08:47 it's in a room in my house, so needs to be fairly quiet.
14:09:55 My system is in the bedroom
14:10:20 Even with ipmitool to limit fan speed that wouldn't be enough haha
14:13:01 I have a fairly large fan blowing across the stacks right now. It makes enough noise to drown their sound out
14:13:56 And you sleep with that noise too?
14:14:23 Yes
14:14:53 my home HVAC system is louder than all of it combined lol
14:15:20 That is wrong on multiple levels
14:18:29 System https://usercontent.irccloud-cdn.com/file/Vcl3vCLF/IMG_1726.JPG
14:19:22 Client: HexChat 2.16.2 • OS: Fedora release 41 (Rawhide) • CPU: Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz (1.20GHz) • Memory: 245.7 GiB Total (203.8 GiB Free) • Storage: 371.3 TB / 515.4 TB (144.2 TB Free) • VGA: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] @ Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DMI2 • Uptime: 1d 19h 54m 6s
14:22:07 weust: there is a technique to making the disk shelves quieter. You run them with only one PSU and one IOM6 controller
14:22:15 reduces the noise quite a bit
14:25:15 I do run only one PSU. Yours is an HP server?
14:26:06 No, all 3 of those disk shelves are NetApp DS4246 and the computer system running them is a custom built X99 tower
14:26:16 oh, right
18:09:05 <_al1r4d> wow your screenshots are vintage, rennj
18:22:35 thanks, old man yells at cloud.
19:11:03 hmm, when you doas and pipe to less it messes up and doesn't work properly
19:11:07 any workarounds?
19:13:55 how does it not work?
19:14:05 you could `doas sh` and then run your command
19:14:11 you could run it in a subshell
19:18:04 well, stuff like doas | less has the problem that stdin for the password prompt gets redirected
19:23:51 or you could configure doas to not prompt for the password
19:38:56 it's completely fine to not use a keyfile when using geli, right?
19:39:26 I know it adds additional security... but this is a server... it SHOULD never be stolen, the encryption is just to make destruction easier...
19:39:58 rtprio: hm, I *could* but would rather not
19:40:10 prevents accidental privilege escalation
19:46:08 project gutenberg https://www.gutenberg.org/ebooks/2680 keyfile
19:46:09 hehe
19:46:23 key it secret!
19:46:28 grr, keep
19:47:22 yes, completely fine, just depends on your security needs
19:52:12 you could use /etc/services or /etc/hosts, only you would know.
19:52:20 but make sure it doesn't change
19:52:34 services could change
19:52:54 use a usb stick with a picture of your sweetie, DCIM_14212.jpg
19:54:37 rtprio: ok cool thx
19:54:41 it's just a second layer, isn't it?
19:54:55 random data stored on another device plus passphrase
19:56:38 iirc it's a composite key. like one really big passphrase
19:58:06 ah
19:58:08 interesting...
20:00:26 well, boot .iso.., ram-os/in-memory, i don't want nothing changing
20:00:43 checksum all the files.. tripwire
20:04:34 if the software is always changing, how am i going to know if it's a software fault or a hardware fault. static os, i know it's good. means it's hardware.
20:09:45 last laptop i got 7 years of use out of, 2016-2023 power-on hours, and it was an el crapo hp pavilion, 4core/4thread amd, 16GB ram, 512GB sata ssd
20:10:12 the power switch is what failed..
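The passphrase-plus-keyfile ("composite key") setup discussed above looks roughly like this with geli; the device and key path are examples:

```shell
# Passphrase + keyfile composite key with geli (example device and paths).
dd if=/dev/random of=/root/da1.key bs=64 count=1   # random key material on another device
geli init -s 4096 -K /root/da1.key /dev/da1        # -K adds the keyfile component; also prompts for a passphrase
geli attach -k /root/da1.key /dev/da1              # attaching needs BOTH the keyfile and the passphrase
```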
20:10:53 little plastic knob with a metal contact on it, the metal contact came off
20:14:30 smartctl data, power-on hours of the sata ssd, i did like 22TB on that 512GB drive
20:41:05 i have a drive with 97000 hours, wonder if it will hit 100k or die before
20:55:39 hm, so geli stores metadata backups in /var/backups/, however the initial geli encryption made in the installation media doesn't... (I think it's called bsdinstall?)
20:59:42 the metadata I assume stores the hashed passphrase and info about the encryption (such as alg and bitsize), I assume this is something you should back up as part of your system backup?
21:00:00 is the metadata sensitive?
21:10:06 i don't think so
21:11:21 to which, it being sensitive or needing to back it up
21:20:11 correct me if i'm wrong.. i think the big problem with hardware RAID controllers is that they lie. they say something is done (e.g. flush to disk), but they're really saying "we're taking care of it, don't worry." but ZFS wants to control the disks because it relies on not being lied to
21:23:09 more or less
21:36:15 so I got two HDDs and an SSD, the SSD is the boot disk which has a single-disk pool which was geli encrypted within bsdinstall, the two other HDDs will be a zmirror pool but have geli underneath... if I stick these in geli_devices in rc.conf will zfs automatically assemble the pool after booting into userspace and being prompted to decrypt them?
21:44:56 try it and find out; that's a very specific question
22:29:14 lol guess trial and error time :)
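A sketch of the manual metadata backup and the boot-time attach being asked about; device names are examples, and whether zfs mounts the pool unattended depends on the pool/cachefile setup:

```shell
# Back up geli metadata by hand (the installer-created provider has no
# backup in /var/backups/, so make one); restore later with `geli restore`.
geli backup /dev/ada1 /var/backups/ada1.eli

# /etc/rc.conf fragment: have rc attach the providers at boot, prompting
# for the passphrase, before zfs tries to import/mount.
#   geli_devices="ada1 ada2"
#   zfs_enable="YES"
```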