00:49:45 how much overhead would it cause if i tcp connected/disconnected to the ssh port once a minute to check reachability? 01:47:18 looks pretty insignificant re cpu, adds ~26mb ram for a sec or 2 01:47:45 looks like a pretty nice unprivileged ping hack 01:53:32 hello 01:55:46 anyone else testing out the 15 prerelease? 01:58:36 i just installed the openssl package so some code i'm compiling could use it, now anything that uses ssl is broken? like a cargo fetch errored ssl peer certificate or ssh remote key was not ok (ssl cert problem: unable to get local issuer cert) 02:00:05 sudo pkg install ca_root_nss fixed it 02:00:27 kerneldove I just had ca_root_nss fail to build with poudriere a second ago 02:00:34 i guess what happened is the tools detected that openssl pkg was now installed so it used that instead of internal code, and openssl pkg needed root cas pkg installed 02:00:35 haven't gotten a chance to look at that log yet though 02:01:42 last installworld/installkernel I did broke "cc" for me LOL 02:02:13 I'm learning to be diligent with bectl though 02:02:44 ya that's smart. zfs snapshots too 02:03:11 I also had to turn off all my legacy USB support in the UEFI, or it fails to boot the prerelease 02:03:32 gets to the ACPI line and just hangs there, requires hard power 02:03:52 pretty sure i had to do the same thing years ago with a new ryzen cpu box 02:04:02 Normally my rigs work perfectly for FreeBSD or Linux, because my platform is a decade old (X99) 02:04:18 ah that's prolly a bug or deprecation then i'd report it 02:05:23 yeah, I will when I get a GUI/Browser up and going 02:05:55 prerelease still has no binary pkg support (waiting on beefy18), so I'm in for a few days of compiling 02:06:59 also not 100% sure how to report it. 
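The once-a-minute reachability probe discussed above needs no privileges at all if it rides on a plain TCP connect; a minimal sketch using nc(1), where the target address is a hypothetical placeholder:

```shell
#!/bin/sh
# Unprivileged "ping": try a TCP connect to the SSH port and report the result.
# HOST is a hypothetical example address; pass your own as $1.
HOST="${1:-192.0.2.10}"
PORT="${2:-22}"

check_port() {
    # -z: just probe, send no data; -w 5: give up after 5 seconds
    nc -z -w 5 "$1" "$2" >/dev/null 2>&1
}

if check_port "$HOST" "$PORT"; then
    echo "reachable"
else
    echo "unreachable"
fi
```

Run from cron once a minute; the connect/teardown cost is negligible next to a full ssh handshake, matching the "pretty insignificant" observation above.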
guess as a regression of some sort, listing my USB legacy support that I had to disable 02:07:19 BUT, guess not needing that legacy support anymore might actually be an improvement too LOL 02:09:29 ya true but still good to tell them i think. either -current mailing list or bugzilla? 02:09:32 have x11/kde building right now. but honestly considering just doing like cde or xfce or something 02:09:53 kde build might take longer than I want to wait 02:09:53 i3 is nice too fwiw 02:10:02 i3 is a tiler? 02:10:04 yep 02:10:14 i was almost sure i wouldn't like tilers but now i'll never go back 02:10:18 my kid loves bspwm 02:10:25 that a tiler too? 02:10:30 yeah 02:10:36 nice, smart kid 02:10:58 all I remember is ctrl+enter brings up a terminal, and I can launch my stuff from there 02:11:27 I'm not smart enough to do anything else with it, so I get windows crushed in there and just drag the mouse over them for sloppy focus 02:11:37 ya i set up a rofi hot key so i type hotkey into kb, get a 'picker' overlay, type in few letters of app name, enter, it loads in a new tile 02:13:06 hyprland is getting a lot of hype 02:13:38 If I was passionate about getting Wayland going, I'd probably give that a shot 02:14:20 river is just fine 02:14:43 i'm sticking with x11 (and i guess xlibre when it's in pkgs) too much bickering over wayland 02:15:03 i want my software without social drama i can get anywhere else 02:15:38 kerneldove I'm unsure about xlibre and hyprland both. and mainly because of the drama you just mentioned 02:16:37 the only drama for xlibre is ppl that were pissed it didn't participate in the drama. x11 and wayland are both the past to me. xlibre just wants to focus on evolving the code and that's what i want. even if the guy who started it has weird ideas i only care about code 02:18:21 Yeah, I haven't really followed along. But for the most part I'm the same... 
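The rofi hotkey workflow described above is typically a single keybinding in the window manager config; a sketch for i3 (the $mod+d choice is an assumption, not from the discussion):

```
# ~/.config/i3/config -- hypothetical binding
# $mod+d brings up rofi as an application picker; type a few letters of the
# app name, hit Enter, and it launches into a new tile
bindsym $mod+d exec rofi -show drun
```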
run whatever code does the best, or at least that is my goal 02:20:00 configuring every package that is a dep for x11/kde would take me a week in itself. but I really don't like the DE depending on so much stuff. It even has like 3 different SQL dbs as deps 02:20:01 yep 02:20:22 I just really love the KDE launcher though 02:20:33 ya that's what pushed me away from gnome/kde. too heavy. i don't want a full gui env, just a tiler and launcher is enough 02:20:41 and its all-inclusive settings 02:20:44 maybe you can recreate it with a tiler? 02:20:56 ya that part is nice i guess. less learning 02:21:12 well "plank" comes pretty close to being the launcher I want 02:21:28 i'd look into rofi. if it works for you it would cut out lots 02:21:28 I ran xfce+plank for more than a decade 02:22:37 yeah, so many choices 02:25:37 this time I'm gonna make a BE, install kernel, make another BE, install world, and then test to see if I can still rebuild both before doing anything else 02:25:58 if that works, I'll make another BE, and go back to my pkg building LOL 02:26:39 hey what about testing in a vm first? 02:27:50 kerneldove I might do that on my Linux box... I have one vm there for the KDE Desktop folks. But I can spin up another to tinker with 02:28:09 yay for testing 02:28:32 I don't even have virtualization turned on in my UEFI right now. I was having so much trouble even getting this stuff to boot, I went back to "optimized defaults" and started fresh 02:28:47 and for whatever odd reason, VT is OFF by default 02:29:12 It wasn't as popular way back in 2015 when the boards were made 02:38:29 If I was smart, this would have been a 14.3 release install, and the current branch fun would have been in a vm again ;) 02:40:29 was doing Fedora 42 on this box, but everything worked so well it got really boring 02:42:57 kerneldove what branch are you on? and do you daily FreeBSD? 
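The BE-per-step upgrade plan described above maps directly onto bectl(8); a rough sketch (the BE names are placeholders, not from the discussion):

```
# checkpoint before each risky step so any of them can be rolled back
bectl create pre-kernel
make installkernel
bectl create pre-world
make installworld
# if the new world misbehaves, activate a checkpoint and reboot:
#   bectl activate pre-world && shutdown -r now
```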
02:45:50 sorry for all the questions, I'm just trying to stay awake 02:46:07 I should probably just call it a night, and check the status of the builds in the morning 02:57:59 14.3 ya i daily, server and workstation 05:33:35 I have been using fail2ban (yes I am aware of blacklistd) and it was working. But then after upgrades it is not working. 05:34:07 I have this feeling that patterns have changed and it is simply no longer matching the new patterns. Anyone else experienced this? I say as I dig into this. 15:05:04 That Swedish mirror is really bad. Fails to deliver big packages (nextcloud, llvm, perl5). I have to use direct links to the pao mirror to get the files I need. Can't we get anything better? Can't we *pay* for anything better? 15:06:21 shoot your feedback to clusteradm@ 15:06:45 they may not have any idea the situation it's in 15:36:21 I've adjusted the size of two drives in the AWS freshports instance (which is now offline). zpool data01 cannot be mounted. https://dpaste.org/hMjS1 15:36:51 how do you set more routes in rc.conf, I have two separate subnets on two separate interfaces, and I want to pass between them, but there's currently no route... what's the best way to do this, or should I simply bridge it? 15:40:12 Interesting, data01 shows up on zpool import, but it says the drive is not available, I see it there: https://dpaste.org/hLryP#L 15:41:31 I'm sure this zpool can be recovered, however, I don't know the fix. 15:42:19 polarian, look at the bottom of this section: https://docs.freebsd.org/en/books/handbook/advanced-networking/#network-static-routes 15:42:52 ah I checked the handbook, I must have missed this 15:42:59 CrtxReavr: also your username looks familiar? do you use XMPP? 15:43:44 polarian, there used to be a better syntax example in /etc/defaults/rc.conf 15:43:59 I might still have xmpp records. 15:44:20 hmmm 15:44:35 deja vu... 15:45:42 anyways thanks for the links 15:45:45 I guess I don't anymore. . . it was setup via Google Chat. 
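For the two-subnets question, the handbook section linked above boils down to a few rc.conf lines; a sketch with placeholder addresses (if both subnets are directly attached to the box, gateway_enable alone may suffice, and the static route is only needed for networks reached via another router):

```
# /etc/rc.conf -- addresses are hypothetical examples
gateway_enable="YES"                           # forward packets between interfaces
static_routes="lan2"
route_lan2="-net 192.168.2.0/24 192.168.2.1"   # reach lan2 via this gateway
```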
15:46:00 I guess I could re-add them. 15:46:19 'Course, I used to have over 300 active trioptimum.com users. 17:06:19 dvl, In your paste https://dpaste.org/hLryP line 7 "zpool import" it says nda2p1 UNAVAIL due to "cannot open". Line 40 shows it as a swap partition not a zfs partition. 17:06:30 Hi guys 17:06:38 I am Retrofan 17:06:57 but using my ZNC 17:07:08 rwp: Yes... Seems like the swap partition is a dedicated drive. 17:08:13 rwp: did I miss something? 17:08:17 But... "zpool import" wants it for a zfs pool? But gpart show says it is a swap partition. That's conflicting to me. 17:08:35 If it were working I think gpart show would say it was a freebsd-zfs partition. 17:08:54 Since it says it is a swap partition then it can't be imported as a zfs pool partition. 17:09:02 I think something got scrambled there. 17:09:03 the swap partition is on nda1? 17:09:29 [17:09 aws-1 dan ~] % swapinfo 17:09:29 Device 1K-blocks Used Avail Capacity 17:09:29 /dev/nda2p1 262143960 0 262143960 0% 17:09:31 kevans, We are looking at https://dpaste.org/hLryP and look at the difference between zpool import and the gpart show sections. 17:10:06 Maybe I am scrambled on looking at it... 17:10:10 right, i'm confused because line 40 is nda1p1 17:10:43 Oh I am the confused one. I confused nda1 and nda2 there. Sorry. 17:10:43 but swapinfo doesn't lie 17:11:22 so it seems that you're right but for the wrong reason, and clobbering partition state with swap would probably do it 17:11:24 and swapinfo -h shows a 250G partition. That seems unfortunate. 17:11:32 zpool import wants nda2p1 and gpart show says nda2p1 is labeled freebsd-zfs but import says "cannot open" that partition. 17:11:46 [17:10 aws-1 dan ~] % cat /etc/fstab 17:11:55 /dev/nvd2p1 none swap sw 0 0 17:12:16 rwp: right, presumably all of the zfs label bits got blown away if it ended up as a swap partition 17:12:19 So, that's unfortunate. The devices got renumbered... Ouch. 
17:12:56 Sorry I confused things there because I confused nda1 and nda2. Sorry. But nvd2p1 is not nda1p1 either. 17:13:14 rwp: You led us down the correct path. 17:13:57 Question now is how to avoid this in future...... Stop using a full drive for swap. 17:14:08 It was the lysdexia talking. Or is it dyslexia? That's the problem. One can never know. :-) 17:14:29 dvl: is this specific use a recoverable scenario in some other way, or is this somewhat fatal? 17:14:41 or i guess, rather: can it be rebuilt? 17:14:48 kevans: I have no idea about the first question. 17:15:11 kevans: However, I have a snapshot of the real data volume from last night. 17:15:43 Now, how to get that back in nicely - I can always run without swap. 17:16:11 ah, ok- so you could rebuild it from a snapshot, which is probably less time consuming than trying to get the pool back into a sane state (if even possible) 17:17:42 I am still confused how I helped there, other than by stirring things up, because zpool import wants /dev/nda2p1 and gpart show nda2 lists that as being there okay, but import says "cannot open". 17:18:16 rwp: he wouldn't have confirmed with swapinfo if you hadn't accidentally thought it was a swap partition 17:18:59 Oh! Now I see it! Gotcha. Victory unintentional then. :-} 17:19:05 rwp: I suspect the root cause is device renumbering combined with two single-partition drives, one for swap and one for the data01 zpool 17:19:18 yes, i'd take it as an 80%+ win at the very least 17:19:22 rwp: You indeed won this one, thank you. 17:19:46 I will update https://forums.freebsd.org/threads/zpool-missing-after-increasing-disk-size-aws.98986/ 17:20:16 i wonder if we could add some guardrails that would've prevented this 17:21:02 kevans: Yes, I think so. 17:36:59 kevans, BTW... I deduced an rc.conf configuration to have the spawn-fcgi rc script launch multiwatch to monitor fcgiwrap processes. 
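One guardrail against the renumbering failure dissected above is to reference swap by GPT label instead of device number in /etc/fstab; a sketch (the swap0 label name is an assumption):

```
# /etc/fstab -- a label-based entry survives device renumbering
# (assumes the partition carries a GPT label, e.g. created with
#  gpart add -t freebsd-swap -l swap0 ...)
/dev/gpt/swap0  none  swap  sw  0  0
```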
https://dpaste.org/f92Yb 17:37:31 I have requested a wiki account and put something similar to this in a page there. 17:39:51 *will put. I actually need something somewhat more complicated. I need multiple pools, in multiple jails. But that's the basic part of it. 17:43:14 rwp: i would argue that that's a good candidate for a /usr/local/etc/rc.conf.d/spawn_fcgi.sample to be packaged with it 17:45:36 Agreed. That would be great info to have. 17:47:41 My random commentary on the existing spawn-fcgi rc script is that it appears to have been written thinking only about use with PHP and not about use with generic FastCGI. I am using it with cgit and gitweb git repo browsing. It would be nice if it were more generically written. But it can be coerced. 17:50:13 And I am not sure why the author defaulted to a Unix domain permission mode of 0777 either. The default spawn-fcgi mode is 0660 which seems much more appropriate. Seems to me that one would *always* want to override that back to the 0660 default. (me shakes my head and wonders...) 17:51:37 Anyway... I just thought I would post that follow-up for the moment. I am still tinkering the target system into shape... Currently trying to figure out why fail2ban stopped working. It was working at one point. 18:07:25 dvl: did it fail immediately on first boot after resizing, or did something else happen? 18:08:09 kevans: Immediately upon first boot after resizing. Let 18:08:13 's check 18:11:06 dvl: if you haven't blown it away yet, try to just swapoff and import the pool 18:12:48 kevans: here's the full boot. Swap is off. Let's try. 18:13:58 https://bin.langille.org/?efe336520d9ad1b4#5CCnoL2xdkgn8pcwiwmVjnViocWNSPwVo8YAMTepGphT 18:14:13 ^ have not retried the import yet 18:15:25 kevans: fails: https://dpaste.org/bjnyj 18:16:57 Well, `sudo swapoff -a` does not umount the swap. 18:18:03 ... because I commented that line out in /etc/fstab 18:18:17 dum-di-dum, wonder when there's packages for -current again... 
I don't have hardware to build everything... 18:19:26 kevans: success: https://forums.freebsd.org/threads/zpool-missing-after-increasing-disk-size-aws.98986/ 18:19:56 dvl: scrub the shit out of it just in case, and you might still need to restore some bits from the snapshot 18:19:57 kevans: Brilliant idea. 18:20:06 kevans: Already started a scrub. ;) 18:20:50 yay 18:29:15 Ltning: typically around 12 +/- a day or two for a full build + sync 18:30:40 You guys rock! 18:30:49 i confirm 18:31:13 Is there a way to switch a zpool from using partition numbers to GPT labels? 18:36:41 divlamir, Yes. But you have to boot an alternate boot path. Then import using the -d option pointing to the /dev/gpt directory. 18:36:49 Here is some doc: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#changing-dev-names-on-an-existing-pool 18:38:07 That doc is OpenZFS which is Linux-centric using /dev/disk/by-vdev but on FreeBSD one would "zpool import -d /dev/gpt" to get that effect and switch to /dev/gpt labels. 18:40:01 It's also possible to do the switch one disk at a time, degrading or using a spare disk as an intermediary, taking a disk offline or replacing a disk with a /dev/gpt label disk, working through each disk one at a time. That works too. That takes a long time to do. But it can be done all online in that case. (I have done it that way.) The import -d /dev/gpt way is all at once and it is done. 18:40:58 Of course if one is not booting from the zfs pool then it is then trivial to change it. But root on zfs is so popular that I think that is the typical case now. 18:42:54 Yes, I am talking about the root pool created by the installer. My data pool is using labels. Seeing that the installer has labeled the partitions zfs0 and zfs1, wonder why it wouldn't use these labels instead of nda[01]p4 ... 18:43:42 After reading this, it started looking a bit fragile :) 18:44:26 I wish the installer did this using gpt labels by default. 
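The FreeBSD equivalent of the OpenZFS FAQ recipe mentioned above is just an export/import cycle; a sketch for a non-root data pool (the pool name echoes the data01 example earlier in the log):

```shell
# switch an existing pool's vdev names from device nodes to GPT labels
zpool export data01
zpool import -d /dev/gpt data01
zpool status data01          # vdevs should now list as gpt/<label>
# for a root-on-ZFS pool, run the import -d from a live/installer shell instead
```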
18:45:39 If this is your system that you just installed then boot the installer again and jump to a shell. It's a live iso boot image usage. Then import the pool using the -d option to switch to gpt labels. Then reboot again to the system. Just after a fresh install that should work easily. I admit I am fuzzy on the details since I have not done that for a while now. 18:46:13 Not fresh at all.. 18:47:21 I'd rather try it when I have some time to spare :`) 18:50:21 Is this a single drive or a mirror/raidzX pool? And also swap too. It's easier to switch swap one disk at a time though since there is no data on it to preserve. But anyway... 18:52:37 A simple two drive mirror, mirrored swap, efiboot on both drives 18:53:15 Random tidbit: Don't use dashes in labels used for swap partitions. If a name such as sw-3K1G3T97B (serial number) is used then gmirror insert fails and the mirror lists as degraded. The only thing that can be done is "gmirror forget -v swap" to clear the error then try again. 18:53:37 I experimented to find this as it is not documented. I will guess that the dash is used for in-band control internally and this confuses things. 18:54:11 wouldn't you just let the swapper interleave the two swap areas, rather than building a mirror of swap? 18:54:29 Luckily the FreeBSD installer names them simply swap0, swap1 :) 18:54:39 rtprio, If you do that then you have no redundancy. If a device fails then the system /might/ crash because part of its memory went away. 18:55:31 It's the argument mirror vs stripe, do what you need 18:55:39 I won't say that the system will always crash. But it is definitely likely to crash in that case of using a striped swap. 18:56:54 seems unlikely 18:58:13 So in the crazy combinations possible using the bsdinstaller defaults if one is using it to set up an 8x raidz2 for example then by default it will also set up an 8x mirror of swap. 18:58:19 That works. But you probably don't need that much redundancy for swap either! 
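The mirrored-swap setup under discussion, with the no-dashes-in-labels caveat baked in, might look like this (device names and the swap0 label are placeholders):

```shell
# mirror two swap partitions with gmirror (device names hypothetical)
# note: avoid dashes in the label -- "gmirror insert" was observed to choke on them
gmirror load                                    # or geom_mirror_load="YES" in loader.conf
gmirror label -v swap0 /dev/ada0p2 /dev/ada1p2
printf '/dev/mirror/swap0\tnone\tswap\tsw\t0\t0\n' >> /etc/fstab
swapon /dev/mirror/swap0
```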
However having tried it I couldn't observe any performance issue with it. It just felt silly. Really for a 2 disk failure tolerance then a swap mirror of 3x devices is all that is needed. 18:59:31 I would partition all devices identically regardless. Anything else leads to insanity. 19:05:21 kevans: scan: scrub repaired 0B in 00:31:09 with 0 errors on Mon Aug 25 18:50:48 2025 19:05:29 dvl: \o/ 19:06:14 hell yeah, i can use freshports again now =-) 19:06:22 website loads. Going to reboot. 19:06:29 * kevans just needs to recall what he was wanting to look up in the first place 19:06:43 ^ yeah 19:09:05 Woot! Things like this where zfs pulls through in spite of everything are so super awesome. It's definitely superior. (The btrfs fans will never get it, since btrfs raid is, well, insert the usual disparaging remarks here about btrfs raid...) 19:09:32 commits are being processed. 19:09:41 dvl: oh, while you're around: how much of a pain would it be to surface `make -V USE_RC_SUBR` somewhere in the freshports interface? is this something you already collect? 19:09:51 rwp: I wonder what would happen if swap was written to. 19:10:17 kevans: Trivial. It's routine to add something like that. Please create an issue and the type of output. 19:10:23 someone elsewhere noted in a roundabout way that it's hard to tell if an rc script is installed by a port because they're handled in a special way, outside of the plist 19:10:31 thanks 19:10:33 ... output you want to see on the webpage 19:10:54 dvl, Normally one might have redundancy and it would still survive. With a single device, well, some data would be lost. But you had that snapshot that could be used for recovery. 19:11:38 On the practical high level view though it was impossible to need swap given that the pool was offline and so there is that. 19:18:16 arc_prune I'm begging you 19:18:44 seriously arc_prune is the sole reason my load avg goes up and php stalls :| 19:19:01 is it time to go back to 13.x 19:22:16 Hrm. 
divlamir reminded me that I have a GPT disk that doesn't show up in /dev/gpt or /dev/diskid. So, one of my zpools has all ID's and then one "da5" device in the list. It's very strange. 19:22:23 Remilia, (me laughs) But more seriously with data01 offline freshports.org was offline and therefore the system just idling and it would be impossible for it to need to use significant swap in that case. So things worked out. And it makes sense that it worked out. But it did seem scary in concept! :-) 19:22:57 I think FreshPorts is all caught up. 19:23:06 ek, Uhm... Remember that FreeBSD kernel removes all alternate /dev device names as soon as any of the alternates are opened. 19:23:41 Before the devices mount then all alternate names are available. Open one and the others are removed from view by the kernel. This is most confusing! 19:23:48 rwp: Right. But, why would any alternate have been used in this case? It all functions perfectly normal. It just throws me off to look at. 19:24:36 This was better than a movie night, the day is saved :D Gonna go use your swap now 19:24:50 If it weren't a production system, I'd play with it to see what I could find out. I'll need to note a reminder to take a look at next available reboot. 19:24:50 ek, The device paths are recorded for zfs and used persistently as the first place to look when importing. But if another device path is used to import at any time then that new one becomes the persistent path. 19:26:06 rwp: Yep. I completely understand. Just not sure why this single device doesn't have a GUID (or any other.) When it was specifically created using one originally? I dunno. Something went goofy. 19:26:15 Ah, so that's why I don't see the labels from `gpart show -l` in /dev/gpt. Was wondering what is going wrong too 19:26:19 In any case the presence of /dev/gpt/ labels does not mean that they will be used. They could be used. And if so then would be persistent. 
But if the /dev/da0p2 type name is used then that becomes persistently opened and the /dev/gpt devices are hidden immediately upon open. 19:27:07 ek: from discussing this at $work, my interpretation is that it's a geom/zfs quirk in how things get ordered (or don't) for tasting 19:27:09 rwp: Right. I used /dev/diskid to create the pool. It was fine originally. Not sure what would have changed. 19:27:10 divlamir, Hmm... I think gpart show -l should always show the labels. They just don't show in /dev/gpt after a /dev/da0p2 is opened. 19:28:13 kevans: That's kinda my guess. Like, on last boot something happened with the device and, for whatever reason, the ID's weren't used but the disk was still picked up during the pool import. So, it's just running like this now. 19:28:47 ek, I am sure that if we knew exactly what happened along the way it would make sense. Not knowing it is impossible to guess now. Other than knowing that "something happened" which caused the pool to import using the other names. 19:28:54 Obviously, I can't export and re-import using -d since the ID currently doesn't exist. So, I'll just wait until next reboot opportunity. 19:29:20 ek: if it's just da5 and not da5pX, doesn't that mean it is not a GPT disk? 19:29:20 rwp: Exactly. It's just... weird. *shrugs* 19:29:36 (in zpool status I mean) 19:29:46 I keep saying computers are like cats. Subtle and quick to anger! :-) 19:29:59 subtle??? 19:30:13 You ever had cats?? 19:30:21 lol 19:30:33 Remilia: That was my thought initially. But, the other disks weren't partitioned or anything either. All brand new and just used via ID's to create the pool. 19:30:35 Remilia, If it is /dev/da5 not /dev/da5p2 then that is the entire disk and not a partition. That works but we kind'a frown on doing full disk without (GPT) partition tables now. 19:31:47 Originally, I was under the impression that a "zpool create" would partition the disks if they were empty. But, I see no partitions. So, yup, it's just a full disk. 
19:32:01 Tenkawa, Yes, I have had cats, plural, before. Currently no cats now. But when I had two cats one would make trouble and would then implicate the other one who mostly liked to watch. 19:32:06 then it shouldn't be anywhere under gpt/ 19:32:58 diskid is different but in my experience those are kinda... random 19:33:06 you might get either the id or the device name 19:33:15 rwp: that's not subtle... that's devious... which cats are... 19:33:24 ek, With "zpool create" it uses whatever device you tell it to use. If da5 then that would be the full disk. If da5p2 then that is a partition on the disk. If gpt/zK1234 (serial numbers I use) then it uses those. But whatever is used must exist at time of pool creation. 19:33:35 export/import might switch or might not, but the device name is stored in the imported pool 19:33:57 so rebooting will not change it 19:34:10 Rebooting will not change it. 19:34:40 rwp: Correct. I used /dev/diskid/DISK-* to create the pool. 19:35:05 If the pool is a secondary data pool then simply export and then import -d pointing to /dev/gpt to search the devices in that path and it is trivial. But if booting root on zfs then one will need to boot something like the bsdinstaller iso, launch a shell, import -d there, then reboot to the system again. 19:35:08 also rwp thank you for explaining but I really should think of a more gender neutral IRC nickname :( 19:35:26 ?! :o 19:35:26 every time I try to answer someone I get 'answered' 19:35:36 rwp: That would be ideal. However, the device doesn't exist in /dev/gpt or /dev/diskid at the moment. 19:35:59 Remilia, Did I misgender? I am sorry and apologize if I did. 19:36:04 How well/possible have any of you cloned /usr from ufs to a zfs volume and remounted it on next boot? I have most of /usr already running on the volume... just not all of it and nervous ... 
19:36:06 you probably did not 19:36:19 but I really did not need that explanation, when it was what I myself said above 19:36:59 Don't take what I say like this personally. Just a bit ago I was chatting like that with dvl and that's DVL for goodness sake! It's just me. I do that to everyone. Sorry. 19:37:43 Remilia: I think my biggest hurdle will be finding out why I do not get the GUID or disk ID available at boot for the pool (I think?) I won't be sure until I reboot again. 19:38:15 Remilia: But, the other non-partitioned disks (non-GPT) do have GUID's and disk ID's in /dev/diskid/. 19:38:21 Remilia, Oh, I see now, "so rebooting will not change it" if that had either a "." or a "?" on the end that would have changed how I read it entirely. I read it as a ? and answered and did not read any . there and did not see it as a statement of fact. 19:39:34 ek: you will not get the GUID because there is no GUID Partition Table; as for diskid, do you mean the FreeBSD boot manager or loader? 19:40:06 the ZFS loader would likely pull metadata from zpool, which does not have a diskid for that drive 19:40:24 (don't quote me on that, it's an 'educated' guess) 19:40:26 There's kern.geom.label.gptid.enable="0" in my loader.conf 19:41:18 Tenkawa, You would need to describe your system in more detail. You have a UFS install and want to convert it to ZFS? It's more than just /usr in that conversion. 19:42:41 Remilia: It must be. I don't see gptid's for those disks in that pool. But, each disk (aside from da5) has the /dev/diskid. 19:43:10 as I said, da5 was stored in zpool metadata on zpool creation/import 19:43:42 the other disks were stored with their diskids instead of device names 19:43:45 Remilia: What I'm saying is I used the diskid's to create the pool. So, somewhere along the line, that was no longer available and I'm not sure why. 19:43:59 it is random 19:44:14 I had the same issue in my pools before 19:44:15 ... or how the pool was imported without it. 
Unless it just searched /dev/diskid *AND* /dev for matching metadata, maybe? 19:45:07 I could never get it to be 100% uniform back then and just ignored it 19:45:16 Remilia: Yeah. It's quite strange. It is working fine, though. There aren't any problems. Just weird to look at in zpool status output. 19:45:30 Remilia: Haha. That's pretty much what I've been doing. 19:46:34 ek: I think the pool metadata keeps old device names after export and they might just carry over like that as long as the zpool ID matches 19:46:49 When I was in a flaky system I had a disk dropped out of the array. Then hotswapped. Then it listed as /dev/da5p2 at that point. I could have zpool online /dev/gpt/zK1234 I think but instead I took the hint and online'd the da5p2 name, and it stuck. That's all easy to do. 19:47:35 Remilia: Yep. That would make sense. So, now I'm wondering if I export the pool before my next reboot, and then import the pool with -d /dev/diskid if it'll be uniform again. 19:47:48 ek: if I recall right I even tried playing with glabel and it did not work either 19:47:48 Depending upon your raid level you can probably take that device offline, then online it with the label. Pretty sure. I think. I feel I should try it before recommending that. 19:47:52 Of course, that all depends on if the diskid comes back after reboot. I'd try it now, but the diskid isn't there. 19:48:20 rwp: I'd do that right now if I had an ID to import it with. :( 19:49:01 Remilia: Hrm. That's no bueno. Again, I suppose it doesn't really matter. It's working. 19:49:01 if you release the disk from the pool it might regain the ID, or you could camcontrol detach/reattach the disk 19:49:14 ek, I think that taking the disk offline will close the device and then the diskid will appear again. I think. But I am using gpt labels not the older disklabels, which are now deprecated according to the handbook (again I think). 19:49:16 I don't wanna botch anything up trying to make it pretty for no reason. 
19:49:20 (after removing from the pool) 19:49:38 Remilia: That is true! Could be worth a shot. 19:49:58 personally I would just forget about it 19:50:14 * Remilia does not look at zpool status much unless she gets a warning 19:50:36 ek, How many devices are in the pool? And in the system in total? 19:50:48 plus my current server is a guaranteed resources VPS so there's just a single device pool 19:51:12 Remilia: Same and same. I was just bringing it up since divlamir's somewhat similar situation reminded me of it. 19:51:33 Otherwise, my silly little cron checks will let me know if there's a problem. 19:52:05 rwp: 6 in the pool, 8 total (2 in mirror for zroot and 6 in z2 for storage.) 19:52:38 Remilia: That's handy. 19:52:45 ... the guaranteed resources bit. 19:52:55 ek, That is enough devices that it is worthwhile to use persistent gpt labels. And I suggest using something like device serial numbers or other identification. 19:53:16 I don't have a situation, all is good. It's just that dvl's little hurdle reminded me that labels are _good_ and _numbers_ are bad 19:54:04 Device renumbering is a pain. But having swap suddenly occupy your zpool device is next-level evil. 19:54:11 rwp: I agree. I *MAY* be able to migrate or zfs send/recv the data and just rebuild the pool with GPT at some point. 19:54:20 divlamir: Glad you have no situation! 19:54:30 aren't labels kinda just very big numbers? 19:54:50 Remilia: Did you figure out the arc_prune thing? It sounds familiar .. 19:55:03 Earlier this summer a friend had a RAID10 setup with a striped 2x mirror configuration. Had a device fail. Removed and replaced one disk. Was confused about devices and left the system down one disk. Later another device apparently failed and it was the degraded mirror already and it left him with an array offline missing the striped mirror. Too many devices. 19:55:27 Llampec, GPT labels are strings. You put anything you want in them. 
(But no dashes or gmirror fails) 19:55:40 And they don't get renumbered 19:56:14 rwp: i'm... sorta trolling, lol. 19:57:55 happy adherent of gpt label with drive serial 19:58:00 I'll also throw kern.geom.label.disk_ident.enable="0" and kern.geom.label.gptid.enable="0" in there as things that bsdinstaller defaults to now and things are different with different values there. 20:51:35 I ended up just offlining the disks from the pool, creating a GPT partition on each one using serial number as labels, and replacing them in the pool with gpt/label. 20:52:15 Since the resilver was so quick due to not a lot of data at the moment (1.5 minutes each), I just did it. Whatever. Problem solved! 20:58:12 Dropped the fragmentation down to 0%, so I guess that's nice (not that it really matters, I suppose.) 21:01:19 Cool ^^ I think I'll wait till next reboot to do it the `zpool import -d` way suggested by rwp 21:32:02 Doing it disk by disk is fast if there is only small amount of data. It's data dependent. You can guess by how long a full scrub takes. If a full scrub is a few minutes then that's how long. If a full scrub is many hours then also that's how long. 21:38:07 Yep. Not a lot of data in there. Figured I'd snapshot backup and give it a shot. Worked out fine.
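The offline/relabel/replace cycle described above can be sketched roughly as follows, one disk at a time (pool name, device name, and the serial-number label are all placeholders):

```shell
# give a raw pool member a GPT table labeled with its serial number,
# then swap it back into the pool under the label (all names hypothetical)
zpool offline storage da5
gpart create -s gpt da5
gpart add -t freebsd-zfs -l ZK1234 da5
zpool replace storage da5 gpt/ZK1234
zpool status storage         # let the resilver finish before the next disk
```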