03:00:32 Odd question, but is there a way to clone directly from code.illumos.org?
03:03:22 oooh. nvm I got it.
03:04:18 forgot my local username and my remote username were different. :P
04:58:22 hrm... building edkii (at least ovmf for starters) on smartos was suspiciously too easy
04:59:26 Is there some reason it shouldn't have worked?
05:00:09 it has its own bespoke build system (possibly systems), which is usually not a good sign for being cross-platform beyond possibly some specific linux distros
05:02:05 Looking at its build instructions right now, and "how to develop with containers" being its own article definitely gives me the "very linux-centric" kind of vibe. You use Stuart?
05:02:15 no just build
05:05:18 Nice. Glad to know this works though. Maybe I can bap some uefi applications together via rust on smartos. Pretty sure the uefi target requires edk2.
05:06:56 for what? efi applications get passed the pointer to the system table
05:07:16 and it's going to just need rust-native definitions for all the protocols
05:07:26 (which I think uefi-rs already defines, for the common ones)
05:07:39 but it won't be able to read the header files for that...
05:10:06 Not totally sure. Might not require edk2 and I've just got something mixed up somewhere.
15:13:28 [illumos-gate] 17541 Shutdown hang with smb_opipe_read calls blocked -- Gordon Ross
15:13:28 [illumos-gate] 17542 SMB: File ID problem with nested data sets -- Matt Barden
19:01:43 "genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffcc26c379a40 addr=20 occurred in module "smbsrv" due to a NULL pointer dereference" - omnios problem. Anyone want any details?
19:02:02 I presume I should register this as an illumos problem, not an omnios one.
19:04:59 yeah
19:05:22 and if you can, save the crash dump (or at least, grab the stack trace)
19:05:28 that's assuming I can remember where to file said bug report.
19:05:51 crashdump has been saved.
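[Editor's note] The UEFI exchange above (a Rust EFI application just receives a pointer to the system table, so no edk2 headers are strictly needed) can be sketched in plain Rust. This is a hedged illustration with simplified stand-in types, not the real `uefi-rs` API: a real application would use `#![no_std]`, the `x86_64-unknown-uefi` target, and the `uefi` crate's full type definitions.

```rust
use std::ffi::c_void;

// Simplified stand-ins for the real UEFI types; only the table-header
// signature is modeled here. The uefi crate defines the full versions.
type Handle = *mut c_void;
type Status = usize;
const EFI_SUCCESS: Status = 0;
// "IBI SYST" as a little-endian u64, per the UEFI spec.
const EFI_SYSTEM_TABLE_SIGNATURE: u64 = 0x5453_5953_2049_4249;

#[repr(C)]
struct SystemTable {
    hdr_signature: u64,
    // ... con_out, boot services, runtime services, etc.
}

// A UEFI application entry point: the firmware passes an image handle and
// a pointer to the system table; all protocols are reached through them.
extern "efiapi" fn efi_main(_image: Handle, st: *mut SystemTable) -> Status {
    let table = unsafe { &*st };
    if table.hdr_signature == EFI_SYSTEM_TABLE_SIGNATURE {
        EFI_SUCCESS
    } else {
        1 // not a valid system table
    }
}

fn main() {
    // Host-side demonstration with a fake table; real firmware would
    // invoke efi_main itself at application load.
    let mut fake = SystemTable { hdr_signature: EFI_SYSTEM_TABLE_SIGNATURE };
    let status = efi_main(std::ptr::null_mut(), &mut fake);
    println!("efi_main returned status {status}");
}
```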
Happy to grab stack trace but I don't have my notes so I'd need guidance.
19:06:26 https://www.illumos.org/projects/illumos-gate/issues/new
19:06:47 after the system restarts, it should save the file to probably /var/crash/$HOSTNAME
19:06:48 just got there :)
19:06:57 (the dumpadm command should show the path if unsure)
19:07:10 root@fs3:/var/crash# savecore -f /dev/zvol/dsk/rpool/dump
19:07:10 savecore: System dump time: Fri Aug 15 11:21:11 2025
19:07:10 savecore: Saving compressed system crash dump in /var/crash//vmdump.0
19:07:10 savecore: Decompress the crash dump with
19:07:10 'savecore -vf /var/crash//vmdump.0'
19:07:11 root@fs3:/var/crash# ls -l
19:07:13 total 18086459
19:07:15 -rw-r--r-- 1 root root 2 Aug 15 12:00 bounds
19:07:17 -rw-r--r-- 1 root root 1110 Aug 15 12:00 METRICS.csv
19:07:19 -rw-r--r-- 1 root root 11350835200 Aug 15 12:00 vmdump.0
19:07:21 I'll rename it.
19:07:44 you'll need to expand it (savecore -f vmdump.0)
19:08:08 then you can do `mdb 0 -e '::stack'`
19:08:21 (if the files are vmcore.0 and unix.0)
19:09:48 waiting on the savecore expansion.
19:12:09 yeah, it's not the fastest thing...
19:12:32 smb2_durable_timers+0x83(fffffe23ee26ba80)
19:12:32 smb_server_timers+0x20(fffffe23ee26c110, fffffe23ee26ba80)
19:12:32 smb_thread_entry_point+0x8f(fffffe23ee26c110)
19:12:32 thread_start+0xb()
19:13:10 I'm creating a ticket and will put this in it as well. I presume I should attach the original dumpfile as well.
19:14:09 The original dump probably won't fit.
19:14:11 It's 10G compressed.
19:14:18 what part(s) should I include?
19:14:41 unix.0 is only 2M but would it be helpful at all?
19:14:44 Usually I would start with '::status', '$C', '::stacks' in general.
19:15:09 No, you need both to be useful. When someone digs further, they'll probably reach out to get the dump.
19:15:29 rmustacc, that's an mdb command, right?
19:15:42 Yeah.
19:16:33 I'm not a kernel dev (or a dev at all).
19:17:10 Is this what you mean?
19:17:13 root@fs3:/var/crash# mdb 0 -e ::status', '$C', '::stacks
19:17:13 debugging crash dump vmcore.0 (64-bit) from fs3
19:17:13 operating system: 5.11 omnios-r151046-4c557abec1d (i86pc)
19:17:13 build version: gfx-drm - heads/master-0-g77f745e
19:17:13 heads/r151046-0-g4c557abec1d
19:17:43 I didn't know -e took multiple lines.
19:17:54 image uuid: a5449e15-d832-4df4-88ff-05eec1c0eea2
19:17:54 panic message: BAD TRAP: type=e (#pf Page fault) rp=fffffcc26c379a40 addr=20 occurred in module "smbsrv" due to a NULL pointer dereference
19:17:54 dump content: kernel pages only
19:17:55 (curproc requested, but a kernel thread panicked)
19:18:19 I've pasted the output of both mdb commands in the ticket. Anything else I should include?
19:18:19 addr=20 is useful-ish. NULL pointer, but a field 32 bytes (0x20) in.
19:18:48 i can ping gordon -- probably him or matt would be the most interested I suspect
19:18:48 Both are in the ticket, yes. And the `::status` answers the question about which OmniOS revision (incl. illumos-omnios commit IIRC).
19:19:18 Yeah, GWR's your guy. Given that it's an old OmniOS and I know there have been smb/cifs fixes (including some recent ones), it'll perhaps be one of those now-fixed bugs.
19:19:29 I've also included the output of uname -a
19:20:15 huh? 151046? I swear I updated these hosts.
19:20:29 omnios-r151054e NR / 24.34G static 2025-06-24 09:17
19:20:41 That's .. very interesting.
19:21:49 all three of my prod hosts are still running 46 when they should be running 54. This is ... very much uncool.
19:22:48 bug filed
19:23:02 * nomad opens an internal ticket to redo the OS upgrades on his prod servers.
20:40:05 seems like you did everything but the reboot
20:40:30 nomad: were you the person with an x4500?
21:38:58 richlowe, I had a few thumpers but they went to surplus a year ago.
21:40:34 IIRC, there were 7 or so 4540s and one 4500 in the lot.
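[Editor's note] The addr=20 observation above is worth a small illustration: when code dereferences a field through a NULL base pointer, the faulting address is the field's offset within the struct, not zero. The struct below is a made-up stand-in (not the actual smbsrv structure), sketched in Rust:

```rust
use std::mem::offset_of;

// Hypothetical layout: `timers` sits 0x20 (32) bytes into the struct.
#[repr(C)]
struct SmbThing {
    pad: [u8; 0x20],
    timers: u64, // offset 0x20
}

fn main() {
    // Reading sp->timers through a NULL `sp` would fault at base + offset,
    // i.e. address 0x20 -- which is what addr=20 in the panic message means.
    println!("field offset: {:#x}", offset_of!(SmbThing, timers));
    // prints "field offset: 0x20"
}
```

So a small fault address like 0x20 usually points at a NULL struct pointer plus a known field offset, which helps narrow down which pointer was NULL.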
21:41:08 re: reboot - those hosts have been rebooted since the "upgrade"
21:41:27 I clearly missed a step and then clearly missed the critical "make sure it actually did what you thought it did" step.
21:41:29 so not cool.
21:45:02 the "R" is saying that's the BE that'll be active next boot
21:45:18 oh, I misread
21:45:19 ignore me
21:48:36 Don't worry, I'm the one who *didn't notice* what OS version he was running.
21:49:18 ok, I'm going back to the last day of my 'vacation'. I'll try to schedule patching time for these hosts. At the latest they'll be updated in mid-to-late September on the next regular patching day.