03:56:34 hrm...
03:57:14 rmustacc: the mac ring tx entry point takes a _chain_ of mblk_ts, correct (i.e. it could be more than 1 packet, and of course each packet can itself be segmented)
03:57:17 ?
04:00:03 the man page says yes, but looking at i40e's i40e_ring_tx -- unless I'm missing something, if passed a chain of mblk_ts, it looks like it only sends the 1st one
04:00:28 it's late, and I wasn't feeling great the past two days, so I feel like I'm maybe missing something obvious
04:09:19 jbk: The reality is that mac is only giving the mri_tx entry point single mblk_ts with no b_next. But I don't think it should long term.
04:09:57 ahh ok
04:10:20 But I realize the docs aren't great there, so the confusion they cause doesn't help.
04:10:51 I think at one point I accidentally wrote some wishful thinking into some of the drafts, and then after they sat there for a while I forgot about that.
04:11:31 But for now you can just ASSERT that b_next is NULL.
04:11:49 Which nic is this for again?
04:13:13 we need a working ice NIC driver (since some of the HW we want to support has it built-in), so I _just_ started with the work you did (really just looking over things closely, as well as the programming guide, so far)
04:13:33 OK. Main thing I wanted to mention there is that the I/O engine is identical to i40e.
04:13:38 yeah
04:13:51 So you can more or less use it verbatim, especially for the issues around TSO.
04:15:21 which reminds me, i need to upstream my changes there -- basically they cap the DMA buffer size for data at 2k and trade off more descriptors for smaller buffers (and yeah, getting the TSO bits right there was annoying)
04:15:52 i think some SmartOS people have seen it as well, where jumbo frames can cause _long_ vnic creation times
04:16:38 (in terms of minutes.. I think we saw one where the dladm command was blocked for > 30 mins :P)
04:16:44 Requiring contiguous buffers is never great; it was just a simplifying approach when I did this a long time ago.
04:16:48 yeah
04:16:57 get it working, then get fancy
04:20:04 i still need to finish writing up an IPD -- I think we could (opt-in) have mac handle a lot of that for a driver and just hand off (more or less) an array of (paddr, len) + flags (I'm simplifying a bit) to transmit... (i've prototyped a bit, and drivers that don't opt in shouldn't see anything different aside from like 2 extra if ()s in mac)
04:20:18 just to try to contain any potential blast radius
04:20:20 [illumos-gate] 15657 struct pam_message in conversation function should be const -- Dominik Hassler
04:21:56 The devil is in the details, so I'll be curious to see what you come up with and what parts you're trying to hand off.
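
A minimal sketch of the mri_tx convention described above, assuming a hypothetical driver -- the names mytx_ring_t, mytx_ring_tx, and mytx_send_one are invented for illustration and are not taken from i40e or the ice work; only the signature, the b_next assertion, and the return-on-blocked convention come from the discussion:

    #include <sys/types.h>    /* boolean_t */
    #include <sys/stream.h>   /* mblk_t */
    #include <sys/debug.h>    /* ASSERT3P */

    typedef struct mytx_ring mytx_ring_t;                      /* hypothetical */
    extern boolean_t mytx_send_one(mytx_ring_t *, mblk_t *);   /* hypothetical */

    /*
     * Hypothetical mri_tx entry point. Today mac hands this function a
     * single packet -- b_next is always NULL -- even though the mblk_t
     * type would permit a chain, so assert that assumption.
     */
    static mblk_t *
    mytx_ring_tx(void *arg, mblk_t *mp)
    {
            mytx_ring_t *ring = arg;

            /* mac currently never passes a chain of packets. */
            ASSERT3P(mp->b_next, ==, NULL);

            /*
             * The one packet may still be segmented: its fragments
             * are linked through b_cont, not b_next.
             */
            if (!mytx_send_one(ring, mp)) {
                    /* Out of descriptors: return the unsent packet to mac. */
                    return (mp);
            }
            return (NULL);
    }
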
06:18:06 Hello, coming here for some help. I have an NVIDIA GT 1030. The latest driver is 550.100, but the NVIDIA driver search on their website gives me version 387.34 to download. Which one should I install? Thank you.
08:23:45 Hi, is there any way to run gdb to debug a process inside an LX zone? It complains about "Couldn't write registers: Input/output error"
08:29:47 Also gdb complained "warning: opening /proc/self/mem file failed: Read-only file system (30)"
09:07:42 In dmesg in the global zone: "attempt to execute non-executable data at 0x7fffecc10000 by uid 0"
11:59:26 That's presumably with the Linux gdb? There's also the native illumos gdb which, if installed in the global zone, will be at /native/usr/bin/gdb
12:00:04 Although the native gdb might have as much trouble understanding a Linux process as the Linux gdb has understanding running on an illumos kernel
14:14:23 ptribble: Linux gdb in the LX zone.
14:14:55 Yeah, I suspect the ptrace and related emulation is going to be fairly minimal.
14:28:53 Xaero, what do you need that gpu for on oi?
14:30:37 Wait, do people run games in Linux zones?
14:31:07 i game on a windows disk
14:31:13 but i use the same gpu on OI
14:37:08 Hmmm so people don't game much on oi?
14:37:29 I'm sure many foss games would work on it? SDL2 works, I assume?
18:25:49 How to remove zfs from a device?
19:01:10 blue-cat: The simplest way is usually just 'zpool destroy', but specifics will vary based upon data.
19:01:22 That is, how you want the data to be removed.
19:13:55 i'm sorry, i had some connection problems and was disconnected
19:15:43 a few times, and i haven't seen any messages
19:16:30 blue-cat: you can see the logs at https://log.omnios.org/illumos/ :)
19:17:58 zpool destroy?
19:21:16 i probably can't do it from the running os, but i can do it from usb
19:21:18 ?
19:51:52 blue-cat: there's also 'zpool labelclear'
19:54:06 if you have multiple pools, you can safely 'zpool destroy' the pools that don't contain the currently running root filesystem (but you might first have to kill off processes that have stuff open/active on the filesystems to allow them to be unmounted).
19:55:34 oh, oops, blue-cat left before i responded.
20:14:49 sommerfeld, sorry, i have an unstable connection
20:16:19 but i can still read the logs, as i've been instructed to do
20:19:23 do i need to `zpool destroy` first if i `zpool labelclear`?
20:25:31 blue-cat: or 'zpool export'
20:26:33 as rmustacc said above, the best command for this depends on what you're trying to accomplish.
20:27:59 are you trying to render the contents of the former pool completely inaccessible, or do you just want it not marked as containing a pool in a way that might confuse a future 'zpool import'?
20:28:34 "how to remove zfs from a device?" was my question
20:28:49 right, what exactly do you mean by that?
20:29:52 i'm not sure
20:30:33 what is the presence of a zfs pool on a particular device or set of devices preventing you from doing?
20:31:19 we've had problems, which people don't seem to find easy to repeat, with growing zfs pools where the device has an end-of-pool label lingering on it
20:31:28 in that case, labelclear does the job, I believe.
20:31:49 for instance, if you're trying to render the former contents of the pool unrecoverable, "zpool destroy" alone won't cut it (see "zpool import -D", which can bring back a destroyed pool)
20:31:52 that's the only "remove zfs" case I know of where something really does need to happen.
20:32:02 i just want to know how to remove it
20:32:14 if you want reliable destruction, use encryption and forget the key
20:33:35 the lightest-weight way to get rid of a pool is "zpool export" (unmounts it, causes it to not be mounted on boot); that can be un-done by "zpool import".
20:34:06 "zpool destroy" is "export, and mark as destroyed", and it can be un-done with "zpool import -D".
20:34:47 potentially undone, as long as you don't do anything else in the meantime
20:35:31 "zpool labelclear" will destroy your labels, and you won't be able to get the pool back after that, but a lot of the data is still on the disk and readable.
20:35:36 zpool destroy, or zpool export followed by zpool labelclear on the component devices, should make re-import impossible (but the bulk of the pool contents could potentially be recovered forensically with a lot of work).
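
Putting the advice above together as a sketch -- the pool name 'tank' and the device 'c1t0d0' are invented for illustration:

    # reversible: unmount the pool and keep it from mounting at boot
    zpool export tank
    # or mark it destroyed instead (still reversible via 'zpool import -D')
    zpool destroy tank
    # irreversible: clear the ZFS labels on each former component device
    zpool labelclear -f /dev/dsk/c1t0d0
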
20:39:18 If you want to completely delete all data on the host you can use DBAN and wipe the drives. That might be a bit of overkill, depending on what you need.
20:39:32 i'm just completely clueless, but my original question is still "how to remove zfs from a device?". all i know is that zfs is a filesystem. then a close question would be "how to remove a filesystem from a device?". in linux, such questions are usually answered with "use wipefs".
20:40:26 But as far as I know zpool labelclear -f does more than wipefs does with zfs
20:40:51 what do you mean by 'remove' and by 'device'? What do you want to do with the device (I presume a hard drive, SSD, or equivalent)? If you just want to reuse the drive with a different filesystem then blow away the partition table, repartition it, and on you go.
20:42:41 A very quick read of the wipefs(8) manpage suggests that wipefs just removes the partition information by default.
20:43:11 yeah, but people say wipefs is not enough for zfs
20:43:18 it's why i am asking
20:44:34 I mean if you want to be absolutely sure
20:44:44 you can write zeroes to the whole disk
20:44:57 Mind you, that's still not a "secure" erase
20:45:04 But it will at least be casually gone
20:45:18 the best-in-class approach I believe is still to use ZFS encryption and forget the key
20:45:27 but that's not an after-the-fact thing
20:45:35 If it's an NVMe disk, you can probably "nvmeadm secure-erase"
20:46:15 it's not nvme, and filling it with zeroes takes so long
20:46:19 Is the goal here secure disposal? i.e., you want to throw the disk in a rubbish bin and not have someone recover the contents
20:46:57 if it's a hard drive and you want to securely erase it, use DBAN (Darik's Boot and Nuke). That won't work on SSDs or such, though.
20:47:44 * nomad ran DBAN on a bunch of 48-drive thumpers a few weeks ago. That was ... interesting.
20:49:00 if the goal is secure disposal, you can also go rent a hammer drill
20:49:04 and some goggles
20:49:56 thermite!
20:50:02 just some linux distributions advise something like `wipefs -a /dev/sda` before installation
20:50:40 This is why we keep asking what you want to achieve.
20:50:56 We're just guessing at what you actually need because you haven't explained your use case.
20:51:10 i can't remember exactly, but one of them is chimera linux https://chimera-linux.org/docs/installation/partitioning
20:51:20 i'm not sure why they do this
20:52:57 because sometimes installers see an already formatted disk and think they can't use it.
20:53:25 but after googling i found that wipefs does not erase zfs properly
20:53:27 but *for that use case* blowing away the partition table (which is what that wipefs command does) is sufficient.
20:55:09 installers can also wrongly use an already formatted disk and throw an error
20:55:45 blue-cat, it looks like you're ignoring what I'm saying, so I'm just going to say this and then go back to work: Try It. Don't just believe what you read on google. If your reinstall fails because it can't find a usable drive, *then* try to fix the problem.
20:58:14 okay, you said to just create a new (gpt) disklabel?
21:12:44 blue-cat: sure, just create a new partition table (gpt)
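
For reference: ZFS writes four 256 KiB labels per device, two at the front and two at the back, which is why wiping only the start of a disk can leave an end-of-device label lingering, as mentioned above. Where 'zpool labelclear' isn't available (e.g. in a Linux installer environment), a rough hand-rolled equivalent is to zero both ends of the disk before writing the new partition table; /dev/sda is invented for illustration:

    # zero the first and last 4 MiB, which covers the ZFS label areas
    dd if=/dev/zero of=/dev/sda bs=1M count=4
    dd if=/dev/zero of=/dev/sda bs=1M count=4 \
        seek=$(( $(blockdev --getsz /dev/sda) / 2048 - 4 ))
    # then write a fresh, empty GPT
    parted -s /dev/sda mklabel gpt
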