-
jbk
hrm...
-
jbk
rmustacc: the mac ring tx entry point takes a _chain_ of mblk_ts, correct (i.e. it could be more than 1 packet, and of course each packet can itself be segmented)
-
jbk
?
-
jbk
the man page says yes, but looking at i40e's i40e_ring_tx -- unless I'm missing something, if passed a chain of mblk_ts, it looks like it only sends the 1st one
-
jbk
it's late, and I wasn't feeling great the past two days, so I feel like I'm maybe missing something obvious
-
rmustacc
jbk: The reality is that mac is only giving the mri_tx entry point single mblk_t's with no b_next. But I don't think it should long term.
-
jbk
ahh ok
-
rmustacc
But I realize that the docs aren't as good there so it leading to confusion doesn't help.
-
rmustacc
I think at one point I accidentally wrote some wishful thinking into some of the drafts and then after they sat there for a while I forgot about that.
-
rmustacc
But for now you can just ASSERT that b_next is NULL.
-
rmustacc
Which nic is this for again?
-
jbk
we need a working ice NIC (since some of the HW has it built-in), so I _just_ started with the work you did (really just looking over things closely as well as the programming guide so far)
-
jbk
err some of the HW we want to support
-
rmustacc
OK. Main thing I wanted to mention there is that the I/O engine is identical to i40e.
-
jbk
yeah
-
rmustacc
So you can more or less use it verbatim, especially for the issues around TSO.
-
jbk
which reminds me, i need to upstream my changes there -- basically it caps the DMA buffer size for data at 2k and trades off more descriptors for smaller buffers (and yeah, the TSO bits were annoying to get right)
-
jbk
i think some smartos people have seen it as well where jumbo frames can cause _long_ vnic creation times
-
jbk
(in terms of minutes.. I think we saw one where the dladm command was blocked for > 30 mins :P)
-
rmustacc
Requiring contiguous buffers is never great, just was a simplifying approach when I did this a long time ago.
-
jbk
yeah
-
jbk
get it working, then get fancy
-
jbk
i still need to finish writing up an IPD -- I think we could (opt-in) have mac handle a lot of that for a driver and just hand off (more or less) an array of (paddr, len) + flags (I'm simplifying a bit) to transmit... (i've prototyped a bit where not opting in shouldn't see anything different aside from like 2 extra if ()s in mac)
-
jbk
just to try to contain any potential blast radius
-
gitomat
[illumos-gate] 15657 struct pam_message in conversation function should be const -- Dominik Hassler <hadfl⊙oo>
-
rmustacc
The devil is in the details so I'll be curious to see what you come up with and what parts you're trying to hand off.
-
Guest16
Hello, coming here for some help. I have NVIDIA GT 1030. The latest driver is 550.100, but the NVIDIA driver search on their website gives me version 387.34 to download. Which one should I install? Thank you.
-
vetal
Hi, is there any way to run gdb to debug a process inside an LX zone? It complained about "Couldn't write registers: Input/output error"
-
vetal
Also gdb complained "warning: opening /proc/self/mem file failed: Read-only file system (30)"
-
vetal
In dmesg in global zone: "attempt to execute non-executable data at 0x7fffecc10000 by uid 0"
-
ptribble
That's presumably with the Linux gdb? There's also the native illumos gdb which, if installed in the global zone, will be at /native/usr/bin/gdb
-
ptribble
Although the native gdb might have as much trouble understanding a Linux process as the Linux gdb has understanding running on an illumos kernel
-
vetal
ptribble: Linux gdb in LX zone.
-
rmustacc
Yeah, I suspect the ptrace and related emulation is going to be fairly minimal.
-
rrogalski
Xaero, what do you need that gpu for on oi?
-
rrogalski
Wait, do people run games in Linux zones?
-
xaero
i game on a windows disk
-
xaero
but i use the same gpu on OI
-
rrogalski
Hmmm so people don’t game much on oi?
-
rrogalski
I’m sure many foss games would work on it? SDL2 works, I assume?
-
blue-cat
How to remove zfs from a device?
-
rmustacc
blue-cat: The simplest way is usually just 'zpool destroy', but specifics will vary based upon data.
-
rmustacc
That is, how you want the data to be removed.
-
blue-cat
i'm sorry, i had a little connection problems and was disconnected
-
blue-cat
a few times, and i haven't seen any messages
-
wiedi
blue-cat: you can see the logs at
log.omnios.org/illumos :)
-
blue-cat
zpool destroy?
-
blue-cat
i probably can't do it from running os, but i can do it from usb
-
blue-cat
?
-
sommerfeld
blue-cat: there's also 'zpool labelclear'
-
sommerfeld
if you have multiple pools, you can safely 'zpool destroy' the pools that don't have the currently running root filesystem in them (but you might first have to kill off processes that have stuff open/active on the filesystems to allow them to be unmounted).
-
sommerfeld
oh, ooops, blue-cat left before i responded.
-
blue-cat
sommerfeld, sorry, i have unstable connection
-
blue-cat
but i still can read the logs, as i've been instructed to do
-
blue-cat
do i need to `zpool destroy` first if i `zpool labelclear`?
-
sommerfeld
blue-cat: or 'zpool export'
-
sommerfeld
as rmustacc said above, the best command for this depends on what you're trying to accomplish.
-
sommerfeld
are you trying to render the contents of the former pool completely inaccessible, or do you just want it not marked as containing a pool in a way that might confuse a future 'zpool import' ?
-
blue-cat
"how to remove zfs from a device?" was my question
-
sommerfeld
right, what exactly do you mean by that?
-
blue-cat
im not sure
-
sommerfeld
what is the presence of a zfs pool on a particular device or set of devices preventing you from doing?
-
richlowe
we've had problems that people don't seem to find easy to repeat with growing zfs pools where the device has an end-of-pool label lingering on it
-
richlowe
in that case, labelclear does the job, I believe.
-
sommerfeld
for instance, if you're trying to render the former contents of the pool unrecoverable, "zpool destroy" alone won't cut it (see "zpool import -D" which can bring back a destroyed pool)
-
richlowe
that's the only "remove zfs" case I know of where something really does need to happen.
-
blue-cat
i just want to know how to remove it
-
richlowe
if you want reliable destruction, use encryption and forget the key
-
sommerfeld
the lightest weight way to get rid of a pool is "zpool export" (unmounts it, causes it to not be mounted on boot); that can be un-done by "zpool import".
-
sommerfeld
"zpool destroy" is "export, and mark as destroyed", and it can be un-done with "zpool import -D".
-
jclulow
potentially undone, as long as you don't do anything else in the meantime
-
richlowe
"zpool labelclear" will destroy your labels, you won't be able to get the pool back after that, but a lot of data is still on the disk and readable.
-
sommerfeld
zpool destroy or zpool export followed by zpool labelclear on the component devices should make re-import impossible (but the bulk of the pool contents could potentially be recovered forensically with a lot of work).
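sommerfeld
pulling that together, the sequence looks like this -- pool and device names are placeholders for your own, and everything past export is destructive:

```shell
# Lightest touch: unmount and forget; reversible with 'zpool import'.
zpool export tank

# Or: mark the pool destroyed; reversible with 'zpool import -D'.
zpool destroy tank

# Then make re-import impossible by clearing the ZFS labels on each
# former component device (repeat per device; -f forces it).
zpool labelclear -f /dev/dsk/c1t0d0
```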
-
nomad
If you want to completely delete all data on the host you can use DBAN and wipe the drives. That might be a bit of overkill, depending on what you need.
-
blue-cat
i'm just completely clueless, but my original question is still "how to remove zfs from a device?". all i know is that zfs is a filesystem. then a close question would be "how to remove filesystem from a device?". in linux, such questions are usually answered with "use wipefs".
-
blue-cat
But as far as I know zpool labelclear -f does more than wipefs with zfs
-
nomad
what do you mean by 'remove' and by 'device'? What do you want to do with the device (I presume a hard drive, SSD, or equivalent)? If you just want to reuse the drive with a different filesystem then blow away the partition table, repartition it, and on you go.
-
nomad
A very quick read of the wipefs(8) manpage suggests wipefs just removes the partition information by default.
-
blue-cat
yeah, but people say wipefs is not enough for zfs
-
blue-cat
its why i am asking
-
jclulow
I mean if you want to be absolutely sure
-
jclulow
you can write zeroes to the whole disk
-
jclulow
Mind you, that's still not a "secure" erase
-
jclulow
But it will at least be casually gone
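jclulow
sketched with a placeholder device name (this irreversibly overwrites the whole disk, so triple-check the target before running it):

```shell
# Overwrite the entire raw device with zeroes.  Slow, and not a
# certified secure erase, but the old pool is casually gone.
dd if=/dev/zero of=/dev/rdsk/c1t0d0p0 bs=1M
```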
-
richlowe
the best-in-class approach I believe is still to use ZFS encryption and forget the key
-
richlowe
but not an after-the-fact thing
-
jclulow
If it's an NVMe disk, you can "nvmeadm secure-erase" probably
-
blue-cat
it's not nvme and filling with zeroes takes so long
-
jclulow
Is the goal here secure disposal? i.e., you want to throw the disk in a rubbish bin and not have someone recover the contents
-
nomad
if it's a hard drive and you want to securely erase it use DBAN (Darik's Boot and Nuke). That won't work on SSDs or such, though.
-
» nomad ran DBAN on a bunch of 48 drive thumpers a few weeks ago. That was ... interesting.
-
richlowe
if the goal is secure disposal, you can also go rent a hammer drill
-
richlowe
and some goggles
-
nomad
thermite!
-
blue-cat
just some linux distributions advise something like `wipefs -a /dev/sda` before installation
-
nomad
This is why we keep asking what you want to achieve.
-
nomad
We're just guessing at what you actually need because you haven't explained your use case.
-
blue-cat
i can't remember exactly, but one of them is chimera linux
chimera-linux.org/docs/installation/partitioning
-
blue-cat
im not sure why they do this
-
nomad
because sometimes installers see an already formatted disk and think they can't use it.
-
blue-cat
but after googling i found that wipefs does not erase zfs properly
-
nomad
but *for that use case* blowing away the partition table (which is what that wipefs command does) is sufficient.
-
blue-cat
installers also can wrongly use already formatted disk and throw error
-
nomad
blue-cat, it looks like you're ignoring what I'm saying so I'm just going to say this then go back to work: Try It. Don't just believe what you read on google. If your reinstall fails because it can't find a usable drive *then* try to fix the problem.
-
blue-cat
okay, you said just create new (gpt) disklabel?
-
sskras
blue-cat: sure, just create a new partition table (gpt)