00:52:43 hello all, which high speed NIC (100G) is available on illumos? any advice? and is there any NVMe over Fabrics initiator driver available?
01:07:50 tozhu: probably one of the mellanox nics and maybe chelsio? (though someone else would need to confirm that)
01:08:07 and no, there's currently no NVMe over fabrics support in illumos
01:09:20 jbk: do you know if there is a plan for Intel E810 support? Thank you very much
01:18:40 Thank you very much
01:25:59 heh.. it helps to stick around for the answer :)
01:33:02 tozhu: as a matter of fact, I've been working on the e810 driver (picking up the work that rmustacc started a while ago)
01:33:31 though still not yet to the point where it can pass packets (though maybe soon depending on other work)
01:37:05 really good news, thank you very much for your work
02:28:14 Yeah, the T6 and various CX-5/6 parts are the way to go for 100G.
02:45:01 i guess there's also a cx-7 now... though no idea if anyone's working on that yet or not...
03:02:50 I believe so. Dan would have the most state.
03:04:13 Let me know if there's anything from me that'd help on the e810 front, jbk.
03:35:20 so far it hasn't been too bad.. i did peek a bit at the freebsd driver to clarify a few things from the programming manual... but thank you for doing the hard/annoying work of packing those context structures :)
17:14:03 [illumos-gate] 16697 krb5: dangling pointer 'stash_file' to 'stashbuf' may be used -- Toomas Soome
22:03:56 rmustacc: I have an i40e card and see that it has a lot of parameters in /kernel/drv/i40e.conf.
22:04:35 rmustacc: Could you help to find out the current values of rx_ring_size and tx_ring_size ?
22:05:56 rmustacc: And what are the symptoms when those values should be increased ?
22:09:38 I'm not going to be able to get you the exact syntax, but simplest form is to use mdb and do something like i40e_glist::walk list | ::print i40e_t ...
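[editor's note: the mdb invocation rmustacc sketches above can be fleshed out roughly as follows. The i40e_t member names (i40e_rx_ring_size, i40e_tx_ring_size) are an assumption based on the driver's naming conventions; verify them against i40e_impl.h and the i40e_main.c link below before relying on them.]

```shell
# Run against the live kernel as root. Walk the global list of i40e
# instances and print the configured ring sizes. Member names are
# assumed, not confirmed -- check i40e_impl.h if ::print complains.
mdb -k <<'EOF'
i40e_glist::walk list | ::print i40e_t i40e_rx_ring_size i40e_tx_ring_size
EOF
```

These fields correspond to the rx_ring_size and tx_ring_size properties set in /kernel/drv/i40e.conf.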
22:09:55 If you look at the source it'll show you how those parameters map to the values in the rings.
22:10:12 As for increasing or decreasing it, I've never tuned them so no real recommendations.
22:11:43 https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/i40e/i40e_main.c#L204-L315 will help relate the structures.
22:14:52 rmustacc: Thanks!
22:17:39 [illumos-gate] 16703 types.h.3head has incorrect type for LP64 ino_t -- Bill Sommerfeld
22:17:42 rmustacc: How can we tell that rx-ring-size is not enough and should be increased? And the same for tx-ring-size?
22:18:21 I'm not sure I understand the question.
22:18:33 Are you asking how you should know when you need to tune it?
22:19:42 rmustacc: yes.
22:20:25 I don't have a simple answer there. As I said, I've never tuned it myself. Ring sizing has not historically been my issue.
23:03:26 vetal: my only suggestion is that if you suspect it's a problem, try a workload you care about with different ring sizes. In a very different context (not illumos) I've seen a case where it was beneficial to shrink ring sizes and ring counts (as it freed up memory that the workload had a better use for..)
23:19:16 [illumos-gate] 16704 protolist corrupts inode numbers larger than 2^31 -- Bill Sommerfeld
23:20:25 sommerfeld: Is there any way to know that the current ring size is too small, and the incoming/outgoing packet rate overflows the ring buffer?
23:26:14 vetal: so you're assuming making the ring bigger would resolve the problem when it might be a rate-matching problem (not processing things out of the ring quickly enough).
23:34:24 sommerfeld: I guess so. However, is there any tool that reports clearly about ring overflow?
23:37:47 on rx, that would be error counters on the NIC, which are nic- and driver-specific.
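[editor's note: a starting point for the "error counters on the NIC" that sommerfeld mentions is kstat and dladm. The exact statistic names are driver-specific as noted above, so the grep pattern is exploratory and i40e0 is a placeholder link name, not confirmed from the log.]

```shell
# Dump all kstats exposed by the i40e driver and look for anything that
# smells like a drop or overflow counter; names vary per driver, so this
# is a fishing expedition, not a definitive check.
kstat -p -m i40e | grep -Ei 'err|drop|overflow|norcvbuf'

# Link-level statistics (errors included) via dladm; replace i40e0 with
# the actual link name from `dladm show-link`.
dladm show-link -s i40e0
```

A counter that keeps climbing under the workload of interest (while others stay flat) is the hint that the rx path is falling behind, though as noted above that may be a rate-matching problem rather than a ring-size problem.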