14:30:45 hello all, what does this tcp parameter mean? tcp_rst_sent_rate_enabled, when should it be set to 0? I'm running the PostgreSQL RDBMS, should it be set to 0, or left at the default value of 1?
14:32:17 it is in an isolated network
15:01:46 It's a protection against packet amplification (it rate-limits RST packets). What problem are you actually trying to solve?
15:15:06 I *must* be doing something wrong. I'm trying to set an MTU of 9000 (for jumbo frames) on an ixgbe device and I'm told it isn't valid. When I check, I see this:
15:15:09 ipadm show-ifprop -p mtu ixgbe4
15:15:09 IFNAME  PROPERTY  PROTO  PERM  CURRENT  PERSISTENT  DEFAULT  POSSIBLE
15:15:10 ixgbe4  mtu       ipv4   rw    1500     --          1500     68-1500
15:15:10 ixgbe4  mtu       ipv6   rw    1500     --          1500     1280-1500
15:15:26 Is 1500 really the max MTU?
15:15:44 (looking at the POSSIBLE column.)
15:16:10 yes, but no
15:16:18 do
15:16:29 illumos separates out the data link layer bits from the IP bits
15:16:49 nomad: dladm(8)
15:17:07 danke
15:17:08 so you need to see what the MTU is on the underlying link
15:17:22 * nomad goes to read more manpages
15:17:29 annoyingly though, to change it, you'll need to tear down your IP interface(s)
15:17:39 change it, then recreate them with ipadm / ifconfig
15:17:59 that's ... ungood.
15:19:09 The good news is that when you change it with dladm(8) it persists.
15:19:44 LINK    PROPERTY  PERM  VALUE  DEFAULT  POSSIBLE
15:19:44 ixgbe4  mtu       rw    1500   1500     1500-15500
15:20:27 yeah, i've floated around some sort of 'apply at next start' option to dladm as a possible way to deal with it, but haven't really done much more than what I just said
15:20:51 so, I have to ipadm delete-if, then dladm set-linkprop, then ipadm create-if to reset it?
15:21:46 well, I can test to see if setting jumbo frames impacts iperf3 test results, but I'm going to be stuck if it actually turns out to matter.
15:22:18 I guess I'll worry about that after the tests. Thankfully I have two test hosts I can do this on without impacting prod.
15:23:46 the reason for the limitation is basically that once you start 'using' the link (e.g. via ipadm create-if; not the only way, but by far the most common), pretty much every driver (for performance reasons) creates a pool of MTU-sized buffers for TX and RX (that are pretty much ready to go)
15:24:34 so changing the MTU would mean having to reallocate all of those, which would be rather complex to do if you're also actively using the buffers to pass traffic
15:24:47 (i'm simplifying a bit, but that's the basic gist)
15:28:42 hrm.. i think this has been asked before (and the answer is 'no'), but has anyone put any thought into what NVMe over fabrics would look like?
15:31:15 well, the good news is it doesn't seem to actually matter to the speed tests I'm doing. The bad news is, it's still slower than it should be. https://pastebin.com/TJh7WgNz
15:32:34 danmcd: Thank you very much, I just read the parameters; my application is postgresql, and I hope the network can run with the best performance for postgresql
15:53:17 RST won't help you.
15:53:33 There are PG-savvy folks on the developer list who might be able to help.
15:53:57 okay, thank you very much
15:54:21 @nomad ==> I wonder if you're single-CPU-stream bound? Did you try running multiple iperf server processes on multiple ports, followed by clients connecting to those ports?
15:54:26 iperf isn't MT, but your apps are.
15:57:21 For a 10G link, two should do nicely.
15:57:43 IF they add up to 10, you're CPU (and yes, possibly driver and/or TCP stack) bound.
15:58:16 danmcd, -P 2 has much better totals.
15:58:34 Okay.
15:58:38 I forgot about that because iperf3 on a different host sees the full 10G (or close enough).
15:58:45 [SUM]  0.00-30.00 sec  33.0 GBytes  9.44 Gbits/sec  sender
15:58:45 [SUM]  0.00-30.00 sec  33.0 GBytes  9.44 Gbits/sec  receiver
15:59:02 well... in one direction it does.
15:59:05 Responder is still single-threaded IIRC.
15:59:14 the same hosts going in the other direction are still way low.
15:59:29 [SUM]  0.00-30.00 sec  17.6 GBytes  5.03 Gbits/sec  sender
15:59:29 [SUM]  0.00-30.00 sec  17.6 GBytes  5.03 Gbits/sec  receiver
15:59:34 iperf3 is good for only one thing: single-stream tests.
15:59:45 what would you use for testing?
16:00:39 Two iperf -s procs with different ports, and then run two iperf clients, one connecting to each server.
16:00:57 You can use pwait(1) as a starting line:
16:01:08 sleep 3600 &
16:01:26 ( pwait `pgrep sleep` ; iperf -c <... one server port>) &
16:01:41 ( pwait `pgrep sleep` ; iperf -c <...other server port>) &
16:01:42 * nomad nods
16:01:43 pkill sleep
16:01:48 BOTH go off to the races.
16:02:43 Someone should write an iperf server variant that spawns a thread per connection upon accept().
16:02:44 I'll poke at that shortly.
16:18:13 I'm getting a combined total of 8.94 Gb/s in one direction and 9.57 Gb/s in the other.
16:18:30 so now to tear the interfaces down and reset them to the 1500 MTU again.
16:26:47 so, yeah, MTU not making a difference in iperf3 timing. As would generally be expected.
16:27:01 which leaves me confused as to why it made such a huge difference on the FBSD tests I did over the weekend.
16:44:51 [illumos-gate] 16603 acl_totext(3SEC) can truncate users and groups -- Gordon Ross
16:44:51 [illumos-gate] 16623 Want tests for libsec acl text conversions -- Gordon Ross
16:46:40 [illumos-gate] 16591 nvme_field_validate swallows more specific error messages -- Andy Fiddaman
16:46:40 [illumos-gate] 16592 Cannot update NVMe firmware on Micron 7300 -- Andy Fiddaman
16:46:40 [illumos-gate] 16593 nvme panic when committing partially loaded firmware -- Andy Fiddaman
16:46:40 [illumos-gate] 16596 nvmeadm: some firmware activation controller errors are not -- Andy Fiddaman
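
A minimal sketch of the MTU-change sequence discussed in the log (ipadm delete-if, then dladm set-linkprop, then ipadm create-if), assuming the link is ixgbe4 and that it carries a single static IPv4 address; the address object name and the address itself are hypothetical:

    # remove the IP interface so the link is no longer in use
    ipadm delete-if ixgbe4
    # set the link-layer MTU with dladm(8); this setting persists
    dladm set-linkprop -p mtu=9000 ixgbe4
    # recreate the IP interface and re-add its address (hypothetical values)
    ipadm create-if ixgbe4
    ipadm create-addr -T static -a 192.0.2.10/24 ixgbe4/v4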
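
A rough sketch of the two-server, two-client run described above, filling in the elided values with hypothetical ones (receiving host "receiver", ports 5201 and 5202, iperf3, 30-second runs), and using pwait(1) as the starting line as suggested:

    # on the receiving host: one server process per port
    iperf3 -s -p 5201 &
    iperf3 -s -p 5202 &

    # on the sending host: park a sleep, have both clients wait on it,
    # then kill the sleep so both clients start at (nearly) the same instant
    sleep 3600 &
    ( pwait `pgrep sleep` ; iperf3 -c receiver -p 5201 -t 30 ) &
    ( pwait `pgrep sleep` ; iperf3 -c receiver -p 5202 -t 30 ) &
    pkill sleep
    # add the two per-client results for the combined throughput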