00:49:49 richlowe: I'm slowly forcing myself to accept that touch scrolling now works backwards from what I learned, all because Apple decided it should and now everyone has matched...
14:26:02 @Kurlon you can swap macOS-matches-iOS scrolling via system prefs rather easily... if I don't migrate my homedir to a new Mac, I have to do this by hand on a new one.
14:26:42 I had been doing that, but now that Windows is also defaulting to the other way, rather than change every system I sit at I'm forcing myself to adapt.
15:10:44 Similarly... tap to click... gah, hate it, but it's not going away...
15:12:11 If you set -mtu on a route to a value below the MTU of the interface that'd be used, will TCP pick that up for its MSS for anything using that route?
16:37:28 Continued lab testing: the best I can squeeze out of my dual 82599 10Gb aggr between a Westmere pair is 12.4Gb/s with iperf3.
16:38:42 For giggles, switched one node to Ubuntu 22.04; that was horrifically bad, perf sank to around 6Gb/s with wild swings.
16:53:32 @jbk I think so? Easy enough to test... lemme try.
16:56:52 Yeah... works on my test (latest SmartOS release). Gist forthcoming...
16:59:56 https://gist.github.com/danmcd/ac5a58b22412953a5f5801f445b43e58
17:00:00 @jbk ^^^
17:14:30 Other datapoint for today: DDR3 at 1333 MHz vs 800 MHz doesn't appreciably alter NFS speed on my Westmere-gen crud, so it's worth the downclock for the extra ARC space.
17:15:29 Now to do some testing against my prod box to see why it's so dang slow.
17:25:03 jbk: yes, definitely. One of the things I did at Google used route MTUs (on Linux) to enable a seamless migration to a larger MTU on Google's internal network. The interface gets the max, the default route gets MTU 1500, and destinations in between got a topology-dependent MTU.
17:31:38 hrm.. wonder if that might work here..
17:34:06 although, if our interface is already set for 1500, that should already use that, right?
17:34:30 we have a situation w/ a poorly configured customer network where some ports are set to 9000 and some aren't
17:34:52 we can't fix that; they won't fix that, for reasons
17:35:31 but some of those systems using 9000 MTU are having issues talking to our stuff..
17:35:38 (same VLAN, but different switches)
17:44:46 jbk: ouch.
17:45:13 yeah, it's annoying to say the least
17:50:49 So what *should* happen for TCP is that the interface MTU, the route MTU, and the peer's advertised MSS all constrain the sender's packet size. I haven't stress-tested this on illumos (haven't tried enabling jumbograms on my home network..)
17:52:11 but non-TCP traffic doesn't have the MSS negotiation.
17:54:23 it should be TCP (thankfully)
18:01:03 Worrying about this is why I've never gone to jumbo MTUs; if only I could default to 1500 save for specific local hosts in the subnet...
23:24:19 Ooofh, perf with this Mellanox ConnectX-5 is horrid; single stream it's struggling to do 6Gbps over a 25Gbit link.
23:28:03 If I use my X540 node as the sender, I can do better.
23:37:08 have you tried multiple connections?
23:39:05 Yup, pushing TO that box I can do about 13.2Gbps with a pair of X540 10Gb links in a lag to my problem box (dual CX5 25Gb in a lag).
23:39:21 Sourcing from the problem box, about 6.5Gbps with 8 streams.
23:40:31 Looking at it via NFS between the boxes, with the problem box as the server: locally I can read from ARC at 1.4GB/sec, but over NFS I top out at 200MB/sec. Pushing to it, I can do 300MB/sec.
23:41:34 That's what got me chasing this; I've got a much older 2010-era HP box with far worse specs that did a much better job as a VM NFS host.
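A minimal sketch of the multi-stream iperf3 test being discussed above; the hostname, stream count, and duration here are placeholders rather than values from the log:

    # on the receiving box, start an iperf3 server
    iperf3 -s

    # on the sending box: 8 parallel TCP streams for 30 seconds,
    # then repeat with -R so the same client drives traffic the other way
    iperf3 -c problem-box.example -P 8 -t 30
    iperf3 -c problem-box.example -P 8 -t 30 -R

Comparing the forward and reverse runs from the same client is one way to separate "this box is slow at sending" from "this box is slow at receiving", which is the asymmetry reported above (13.2Gbps pushing to the problem box vs. roughly 6.5Gbps sourcing from it).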
23:50:08 My suspicion now is that swapping to an X520 and dual 10Gb links will get me much better NFS numbers out of that box, which is not what I anticipated.
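A rough sketch of the route -mtu behaviour confirmed in the gist linked earlier, as it might be exercised on illumos/SmartOS; the peer and gateway addresses, the interface name (net0), and the 1400-byte value are hypothetical, not taken from the log:

    # interface MTU stays at 1500; clamp the path to one peer with a host route
    route add -host 192.0.2.50 192.0.2.1 -mtu 1400

    # confirm the metric took effect; 'route get' prints mtu with the other metrics
    route get 192.0.2.50

    # watch the SYN exchange while opening any TCP connection to the peer;
    # with a 1400-byte route MTU the advertised MSS should come out around
    # 1400 - 40 = 1360 bytes (20-byte IPv4 header plus 20-byte TCP header, no options)
    snoop -d net0 host 192.0.2.50 and port 22

As noted in the log, non-TCP traffic has no MSS negotiation, so a route MTU doesn't give the same automatic clamp there.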