00:22:53 nomad: One thing to check is: does it go faster (in aggregate) if you run multiple iperf streams in parallel
05:08:22 It's only letting me run one stream at a time, regardless of trying multiple -c on one host or on different hosts.
05:08:29 I'm guessing the server only allows one stream.
05:10:39 and trying three streams from fs2 (going to hvfs2, fs1, and fs3) concurrently gives an aggregate of about the same bad number I get with just one stream.
06:07:17 nomad, to run parallel you pass -P to the client
09:20:41 hadl : Thank you very much for your help with PPPoE.
09:20:41 I wrote a howto on unitedbsd:
09:20:41 https://www.unitedbsd.com/d/1379-setting-up-an-omniosce-server-as-an-internet-facing-router-with-pppoe
15:04:49 ik5pvx, thanks. Still getting the same low numbers.
15:05:00 [ ID] Interval           Transfer     Bandwidth
15:05:07 [SUM]   0.00-10.00  sec  2.78 GBytes  2.39 Gbits/sec  sender
15:05:07 [SUM]   0.00-10.00  sec  2.78 GBytes  2.39 Gbits/sec  receiver
15:16:34 util.c:155:14: error: 'F_OFD_SETLK' undeclared (first use in this function); did you mean 'F_SETLK'?
17:45:26 just to confirm... if I install a new (dual) NIC (x540) in an OmniOS box, touch /reconfigure, and reboot, that device should show up in prtconf -v and dladm, right?
17:45:37 Did I miss a step somewhere or do I have an actual hardware problem?
17:47:25 you don't need to touch /reconfigure if you add a device
17:47:57 just adding it should make it show up in prtconf
17:48:15 Woodstock, that's what I thought but when it didn't show up after first boot I gave /reconfigure a try.
17:48:48 The card shows link when cabled to another device so it has power.
17:48:56 * nomad sighs
17:48:57 prtconf -d should show device names (from the pci id database, perhaps related to your product)
17:49:37 prtconf -dD shows that and the driver which attached (if any).
17:50:08 you may have a software problem (available drivers don't support the device you added).
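The -P suggestion above can be sketched roughly as follows; the host name is a placeholder for whichever peer (fs1, fs2, ...) is on the other end of the link:

```shell
# On the receiving host: run a single iperf3 server.
# One server instance accepts the parallel streams of one client test.
iperf3 -s

# On the sending host: -P opens multiple TCP streams within one client run
# (here 3) and prints a [SUM] line with the aggregate; -t sets the duration.
# Multiple separate "iperf3 -c" invocations against one server are refused,
# which matches the "one stream at a time" behaviour described above.
iperf3 -c fs1 -P 3 -t 10
```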
17:50:25 prtconf -d shows 4 540 entries, which is the same as we had before I added this card.
17:50:35 nomad: if the device isn't seen by prtconf there's probably something else wrong. every device in the system should show up there, even if there's no driver available.
17:50:58 As I feared.
17:51:27 We had a problem years ago with a fileserver that wasn't seeing half the bus because it only had one (of two) CPUs installed but this host has both CPUs so that *shouldn't* be the problem.
17:51:36 when in doubt, power off, pull & reseat card.
17:51:49 sommerfeld, that's what I was just about to say :)
17:52:06 then do a tree walk in bios to look for work/don't work options.
17:52:18 I guess I'm heading back to the server room for another round of opening up the huge host.
17:54:00 If that was a reference to an EFI thing I don't think this host has that. I'll have to check but I'm pretty sure it's old-school BIOS.
17:54:24 Though I will check to see if the BIOS sees the card/offers any enable/disable options for it.
17:54:32 * nomad -> server room.
17:55:34 just meant "walk the tree of menus looking for anything that might possibly cause the device to be ignored"
18:55:19 As I feared, it was in fact a hardware problem.
18:55:32 I swapped the NIC with the HBA and the NIC appeared and the HBA disappeared.
18:55:44 I've managed to find a temporary solution so on with network testing.
19:56:39 well lookie here.
19:56:43 [ ID] Interval           Transfer     Bandwidth
19:56:49 [SUM]   0.00-10.00  sec  10.8 GBytes  9.26 Gbits/sec  sender
19:56:49 [SUM]   0.00-10.00  sec  10.8 GBytes  9.26 Gbits/sec  receiver
19:57:06 that's an iperf3 -P 3 run on a single dedicated link between two OmniOS hosts.
19:57:24 When I don't use -P 3 the numbers go down to match the numbers I was seeing yesterday.
20:06:43 no, actually, they don't. They go down to about double what we were seeing yesterday.
20:11:35 there is no consistency between yesterday's results and today's. :/
20:19:42 Is it cooler in the DC today?
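The prtconf checks discussed above might look like this on an OmniOS box; the grep patterns are illustrative, keyed to the x540 NIC and its ixgbe driver:

```shell
# List devices with names resolved from the PCI ID database. A healthy
# dual-port x540 should add two entries here even if no driver attached,
# so an unchanged count points at the card/slot, not at software.
prtconf -d | grep -i 540

# -dD additionally shows which driver (if any) attached to each node,
# separating "device invisible on the bus" from "no driver bound".
prtconf -dD | grep -i ixgbe
```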
haha
20:22:39 jclulow, you laugh but ... well, no, the answer is no. But I get where you're coming from.
20:23:08 performance is challenging because of how many variables there are
20:23:34 I definitely had a network weirdness problem here that went away when I reseated some calbes
20:23:36 *cables
20:23:41 Which is very unsatisfying
20:24:08 indeed. Which is why I dropped in a dedicated card to match another spare I had in the other test server.
20:24:23 I ran cables between them and that was going to be my 'dedicated, controlled' environment.
20:24:36 except now I'm seeing better throughput from hosts and networks I didn't touch.
20:26:13 [ ID] Interval           Transfer     Bandwidth
20:26:13 [  5]   0.00-10.00  sec  3.92 GBytes  3.37 Gbits/sec  sender
20:26:13 [  5]   0.00-10.00  sec  3.92 GBytes  3.37 Gbits/sec  receiver
20:26:14 vs
20:26:21 [ ID] Interval           Transfer     Bandwidth
20:26:21 [  5]   0.00-10.00  sec  8.75 GBytes  7.51 Gbits/sec  sender
20:26:21 [  5]   0.00-10.00  sec  8.75 GBytes  7.51 Gbits/sec  receiver
20:27:18 mind you, *that* particular test was between a production host and the host I rebooted to install the new NIC in so...
21:31:57 hmm.. I think I might have found a clue. https://pastebin.com/Rm19SDWC
21:32:13 difference between i40e and ixgbe based aggregates.
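One way to chase the i40e-vs-ixgbe aggregate clue above is to compare the aggregates and their member links with dladm; the link names here (i40e0, ixgbe0) are hypothetical stand-ins for the actual datalinks on these hosts:

```shell
# Show each link aggregation with extended detail: member ports and their
# negotiated speed/duplex, so a degraded or missing port stands out.
dladm show-aggr -x

# Compare per-link properties that commonly differ between drivers and can
# explain throughput gaps, e.g. MTU and flow control settings.
dladm show-linkprop -p mtu,flowctrl i40e0 ixgbe0
```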