03:57:57 First try the usual suspects, MTU and what-not.
04:00:41 Also a usual suspect: what PI (platform image) are you running?
05:18:05 Depending on the guest OS, also look at the CC algorithm. We use (new?) reno; Linux uses cubic.
05:18:33 I forget what FreeBSD uses, but none of it was matching up at my HDC; swapping it all to cubic seems to have improved things.
12:09:10 Hello all, a question about network performance: I have a machine running SmartOS with a 10G NIC, and a bhyve VM with a vNIC on the 10G physical NIC. However, it only runs at 1G speeds. Is anything configured incorrectly? Details here: https://pastebin.com/btiEeVYz
12:10:04 danmcd: mtu 9000; @sjorge: the bhyve guest is cubic
12:50:53 https://gist.github.com/Smithx10/211f83d1cd8936765b8f935a5d69fe5d
13:07:35 hello! I'm running a VM on SmartOS which has a 600GB volume, but it occupies around 1.4TB in ZFS. I already checked, and it isn't creating any kind of snapshot, so I don't know exactly what is happening or whether I can reclaim the disk space. Does anyone here have an idea what this might be?
14:32:51 Guest32 (LIBERA-IRC): Is this on a raidz pool?
14:33:03 yeah, on a raidz3 pool
14:33:53 Then you're probably experiencing something like: https://github.com/openzfs/zfs/issues/548#issuecomment-3791251
14:37:32 TL;DR: the volblocksize is probably 8K, which leads to many padding blocks
14:38:36 The last comment in that issue has a summary of the various comments on the topic. One possible solution could be creating a new volume with volblocksize=32K for that raidz3 pool
14:39:44 alright, thank you for the information. I will take a look
14:46:12 tozhu: I don't see the MTU anywhere in your gist.
14:46:42 the mtu is the default, I think it's 1500
14:46:49 let me check
14:47:33 @Smithx10: of course, you're running the `mlxcx` branch of the illumos-joyent build.
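[editor's note] The raidz padding overhead described above can be estimated with the raidz allocation rule (data sectors, plus parity per stripe, padded up to a multiple of nparity+1). A minimal sketch, assuming a hypothetical 11-disk raidz3 (8 data disks) and ashift=12 (4K sectors); check it against your actual pool geometry:

```shell
# Estimate allocated size for one zvol block on raidz, per the
# vdev_raidz_asize-style calculation. Assumed geometry (hypothetical):
ashift=12   # 4K sectors
nparity=3   # raidz3
ndata=8     # data disks in an assumed 11-wide vdev

asize() {
  psize=$1
  s=$(( ((psize - 1) >> ashift) + 1 ))            # data sectors
  p=$(( nparity * ((s + ndata - 1) / ndata) ))    # parity sectors
  t=$(( s + p ))
  # pad the allocation up to a multiple of (nparity + 1) sectors
  t=$(( ((t + nparity) / (nparity + 1)) * (nparity + 1) ))
  echo $(( t << ashift ))
}

asize 8192    # → 32768 : an 8K block allocates 32K, 4x inflation
asize 32768   # → 49152 : a 32K block allocates 48K, only 1.5x
```

Under these assumptions an 8K volblocksize wastes most of each allocation on parity and padding, which is why bumping it to 32K reclaims so much space.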
14:48:06 danmcd: the mtu is 1500 on both machines
14:48:39 I don't think the MTU is the root cause of the 10G NIC running at 1G
14:49:13 maybe some limit in SmartOS or bhyve
14:50:47 So wait, are you same-machine NFS-ing?
14:51:04 (Re-read this.)
14:51:31 (I re-read your gist, I mean.)
15:09:40 danmcd: https://us-east-storage.solutions.iqvia.com/bruce_dev/public/iperf3.jpeg
15:15:09 just running iperf3 from a native zone gets the same perf as in the GZ, 15 Gbps, but bhyve is about 1/3 the speed at 5 Gbps. All iperf3 tests go from an illumos CN -> switch -> Linux storage server, all with the same hardware.
15:30:51 danmcd: yes, the bhyve VM is created on the host, which runs SmartOS and an NFS server via `zfs create … -o sharenfs` xxx; the bhyve VM runs CentOS 7 with an NFS client
15:32:14 So tozhu, you might be getting bitten by https://www.illumos.org/issues/15464 and https://www.illumos.org/issues/13463
15:32:31 Your packets are never going out on the wire
15:34:52 is there any workaround to solve it?
15:35:08 or do we have to wait for the fix?
16:22:16 @tozhu: read the bug reports.
16:22:35 There's an /etc/system (or even `mdb -kw`) variable, IIRC.
16:22:47 sjorge can give you more details.
18:41:15 hi
18:41:21 thanks for SmartOS!
18:41:42 I need advice with ipnat.conf ...
18:42:16 I am on a single IP and want to NAT an FTP server from a zone ...
18:42:24 the man page has something like
18:42:29 "port" port range port
18:42:36 I've tried lots of things ...
18:42:51 what exactly does that mean?
18:42:51 rdr e1000g0 from any to 144.76.69.252 port 65001-65100 -> 10.0.5.81
18:42:51 rdr e1000g0 from any to 144.76.69.252 port 65001:65100 -> 10.0.5.81
18:42:51 rdr e1000g0 from any to 144.76.69.252 port 65001 range 65100 -> 10.0.5.81
18:42:52 rdr e1000g0 from any to 144.76.69.252 port 65001 ... 65100 -> 10.0.5.81
18:43:00 I'm simply not getting it
18:43:11 what does the range between the port and port mean?
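[editor's note] On the rdr question just above: in IP Filter's ipnat grammar, a redirect range is written low-high on the external side, and the internal side takes a single base port after the arrow that the range is rebased onto. This is a sketch from memory, reusing the addresses from the question; verify the exact syntax against ipnat(5):

```
# /etc/ipf/ipnat.conf sketch — syntax from memory, verify against ipnat(5)
# redirect the passive-FTP data range into the zone (65001-65100 -> 65001...)
rdr e1000g0 144.76.69.252/32 port 65001-65100 -> 10.0.5.81 port 65001 tcp
# redirect the FTP control port
rdr e1000g0 144.76.69.252/32 port 21 -> 10.0.5.81 port 21 tcp
# outbound NAT, with ipnat's ftp proxy so active-mode PORT commands get rewritten
map e1000g0 10.0.5.0/24 -> 144.76.69.252/32 proxy port ftp ftp/tcp
map e1000g0 10.0.5.0/24 -> 144.76.69.252/32 portmap tcp/udp 40000:60000
```

Note the asymmetry: rdr ranges use a hyphen, while the portmap keyword on map rules takes the colon-separated low:high form.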
18:51:40 Port ranges are colon-separated low:high ranges of ports, e.g. `6001:65535`.
18:52:01 A generic outbound NAT rule, e.g.:
18:52:32 map igb1 10.19.84.0/23 -> X.Y.Z.NN/32 portmap tcp/udp 6001:65535
18:52:55 (That's a made-up example entry for a line in /etc/ipf/ipnat.conf)
18:53:20 FTP gets tricky because of its wide-ranging port usage.
18:53:29 (Same with NFS prior to NFSv4.)
19:16:43 This was a nice blog post; maybe it could be brought up again if asked: https://web.archive.org/web/20220703031731/https://timboudreau.com/blog/smartos/read
19:31:47 thanks, I'll try that
20:53:17 danmcd: having the sender and receiver aligned CC-algorithm-wise just gave me more consistent performance
20:56:57 I just double-checked and ended up with cubic and ECN active everywhere (host, zone, FreeBSD bhyve firewall/router, and Linux VMs)
22:53:43 danmcd: Thank you
23:09:47 danmcd: I'm going to use a new image to resolve this issue. Thank you
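[editor's note] A sketch of how the CC/ECN alignment described above can be inspected and set on each platform in the conversation. The property and sysctl names here are written from memory and should be treated as assumptions to verify on each system:

```sh
# illumos/SmartOS global zone — assumed ipadm TCP property names
ipadm show-prop -p cong_enabled,cong_default tcp
ipadm set-prop -p cong_default=cubic tcp
ipadm set-prop -p ecn=active tcp        # ECN: never/passive/active

# Linux guest
sysctl net.ipv4.tcp_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=cubic
sysctl -w net.ipv4.tcp_ecn=1

# FreeBSD guest — cubic ships as a loadable CC module
kldload cc_cubic
sysctl net.inet.tcp.cc.algorithm=cubic
```

These are one-off runtime changes; to persist them you'd use each platform's usual mechanism (e.g. sysctl.conf and loader.conf on the guests).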