10:08:50 Hello everyone,
10:08:50 I am looking for a way to connect a server to the internet via PPPoE.
10:08:50 Is it still possible on OmniOSce/OpenIndiana?
10:08:50 I haven't found how to do it so far.
12:51:36 The only workaround I could find is to use netcat and ppp to simulate a PPPoE connection... it's not really fantastic, I think :-D
12:53:32 majekla: maybe that's helpful https://gist.github.com/jperkin/7717d3e84e93885ab14da3bce3039f4b?
12:59:26 Wow... thank you very much! I'm gonna try this immediately
13:18:37 It seems that it will not work, because the pppoec binary he is using isn't in any available package
13:35:22 $ pkg search /usr/lib/inet/pppoec
13:35:22 INDEX  ACTION  VALUE                PACKAGE
13:35:22 path   file    usr/lib/inet/pppoec  pkg:/system/network/ppp/tunnel@0
14:03:03 Great! Now it will be easier. Just so you know, when I run `pkg search /usr/lib/inet/pppoec`, I don't get this result, I get nothing... and I'm on the latest r151050 version
14:03:13 Thank you very much
14:20:01 it works for me when I try it on the latest r151050; maybe a `pkg refresh --full` would help on your end?
14:27:14 yes, it works after a full refresh now. Thank you. It's strange, I've just installed OmniOS on a VM; shouldn't this be automatic after the first pkg update? Anyway, thank you very much for your help. I'm gonna try to connect to my ISP now
17:53:11 Wow, I used to use the PPPoE stuff maybe 15+ years ago haha
17:55:59 Thank the Bell-Heads for PPPoE....
19:34:32 jclulow, I'm pretty sure my ISP is still using it. I just don't feel like asking my firewall right now. It would just be depressing.
19:34:58 speaking of depressing, I've been forced to face the fact that my fileservers have much slower network connectivity than they should.
19:35:18 What sort of speeds are you getting/expecting?
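For reference, a minimal sketch of the classic Solaris/illumos PPPoE client setup using sppptun and pppoec, based on the package found above. The interface name (`e1000g0`), peer name (`myisp`), and credentials are placeholders, not from the chat, and the linked gist may do things differently; consult the sppptun(1M) and pppoec(1M) man pages before relying on any of this.

```shell
# Hypothetical PPPoE client setup on OmniOS/OpenIndiana.
# NIC name, peer name, and username below are illustrative only.

# Install the package that ships /usr/lib/inet/pppoec.
pkg refresh --full
pkg install system/network/ppp/tunnel

# Plumb the PPPoE tunneling driver onto the NIC facing the modem.
sppptun plumb pppoe e1000g0

# A hypothetical /etc/ppp/peers/myisp might contain:
#   sppptun
#   connect "/usr/lib/inet/pppoec e1000g0"
#   user "myuser@isp.example"
#   noauth
#   defaultroute

# Bring the link up.
pppd call myisp
```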
19:35:48 getting
19:35:50 [  5]   0.00-10.00  sec  3.35 GBytes  2.88 Gbits/sec  sender
19:35:50 [  5]   0.00-10.00  sec  3.35 GBytes  2.88 Gbits/sec  receiver
19:35:52 expecting
19:36:01 [  5]   0.00-10.00  sec  10.8 GBytes  9.28 Gbits/sec  8  sender
19:36:01 [  5]   0.00-10.04  sec  10.8 GBytes  9.25 Gbits/sec     receiver
19:36:20 the first number is a test between two OmniOS boxes with dual 10G (LACP)
19:36:41 the second number is a test between two Linux hosts, one a virt, with single 10G connections.
19:36:50 all connections go via the same single Juniper switch.
19:37:19 forgot the header:
19:37:21 [ ID] Interval        Transfer     Bandwidth
19:37:51 those are all iperf3 using defaults.
20:05:54 nomad, what NICs?
20:10:24 ik5pvx, we've got a mix: i40e, ixgbe for the most part.
20:28:54 nomad: a single TCP connection should only use one path (if it's being load-spread across multiple paths, that could result in packet reordering, which hurts performance)
20:29:21 so LACP *could* be (part of) the problem?
20:32:08 could be. at the very least it's an apples vs oranges comparison.
20:32:20 : || lvd@hvfs1 ~ [505] ; dladm
20:32:20 LINK       CLASS  MTU   STATE  BRIDGE  OVER
20:32:20 i40e0      phys   1500  up     --      --
20:32:20 i40e1      phys   1500  up     --      --
20:32:20 aggr0      aggr   1500  up     --      i40e0,i40e1
20:32:20 ixgbe0     phys   1500  up     --      --
20:32:22 ixgbe1     phys   1500  up     --      --
20:32:24 aggr_nas0  aggr   1500  up     --      ixgbe0,ixgbe1
20:32:31 what does dladm show-aggr show?
20:32:53 LINK       POLICY  ADDRPOLICY  LACPACTIVITY  LACPTIMER  FLAGS
20:32:53 aggr0      L4      auto        active        short      -----
20:32:54 aggr_nas0  L4      auto        active        short      -----
20:33:12 what load-balancing is configured on the switch end?
20:33:38 IIRC, none.
20:33:54 It's a Juniper. Let me go see if I can find the config for it.
20:34:18 (I think it's more likely to be some need for tuning in our TCP stack than the switch being wrong, but it never hurts to check)
20:34:27 should be some sort of L4 load-balancing policy.
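Since an L4 hash policy pins each TCP flow to a single member of the aggregation, one quick way to check whether LACP itself is the bottleneck is to rerun iperf3 with several parallel streams: different source ports may hash onto different links and lift the aggregate number. A command sketch (hostname is a placeholder):

```shell
# On the receiving fileserver:
iperf3 -s

# On the sender: single stream, the baseline (pinned to one
# aggregation member by L4 hashing):
iperf3 -c fileserver1 -t 10

# Eight parallel streams; with luck the distinct 5-tuples spread
# across both links and aggregate throughput rises:
iperf3 -c fileserver1 -t 10 -P 8
```

If `-P 8` roughly doubles the single-stream number, the per-flow hashing is at least part of the story; if it doesn't move, the limit is more likely in the TCP stack or NIC tuning.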
20:34:51 some switches can be configured to do round-robin, which is dumb
20:35:24 I am so very much not a network admin. I had Juniper help me with the initial config of this switch, so hopefully we got it right.
20:39:17 set interfaces ae2.0 family ethernet-switching vlan members default
20:39:17 delete interfaces xe-0/0/44
20:39:17 delete interfaces xe-0/0/45
20:39:17 delete protocols rstp interface xe-0/0/44
20:39:17 delete protocols rstp interface xe-0/0/45
20:39:17 set interfaces xe-0/0/44 ether-options 802.3ad ae2
20:39:19 set interfaces xe-0/0/45 ether-options 802.3ad ae2
20:39:21 set interfaces ae2 apply-groups LACP
20:39:23 set interfaces ae2 apply-groups JUMBO_FRAMES
20:40:00 I'm a bit concerned right now, as I can't seem to ssh into the switch to get the actual config. Those are my notes about how we configured the ports.
20:40:29 whew, I was able to get in.
20:42:26 yep, the config matches my notes.
20:43:19 unclear if this is relevant to you, but Google found: https://www.juniper.net/documentation/us/en/software/junos/interfaces-ethernet-switches/topics/topic-map/switches-interface-load-balancing.html
20:43:42 This can't possibly be right:
20:43:44 set groups LACP interfaces <*> aggregated-ether-options lacp active
20:43:44 set groups LACP interfaces <*> aggregated-ether-options lacp periodic fast
20:44:02 aren't "active" and "periodic" contradictory?
20:47:32 no, that's fine
20:47:46 lacp can go slow or fast in its detection of faulty links
20:47:58 fast is good unless you're dealing with an Arista switch
20:48:19 ah, ok.
20:49:26 as for the load balancing, there could be a knob under forwarding-options, but I'm rusty on Juniper
20:49:32 sommerfeld, is there something I can check about the tuning while I'm poking at this?
20:49:52 default should be some L2-L3 hashing
20:50:22 set forwarding-options storm-control-profiles default all
20:50:29 that's the only forwarding-options hit I get in the config file.
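For the load-balancing knob mentioned above: on EX-series switches the hashing used to spread flows across an aggregated-ethernet bundle lives under `forwarding-options`. The fragment below is only a hedged sketch; the exact statement names (`hash-key` vs. `enhanced-hash-key`) vary by platform and Junos release, so check it against the Juniper documentation linked in the chat before committing anything.

```
# Configuration mode (names vary by EX platform/Junos version):
set forwarding-options hash-key family inet layer-4
commit

# Operational mode, to see what is currently configured:
show configuration forwarding-options
```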
20:52:27 mine only has this load-balancing-related line: set load-balance-label-capability
20:52:41 I don't think you need that
20:53:12 We have expensive vendor support. I suspect it's time for me to use that.
20:53:18 hah
20:53:20 yes
20:54:08 I have a cheap ex3300-48p off ebay
20:54:09 nomad: setting tcp connection_control to cubic and boosting the tcp send_buf/max_buf/recv_buf will likely help somewhat.
20:54:32 where are these things done in illumos?
20:54:47 (all this via ipadm {set,show}-prop)
20:55:07 nice, thanks
20:55:16 argh, I of course mean "congestion_control"
20:55:18 sommerfeld, these hosts are fileservers. Their only reason to exist is to serve NFS or SMB packets.
20:55:33 Will these changes negatively impact those services?
20:57:02 nfsv4 and smb both run over TCP. if the changes improve TCP performance, it will likely help them be better fileservers.
20:57:17 that's my theory, but I am required to ask these questions :)
20:57:28 and they are likely to be neutral for NFS-over-UDP
20:58:58 looks like cubic is already set in ipadm
20:59:10 tcp  congestion_control  rw  sunreno  --  sunreno  sunreno,newreno,cubic
20:59:27 no, it's at the default. the last column is "POSSIBLE"
21:00:01 https://pastebin.com/UztwfQi4
21:00:14 ah
21:00:26 well, that host is a dev/test host, so I can make changes there.
21:02:00 change one, watch it closely for a while, revert on suspicion of badness, spread to others if it's better.
21:02:29 do you ever get close to maxing out the LAN?
21:02:43 I'm changing that one prop on my two test/dev boxes and will re-run the iperf3 tests between them now.
21:02:59 ik5pvx, not with these fileservers, no. I do with my Linux hosts.
21:03:45 well, setting them both to cubic did not change the numbers.
21:03:50 on linux there's also that other policy.... bbn? can't remember.
         Supposedly it's the most efficient, but it can be a little aggressive toward other traffic
21:03:51 [ ID] Interval        Transfer     Bandwidth
21:03:51 [  5]   0.00-10.00  sec  3.06 GBytes  2.63 Gbits/sec  sender
21:03:51 [  5]   0.00-10.00  sec  3.06 GBytes  2.63 Gbits/sec  receiver
21:04:11 my Linux hosts don't have LACP enabled, just the fileservers.
21:04:29 so cubic didn't seem to have helped directly.
21:05:13 * nomad looks for info on send_buf/max_buf/recv_buf current settings
21:05:23 bbr is the name
21:05:55 : || lvd@fs2 ~ [526] ; ipadm show-prop | grep tcp | grep buf
21:05:55 tcp  max_buf   rw  1048576  --  1048576  8192-1073741824
21:05:56 tcp  recv_buf  rw  128000   --  128000   2048-1048576
21:05:56 tcp  send_buf  rw  49152    --  49152    4096-1048576
21:06:11 that's what it is now. What would you suggest I bump it to?
21:06:42 BBR is good, but we don't have it in illumos (yet?)
21:10:40 I doubled the current numbers and tested again... no change in the results.
21:18:01 * nomad swears at non-orthogonal commands.
21:18:13 set-prop didn't ask for an interface but reset-prop does.
21:19:09 no, that's just me being stupid. Never mind me.
21:21:17 I've reset congestion_control, max_buf, recv_buf, and send_buf to default on both hosts.
21:21:28 I guess tomorrow I'll get to spend quality time with Juniper TAC.
21:23:29 good luck
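For anyone wanting to repeat the tuning experiment above, the ipadm incantations look roughly like this. The doubled values mirror what was tried in the chat (starting from the defaults shown: send_buf 49152, recv_buf 128000, max_buf 1048576); treat them as a starting point for measurement, not a recommendation.

```shell
# Inspect the current TCP properties.
ipadm show-prop -p congestion_control,send_buf,recv_buf,max_buf tcp

# Switch the congestion control algorithm to CUBIC.
ipadm set-prop -p congestion_control=cubic tcp

# Double the default buffer sizes.
ipadm set-prop -p send_buf=98304 tcp
ipadm set-prop -p recv_buf=256000 tcp
ipadm set-prop -p max_buf=2097152 tcp

# Revert everything to defaults, as was done at the end of the chat.
ipadm reset-prop -p congestion_control tcp
ipadm reset-prop -p send_buf tcp
ipadm reset-prop -p recv_buf tcp
ipadm reset-prop -p max_buf tcp
```

Note that these are live, system-wide settings on an illumos host; existing connections keep their old buffer sizes, so re-run the iperf3 tests with fresh connections after each change.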