11:31:46 nahamu (LIBERA-IRC): Wow, I didn't think it would be this easy: `pkgin in tailscale` thank you so much for the effort you put into making WireGuard work smoothly on illumos
13:45:00 teutat3s: I'm glad it's working for you!!
13:46:37 jperkin did the packaging, and jclulow did the original port of wireguard-go, not to mention the original authors of WireGuard and Tailscale etc. etc.
13:47:07 But yeah, that smooth simplicity of use is why I've been willing to keep maintaining the fork and working on getting things upstream.
13:49:10 nahamu (LIBERA-IRC): To me it looks like your upstreaming effort is really close to getting merged. If there's anything I can contribute, like testing, lemme know
13:49:44 teutat3s: Honestly, the biggest help would be exercising the wireguard-go and wireguard-tools bits.
13:50:15 Exercising?
13:50:19 Should also be packaged in pkgsrc.
13:50:32 Try it out. See if it works or if there's an obvious bug I've missed.
13:50:52 Understood, will do and report back
13:51:16 It's not as easy as Tailscale, but even just an "I got it to work" or "I tried X and it should have worked but failed" would be great!
13:51:56 https://github.com/WireGuard/wireguard-tools/pull/17 is the relevant PR. I can also sketch some documentation if you've never used wg-quick before.
13:52:52 There's some generic wg-quick related content in https://blog.shalman.org/wireguard-android-road-warrior/
13:54:18 The existing code should be at least beta quality. If a few other people don't trip over anything obvious, it should be good enough to get upstream to ship it.
17:12:56 That's convenient, I'm working on getting WireGuard working on some SmartOS instances of mine this morning. I've been making Linux instances and am cleaning up the number of VMs.
17:13:16 nahamu jclulow thanks for all your work!
17:13:22 copec: let me know how it goes!
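[For readers who have never used wg-quick, a minimal config looks roughly like the sketch below. Everything here is a placeholder for illustration (interface name, keys, addresses, endpoint); nothing is taken from the conversation, and the exact conventions on illumos may differ from Linux.]

```ini
; /etc/wireguard/wg0.conf -- hypothetical example, not from the chat
[Interface]
; this host's key, e.g. generated with: wg genkey | tee private.key | wg pubkey
PrivateKey = <base64 private key>
Address = 192.0.2.2/32

[Peer]
PublicKey = <peer's base64 public key>
Endpoint = vpn.example.com:51820
; which destination addresses are routed through this peer
AllowedIPs = 192.0.2.0/24
```

With a config like this in place, `wg-quick up wg0` brings the tunnel up and `wg-quick down wg0` tears it down.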
17:14:32 It just worked, when initially setting up manually (I just saw the packages in pkgin). It's pretty neat that wg works the same with wireguard-go.
17:14:52 How do I set up a service with /opt/local/lib/svc/manifest/wireguard-tools.xml for, say, a tun0 instance?
17:15:25 I would name it something more meaningful and let wireguard-go allocate the specific instance.
17:16:05 https://github.com/nshalman/wireguard-tools/blob/wg-quick-for-sunos/contrib/smf/README.md
17:16:18 Let me know if you need additional detail.
17:16:57 ah, thank you much
17:53:14 nahamu How do you get wg-quick to specify a (sacrificial) remote address? https://unaen.org/pb/7uv
17:54:02 hmmm
17:54:09 let me see what I have lying around
17:54:48 can you strip the keys out of your config file and paste it somewhere?
17:54:55 sure
17:55:42 perhaps just remove the "/32" from the address specifications?
17:56:58 https://unaen.org/pb/74m
17:57:17 Yeah, I would try removing the "/32"
17:57:38 Do you use those on Linux?
17:57:45 yeah, I use them on linux
17:57:51 Thank you for finding a bug!
17:58:25 Can you confirm for now whether things work if you remove them?
18:00:02 yeah, just tried, it is working now
18:00:19 I notice it uses the same ip for local and remote
18:00:36 and adds the route for each individual endpoint, nice
18:01:00 https://unaen.org/pb/3mp
18:03:27 copec: https://github.com/WireGuard/wireguard-tools/pull/17#issuecomment-1456664190
18:09:46 ty nahamu
18:09:53 thanks for kicking the tires!!
18:20:14 copec: I think I have a fix for you to test.
18:21:11 So my link to this instance seems to do about 250 Mbit outside the tunnel and 50 Mbit inside the tunnel, and it doesn't appear the wireguard-go process is CPU limited.
That is still sufficient for what I am doing, but I would enjoy tracing the performance aspects of it.
18:21:23 https://github.com/WireGuard/wireguard-tools/compare/491d58a4bae08bb74e82a2d372af660ee6d968b6..77de1e949f7e754b28f5b258c0176718c63741a5
18:22:11 if you can manually make that change to the wg-quick script and see if that fixes it for you, that would be helpful.
18:22:16 it seemed to fix things for me.
18:24:29 I'll try that now
18:25:08 If there's enough momentum to get this stuff upstream, the fix might not make it to the pkgsrc package until after the upstreaming.
18:34:07 copec: any luck?
18:40:18 yup, it is working, both with the /32 and without
18:41:59 Nice. Thanks!
19:16:09 danmcd: see that race in that gist for deploying to cloudapi?
19:16:50 I didn't... sorry, hold on.
19:21:00 @Smithx10 it seems the "check-for-same-alias-as-user" is something that isn't safe against multiple simultaneous creations.
19:21:38 I know little about triton-go, and even less about terraform.
19:21:59 teutat3s: are you still using boringtun?
19:22:44 papertigers (LIBERA-IRC): yeah, but will test the pkgsrc wireguard-go version this week
19:24:05 I wonder if packaging boringtun in pkgsrc would make it somehow possible to choose the WireGuard userspace backend with something like wg-quick
19:25:34 I think the way I ported the wg-quick code you could pass it a different binary in an environment variable.
19:26:30 yeah, I'm even doing that in my example SMF manifest.
19:27:12 papertigers: are you still maintaining your boringtun fork and/or is your stuff upstream?
19:29:09 it's not, but I was going through my GitHub notifications and I saw a question from copecog, which I am going to guess is copec?
19:29:21 Did you figure out your issue?
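[The /32 bug discussed above boils down to a one-character config difference. The addresses below are placeholders; the actual configs pasted at the unaen.org links are not reproduced here.]

```ini
[Interface]
; Failed on illumos at the time (fine on Linux):
; Address = 192.0.2.2/32
; Worked on illumos after dropping the prefix length:
Address = 192.0.2.2
```

The linked wg-quick compare range is nahamu's fix so that the SunOS port tolerates the Linux-style `/32` suffix, which copec confirms below ("working, both with the /32 and without").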
19:30:06 nahamu: I haven't used it in a long time, but teutat3s was keeping the branch up to date every so often
19:30:26 ah
19:30:33 papertigers I haven't figured out why it wasn't building yet
19:30:52 were you building it off of the branch in my fork of the repo?
19:31:53 copec: https://github.com/papertigers/boringtun/tree/illumos-eventports
19:32:09 But I also just noticed there's a PR that needs to be merged from teutat3s that does a more recent sync.
19:33:17 I was using that branch, but I was probably doing something wrong
19:33:28 I'll try it again
19:34:34 fwiw I haven't tried in a while; if you can gist or put the build output somewhere I can try and find time to look. We probably should merge teutat3s's PR, as I believe they have been using it since the PR was opened in July without issue.
19:37:04 danmcd: Yeah, probably have to add that into cloudapi
19:37:44 I tried two VMAPI ones but one failed with a colliding alias. Gonna see if I can make them Go Faster at launch time.
19:38:51 Ahhh, there we go.
19:41:27 [root@shemp (kebecloud) ~]# vmadm list | grep kebe
19:41:28 187fc88f-1cea-4263-9b91-c676bdbaf180 OS 256 running kebetests
19:41:28 992da0fc-63df-4836-9563-38ef917929ed OS 256 running kebetests
19:41:29 [root@shemp (kebecloud) ~]#
19:41:55 I did it with two concurrently-started sdc-vmapi invocations.
19:44:23 So is this *really* going to be a big problem? Protecting against such races would cause some major-league slowdowns (esp. in Manta).
19:44:33 Smithx10: Basically the workaround for that is you should name things `myvm-{{shortId}}` (i.e., literally the exact string {{shortId}}). Then {{shortId}} will be replaced with the zone uuid prefix.
19:45:12 Smithx10: e.g.: 8bc86cde manta-shortener-8bc86cde base-64-lts⊙24 running - 29w
19:45:26 danmcd: and bahamat it's not really a big problem
19:45:42 I was wondering if it may break anything down the line
19:45:42 TIL about `{{shortId}}`
19:45:56 I don't think so... except for same DNS.
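[A sketch of the workaround bahamat describes, using the `triton` CLI. The alias pattern `myvm-{{shortId}}` is from the chat; the image and package names are placeholders, and the exact CLI flags should be checked against your triton version.]

```shell
# '{{shortId}}' is passed literally; the quoting keeps the shell from
# touching the braces. Triton substitutes the new VM's uuid prefix
# server-side, yielding an alias like myvm-8bc86cde, so concurrent
# creates can never collide on the alias.
triton instance create --name='myvm-{{shortId}}' base-64-lts sample-1G
```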
19:46:17 Using {{shortId}} is how to guarantee you're not going to race on alias names.
19:46:58 And I think you can only run into this when you have multiple workflow instances (but I may be wrong, you might still be able to do it with only one)
19:47:17 Yeah, I just gotta tell the users / send a warning
19:47:21 Just wanted to make sure it wasn't just me
19:50:34 We are planning on centralizing the terraform deployments, so I can probably check this before we go to CloudAPI too
19:51:47 Yeah, with terraform especially, I recommend using shortId
19:52:23 But... I don't know, terraform is often very brain-damaged when it comes to that kind of thing. It *really* doesn't like the cloud filling in details for it.
19:52:34 Protip for testing:
19:52:37 I had to fight tooth and nail to get them to be OK with network pools existing.
19:52:48 "starting-pistol" protocol
19:53:06 Invoke multiple of whatever you want concurrently running either with:
19:53:33 low-res: (sleep ; CMD) & (multiple times)
19:53:41 pkill sleep
19:54:08 high-res: cat & (will suspend in background on tty input)
19:54:10 One of my main blockers for using terraform was that it wouldn't work with network pools. You'd provision something and on every pass it would destroy/create again because the interface uuid was the network, not the pool, and for weeks they refused to fix it.
19:54:25 (pwait $PID_OF_CAT ; CMD) & (multiple times)
19:54:34 fg %1 (cat)
19:54:49 ^D on cat
19:55:13 Ultimately they said if I wanted it fixed it would need to be documented. I pointed them at the existing documentation and they complained about the way it was phrased, so I had to rewrite the doc to appease them, and finally they begrudgingly fixed it.
19:55:26 Wow.
19:55:48 Awfully nice of them. :upside_down:
19:56:00 It was maddening.
19:56:57 It was also very difficult to get them to fix their broken data storage in the manta plugin so that we could have a shared terraform deployment.
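[The low-res "starting-pistol" variant danmcd outlines above can be sketched as a self-contained script. The racer commands here are just `echo`s into a temp file; in real use you would substitute the commands you want racing, e.g. two `sdc-vmapi` creates.]

```shell
# "Starting-pistol" protocol, low-res variant: park each concurrent
# command behind a long sleep, then kill the sleeps so every command
# starts as close to simultaneously as possible.

results=$(mktemp)

(sleep 1000; echo "racer 1 fired" >> "$results") &
(sleep 1000; echo "racer 2 fired" >> "$results") &

sleep 1          # let both subshells park on their sleeps
pkill -x sleep   # the pistol: the parked sleeps die, all racers start at once
wait             # reap the racers

cat "$results"
```

The `; CMD` after each sleep runs regardless of the sleep's exit status, which is what lets a killed sleep act as the starting gun. The high-res variant in the chat swaps `sleep`/`pkill` for `cat`/`pwait`, firing the pistol with a single `^D` on the suspended `cat`.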
21:02:48 bahamat (LIBERA-IRC): sadly they dropped support for the manta storage backend in 1.3
21:05:01 papertigers (LIBERA-IRC): yeah, no issues here with that PR, been running it since then
21:23:51 teutat3s: cool! I will merge it
21:25:25 done
21:27:39 copec: ^ heads up if you want to redo your experiment with newer bits. Someone should find time to do another sync with upstream at some point.
21:30:30 yay, I'll give it a try
21:39:34 papertigers (LIBERA-IRC): Thanks! I always waited for a tagged release, but yeah, syncing with the master branch would also be possible
21:45:18 Feel free to keep it in sync with the strategy you have been using
21:45:49 wondering if I should transfer these bits to the illumos GitHub rather than my account
22:14:20 teutat3s: I'm not surprised, with how badly it was broken.
22:15:22 papertigers (LIBERA-IRC): I'd say yes, could make it more discoverable
22:33:34 jclulow: any thoughts on me transferring the boringtun fork to the illumos github org? Instead of having it live under my github
22:33:47 I'm not opposed per se, but who would look after it?
22:34:44 well, teutat3s has been its primary user I believe, and has done the most work keeping it up to date. But if they don't want the burden it can just stay under my gh. It's not a big issue.
22:35:22 I can continue to look after it :)