00:09:05 yeah.. with enough options, i finally got it to work -- was trying to skip a directory but add another filter
00:19:59 iximeow: there are several github repos that have parts of the website, including https://github.com/illumos/docs.git and https://github.com/illumos/dev-guide.git/
00:34:05 sommerfeld: right, i mean specifically the source for the page at `illumos.org` - i couldn't find one that had the words on the homepage
00:35:06 the "What is illumos?" paragraph also shows up in docs.git/docs/index.md but it looks to me more like it has heritage in the illumos.org page, rather than a hint as to how illumos.org/index.html gets assembled
00:36:04 I believe there is a separate repo that may be private.
01:10:40 I think that's accurate
01:29:30 well, in lieu of being able to post a change somewhere, i'll idly say it would be nice if illumos.org linked to gerrit primarily and github as a mirror :P
07:36:13 iximeow: ask and ye shall receive
10:54:24 richlowe - the unreferenced files in bhyve are at least intentional, but we should probably just go through and remove them now.
10:56:03 Once illumos 16890 (fenix) is integrated, I am going to look at changing bhyve so we build it with XPG6, which will change the balance of diffs from freebsd - at least we can stop with all of the uint8_t/caddr_t confusion in places. I can take that opportunity to remove the unused files too.
10:56:09 FEATURE 16890: bhyve upstream sync 2024 November (In Progress)
10:56:09 ↳ https://www.illumos.org/issues/16890 | https://code.illumos.org/c/illumos-gate/+/3809
11:40:26 there are still a few with loader too.
13:35:11 So I have an OmniOS NAS (10.0.0.10), which runs Tailscale in a sparse zone with an exclusive IP (10.0.0.20) plus TUN enabled (Tailscale IP: 100.103.207.28). I have set up this Tailscale node as a subnet router, which advertises 10.0.0.0/24, and have also set it up as an exit node with the necessary settings according to the guide https://lightsandshapes.com/posts/tailscale-on-omnios/
13:37:43 It was working fine; I was able to ping/ssh the NAS remotely using its LAN address (10.0.0.10). However, since last week I am unable to ping the NAS remotely using its LAN IP, but I can still ssh into it using the same IP. As far as I know, there is no active firewall rule that could block ICMP echo.
13:49:46 [illumos-gate] 15854 Typos in section 9f of the manual -- Peter Tribble
14:04:43 szilard: in that blog post, Mike configures ipnat and turns on ipfilter. Please enable ipfilter and confirm your ipnat configuration.
14:09:23 if you don't NAT the packets, then you have to set up routing on the relevant subnet and add ACLs to allow the return traffic back onto the tailnet. It is very unlikely that that is how you had it set up. Make sure that NAT is properly configured and try again.
14:13:33 So I have an OmniOS NAS (LAN IP: 10.0.0.10) running Tailscale in a sparse zone (LAN IP: 10.0.0.20, Tailscale IP using tun: 100.103.207.28). I have set it up as a subnet router, and it advertises 10.0.0.0/24 on the Tailscale network. I have also set it up as an exit node with the necessary config according to the guide here: https://lightsandshapes.com/posts/tailscale-on-omnios/
14:15:46 I frequently connect to the NAS remotely using its LAN IP, which has worked fine so far; however, since last week it doesn't answer pings to its LAN IP over Tailscale.
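For reference, a subnet-router-plus-exit-node setup like the one szilard describes is usually brought up inside the tailscale zone with something along these lines (a sketch only; the advertised route is taken from the messages above, and the exact flags depend on the Tailscale version in use):

    # inside the tailscale zone: advertise the LAN and offer exit-node service
    tailscale up --advertise-routes=10.0.0.0/24 --advertise-exit-node

The advertised route then still has to be approved in the Tailscale admin console and accepted on each client.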
14:16:35 But I am still able to ssh into it using its LAN IP over Tailscale
14:16:54 See the output here: https://pastebin.com/raw/Z2W7WYkE
14:18:39 As you can see, this client can communicate with 10.0.0.0/24 through tun0 via the gateway 100.81.49.59.
14:20:08 There is no ACL configured in Tailscale. Also, there is no active firewall rule on the NAS.
14:20:29 But for some reason all the packets get lost. I have no idea what's wrong.
14:27:55 szilard: what did you change since last week? I have tried to state repeatedly that what is wrong is that you have NAT turned off in the sparse zone.
14:29:04 if you reread the blog post you have now linked twice, you will see that for the exit node, NAT is turned on. That is similarly needed for a subnet router.
14:29:37 let's consider the tailscale zone. Its LAN IP is 10.0.0.20, while its Tailscale IP is 100.103.207.28: I can't ping the LAN IP, but I can ping its Tailscale IP: https://pastebin.com/raw/SDnMbujh
14:30:10 szilard: I will not consider anything until you first try my well-documented suggestion.
14:30:26 If I had NAT disabled in the sparse zone, I wouldn't be able to ssh into the NAS, as far as I know, but correct me if I am wrong.
14:30:51 TRY IT. See if it works. I don't mind us discovering that I am wrong. You can always turn it back off again.
14:31:14 Wait.
14:31:16 OK, lemme try.
14:31:30 I may have gotten impatient too quickly.
14:31:48 What else has changed?
14:33:57 But every use of subnet router and exit node I have done has used NAT.
14:37:22 I have enabled both services, ping is still not working: https://pastebin.com/raw/Cg8pSJJr
14:37:41 I have disabled IPv6 on the router to which the NAS is attached.
14:37:59 Among other things. I am unsure.
14:38:23 I mean, I made many changes last week, no idea what broke it.
14:38:59 I don't use IPv6 on my LAN, and it was not working previously, so I simply disabled it; I think this should not be the reason.
14:45:05 szilard: what does `ipnat -l` report?
14:46:45 wait, I see "z_tailscale0" in ipadm but I see "tailnode0" in ipnat.conf; that seems wrong.
14:46:55 I think you accidentally broke your NAT.
14:47:19 what
14:47:29 do you see in `dladm show-link`?
14:48:46 also, shouldn't the NAT be inside the zone, not outside?
14:51:07 nahamu: wohooo!
14:53:01 what was the solution?
14:53:15 nahamu: you are right, I mistakenly copy-pasted and did not adjust the VNIC name. It should be z_tailscale0 (my naming scheme is z_(service)(number))
14:53:20 Let me correct it.
14:56:07 I have corrected it and disabled/re-enabled the relevant services, but ping is still not working.
14:57:24 the NAT configuration is for inside the zone, not in the GZ.
14:57:37 nahamu: You are right. I have just checked; I have the NAT configuration file in the zone as well, with the correct content.
14:57:51 But where am I supposed to enable the services? In the zone or in the GZ?
14:58:03 enable the service inside the zone
14:58:11 you can and should remove that NAT rule from the GZ.
14:58:12 Oh, let me try that as well
14:58:37 (and you can disable ipfilter in the GZ.)
15:01:20 I have disabled ipfilter and ipv4-forwarding in the GZ and enabled them in the zone.
15:01:29 I also removed the NAT config from the GZ.
15:03:05 https://pastebin.com/raw/3LU7aDYa
15:03:24 should I try to reboot the NAS?
15:06:41 let me try to reboot it. please do not answer till I return, as my bnc runs on the machine I am rebooting right now :)
15:08:48 I'm back.
15:09:20 ping is still not working.
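To summarize what the in-zone NAT being discussed here amounts to, a sketch of the configuration, assuming the zone's LAN VNIC is z_tailscale0 and the tailnet uses the standard 100.64.0.0/10 range (the exact rule in the blog post may differ):

    # /etc/ipf/ipnat.conf inside the tailscale zone:
    # rewrite tailnet-sourced traffic to the zone's own LAN address
    map z_tailscale0 100.64.0.0/10 -> 0/32

    # then, inside the zone, load the rule and enable forwarding
    svcadm enable ipfilter
    svcadm enable ipv4-forwarding
    ipnat -l    # confirm the rule references the right interface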
15:10:00 FYI: Ping works OK on the LAN between the machines and the zones, but not over Tailscale.
15:11:21 [illumos-gate] 17234 clean up I32LPx silliness -- Patrick Mooney
15:11:22 So there are 3 "machines" we care about: the GZ, which is not on the tailnet at all; the tailscale zone, which is a subnet router; and the client machine. What OS is the client machine?
15:11:56 Because for a subnet router to work, you have to allow the route in the Tailscale admin console and you have to accept the route on the client.
15:12:18 And of course if the client is on the same LAN as the GZ, I think the client will prefer to route over the LAN rather than the tailnet.
15:12:48 None of my clients can ping the NAS or the IPs on the LAN over Tailscale. I have tested using the following clients: Android 15, OpenBSD and Windows
15:13:50 nahamu: the route is created and accepted in the admin console. Without that I wouldn't be able to ssh into the GZ over Tailscale.
15:14:02 [illumos-gate] 17242 Manual formatting present literally -- Peter Tribble
15:14:27 how do you know you are SSHing into the GZ over tailscale?
15:14:34 nahamu: sure, the clients on the LAN are probably using the direct connection.
15:16:18 nahamu: I am sitting at my GF's, using her wifi, supplied by a different ISP, while my NAS is at my flat on my own network, around 10 km away.
15:16:47 and you use the GZ LAN IP to ssh to it?
15:17:09 I use the GZ LAN IP (10.0.0.10) to dial in, yes. Tailscale is not installed in the GZ.
15:17:53 hmm
15:21:31 I am using my company-provided laptop, where I don't have admin rights, but I can run QEMU, so I have installed OpenBSD in QEMU, and in it I have Tailscale running, connected to my mesh. So I can use the OpenBSD virtual machine as an SSH jump host to dial into any machine on my LAN over the Tailscale instance running in a zone on my NAS. I know, it is a bit convoluted.
15:22:43 but even this IRC client is running on my NAS, so I am almost sure I am connected over Tailscale :)
15:23:09 It just somehow bothers me to not be able to ping, while it was working previously.
15:23:47 As I can ping just fine on the LAN (like router -> NAS and its zones), I think it is not an issue with my LAN.
15:24:43 I thought maybe I have some strange firewall rule on the router, but AFAIK the communication between the zones and the GZ doesn't go over my router running OpenWrt, but uses Crossbow instead (correct me if I am wrong).
15:25:12 considering this, the issue must be either with the Tailscale mesh or with the setup of the NAS.
15:25:30 As there is no firewall / ACL active, it can't be the culprit.
15:26:08 I can ping the hosts just fine using their Tailscale IPs, so the issue could be somewhere in the tailscale zone.
15:30:44 have you looked with "snoop" on all the interfaces to see where the ICMP packet is still visible and where it stops?
15:31:49 wiedi: no, I had never heard of this tool, thanks for the hint.
15:32:05 Just for info, here is my zadm config for the tailscale zone: https://pastebin.com/raw/3JD3xyEc
15:42:19 [illumos-gate] 17235 memchr(3C) and memrchr(3C) accept const void pointer -- rilysh
15:46:02 wiedi: I have started snoop in the zone like this: "snoop icmp"
15:46:51 While it was running, I logged into my router and pinged the IP of the zone; it showed up just fine, so snoop is working correctly.
15:47:27 However, pinging the zone over Tailscale generates no output at all in snoop. I have pinged both the LAN IP and its Tailscale IP. No output.
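One way to follow wiedi's suggestion and narrow down where the packets disappear is to snoop each interface in the path separately (a sketch; the interface names are the ones mentioned above, and the tun device name inside the zone may differ):

    # inside the tailscale zone: watch the Tailscale tun device
    snoop -d tun0 icmp
    # in a second terminal: watch the zone's LAN VNIC
    snoop -d z_tailscale0 icmp

If the echo request shows up on the tun device but never on the VNIC, forwarding or NAT inside the zone is suspect; if it never reaches the tun device at all, the problem is on the Tailscale side.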
16:15:54 I might need to recreate a similar setup to figure this out...
16:37:22 jclulow: hurrah! seems good :)
17:09:12 Look what I have found: https://pastebin.com/raw/eisrDhyZ
17:12:31 is that not just some DNS noise?
17:16:42 Hmmm
17:17:47 Why would it ask for the DNS info of that specific IP at the same moment I ping it?
17:18:03 From that exact IP.
17:18:13 I think snoop is trying to resolve hostnames for IPs.
17:18:49 use snoop -r
17:18:51 snoop on illumos accepts `-r` to suppress resolving the IP.
17:18:54 [illumos-gate] 17227 want dcmd to extract/replace bits -- Andy Fiddaman
17:18:55 what tsoome said.
17:31:02 let me see if I can reproduce this on my end...
17:33:18 okay, I lose pings when I use my illumos subnet router, but they work when I use my raspberry pi.
17:33:35 So I don't know that I ever had that working on illumos...
17:40:08 I'll file a bug to track this, szilard.
17:43:17 szilard: so this used to work for you?
17:44:04 https://github.com/nshalman/tailscale/issues/88 to track...
19:13:00 nahamu: yes
19:13:45 It used to work for me before, I am sure.
19:40:51 Are you using the binaries I publish on github, or the OmniOS package?
19:41:34 (if you have time to figure out what version last worked, that might help me narrow in on a fix)
19:45:53 Does anybody know what rights I need to give a zone to mount lofi devices? I am trying to confine image-builder to a zone
19:47:00 the usual way to find that sort of thing out is by enabling privilege debugging with the ppriv command ..
19:47:31 how do I do that?
19:47:34 ppriv -D -e mount -F lofi ....
19:47:43 ah nice
19:48:51 hmm that reports nothing
19:49:39 I have the UFS mount though
19:49:40 hmm. it could be that it would be "unsafe" to grant to a zone
19:49:41 ppriv -D -e /usr/sbin/mount -F ufs -o nologging,noatime /dev/lofi/1 /images/work/installer/generic-ttya-ufs/a
19:57:34 sommerfeld: It's definitely unsafe to give filesystem mount rights to zones. I believe there is a per-zone list of allowed filesystems
19:57:49 The "fs-allowed" property
19:57:53 as per zonecfg(8)
20:08:03 it depends on the filesystem, but if it touches disk don't trust it
20:08:27 or rather, if it touches user-provided data
20:08:44 jclulow: that worked, thanks
20:11:48 it would be nice if the zone config allowed something like fs-allowed=ufs:onerror=umount
20:12:08 You would have to radically harden ufs anyway
20:12:17 so it'd be nice, but it's the same problem
20:12:31 which is "all our filesystems which touch arbitrary data do it far too trustingly"
20:13:09 the fact that it panics is not the problem; the fact that something can cause it to want to is
20:15:36 that too, yep.
20:36:37 I think at some point joyent seriously considered a hardened fat32 :)
20:36:47 or maybe it was only "not slow as hell"
20:59:45 just rewrite ufs in rust to make it safe 8-)
21:19:09 richlowe: I think the goal with any imagined future pcfs work was exclusively performance-driven
21:19:21 Mind you, the performance bottleneck was usually the USB stick
21:23:42 alanc: I had the thought yesterday, "ufs has never been portable, thus ufs on aarch64 could have 64-bit times and that'd be ok"
21:23:45 luckily I calmed down
21:24:54 shouldn't that be, "thus there is no reason for ufs on aarch64 to exist, since it doesn't need to be able to import old data"?
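Going back to the fs-allowed question earlier: that property is set with zonecfg, roughly like this (a minimal sketch; the zone name is made up, and ufs is listed because that is what the lofi mount above needed):

    zonecfg -z buildzone
    zonecfg:buildzone> set fs-allowed=ufs
    zonecfg:buildzone> commit
    zonecfg:buildzone> exit

The zone typically needs to be rebooted for the change to take effect, after which mount -F ufs on a lofi device inside the zone should be permitted.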
22:29:14 Indeed haha
22:29:45 I would certainly ditch it
22:30:23 I expect we'll ditch it on x86 as well, rather than put a bunch of effort into 64-bit time stuff there
22:31:23 actual decisions like that have not been made
22:31:53 availability is on an it-was-easy-or-necessary basis
22:32:01 Yeah
22:32:08 in this case "easy"
22:32:08 I vote for deprecating UFS by 2035.
22:32:18 danmcd: or 2030 even
22:32:21 I haven't tested it, to my knowledge. I probably made and mounted something
22:32:50 Yeah, to be clear, you should do whatever you need to do in ARM town to get the next piece of the cathedral stood up
22:33:16 But at integration time I suspect "we will just never ship UFS here" is extremely reasonable
22:33:43 right, I expect numerous debates about what is shipped
22:33:52 as I've mentioned, anything I haven't thought worth my time isn't, for instance
22:34:11 mmm
22:38:06 Wearing my SA hat, I would like to add to the "do not ship it" vote. There are times when it is best to just move on, and this is one of those opportunities we so rarely get.
22:38:58 jclulow: right, people coming at this from a "do less" perspective is certainly heartening right now, if surprising :)
22:41:13 The only way I got rid of some really, really old hardware was the transition to 64-bit, when they dropped backwards compat. Even then I had PIs complaining they were "throwing away perfectly usable computers."
22:41:50 (They might have sung a different tune if their grants had been billed for the electricity.)
22:44:05 [illumos-gate] 17177 want libktest -- Patrick Mooney
22:45:34 omitting UFS sounds great
22:55:18 a related debate we've been having is "if we get rid of ufs, shouldn't we keep ufsrestore, so people can restore old backups to new ZFS filesystems?"
22:56:39 we didn't go as far as "if we get rid of ufs in the kernel, should we keep ufsdump to let people read their old disks and migrate data to ZFS?", instead preferring to suggest they ufsdump *before* upgrading, or reboot to an old BE that still had UFS to do so
23:00:52 well, also, ufsdump is not the only tool to read data from ufs ;)
23:01:29 it's the one we support
23:02:11 ah, you mean once the kernel driver is no more? Then sure.
23:04:04 alanc: Yeah, I mean, it seems reasonable to keep those programs around for longer (again, only on systems that ever had UFS to begin with)
23:08:22 the systems I have seen around had nothing to pick up from ufs; they were running some ancient Oracle on top of S10 and were waiting for some data to either expire or be migrated off before powering off...
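For context, the migration path being debated above looks roughly like this (a sketch; the device, file, and dataset names are invented, and it assumes a BE that still has both tools and the UFS driver):

    # on the old BE: dump the UFS filesystem to a file (or tape)
    ufsdump 0f /backup/home.ufsdump /dev/rdsk/c0t0d0s6

    # on the ZFS side: create a dataset and restore the dump into it
    zfs create rpool/export/home      # mounts at /export/home on a typical layout
    cd /export/home
    ufsrestore rf /backup/home.ufsdump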