04:06:20 hey all, odd question: if I wanted NFS across multiple multi-user clients, is it possible to manage access even if UIDs mismatch? Would NIS work for that? I can't find much info online
04:12:29 what exactly does NIS/netgroups do with NFS?
07:51:02 spuos: NIS is really just a place to get passwd(5) and group(5) data from a central server, like LDAP or whatever
07:51:54 and netgroups are just a sort of basic access control list you can use for some things, like deciding which hosts are allowed to mount an NFS share
07:52:08 You can look at the netgroup(5) manual page for more details
07:52:53 None of this really helps with providing NFS service to another system that has a foreign (managed by different people, names and numbers don't line up, etc.) NFS client
07:53:16 What _might_ help, though, is the "uidmap" and "gidmap" options; see: share_nfs(8)
07:53:50 Though the interface is not fantastic, you can do some remapping of the numbers on a client-by-client basis
09:46:26 those are a bit different things there. with netgroups one can group up hosts based on their names/IP addresses, but file access at the NFS protocol level needs users and groups. NFSv4 does use user and group names there (and not uid/gid as NFSv3/NFSv2 did), so as long as the names match in your NFS domain, you are good. *EXCEPT* if you are using auth_unix (aka auth_sys) authentication, which uses uid/gid numbers. auth_krb5 (auth_gss
09:46:26 with Kerberos 5) does work around that issue, but has stepping stones of its own.
09:49:41 While NIS is a very simple naming service to set up, it is rather limited in most aspects and therefore should be avoided (unless you know exactly what you are doing); LDAP is a better option in that sense.
13:52:54 tsoome: care to elaborate on the NIS avoidance thing?
I have a vlan made of trusted peers of various unix-y OSes and it seems the most compatible
13:55:09 jclulow: ah, that makes a lot more sense, I saw some stuff about NIS in a few man pages relating to nfssec and auth, I guess I was a little confused there
14:02:10 its security is rather weak by today's standards; you can google "nis versus ldap" and you'll get a ton of information. For a small home network you don't really need all this anyhow; a manual/scripted sync of passwd/shadow/group is quite enough. For learning purposes either is ok, but LDAP is more practical (IMO).
15:39:31 https://www.youtube.com/watch?v=Cum5uN2634o
15:39:41 this was one of the best talks I have seen in a long time
15:51:58 i'm just confused by the rockstar intro...
16:08:51 LxGHTNxNG: it's pretty common at tech conferences nowadays
16:24:52 Úf.
17:17:02 Hello! I've installed another server and I'm trying NAT66, and I'm being hit by this here, too: https://www.illumos.org/issues/17584
17:17:03 → BUG 17584: Null pointer leading to a crash when using NAT66 (New)
17:17:31 On this server I have full control and it contains no private data (there are only my blogs), so I can help with any tests or information you may need
17:46:17 i think the most useful thing would be if there's a crash dump (or if not, configure them and recreate the issue), if that's possible and won't be too disruptive
18:53:45 @stefanobsdcafe I mentioned that on the ticket too.
18:54:06 dumpadm(8) is your friend, and if it's a quick-to-reproduce issue you shouldn't need TOO BIG of a `dump` device/zvol.
19:28:22 Sure. I'll reply to the ticket, too, but the dump can be downloaded here: http://45.157.176.175/vmdump.0
19:40:07 stefanobsdcafe: two questions: what is your opinion of singing cowboys, and secondly, where did you get a VM that cheap?
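The dumpadm(8) advice above boils down to a few commands. A minimal sketch, assuming a ZFS root pool named `rpool` and illustrative sizes (the zvol name, size, and savecore directory are made-up examples, not taken from the conversation; see dumpadm(8) and savecore(8) for the details):

```shell
# Hypothetical example: dedicate a small zvol as the dump device.
# 4g is a guess sized for a "quick-to-reproduce" panic, as noted above.
zfs create -V 4g rpool/dump

# Point crash dumps at that zvol (kernel pages only, the default).
dumpadm -d /dev/zvol/dsk/rpool/dump

# Choose where savecore will write vmdump.N after the next boot.
dumpadm -s /var/crash

# Show the resulting configuration to confirm it took effect.
dumpadm

# After the panic and reboot, extract the compressed dump:
savecore -v
```

The resulting `/var/crash/vmdump.0` is the kind of file being passed around later in this conversation.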
19:47:59 rmustacc: one thing I've noticed (while looking at the PCI multi-segment stuff) is that the pci_{get,put}{b,w,l} functions always use I/O space (all the different implementations ultimately are just doing in[bwl]()/out[bwl]()); do you know, if MMIO is supported on a system, whether there's anything PCI-related that would require accessing I/O space?
19:48:26 singing cowboys?
19:48:44 spuos: sorry, I don't understand the singing cowboys part, but I understand the second part. It's a netcup piko VPS, currently sold out
19:49:22 I'm downloading it now.
19:49:29 1) you'd be the second person today I've seen talking about NAT64, and 2) aww :(
19:49:33 (it seems like what we'd want to do is, early on, if the MCFG table is there, map the config space(s) and set things up to use that, and if not, fall back to the other methods (with suitable escape hatches to force behavior))
19:50:26 but I don't know if there's something we're doing which must always touch I/O space (though if so, I'm not sure how that'd work on a system with multiple segments)
19:52:25 @stefanobsdcafe please supply here two lines:
19:52:34 1.) output of 'ls -l vmdump.0'
19:52:43 2.) output of 'digest -a md5 vmdump.0'
19:52:56 Just to make sure this through-a-straw download succeeds?
19:55:14 -rw-r--r-- 1 root root 95551488 nov 17 19:26 vmdump.0
19:55:16 The digest is: 6732c6a02aa222d41cb34d8e5d517d76
19:55:26 Thank you. Which distro is this again?
19:55:43 (Not that it MATTERS a lot, but picking which machine to inspect it on will be made easier.)
19:56:02 this is SmartOS - joyent_20251113T010957Z
19:57:16 Ahh. Thanks for using the latest. I'm going to assume this is your first use of NAT64 on any illumos, or else you'd have bugged a SmartOS list. You did the right thing filing a bug in illumos. Worst case for me is that it's something SmartOS-specific.
19:57:16 Thank you, Dan!
19:58:05 What's the bandwidth on your end?
19:58:35 Yes, it's the first time I'm using NAT64 on illumos.
I'm usually routing, but netcup doesn't allow it so I need this. I can try on OmniOS, too; I can create another VM (or convert this one) to try, tomorrow
19:58:51 the bandwidth should be 1GB, but being an extremely cheap VPS, I think it's some sort of best effort
19:59:11 I mean 1 Gbit/sec
19:59:15 I have in theory GigE here at home, but it's more like 900 Mbit, and of course the latencies, etc. etc.
19:59:58 Yours is a small vmdump (<100 MiB), but my curl speeds are awful.
20:01:46 Mmm yes, I just tried to download it; it's coming down at more or less 300 Kbit/sec or even lower. I'll try an scp now
20:02:41 Same performance
20:03:38 cleartext http, I wonder if it's going through some sort of surveillance? :D
20:03:58 scp has the same performance, so probably not :)
20:04:12 300 Kbit/sec sounds high from what I'm seeing. I'll know for sure when curl gives me the finish time.
20:05:32 I'm flashing back to downloading Mac binaries in .hqx from sumex-aim over 56 Kbit ARPANET links.
20:06:18 hqx? was that StuffIt?
20:06:33 No. BinHex ==> Mac equivalent of uuencode/uudecode.
20:06:38 oh
20:06:49 If your FTP client was doing ASCII-or-other-charset things it was safer.
20:07:43 I just tried a speedtest from an lx zone: ✓ Download: 361.60 Mbps (Used: 511.12MB) (Latency: 6ms Jitter: 9ms Min: 0ms Max: 31ms)
20:07:44 ✓ Upload: 95.89 Mbps (Used: 113.80MB) (Latency: 26ms Jitter: 14ms Min: 11ms Max: 57ms)
20:08:17 600 Kbit.
20:08:37 20 min 44 sec for 95551488 bytes.
20:08:52 mmm
20:08:59 smartos-build(~/cores/nat66-stefano)[0]% echo "(95551488 * 8) / (20 * 60 + 44)" | bc
20:09:05 614479
20:09:11 i've been able to move it via scp (on a different port than 22) in a few seconds
20:09:18 Okay, so "flaky T1 line" instead. :)
20:09:23 some throttling or something like that
20:09:33 Yeah, that. Got it; matched md5 and size.
20:10:32 great
20:25:06 this is a deep stack of IPF mess. I'm not going to have cycles to burn, but I'll say this:
20:25:24 - It's a DEEP stack, tickled by an interrupt too, I think.
20:25:41 - it *might* be a reflected packet (NAT out, turnaround-on-host, NAT in).
20:28:02 - It's panicking because fr_tcp_age() gets a timeout queue entry whose "tqe_ifq" is NULL, so the panicking fr_movequeue() function, whose second arg is that tqe_ifq, grabs a mutex on a NULL pointer.
20:33:27 In fact, turning off the nat64 makes the problem disappear
20:35:00 - The naive assumption is that fr_movequeue may wanna check if its second arg is NULL, and if it is, Just Replace It
20:36:56 It's not going to be that simple. :(
20:37:27 I'll update the ticket a bit, but then I have to leave it.
20:39:22 Thank you for looking into it!
20:48:51 jbk: In general, no, there's no reason you can't use MMIO the entire time. That's what we do at Oxide. The biggest challenge is just timing and mapping during boot.
20:49:10 That is, assuming the MMIO stuff has been configured properly for the CPU, which I would assume it is, based on the ACPI table.
21:46:19 ok..
21:52:51 and yeah, the sequence there is a bit involved (possibly for legacy reasons) so I've been trying to trace all that out...
21:54:35 there seems to be some stuff in there that might no longer be relevant (e.g. method 2 access, which from what I read was only used for 486 and Pentium hardware)
21:54:42 though that might be better as a separate effort
22:42:28 arm, it's a function of the pci nexus to know how to talk to its own config space
22:43:05 (because, unfortunately, Broadcom treated PCIe's mandating of ECAM as optional)
22:44:38 did the `mac_lso` test linger in `git status` for anyone else?
22:44:42 or have I botched a merge?
22:53:36 richlowe: not really sure what you mean; it's a new file, what do you see?
22:56:56 [illumos-gate] 17741 kmdb: potential null pointer dereference -- Toomas Soome
23:12:54 spuos: if NFS doesn't provide the required security or is too hard to set up with mismatched UIDs, then CIFS/SMB might be worth looking at.
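Coming back to the original question: the "uidmap"/"gidmap" remapping mentioned early in the conversation looks roughly like the following. This is a hedged sketch only; the dataset name (`tank/export`), the uid numbers, and the client host are all made-up examples, and the exact mapping and access-list syntax should be checked against share_nfs(8) on your release:

```shell
# Hypothetical example: the client's user has uid 1001, but the same
# person is uid 501 on the server. Remap 1001 -> 501 for that one
# client host (each mapping is roughly clnt:srv:access-list; see
# share_nfs(8) for the authoritative format).
share -F nfs -o 'rw,uidmap=1001:501:clienthost' /tank/export

# Or persistently, via the ZFS sharenfs property on the dataset:
zfs set sharenfs='rw,uidmap=1001:501:clienthost' tank/export

# Verify what is actually being exported:
share
```

As noted above, this has to be maintained per client and per uid, which is why it only really scales for a small number of foreign clients; name-based NFSv4 identity (with matching names and non-auth_sys security) or SMB are the alternatives the conversation points at.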