15:37:52 [illumos-gate] 17503 mac_sw_lso() should handle mac_sw_cksum() failures -- Patrick Mooney
19:04:52 Hi, is anyone actively working on AMD pci passthrough?
20:07:43 neirac - I'm not aware of anybody working on that.
20:58:42 Hmmm, thought maybe one of @Woodstock or Patrick might have been. Last thing we need is duplicate effort.
21:00:55 if I remember, the effort isn't so much the passthrough
21:01:23 you might want to ask instead if anyone is working on fixing the amd iommu support, but I think the answer is still no
21:01:38 I assume you could cheat like intel does, though, if you had to, but I don't know.
21:02:24 if you were to do the iommu fully though, I'm interested in that, to the degree it'd be nice if the way we all did it was general.
21:02:50 * danmcd_ smells some ARM around here...
21:03:32 all i've ever heard is 'it's horrible and broken' without any explanation of how/why
21:15:14 the way it plugs into the system is pretty shocking
21:15:32 I think that's because the org chart was literally Intel doing the intel one, AMD doing the AMD one, and Sun hoping for the best in the middle
21:16:18 richlowe: Yeah... Intel was notorious for their isolation demands (/me remembers Itanic), and I don't know if it got any better/worse in an AMD64 world.
21:16:56 * danmcd was 2nd floor MPK17 or BUR02, not 3rd floor, nor LAX, nor any other building...
21:18:02 i don't think the existing amd iommu support deserves fixing, and neither does the intel one
21:18:43 bhyve has its own private intel iommu driver for passthru; we ported that from freebsd back in 2017
21:18:53 Woodstock: the problem we all have is that _that_ opinion is pretty prevalent, but nobody remembers why or what is broken with it
21:18:55 the same should be done for the private amd iommu bits in freebsd bhyve
21:19:18 and outside of PCI passthrough, the IOMMU has uses
21:19:43 anyway, the tension is that if we use the iommu properly the bhyve support needs to be redone
21:19:54 back then i worked together with the guy who wrote the amd iommu driver, and he just didn't finish fixing all the nasty issues before the lawnmower came
21:20:14 ok, so a collection of them, not one specific?
21:20:17 so even if you fix it, it may not support anything past the AMD K10
21:20:28 if the architecture is bad, and the code is bad, I definitely agree
21:20:39 but still think patching it into bhyve is probably not going to work long term
21:20:53 i didn't know back then what the bugs were, but I have tried it on a system that should have supported it and it failed miserably
21:21:10 perfect! that's literally what I've been trying to talk someone into doing for ages
21:21:22 So PROPER iommu is a burn-it-to-the-ground project?
21:21:35 Woodstock: maybe file a bug saying that? and that we should probably remove the existing code that misleads people?
21:21:39 the intel iommu driver at least kind of works for normal device isolation; i tried that when we were evaluating our options for bhyve. some drivers may have issues, but those are bugs in those drivers.
21:21:47 that seems sane to me at least, maybe wait to see if Dan does.
21:22:05 ok, so intel works but the architecture is still oof
21:22:16 unless that architecture is just what turns out to be necessary
21:22:27 I've no strong opinions here. @neirac (who seems to have disappeared?) was asking me about it, and I sent him here.
21:22:33 so if someone tasked me to get passthru working on AMD, I'd port the bhyve amd iommu bits from freebsd as i have done with the intel bits :)
21:23:05 Woodstock: yeah... I was thinking someone might've been mid-working-on exactly what you just described.
21:23:36 as far as i recall, porting the intel bits wasn't terribly hard
21:28:39 i guess i need to clarify: I was around at AMD when AMD wrote the iommu driver for opensolaris for the then-new K10 chips, and that code definitely hasn't been working and hasn't been touched since 2010
21:30:30 the freebsd amd iommu code probably works well, but i have never tried it
21:31:31 I remember when it was you@AMD.
21:32:17 lately i began noticing that i'm apparently getting old. it was 13 years ago that I left AMD to join Nexenta :)
21:33:17 Woodstock: if someone ported the bhyve stuff for passthrough, what would happen if actual system-level iommu support appeared?
21:33:41 they can't cooexist, right? (I'm not saying one approach or the other, anymore, just confirming I'm not an idiot)
21:33:51 cooexist, like pigeons. smh.
21:34:17 FWIW, we are probably going to be dealing with the AMD interrupt remapping over the next 1-2q.
21:34:21 Which does touch that.
21:34:49 i'd be surprised if that could coexist
21:36:16 rmustacc: Thank you for that data point. As mentioned earlier, there's some interest in it and we're happy to help if we can.
21:36:36 That's a different part.
21:36:41 Though both IOMMU related.
21:39:17 While we're not immediately looking at the isolation parts (but do need to longer term), we can't have it break passthrough or increase NCPU.
21:39:35 Not saying it will be that way, but something we're going to have to be mindful of here.
21:55:51 in my initial inspection of the code, I assumed it required porting the amd parts https://src.illumos.org/source/xref/illumos-gate/usr/src/uts/intel/io/vmm/amd/amdvi_hw.c?r=32640292 like was done for vmm_vtd