00:00:53 kinda...
00:00:59 at least for the Intel IOMMU
00:01:47 it caches the iommu_t responsible for a device in the device's dev_info_t, but the first time it does traverse up the tree to find the 'domain' the device is in
00:02:17 which is usually its own, but it also traverses up the tree to find the iommu_t responsible for the device the first time (then caches it)
00:10:33 apparently, I had the DDI and the rootnex bits swapped in my mind.
00:10:49 I mean the way the rootnex_dma* functions call into iommulib* the way they do
00:22:10 one of the other things I need to put up is that we traverse too far looking for the topmost device in a domain
00:22:24 and blow right by the root complex
00:22:41 and go all the way to the root nexus
00:23:10 though right now my fix looks at the device binding name and terminates the walk if it's 'pciex_root_complex'
00:23:19 but I don't know if there's a better way to handle that
00:48:55 jbk: some sort of attribute on the device indicating that it's a root complex?
01:03:06 that's what I was wondering, but the only thing I found was create_pcie_root_bus() (in the somewhat confusingly named pcie_nvidia.c)
01:03:15 which just sets device_type and compatible
01:08:22 jbk: that's for an NVIDIA PCIe chipset, IIRC
01:10:58 ( https://en.wikipedia.org/wiki/Comparison_of_Nvidia_nForce_chipsets )
01:36:55 it appears to get called from pci_boot.c
02:00:01 yeah, it gets a funny name on x86
02:00:42 npe was originally named as if for nForce PCI-e
02:00:50 but is now the "nexus for..." PCI-e
02:00:54 ahh..
02:00:56 the "nvidia" in the name is vestigial
02:02:41 pcie_get_rc_dip is probably the best way I know to find an RC in your ancestry to terminate a search
02:03:51 (it checks the bus private data)
02:37:10 :o
03:54:24 hrm... looks like redmine is having issues again...
13:03:19 ugh nforce mentioned
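
[The lazy lookup-and-cache behavior described at 00:01:47-00:02:17 boils down to a pattern like the following sketch. Everything prefixed my_ is hypothetical, as is the iommu_t spelling (the real Intel IOMMU code keeps its state in its own structures); only ddi_get_parent() is an actual DDI call.]

    /*
     * Sketch of the first-lookup-then-cache pattern described above.
     * my_cache_get(), my_cache_set(), and my_domain_iommu() are
     * hypothetical stand-ins; ddi_get_parent() is a real DDI call.
     */
    static iommu_t *
    iommu_for_device(dev_info_t *dip)
    {
            iommu_t *iommu = my_cache_get(dip);     /* hypothetical */
            dev_info_t *pdip;

            if (iommu != NULL)
                    return (iommu);         /* cached by a prior walk */

            /* First lookup: walk toward the root until a domain claims us. */
            for (pdip = dip; pdip != NULL; pdip = ddi_get_parent(pdip)) {
                    if ((iommu = my_domain_iommu(pdip)) != NULL)
                            break;
            }

            if (iommu != NULL)
                    my_cache_set(dip, iommu);       /* hypothetical */
            return (iommu);
    }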
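
[For the walk-termination question at 00:23:10 and the suggestion at 02:02:41, here is a minimal sketch of both approaches. ddi_get_parent(), ddi_binding_name(), strcmp(), and pcie_get_rc_dip() are real illumos kernel interfaces (pcie_get_rc_dip() inspects the PCIe bus private data mentioned at 02:03:51); the function names and the header choices below are illustrative assumptions, not the actual fix.]

    /*
     * Sketch only: two ways to stop an ancestor walk at the root complex
     * instead of running all the way up to the root nexus.
     */
    #include <sys/ddi.h>
    #include <sys/sunddi.h>
    #include <sys/systm.h>          /* strcmp() */
    #include <sys/pcie_impl.h>      /* pcie_get_rc_dip() */

    /* The binding-name check from 00:23:10. */
    static dev_info_t *
    find_rc_by_binding_name(dev_info_t *dip)
    {
            dev_info_t *pdip;

            for (pdip = dip; pdip != NULL; pdip = ddi_get_parent(pdip)) {
                    if (strcmp(ddi_binding_name(pdip),
                        "pciex_root_complex") == 0)
                            return (pdip);  /* stop; don't hit rootnex */
            }
            return (NULL);  /* walked off the top: no RC in the ancestry */
    }

    /*
     * The pcie_get_rc_dip() approach from 02:02:41; it checks the PCIe
     * bus private data rather than matching a binding-name string.
     */
    static dev_info_t *
    find_rc_by_bus_private(dev_info_t *dip)
    {
            return (pcie_get_rc_dip(dip));
    }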