11:21:51 code.illumos.org appears very slow for me...
11:58:30 cd
11:58:34 whoops.
12:22:07 :)
13:48:01 yeah
13:48:20 got a bad gateway first time.. now still trying to load
13:56:16 Ok, so it is not my computer then :D
13:57:55 Some folks here might find this useful. It's a re-creation of `=j` from `mdb`, but as a standalone program: https://github.com/dancrossnyc/jfmt
13:59:36 that's nice, so tired of manually counting bits :)
14:00:27 Great! I hope you find it useful. :-)
14:01:42 um, I'm using ficl-sys as a calculator, should probably add this word there :P
14:03:59 heh.. more than once (especially on this work project) I've been firing up mdb just for =j :)
14:09:33 so thanks for that
14:57:14 yep, seems code.illumos.org is dead for good. only getting 502 now.
15:00:30 seems to be back now
15:55:10 code.illumos.org seems to be busted again - more 502 Bad Gateway
15:55:33 or maybe just slow?
15:57:51 never mind, seems to have been a browser tab that was still befuddled from the earlier outage.
16:04:38 pushing a rebase to Gerrit seems to be going very slowly (at the "remote: Counting objects:" stage)
16:09:46 and then I got a "Bad Gateway"
16:21:20 so it's not dead, but not healthy either.
16:43:06 it's undead... just in time for Halloween :)
16:50:50 Or, alternatively: "It just so happens that your service here is only MOSTLY dead."
18:04:37 so no looking for loose change...
23:46:12 I know this isn't the right place (I do general IT dev/ops), but I became curious how Grace Hopper programming differs, since the nodes are all separate system-image instances but share "direct" access to memory across the NVLink domain
23:47:01 I wondered if someone in here knew much about it, since I like you'se folks
23:50:12 I would suppose you'd have a set of processes across different nodes that could share memory pointers at a low level?
I'm sorta fishing to be pointed somewhere, because I obviously just know keywords :-P
23:51:41 So, the GH chips have LPDDR5 connected to the Grace CPU and HBM3 connected to the GPU.
23:52:50 So there is hardware coherence within a single "Super Chip" via the C2C NVLink, but I don't believe there's anything coherent about external memory per se.
23:53:35 But I'm not entirely sure which bits you're talking about sharing or accessing per se.
23:53:53 This: https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/#extended_gpu_memory
23:54:52 I suspect the programming doesn't differ in that respect from anything you can already do via NVLink with access to remote memory.
23:56:10 But I haven't dug into this deeply myself.
23:58:50 Each Grace Hopper combo is its own system-image instance, so it just got me thinking about the abstractions that would have to take place for jobs running across multiple nodes that are sharing memory
23:59:47 to theoretically approach using all the GPU cores and memory for a single related job
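[Editor's note: the `=j`-style formatting discussed at 13:57-14:09 above (printing a value in several bases so nobody has to count bits by hand) can be sketched roughly as below. This is an illustration only; the actual output format of mdb's `=j` and of jfmt differs, and the function name `jfmt_sketch` is made up for this example.]

```python
def jfmt_sketch(value: int) -> str:
    """Rough sketch of an mdb `=j`-style dump: show an integer in
    hex, decimal, octal, and binary, and list which bit positions
    are set.  Not the real jfmt/mdb output format."""
    # Collect the indices of all set bits, lowest first.
    set_bits = [i for i in range(value.bit_length()) if (value >> i) & 1]
    return "\n".join([
        f"hex:      {value:#x}",
        f"decimal:  {value}",
        f"octal:    {value:#o}",
        f"binary:   {value:#b}",
        f"set bits: {set_bits}",
    ])

print(jfmt_sketch(0x1234))
```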