21:25:09 sommerfeld: I feel like an NFS close() might implicitly become an fsync() on the file server?
21:25:59 If you did, via NFS, a truncate and some writes and a close, and the file server was interrupted in that window, I could see this happening
21:40:04 The same question should be asked of SMB/CIFS too, just in case.
21:40:46 I haven't known the answer to this since .nfs
21:41:34 but my immediate thought was the XFS port to Linux, which had a habit of replacing your data with \0 for years.
21:42:21 it was very strict about the _metadata_ being entirely correct on disk, but when it got sad your data was not
21:43:14 Has anyone ever seen this from one of the zvol tests? https://paste.omnios.org/?a0b1b22b0dcb8e78#HHMGmnTm9yUFmQ6k9KEiM59x93ckofJGeuDKnj8xkngi
21:43:28 It has made me realise that the zvol_misc cleanup is not robust enough
21:44:34 I've got years of test runs now. Lemme run some tools...
21:48:07 Only found one failure there. Gist coming soon...
21:48:38 I'm just checking it's not related to the ZFS change I'm testing. Can't see how it could be
21:48:51 https://gist.github.com/danmcd/31fd48591578d5eb04fd84344217b076
21:49:15 Doesn't look like it... the directory I literally labelled WTF, suggesting I screwed up something. Can check notes too...
21:49:33 Ok, thanks for checking. I'll look at the core.
21:51:13 .. okay, my WTF was a case of "apparently zpool export testpool was not yet done…". (Thanks to @tsoome for that diagnosis back in April.)
21:54:28 There are definitely some opportunities for improvement here. In my case, a failed pool creation (which was expected to fail, but not like that) left the pool in use as a dump device and broke all of the subsequent tests.
21:55:17 i don't recognize that one, but it's been a while since i've run any zfs tests
21:57:07 and yeah, there were several errors (I think tsoome might have fixed at least a few of them) where some tests didn't clean up properly, which led subsequent tests to fail (though they would succeed when re-run, since the earlier tests that had passed weren't re-run)
22:00:38 Well, it's reproducible, so I just need to find where in ZFS land the EDOM is coming from.
22:01:48 fortunately you can track SET_ERROR ;)
22:02:01 maybe a channel program?
22:08:30 It's a brand-new VM I just set up for running the zfs test suite. I'll look into it.
23:00:32 hm. if you `mkdir("/dev/zvol/rdsk/", ...)` you get ENOSYS. that really feels like it should be EEXIST, no?
23:01:12 i haven't looked all that much, just enough to see that vn_createat gets all the way to fop_mkdir and the ENOSYS comes from fs_nosys, but that seems pretty late for a path that clearly exists!
23:06:42 That will get handled by the dynamic plugin in sdev.
23:23:24 you mean that changing this would end up in sdev_zvolops.c, right? i also notice that it gets past lookuppnat without an error, which seems curious. but if the ENOSYS seems remarkable, i'll at least open an issue