00:22:47 is there a way to like... reboot into a read-only mode?
00:29:14 if I've set all the datasets in the zfs pool to `readonly`, is it theoretically safe to dd if=/dev/dsk/ of=wherever-really and expect the resulting data in wherever-really to be reasonably consistent?
00:32:29 MelMalik: what's your goal? backup? replicating a running system?
00:33:07 and do you want to do this to the root/boot pool or to a data pool?
00:33:31 this is about the root pool...
00:33:39 this is probably a case of "you shouldn't want to do that", is it
00:35:08 there are other things you could do that would be less disruptive and more likely to produce the desired result.
00:35:21 (if we knew what your actual desired result is, that is..)
00:37:32 the desired result is to have a copy of the disk somewhere else, in a state that is reasonably consistent; the resulting disruption from having everything be readonly is, if uneasily, acceptable.
00:38:01 zfs snapshot -r rpool@snapshot_N ; zfs send -R rpool@snapshot_N | ssh elsewhere zfs receive someotherpool
00:38:38 and you can do that online, where the only disruption to the running workload is added I/O contention ...
00:38:58 that does not take the partition table with it.
00:39:50 i guess I'm not really asking for alternatives, but more of a "will what I plan on doing have the intended effect or will the result on the other end be unusable"
00:40:44 well, after the dd, you can import the resulting pool and run zpool scrub, and know whether it is usable by the end of the scrub.
00:41:41 right
00:41:46 gotcha. think i'll do that then
00:42:07 another option would be to add the other disk as a mirror and then use "zpool split"
00:42:27 (assuming you're starting with a single-disk or mirrored rpool)
00:44:51 if that were practical
00:45:25 anyway, so, "and we're off" - I'm going to make a coffee, or probably go to bed, and see how things end up going
01:16:05 update: that was a disaster, but what, I suppose, did I really expect
01:16:56 I guess I'll have to reboot the machine into something not running off the disk, then
04:03:37 How is the making of r151054 going? It's supposed to be released in a couple of days...
04:34:36 MelMalik: idk if the zfs retention feature is available on Illumos, but if so it could be interesting for you: https://www.c0t0d0s0.org/blog/preventaccidentaldestruction.html
04:37:49 nothing was destroyed on the source machine, it was just unreadable on the target for whatever reason
05:49:13 copying as described with ssh, but using a live cd running freebsd, worked in the end
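For reference, a minimal sketch of the two approaches suggested above (online snapshot/send/receive, and verifying a raw dd copy with a scrub). The host name, pool names, snapshot name, device, and file paths are placeholders, not values from this session.

```sh
# Online replication of the root pool: only extra I/O, no downtime.
# "elsewhere" and "someotherpool/rpool-copy" are hypothetical names;
# -u keeps the received datasets from being mounted on the target.
zfs snapshot -r rpool@backup_1
zfs send -R rpool@backup_1 | ssh elsewhere zfs receive -u someotherpool/rpool-copy

# If a raw copy of the pool's slice is made instead (device and paths are
# placeholders), the copy can be checked on the machine holding it by
# importing it under a new name with an alternate root and scrubbing it:
dd if=/dev/dsk/c1t0d0s0 of=/backup/rpool.img bs=1024k
zpool import -d /backup -R /mnt rpool rpool_copy
zpool scrub rpool_copy
zpool status rpool_copy
```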
09:19:45 szilard - r151054 is on track for release on the 5th of May (first Monday in May)
09:20:11 Great!
09:22:50 This will be my first revision update; I'll see how hard it is!
14:43:01 is the Realtek RTL8125 2.5GbE NIC supported?
14:45:27 it's not listed on https://illumos.org/hcl/ so I guess not.
14:47:54 It is not currently, sorry.
14:58:42 Let me buy another NIC. Thanks.
15:53:20 If you *need* 2.5Gbit, get an Intel I225 or I226 that supports 2.5Gbit, or if you have deeper pockets, get an Intel 10GBaseT that also supports 5 and 2.5, like the X550 or X710. (If you see an X722, DO NOT GET IT, for illumos#13230 reasons.)
16:05:37 danmcd: for 13230 the patch you posted looked good though, so there's hope, right? :D
16:20:10 It needs SO MUCH TESTING, but folks who've used it have noticed no more dups. My BIG concern is that it might break on other i40e parts, but NGL, 13230 is what the Linux i40e appears to use.
The test matrix to confirm 13230 is quite large, unfortunately
16:21:47 I have wondered if 13230 should only be applied (using device PCI ID and media information) to X722 using BaseT only.
16:27:44 Hi ^
16:28:40 danmcd: do you have adequate testing for your draft fix for http://illumos.org/issues/13230 on the X710? ("pciex8086,15ff"); I've got a system with a couple of them..
16:29:45 I ran it a while ago on X710-Base-T-equipped `curly` in Kebecloud, but it was a while ago.
16:30:55 I didn't see any problem, but I didn't exercise the crap out of it with, say, multicast. `curly` has both BHYVE and multiple-native zone types.
16:31:19 The draft fix is tiny: https://kebe.com/~danmcd/webrevs/13230-newtry/
16:32:42 You can see how (relatively) much is ifdef'ed out in that.
16:34:29 I also worry about what happens if, say, the GZ uses `i40e0` but there are VNICs on top of it too: if the GZ does something, will it affect the VNICs? And vice versa (that is what 13230 is all about at its beginning).
17:29:04 MelMalik: Thought of one more thing you could try if you still want to do things the hard way - a "zpool checkpoint" prior to the copy might make the copy more recoverable.
17:31:39 what I ended up doing, with the dd method but using a different OS in a livecd, ended up working
17:37:17 yeah, that would avoid updates to the disk while the copy is in flight, giving you a consistent snapshot
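A minimal sketch of the "zpool checkpoint" idea, assuming the copy is of the root pool; the rewind/import step would run on the machine holding the copy, and the pool and path names are placeholders.

```sh
# Take a pool-wide checkpoint before the raw copy; the on-disk state as of
# the checkpoint remains importable even if writes continue during the copy.
zpool checkpoint rpool

# ... perform the dd / raw copy of the underlying disk here ...

# Discard the checkpoint afterwards so it stops holding old blocks.
zpool checkpoint -d rpool

# On the copy, roll back to the checkpointed state at import time
# (per the zpool man page; names here are placeholders):
zpool import --rewind-to-checkpoint -d /backup rpool rpool_copy
```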
17:49:13 Sorry, I need some help to understand this: FreeBSD writes at almost 500 MBytes/s on a ZFS pool built with six Seagate Ironwolf 8TB SATA2 disks. On the same machine, OmniOS writes at 160 KBytes/s... how can I identify where the problem lies?
18:06:03 wow, I just discovered that the cause lies in the way I test write speed: I'm running the command 'dd if=/dev/random of=/export/test.im bs=1204 count=1024'... it looks like reading from /dev/random is around 120 KByte/s on this system with OmniOS!! :S
18:39:36 warden: the other thing to check is how the data is being written (or not); you may also find that data is being buffered in RAM, so initially the speeds look good.
18:41:11 thanks, can anyone tell me why /dev/random is so slow in generating random data compared to FreeBSD on the same hardware?
18:42:08 from memory, when I did some testing like that many years ago, I had a script that would create some example 'files' in RAM and then write them to disk with random dirpaths/names for a sustained period (so that you're trying to write several times the RAM capacity to disk) and watch the write speed.
18:43:08 check how random the sources are and what they do if there's not enough entropy. I don't know FreeBSD or illumos for that, but...
18:44:00 ... on Linux I know you have /dev/random and /dev/urandom. I forget which way around, but one is truly random and will block if there's not enough entropy
18:45:33 the other will give pseudo-random data; if there's not enough entropy it'll keep going, but the quality of the randomness may drop.
18:46:51 m1ari: thanks! In the meantime I did a test similar to yours: I saved a big file generated by FreeBSD's /dev/random to an OmniOS ramdisk, then ran 'time cp ...', writing to the same ZFS pool. The result is comparable with FreeBSD performance: around 480 MByte/s! :)
18:47:26 a freshly booted system may not have as much entropy as something that's been running a while; there may also be daemons that can help provide entropy (again, something I've looked at more on Linux than other systems)
18:48:06 how are your drives configured in the ZFS pool ?
18:48:08 warden: the original distinction between /dev/random and /dev/urandom was that the former was stronger "real" randomness and the latter was fast "pseudo" randomness. the distinction turns out to be harder to nail down than you'd like from a mathematical basis
18:48:33 but solaris and derivatives continue to make the distinction
18:49:55 You should use /dev/urandom for most things and save /dev/random for those times when you're generating a public key you want to last for the next decade...
18:49:56 I created the pool with this command: 'zpool create -O atime=off -O compression=on export raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0'
18:53:04 * danmcd catches up on #illumos and sees @sommerfeld answered the question faster. Just like the good old days! :D
18:53:23 s/#illumos/#omnios/g
18:54:07 I wonder if you've got some RAM caching going on as well; 500 MBytes/s from spinning rust in raidz sounds quite fast (it looks like those drives also have a lot of cache)
18:54:21 for the record, I'm using this SATA controller: ASM1166 (https://www.asmedia.com.tw/product/45aYq54sP8Qh7WH8/58dYQ8bxZ4UR9wG5), which I discovered does not behave properly at hotplug... you can only hot-swap disks that were connected at boot time, and only on channels lower than 5 (tested also with FreeBSD)! :S
18:55:35 m1ari: yes, the disks I'm using have 256 MB of cache
18:56:36 m1ari: sequential write to raidz should perform quite well (it should approach N-1 disks' worth of sequential performance)
19:01:04 I stand corrected; it's been a while since I had to do much drive-performance work, and I haven't got to play with ZFS at that scale as much (too many people still seem to prefer Linux)
19:06:20 warden: depending on your data, you might also want to consider raidz2 rather than raidz, especially if the drives are all about the same age. You may find that if one fails, another may go shortly after, and you wouldn't want that during resilvering.
19:48:19 m1ari: replicating your experiments on my system: /dev/urandom rate-limits me to ~64MB/s. file-to-file goes more like 500-1000MB/s as reported by dd for short durations; watch the disks with iostat and you'll see each one going at ~50-150MB/s for a bit.
19:48:40 I did run into this oddity from dd: 268435456 bytes (256 MiB) transferred in 0.250209 secs (??iB/sec)
20:04:10 aha, NN_NUMBUF_SZ is one character too small for that value.
20:05:11 268435456 in 0.250209s is ~1023MB/s
21:14:12 m1ari: I've a bit of experience with ZFS (max. 30 TB pools until now), and I definitely prefer FreeBSD to Linux where ZFS is concerned. Taking my first steps on illumos because I feel that OpenZFS is getting too "Linuxed"! ;)
23:55:13 warden: bad news: illumos.org links to openzfs :p
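To restate the benchmark pitfall discussed above as commands: generate the test data before timing anything, so the RNG is not the bottleneck, and watch the individual disks while the write runs. Paths and sizes are placeholders; this is a rough sketch, not a rigorous benchmark.

```sh
# Pre-generate random test data in tmpfs-backed /tmp so reading it is not
# the bottleneck (random data also defeats compression=on on the pool).
dd if=/dev/urandom of=/tmp/testdata bs=1024k count=2048

# Time the actual write to the pool; note that short runs can still be
# flattered by ARC/write buffering in RAM.
time cp /tmp/testdata /export/testfile

# In another terminal, watch per-disk throughput while the copy runs:
iostat -xnz 1
```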
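Since raidz2 was suggested above for six same-age disks, the pool-creation command from the conversation with double parity would look roughly like this (same device names; one more disk of capacity goes to parity, but the pool survives a second failure during a resilver):

```sh
zpool create -O atime=off -O compression=on export raidz2 \
    c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
```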