-
antranigv: andyf isn't that dangerous?
-
aeonjeffj: Any suggestions on how I could profile a 'zfs send' process (OmniOS) to see what the send is doing and where it is spending the most CPU time?
-
aeonjeffj: Being a Linux monkey and banging around in OmniOS, having not touched Solaris since the 1990s, is a hoot
-
aeonjeffj: Is there a way to push an OmniOS box to run processors at a higher or the highest p-state (clock speed)? I'm watching powertop while running a 'zfs send' that is very slow, and the cores never kick up beyond the lowest p-state (800 MHz). I've Googled my *ss off and I cannot find a command within OmniOS to pin a processor at a higher p-state. Linux has tuned-adm profiles as well as cpupower, and you can even write to the sysfs cpu entries.
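(For reference, these are the Linux-side knobs being alluded to, as a sketch of common commands; the exact profile and governor names depend on the distribution, and all of them need root:)

```
# tuned: apply a throughput-oriented profile
tuned-adm profile throughput-performance

# cpupower: pin the cpufreq governor to "performance"
cpupower frequency-set -g performance

# raw sysfs: set the governor per core
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```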
-
antranigv: aeonjeffj believe it or not, ChatGPT is pretty good. lemme ask GPT4
-
aeonjeffj: ChatGPT told me 'poweradm', which is a Solaris command that doesn't appear to exist in, or be an add-on package for, OmniOS or IllumOS
-
antranigv: same here
-
antranigv: and there's pooladm
-
antranigv: but not sure about that
-
antranigv: maybe others know better.
-
lgh127001: Hello! Yesterday's "guest72" and "guest28" here; I thought it'd be more consistent if I created an account. Sorry for asking some questions again. I tried to boot the SCSI image on the virtual machine we talked about yesterday, but unfortunately I ran into some weird behavior I couldn't track down. After turning on verbose mode and booting the
-
lgh127001: system normally (option 1 on the bootloader screen), instead of cloud-init initializing the instance, I got the x86_feature: list as normal, and then this was the last message, after which the system just hung with no further output for about an hour, until I killed it: "mem = 1042984K (0x3fa8a000)". Since there are no error messages, I couldn't
-
lgh127001: even form an idea of what could be going wrong. Does anyone have any idea of what could be done to figure out what's causing the problem? Thank you in advance!
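(One way to get more visibility into a silent hang like this on an illumos-based system is to boot with the kernel debugger loaded; a sketch, assuming the stock loader, whose exact syntax varies by release:)

```
# From the loader OK prompt, boot verbose with kmdb loaded:
boot -kv

# Or load kmdb and drop into it immediately at boot:
boot -kdv
```

Once the system hangs, the console abort sequence (F1-A on a local keyboard) should drop into kmdb, where `::stack` and `::msgbuf` can show where boot stalled.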
-
antranigv: hellllo
-
antranigv: how is everyone? :)
-
aeonjeffj: I asked a few questions here yesterday. I didn't get answers, but I eventually figured it out, so I'm posting the answers here in case others come looking. ZFS send/recv over a 25GbE link had terrible performance. It turns out the OmniOS kernel was controlling CPU p-states and keeping the clock speed at the lowest, 800 MHz. Removing that control by editing /etc/power.conf to change "cpupm" to "disable", and adding an /etc/system.d/zfs file with ZFS tuning that raises
-
aeonjeffj: zfs_vdev_(async/sync)_(read/write)_max to a higher number equal to the number of drives in the zpool, tripled ZFS send/recv performance while using mbuffer as a network data pipe. /etc/power.conf: cpupm to disable, cpu-threshold to 10s. /etc/system.d/zfs: set zfs:zfs_vdev_async_read_max_active = 0xf (repeat for the async/sync and read/write combinations)... and of course reboot for it to take effect.
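(Pulled together, the fix described above amounts to two files plus an mbuffer pipeline. A sketch based on the settings given in the message; the 0xf value should track your drive count, and the hostnames, port, and mbuffer buffer sizes below are illustrative assumptions, not from the original:)

```
# /etc/power.conf -- stop the kernel from managing CPU p-states
cpupm disable
cpu-threshold 10s

# /etc/system.d/zfs -- raise the per-vdev I/O queue depths
set zfs:zfs_vdev_async_read_max_active = 0xf
set zfs:zfs_vdev_async_write_max_active = 0xf
set zfs:zfs_vdev_sync_read_max_active = 0xf
set zfs:zfs_vdev_sync_write_max_active = 0xf

# reboot for both to take effect

# mbuffer pipeline -- start the receiver first:
mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup
# then on the sending host:
zfs send -R tank/data@snap | mbuffer -s 128k -m 1G -O recvhost:9090
```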
-
aeonjeffj: Apologies if any of that is rudimentary to anyone here. I Googled and used ChatGPT4 and could not find this information.
-
antranigv: aeonjeffj maybe you should blog about that :)
-
aeonjeffj: might
-
rmustacc: That feels like a surprising default and a surprising set of behavior, the cores being kept at the lowest p-state while in cpupm mode. Probably something we should dig into a bit more.