04:35:35 <_Random> I hope you guys are not offended, may I ask for some support on a TrueNAS Core dashboard issue. I'm new to FreeBSD, coming from Linux; I hope you guys who have experience with TrueNAS can help me, please!
04:35:50 <_Random> the disk "/dev/da0" is present. I've created it with gpart using the command "gpart add -f freebsd-zfs /dev/da0", but it's not showing up under disks in the dashboard. https://dpaste.org/eTtci Are you able to suggest why this may be the case, and a possible solution? Thanks kindly!
04:35:51 Title: dpaste/eTtci (Python)
04:50:26 _Random: does the dashboard have logs?
04:54:15 can someone suggest a way to performance-test my connection between two hosts? I've got a zfs send | receive that's been running for a week or more
04:58:36 crb, maybe use iperf3 on both hosts to test the network speed?
05:00:19 ok, so that seems to show 8.5 Gb/s
05:01:39 when ZFS prints out a line for a send/receive: "21:59:38 188K rz2_pool/homes@2023_04_21", that means 188K transferred per second, right? That would be like 1 mbit/sec. WTF?
05:09:05 <_Random> meena: TrueNAS Core, iXsystems R C 2023
05:13:59 <_xor> RhodiumToad: https://github.com/JosephLai241/nomad
05:14:01 Title: GitHub - JosephLai241/nomad: 🌳 The customizable next gen tree command with Git integration and TUI.
05:14:04 <_xor> RhodiumToad: Any interest in using that?
05:21:40 _Random: that's not a log, that's an OS version
05:23:14 _Random: the dashboard is a (web) application, and it has to have logs. You should look at those logs to figure out what it's doing (wrong)
05:25:39 <_Random> I just rebooted the machines & swapped the drives over; now all 3 are showing in the dashboard. Sorry to trouble you, but your help is appreciated! :)
05:26:53 crb: yeah, that looks a bit slow
05:27:31 over ssh? ssh -C? nc?
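The iperf3 suggestion above looks like this in practice (a sketch; it assumes the benchmarks/iperf3 package is installed on both machines, and "eclipse" is just the hostname used later in this conversation):

```sh
# On the receiving host, start an iperf3 server:
iperf3 -s

# On the sending host, run a 10-second throughput test against it:
iperf3 -c eclipse -t 10

# Useful variations: -R tests the reverse direction,
# -P 4 runs four parallel streams.
```

The 8.5 Gb/s figure quoted above is what the client side of such a run reports, which rules out the raw network as the bottleneck.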
05:28:08 rtprio, just plain ssh
05:29:09 zfs send -v rz2_pool/homes@2023_04_21 | ssh crb@eclipse recv -uv rz2_pool/homes
05:29:31 when did bmake add support for the != shell macro assignment operator?
05:34:44 preyalone: that feels like it should have been long, long ago
06:19:02 crb: perhaps sysutils/pv might help confirm the speed of the copy. You will have to do some math to feed pv options to let it estimate better.
06:20:50 crb: also, `ssh crb@eclipse recv -uv rz2_pool/homes` doesn't appear to include a zfs command.
06:22:06 ghoti: you're correct, but killing it and adding zfs before recv doesn't seem to change the output or rate
06:24:26 I would expect a rate of 0 if there is no valid process to receive data. In a pinch, perhaps try `zfs send foo/bar | ssh crb@eclipse "cat > zfsrecvtemp"`, then see if you get data in that file.
06:27:28 ghoti: so that last one was interesting, I don't see the file zfsrecvtemp even being created on the destination
06:29:18 Can you at least `ssh crb@eclipse hostname`?
06:29:53 yep, I get the right hostname back
06:30:15 but that was very helpful
06:31:04 it waits for the password, whereas since ZFS was printing that line continuously every second, I thought when the password prompt came up it was picking it up from my .ssh or something
06:31:14 Thank you, that helps a LOT!
06:31:21 woot!
06:34:14 crb: to make things easier for this task without setting up keys, you might want to add this to your ~/.ssh/config: http://sprunge.us/wkVCrZ
06:34:31 ps -A -U joe9 -o start,time,etime,command | sed -ne '1p;/[f]irefox/p'
06:34:35 (and of course, make the directory too)
06:34:37 ops
06:35:05 ghoti, thank you. Now I just need to understand why it won't accept the snapshot
13:44:23 I have been googling the heck out of getting Intel Quick Sync firing on my server for ffmpeg use. Not sure anything I am finding is current info... maybe someone here has clues...
13:45:41 Have the drm-kmod package installed.
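The pv idea can be sketched as follows (a sketch, not a prescription; it assumes sysutils/pv on the sending host, and the 100G size is a placeholder you would take from the dry run):

```sh
# First ask zfs how big the stream will be (-n dry run, -P parseable):
zfs send -nvP rz2_pool/homes@2023_04_21

# Then insert pv into the real pipeline, passing that size via -s so
# pv can show throughput, percent complete and an ETA:
zfs send rz2_pool/homes@2023_04_21 \
  | pv -s 100G \
  | ssh crb@eclipse zfs recv -uv rz2_pool/homes
```

Even without -s, pv still displays the instantaneous transfer rate, which is enough to tell 1 Mbit/s from 8 Gbit/s.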
The i915kms kernel module is being loaded... and I find nothing in /dev representing it. ffmpeg compiled from ports with the Quick Sync bits enabled. It fails to locate a device when run with hwaccel.
13:48:09 New server upgrade, going from an ancient Xeon system to an i7 9700K. Figured it'd be handy putting that iGPU to use. It's supposed to be pretty capable in 9th-gen Core processors.
13:48:47 does the driver attach (check dmesg)? is your user in the 'video' group?
14:07:42 User is in video group, lemme check on dmesg
14:09:24 I do not see anything video related in that output.
14:10:11 Module being loaded but isn't happy, I guess.
14:13:06 if you do pciconf -lv and look for the video device, what do you see?
14:20:53 It's in there.
14:21:11 is the driver listed as none@, or...?
14:21:26 and what's the PCI IDs, and what FreeBSD version?
14:21:37 https://pastebin.com/Ktrq3U0S
14:21:39 Title: vgapci0@pci0:0:2:0: class=0x030000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x - Pastebin.com
14:22:10 Vga
14:23:13 I'm on 13.2
14:23:16 and which driver exactly is loaded?
14:24:16 i915kms.ko
14:24:37 kldstat -v will show the full path
14:25:43 also the version of the drm-kmod packages?
14:27:31 lcraft@persephone /u/h/lcraft> kldstat -v | grep -i 915
14:27:31 8 1 0xffffffff83214000 1858b8 i915kms.ko (/boot/modules/i915kms.ko)
14:27:31 520 i915kms
14:28:39 lcraft@persephone /u/h/lcraft [1]> pkg info drm-kmod
14:28:40 drm-kmod-20220907_1
14:30:11 there should be another drm-* package
14:30:23 and also a gpu-firmware* package
14:31:48 Don't have those. This is Kaby Lake
14:31:56 Kaby Lake? I think
14:31:59 gpu-firmware-intel-kmod-kabylake-20230210_1 Firmware modules for kabylake Intel GPUs
14:32:25 drm-kmod should have installed them
14:32:37 pkg info drm\* says what?
14:33:21 Pkg said the gpu package is already installed.
14:34:11 pkg info drm\* says what?
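For context, the usual FreeBSD 13.x setup being verified in this exchange is roughly the following (a sketch; the group line assumes the user is named lcraft, as in the pastes above):

```sh
# drm-kmod is a metaport that pulls in the matching drm-*-kmod
# package and, on Intel, a gpu-firmware-intel-kmod-* package:
pkg install drm-kmod

# Load the Intel KMS driver at boot by appending it to kld_list
# in /etc/rc.conf:
sysrc kld_list+="i915kms"

# Allow a non-root user to open the /dev/dri nodes:
pw groupmod video -m lcraft
```

After a reboot (or `kldload i915kms`), a working setup shows drm attach messages in dmesg and creates /dev/dri/card0 and /dev/dri/renderD128.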
14:34:38 Ah, there is another one
14:34:41 drm-510-kmod-5.10.163_7
14:34:42 drm-kmod-20220907_1
14:36:18 ok, well the code definitely seems to have an entry for your PCI ID
14:37:09 Shouldn't that be generating an entry in /dev then?
14:38:21 were there any messages from the module when it loaded?
14:38:35 and have you tried booting in verbose mode?
14:39:30 I have not tried verbose mode. It runs headless.
14:40:16 So I haven't seen if it's popping up a notice when the kld loads; I keep reading that happens.
14:40:36 are you loading it in kld_list in rc.conf?
14:40:47 Yes
14:40:53 any messages would be in dmesg or /var/run/dmesg.boot
14:45:00 It is showing vgapci0 in there
14:45:06 https://pastebin.com/2Guyruik
14:45:07 Title: ---<>---Copyright (c) 1992-2021 The FreeBSD Project.Copyright (c) 1979 - Pastebin.com
14:46:44 that log seems to show the driver loaded ok
14:47:23 what's in /dev?
14:52:41 https://pastebin.com/t1EEqDt8
14:52:42 Title: lcraft@persephone /dev> ls -ltotal 6crw-rw-r-- 1 root operator 0x2d May - Pastebin.com
14:53:52 anything in /dev/dri or /dev/drm?
14:55:33 ./dri shows a card0 and a renderD128
14:55:50 there you go then
14:57:26 I must be missing the memo on how to access that device with software, then. But this confirms I am having an ffmpeg problem, not a FreeBSD problem.
14:58:14 More likely a between-keyboard-and-chair issue.
14:58:28 right. I have less experience with that; I only use GPUs for actual displays rather than other software use
14:59:05 but the existence of that device, and the log messages from boot, confirm that the drm driver has recognized the GPU and attached to it
15:02:55 This is the first time I've had any use for a GPU on a server, or had a GPU up
15:03:26 Worth using
15:04:06 the devices should be accessible to the video group, btw, and the user accessing them obviously also needs to be in that group (or root)
15:06:53 I tried TrueOS for a while a long time ago.
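With /dev/dri/renderD128 present, ffmpeg can be pointed at the render node explicitly. A sketch, assuming ffmpeg was built with VA-API support and using placeholder input/output file names:

```sh
# Sanity-check the render node first (multimedia/libva-utils):
vainfo --display drm --device /dev/dri/renderD128

# Hardware-accelerated H.264 transcode via VA-API on that node:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i input.mp4 -c:v h264_vaapi output.mp4
```

If ffmpeg reports that it cannot find a device, the usual suspects are the device path not being passed (as above) or the invoking user lacking membership in the video group.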
Not having Lightroom and Photoshop was a deal breaker for it. Trying to run those under bhyve was painful.
17:36:45 <_xor> Does x11/nvidia-driver not build the drm module yet?
17:37:22 huh?
17:37:29 <_xor> nvidia-drm.ko
17:37:53 dunno
17:38:25 <_xor> 525.xx was the first version to add drm support on FreeBSD for the nvidia driver.
17:38:54 <_xor> I built it previously based on https://github.com/amshafer/nvidia-driver and it did work (though it was a bit flaky)
17:38:55 Title: GitHub - amshafer/nvidia-driver: Fork of the Nvidia FreeBSD driver to port the nvidia-drm.ko module from Linux
17:39:52 <_xor> 525 is in main now, so I wanted to build it, but no nvidia-drm.ko was produced, and the Wayland DE I was trying to start failed with the same message it had before when it couldn't use drm.
17:40:43 <_xor> Before assuming that the main port hasn't yet been updated to produce nvidia-drm.ko (didn't see it in OPTIONS), I wanted to make sure it wasn't something on my end, or whether it was combined with nvidia.ko or something.
17:42:24 <_xor> Oooh
17:42:36 * _xor just noticed the recent commits
17:43:07 <_xor> Though I don't see anything related to the drm kmod :/
21:48:13 I have a zfs pool on Debian with some features that FreeBSD doesn't have yet
21:48:22 can I still zfs send/receive it, without those features?
21:56:50 "zfs-recv" may show a warning; never tried that. The "dry run" option of zfs-recv is also useless
21:58:53 There was either a PR or a comment in bugzilla about the uselessness, IIRC
22:08:21 * parv remembers the context: "zfs recv -n" could not show what would have been done due to lack of existing child(ren) dataset(s)
22:18:02 last1: i'm going to go with "i doubt it"
22:28:15 I have two 13.1 systems with ZFS. On one, when I type zfs list I get a list of all filesystems; on the other I get filesystems AND snapshots. What am I missing? Is there a zfs property or environment variable that is set?
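On the send/receive question: compatibility generally depends on which feature flags the stream itself uses, not on everything enabled on the sending pool, and a plain `zfs send` without -L, -e, -c or -w avoids most optional stream features. One way to inspect both sides (a sketch; "tank" and "freebsd-host" are placeholders):

```sh
# Feature flags and their state on a pool; features that are
# "enabled" but not "active" are not in use on disk:
zpool get all tank | grep feature@

# Dry-run the receive on the FreeBSD side; an incompatible stream
# should be rejected without writing anything:
zfs send tank/data@snap | ssh freebsd-host zfs recv -nv tank/data
```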
22:30:23 i wasn't aware zfs-list reacted to environment variables
22:30:29 crb, you have set the "make snapshots generally visible" option
22:30:46 parv, ah thank you!
22:31:39 where is that setting? on the pool?
22:32:44 Check the dataset "snapdir" option
22:34:14 Also, zfsprops(8)
22:34:47 so if it's not hidden, it reveals .zfs/snapshot and clutters up zfs-list with them?
22:35:57 Nope, "snap(dir|dev)=visible" did not cause snapshots to appear in "zfs list" output
22:36:11 ... on 13/stable
22:37:24 crb, all I could suggest is to compare all the properties to find the difference
22:37:42 it's probably snapdir
22:38:09 That did not work for me
22:38:21 oh, ok
22:39:40 crb, try: for pool in one other; do zfs get all "$pool" >"$pool.opt"; done; diff -u one.opt other.opt
22:40:26 What about snapdev
22:41:12 See my comment of 5 minutes ago. Also, you could try it yourself
22:43:45 crb, are you sure that you did not use the "-t snapshot" option?
22:43:54 parv yes
22:44:01 Ok
23:02:59 Rebooting 13-stable after setting "snap(dir|dev)=visible" also did not cause plain "zfs list" to list snapshots
23:41:40 rtprio: would it be just a matter of disabling the unsupported feature on the debian side?
23:41:51 or the fact that that zfs has other features would make it not work?
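For the record, the behavior crb describes matches the pool-level `listsnapshots` property rather than the dataset-level snapdir/snapdev properties tried above; when it is on, plain `zfs list` includes snapshot information. A quick check ("tank" is a placeholder pool name):

```sh
# Show the current setting:
zpool get listsnapshots tank

# Restore the default (snapshots hidden from plain "zfs list"):
zpool set listsnapshots=off tank

# Snapshots can always be listed explicitly, regardless:
zfs list -t snapshot
```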