01:29:25 it was indeed an issue with 2TB+, maybe the new data was in the 2TB+ area 01:29:39 user decided to add a small NVMe for boot. 01:29:46 tsoome_ thank you very much for your help. 01:30:02 I'm acting as a bridge between discord and IRC questions and knowledge bases :)) 01:33:37 i thought a bridge displayed something in brackets on each message. e.g. [Discord user #123] hello there 07:11:08 FYI there seems to be a public log of this channel. I'll not disclose the URL. 07:19:56 how to make sure if the gpu driver is loaded or not? 07:33:44 ghodawalaaman, what's the GPU? 07:34:11 grahamperrin: it's skylake (in-built GPU) 08:02:25 ghodawalaaman, kldstat | grep i915kms 08:12:20 grahamperrin: it returned nothing 08:13:11 here is the kldstat output: https://termbin.com/ud89 08:14:22 Which version of FreeBSD, exactly? Packages of ports from latest, or quarterly? 08:15:03 it's 14.2-RELEASE 08:15:11 freebsd-version -kru ; uname -aKU ; pkg -vv | grep -B 1 -e url -e priority 08:16:53 https://termbin.com/vwvg 08:21:26 OK, AMD64. 08:21:28 pkg query '%o %v %At:%Av' drm-61-kmod 08:24:57 I have no idea how new or old Skylake is, I guess it's Intel 08:25:23 that command returned nothing 08:25:27 antranigv you are welcome. 08:25:27 yes it's intel 08:25:41 pciconf -lv | grep -B 3 -A 1 display 08:26:37 https://termbin.com/ap6xl 08:26:39 a 08:30:52 worked with i915 with 14.0-RELEASE-p5, so 08:31:08 you should install two things: 08:31:22 a) drm-kmod 08:31:36 then 08:34:03 I have installed drm-kmod, it was pretty big package 08:34:27 I also added kld_list="i915kms" in /etc/rc.conf 08:34:43 pkg add https://pkg.freebsd.org/FreeBSD:14:amd64/kmods_quarterly_2/All/drm-61-kmod-6.1.92.1402000_3.pkg 08:36:04 the most recent version of drm-61-kmod-6.1.92 is already installed 08:36:17 pkg add -f https://pkg.freebsd.org/FreeBSD:14:amd64/kmods_quarterly_2/All/drm-61-kmod-6.1.92.1402000_3.pkg 08:37:10 done 08:37:57 Now restart the OS, hopefully you'll see a flicker when the module loads. 
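[Editor's note] The driver-loading exchange above can be condensed into a short sequence. This is a sketch, not the channel's exact recipe: i915kms is the right module for Skylake integrated graphics, but the exact drm-*-kmod version pulled in depends on the FreeBSD branch and package set.

```shell
# Sketch of the i915 setup discussed above (FreeBSD-specific, not portable).
pkg install drm-kmod          # metapackage that pulls in the matching drm-*-kmod
sysrc kld_list+="i915kms"     # persist the module load in /etc/rc.conf
kldload i915kms               # or load it now; the screen may flicker briefly
kldstat | grep i915kms        # verify the module is actually loaded
```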
With or without a flicker: 08:38:17 kldstat | grep i915kms 08:38:27 ohk, thanks 08:38:37 BBL, maybe. Bye. 11:34:08 Hello, I want to just add a message saying "make sure GPU driver is correctly installed" in sway/wayland section. how would I edit the handbook? 11:38:31 also this link on the handbook is broken (https://github.com/freebsd/freebsd-doc/blob/main/documentation/content/en/_index) 11:39:44 I have to make a Github account to contribute to the documentation. 11:50:28 ghodawalaaman: you don't have to do that, you can create a bugzilla account and send patches/issues there 12:02:56 Folks, I have a iSCSI server (NetApp), I am the iSCSI client (FreeBSD), the block device is formatted as EXT4. I can connect to iSCSI, I see the device, I can mount EXT4, but it's VERYYYYYYY SLOWWWW. Thoughts? 12:07:42 dch: sending patches this way is also new to me but I will try doing this way 12:08:11 antranigv: you're doing it via fuse I guess? 12:08:27 ghodawalaaman: no worries, feel free to ask here for help any time 12:08:29 dch mount -t ext2fs. I think that's *not* fuse. 12:08:36 aah yes 12:08:39 ghodawalaaman thank you for contributing. 12:09:53 antranigv: I would compare throughput of unmounted block device with dd .. vs mounted and look a bit at tcpdump in case there's any stupid MTU or similar things obviously not right 12:10:07 but I guess its probably just not a very optimised implementation on freebsd 12:10:58 dch I'll have a look with unmounted dd if=/dev/mydevice right now. If that's the case then I can copy the iscsi over dd, and then boot a linux vm for the ext parts 12:11:10 thanks for the tip uncle dch <3 12:11:14 ;-) 12:11:34 can you just pass the iscsi device through to a linux vm directly? 12:28:05 why is data deep-copied when you move it from one data set to another on the same pool? 12:38:44 with dd I'm getting 10MB/s, but I have a 10Gbps link, and iperf shows 9.9Gbps and disk in dmesg shows 150MB/s for the speed. 
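[Editor's note] dch's suggestion to benchmark the raw block device before blaming the filesystem can be sketched as below. On the real system you would read the iSCSI LUN directly (e.g. /dev/da1, a placeholder name); a temporary file stands in here so the commands run anywhere. The point of the two reads is that block size matters a great deal over iSCSI.

```shell
# Portable sketch of the raw-vs-filesystem dd comparison discussed above.
# /tmp/iscsi_stand_in substitutes for the iSCSI device node.
dd if=/dev/zero of=/tmp/iscsi_stand_in bs=1048576 count=64 2>/dev/null
dd if=/tmp/iscsi_stand_in of=/dev/null bs=512 2>&1 | tail -1       # small blocks: slow
dd if=/tmp/iscsi_stand_in of=/dev/null bs=1048576 2>&1 | tail -1   # large blocks: fast
rm /tmp/iscsi_stand_in
```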
12:38:54 faster than EXT4 mount, slower than what's needed. 12:39:37 jemius because datasets are indeed different filesystems. they start at inode 0, etc, basically a different filesystem indeed. 12:42:00 makes sense 13:06:40 https://github.com/freebsd/freebsd-doc/pull/443 13:06:51 could someone verify this pull request? 13:07:05 I have just made the pull request for adding a warning while installing sway 13:09:03 antranigv: 150MB/s is not far off a single NAS-level SATA disk throughput. whats the backing vdevs on the netapp? 13:16:46 so is it common practice on FreeBSD/ZFS to give each user his own dataset as home directory? I guess it's handy for example for limiting maximum homedir size 13:20:44 jemius: very common, not just for quotas. some users have multiple datasets, with different retention or replication (snapshot) policies, or different performance / tuning settings 13:29:45 I see. 13:29:47 Defragmentation is not a thing on ZFS, is it? 13:41:37 zfs datasets are awesome 15:24:39 if i have zroot on mirror and physically remove one drive before powering up, will it boot normally? 15:25:01 (both drives have the bootcode) 15:25:13 futune: yes, as long as your boot environment (EFI partitions, etc.) are redundant -- FreeBSD doesn't set that up automatically 15:25:19 the pool will just import in a degraded state 15:26:28 hmm, that is true, and they are, but there is also a non zfs geom mirrored swap partition 15:27:03 will that assemble in a degraded state too? or halt the boot in some way if it fails? 15:28:05 and as for the zfs part, if i then power down afterwards, reconnect the drive, and power up, will it be easy to resilver it again?
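[Editor's note] The per-user-dataset pattern described above might look like this. A sketch only: the pool name, user name, and property choices are made-up examples of the per-dataset quota and tuning the channel mentions.

```shell
# Sketch: one ZFS dataset per user, each with its own quota (names are examples).
zfs create -p zroot/usr/home/alice
zfs set quota=20G zroot/usr/home/alice
zfs set compression=zstd zroot/usr/home/alice    # per-dataset tuning, as noted above
zfs list -o name,quota,used -r zroot/usr/home    # review the per-user limits
```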
15:28:48 i would strongly recommend not putting zfs on a geom mirror, because geom doesn't understand checksums, which means ZFS cannot fix any data corruption in the mirror 15:28:59 no, no, zfs is not on a geom mirror 15:29:11 the swap partition is, because i dont want swap on zfs 15:29:12 ah, i see what you mean 15:29:39 yes, as far as i know, gmirror will still come up if one side is missing, although i've never actually tried that -- so i don't know how you recover that off hand 15:30:05 (from what i remember though, you just use a gmirror command to overwrite one side of the mirror with the other) 15:30:30 that seems fine, as long as i can recover - its just swap, no permanent data 15:31:04 right, mirrored swap is just about having the system not crash if a root disk dies, but it should still boot if a disk is missing 15:31:28 i have done this extensively on Solaris but sadly not so much on FreeBSD 15:32:33 yeah exactly 15:33:27 i am mainly worried about the zfs resilver after putting the drive back 15:33:54 that should be pretty straightforward, resilver does not interrupt service at all other than using a few iops 15:34:07 awesome, thank you ivy 15:34:50 although, "a few iops" can actually be quite a hit to performance -- if that's a concern there are sysctls you can tune to adjust how fast the resilver runs 15:35:28 if it's just the root being slow it should be fine for me 15:36:35 this whole adventure is actually about replacing disks in a much larger array, but I don't have enough ports, and I don't want to temporarily degrade the critical data 15:38:31 dch I am okay with the iSCSI showing itself as 150MB/s, but I still get 10MB/s when actually doing dd :D 16:17:48 antranigv: maaybe check block size of dd vs whatever the network can support? 16:17:53 sounds like a mess anyway 16:19:58 if you set copies > 1, does the file system then ensure that the redundant blocks are spread across the partition? 
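[Editor's note] ivy's from-memory recovery procedure for the degraded gmirror, plus the ZFS resilver step, might look roughly like this. Device names, the mirror name "swap", and partition indices are placeholders; check gmirror(8) and zpool-online(8) before trusting the details.

```shell
# Sketch: bring the reattached disk back into both mirrors (names are examples).
gmirror forget swap            # drop the stale record of the component that was absent
gmirror insert swap ada1p2     # rebuild the swap mirror onto the returned disk
zpool online zroot ada1p3      # ZFS resilvers the mirror member automatically
zpool status zroot             # watch resilver progress; tunable sysctls limit its iops
```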
I suppose it does, because having them next to each other would be very dumb 16:20:01 dch indeed. we're moving the NetApp to the other supercomputer datacenter, so I need to take my files off. My FreeBSD and NetApp are 1 km apart, but the fiber has been giving us stable ~10Gbps. 16:39:14 jemius: as far as i know, yes, but it's not as useful as you would think -- if you have two striped vdevs with copies=2, and one breaks, you can no longer import the pool at all 16:39:39 or in other words, copies is not a way to just mirror certain filesystems 16:40:35 ivy, what does "two striped vdevs" mean? 16:43:08 antranigv: I expect its the latency then. can you try the perf at the local DC for comparison? 16:43:35 jemius: nobody relies on copies>1 in practice. best to use mirrors or raidz combos, drives tend to fail in unexpected ways 16:44:15 I had bit errors on certain sectors on a dying drive once. 16:44:22 Anyways, why provide a feature that is not useful 16:44:40 jemius: a vdev is the lowest level of zfs building blocks - it can be a physical block device like a disk partition 16:45:13 you can stripe 2 vdevs together, which yields a faster vdev but with less redundancy 16:45:33 you can mirror 2 vdevs, which results in redundancy but slower writes, and faster reads 16:45:35 etc etc 16:47:32 You're simply talking about how you can organize your pool. A RAID of mirrors and so on 16:48:56 jemius: the feature is useful, just only in very specific circumstances -- it's intended for the situation where you have, say, a laptop with a single disk, and you want to protect against some data being corrupted by bad blocks 16:49:36 "two striped vdevs" means a zfs pool configured as a stripe across two disks (akin to a raid0) -- some people think they can use copies=2 in this situation to provide redundancy, but you cannot 16:50:38 Ah. No no, I'm not doing that.
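[Editor's note] The copies property discussed above is set per dataset; the dataset name below is an example. As ivy says, it guards against bad blocks on a single disk, not against losing a whole vdev.

```shell
# Sketch: keep two copies of every block of this dataset (single-disk laptop case).
zfs set copies=2 zroot/usr/home
zfs get copies zroot/usr/home    # confirm the property; applies to new writes only
```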
I have a 2-mirror 17:21:19 I have four MBR partitions, in the first one a 36G-sized FreeBSD partition (created with Fedora's fdisk), then I created the BSD partition scheme in that partition with "gpart create -s bsd" but when I try to add the slices with "gpart -t freebsd -s 10G" I get "gpart: invalid argument", why? 17:32:10 asarch: what is the exact command that fails ? 17:36:25 Well, it is: "gpart -t freebsd -s 10G" but it seems because I did not add the MBR bootcode to the partition with "gpart bootcode -b /boot/mbr" 17:36:34 ...first 17:37:43 how would gpart know where to add this slice ? 17:38:10 moreover there's no type "freebsd" 17:38:53 asarch: take a look at this nice gpart primer by W. Block http://www.wonkity.com/~wblock/docs/html/disksetup.html 17:38:55 I mean, freebsd-ufs 17:39:41 "gpart -t freebsd -s 10G" - the syntax is wrong, please take a look at gpart(8) 18:04:49 Bingo! 18:04:54 Thank you! 18:04:56 asarch: I would not normally use BSD partition type these days, just `-s gpt` 18:05:02 Thank you very much! :-) 18:05:19 I know, but actually it is an old PC 18:05:19 there's nothing wrong with it per se, just gpt is basically standard on all OS now 18:05:26 makes sense 18:05:28 With no UEFI support at all 18:06:26 asarch: UEFI and GPT partition types are IIRC orthogonal, but if it's a very old box then that makes sense 18:17:20 A very old HP box 18:17:28 It is still alive 18:18:01 yes, using GPT if you have no MS Windows there will be best, both FreeBSD and Linux will boot fine 18:27:40 but if the disk has already MBR layout and place for one primary partition for FreeBSD, they follow old MBR/BSD partitioning scheme 18:27:56 s/partition/slice
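[Editor's note] For reference, the failing command above was missing both the `add` verb and a valid geom/type argument. A working version of the sequence being attempted might look like this; ada0 and ada0s1 are placeholder names for the disk and the MBR slice.

```shell
# Sketch: BSD label inside an MBR slice, then a 10G UFS partition (see gpart(8)).
gpart create -s bsd ada0s1             # BSD partitioning scheme inside the slice
gpart add -t freebsd-ufs -s 10G ada0s1 # the piece that was missing: "add" + geom
gpart bootcode -b /boot/mbr ada0       # MBR bootcode goes on the disk, not the slice
```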