-
antranigv
it was indeed an issue with 2TB+, maybe the new data was in the 2TB+ area
-
antranigv
user decided to add a small NVMe for boot.
-
antranigv
tsoome_ thank you very much for your help.
-
antranigv
I'm acting as a bridge between discord and IRC questions and knowledge bases :))
-
johnjaye
i thought a bridge displayed something in brackets on each message. e.g. [Discord user #123] hello there
-
grahamperrin
FYI there seems to be a public log of this channel. I'll not disclose the URL.
-
ghodawalaaman
how to make sure if the gpu driver is loaded or not?
-
grahamperrin
ghodawalaaman, what's the GPU?
-
ghodawalaaman
grahamperrin: it's skylake (in-built GPU)
-
grahamperrin
ghodawalaaman, kldstat | grep i915kms
-
ghodawalaaman
grahamperrin: it returned nothing
-
ghodawalaaman
here is the kldstat output:
termbin.com/ud89
-
grahamperrin
Which version of FreeBSD, exactly? Packages of ports from latest, or quarterly?
-
ghodawalaaman
it's 14.2-RELEASE
-
grahamperrin
freebsd-version -kru ; uname -aKU ; pkg -vv | grep -B 1 -e url -e priority
-
grahamperrin
OK, AMD64.
-
grahamperrin
pkg query '%o %v %At:%Av' drm-61-kmod
-
grahamperrin
I have no idea how new or old Skylake is, I guess it's Intel
-
ghodawalaaman
that command returned nothing
-
tsoome_
antranigv you are welcome.
-
ghodawalaaman
yes it's intel
-
grahamperrin
pciconf -lv | grep -B 3 -A 1 display
-
grahamperrin
you should install two things:
-
grahamperrin
a) drm-kmod
-
grahamperrin
then
-
ghodawalaaman
I have installed drm-kmod, it was pretty big package
-
ghodawalaaman
I also added kld_list="i915kms" in /etc/rc.conf
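
(A minimal sketch of the whole sequence discussed here, assuming the drm-kmod package is already installed:)

```shell
# Enable the Intel DRM driver and verify it loaded.
# Assumes drm-kmod is installed (pkg install drm-kmod).
sysrc kld_list+="i915kms"      # persist the module in /etc/rc.conf
kldload i915kms                # load it now (run as root)
kldstat | grep i915kms         # non-empty output means the driver is loaded
```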
-
ghodawalaaman
the most recent version of drm-61-kmod-6.1.92 is already installed
-
ghodawalaaman
done
-
grahamperrin
Now restart the OS, hopefully you'll see a flicker when the module loads. With or without a flicker:
-
grahamperrin
kldstat | grep i915kms
-
ghodawalaaman
ohk, thanks
-
grahamperrin
BBL, maybe. Bye.
-
ghodawalaaman
Hello, I just want to add a message saying "make sure the GPU driver is correctly installed" in the sway/wayland section. how would I edit the handbook?
-
ghodawalaaman
I have to make a Github account to contribute to the documentation.
-
dch
ghodawalaaman: you don't have to do that, you can create a bugzilla account and send patches/issues there
-
antranigv
Folks, I have a iSCSI server (NetApp), I am the iSCSI client (FreeBSD), the block device is formatted as EXT4. I can connect to iSCSI, I see the device, I can mount EXT4, but it's VERYYYYYYY SLOWWWW. Thoughts?
-
ghodawalaaman
dch: sending patches this way is also new to me but I will try doing this way
-
dch
antranigv: you're doing it via fuse I guess?
-
dch
ghodawalaaman: no worries, feel free to ask here for help any time
-
antranigv
dch mount -t ext2fs. I think that's *not* fuse.
-
dch
aah yes
-
antranigv
ghodawalaaman thank you for contributing.
-
dch
antranigv: I would compare throughput of the unmounted block device with dd vs. mounted, and look a bit at tcpdump in case there's a stupid MTU issue or something similar obviously not right
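
(A sketch of the comparison dch suggests; device names da1, ix0 and the mount point are hypothetical:)

```shell
# 1. Raw read from the unmounted iSCSI block device.
dd if=/dev/da1 of=/dev/null bs=1m count=1024
# 2. Read a large file through the ext2fs mount for comparison.
dd if=/mnt/netapp/bigfile of=/dev/null bs=1m
# 3. Watch the iSCSI traffic (TCP port 3260) for retransmits or fragmentation.
tcpdump -i ix0 -c 200 'port 3260'
```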
-
dch
but I guess it's probably just not a very optimised implementation on freebsd
-
antranigv
dch I'll have a look with unmounted dd if=/dev/mydevice right now. If that's the case then I can copy the iscsi over dd, and then boot a linux vm for the ext parts
-
antranigv
thanks for the tip uncle dch <3
-
dch
;-)
-
dch
can you just pass the iscsi device through to a linux vm directly?
-
jemius
why is data deep-copied when you move it from one data set to another on the same pool?
-
antranigv
with dd I'm getting 10MB/s, but I have a 10Gbps link, and iperf shows 9.9Gbps and disk in dmesg shows 150MB/s for the speed.
-
antranigv
faster than EXT4 mount, slower than what's needed.
-
antranigv
jemius because datasets are indeed different filesystems. they start at inode 0, etc; basically each dataset is a separate filesystem.
-
jemius
makes sense
-
ghodawalaaman
could someone verify this pull request?
-
ghodawalaaman
I have just made the pull request for adding a warning while installing sway
-
dch
antranigv: 150MB/s is not far off a single NAS-level SATA disk's throughput. what are the backing vdevs on the NetApp?
-
jemius
so is it common practice on FreeBSD/ZFS to give each user their own dataset as a home directory? I guess it's handy, for example, for limiting maximum homedir size
-
dch
jemius: very common, not just for quotas. some users have multiple datasets, with different retention or replication (snapshot) policies, or different performance / tuning settings
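
(A sketch of the per-user layout dch describes; pool and user names are hypothetical:)

```shell
# One dataset per home directory, each with its own quota and settings.
zfs create -o quota=20G zroot/home/alice
zfs create -o quota=50G -o compression=zstd zroot/home/bob
# Each dataset can then get its own snapshot/replication policy:
zfs snapshot zroot/home/alice@before-upgrade
zfs get quota,compression zroot/home/alice zroot/home/bob
```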
-
jemius
I see.
-
jemius
Defragmentation is not a thing on ZFS, is it?
-
ketas
zfs datasets are awesome
-
futune
if i have zroot on mirror and physically remove one drive before powering up, will it boot normally?
-
futune
(both drives have the bootcode)
-
ivy
futune: yes, as long as your boot environment (EFI partitions, etc.) is redundant -- FreeBSD doesn't set that up automatically
-
ivy
the pool will just import in a degraded state
-
futune
hmm, that is true, and they are, but there is also a non zfs geom mirrored swap partition
-
futune
will that assemble in a degraded state too? or halt the boot in some way if it fails?
-
futune
and as for the zfs part, if i then power down afterwards, reconnect the drive, and power up, will it be easy to resilver it again?
-
ivy
i would strongly recommend not putting zfs on a geom mirror, because geom doesn't understand checksums, which means ZFS cannot fix any data corruption in the mirror
-
futune
no, no, zfs is not on a geom mirror
-
futune
the swap partition is, because i dont want swap on zfs
-
ivy
ah, i see what you mean
-
ivy
yes, as far as i know, gmirror will still come up if one side is missing, although i've never actually tried that -- so i don't know how you recover that off hand
-
ivy
(from what i remember though, you just use a gmirror command to overwrite one side of the mirror with the other)
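
(A sketch of that recovery, assuming a mirror named swap and a returning provider ada1p2, both hypothetical; see gmirror(8):)

```shell
gmirror status swap          # shows DEGRADED with one component missing
gmirror forget swap          # drop the record of the missing component
gmirror insert swap ada1p2   # re-add the returned disk; it syncs from the live side
gmirror status swap          # wait until the mirror reports COMPLETE
```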
-
futune
that seems fine, as long as i can recover - its just swap, no permanent data
-
ivy
right, mirrored swap is just about having the system not crash if a root disk dies, but it should still boot if a disk is missing
-
ivy
i have done this extensively on Solaris but sadly not so much on FreeBSD
-
futune
yeah exactly
-
futune
i am mainly worried about the zfs resilver after putting the drive back
-
ivy
that should be pretty straightforward, resilver does not interrupt service at all other than using a few iops
-
futune
awesome, thank you ivy
-
ivy
although, "a few iops" can actually be quite a hit to performance -- if that's a concern there are sysctls you can tune to adjust how fast the resilver runs
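
(A sketch of re-adding the disk and watching the resilver; pool and device names are hypothetical:)

```shell
zpool online zroot ada1p4    # reattach; ZFS resilvers only what changed while it was out
zpool status zroot           # shows resilver progress and an estimate
# One OpenZFS knob that trades resilver speed against foreground I/O
# (the value shown is illustrative, not a recommendation):
sysctl vfs.zfs.resilver_min_time_ms=5000
```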
-
futune
if it's just the root being slow it should be fine for me
-
futune
this whole adventure is actually about replacing disks in a much larger array, but I don't have enough ports, and I don't want to temporarily degrade the critical data
-
antranigv
dch I am okay with the iSCSI showing itself as 150MB/s, but I still get 10MB/s when actually doing dd :D
-
dch
antranigv: maybe check the block size of dd vs whatever the network can support?
-
dch
sounds like a mess anyway
-
jemius
if you set copies > 1, does the file system then ensure that the redundant blocks are spread across the partition? I suppose it does, because having them next to each other would be very dumb
-
antranigv
dch indeed. we're moving the NetApp to the other supercomputer datacenter, so I need to take my files off it. My FreeBSD box and the NetApp are 1 km apart, but the fiber has been giving us a stable ~10Gbps.
-
ivy
jemius: as far as i know, yes, but it's not as useful as you would think -- if you have two striped vdevs with copies=2, and one breaks, you can no longer import the pool at all
-
ivy
or in other words, copies is not a way to just mirror certain filesystems
-
jemius
ivy, what does "two striped vdevs" mean?
-
dch
antranigv: I expect its the latency then. can you try the perf at the local DC for comparison?
-
dch
jemius: nobody relies on copies>1 in practice. best to use mirrors or raidz combos; drives tend to fail in unexpected ways
-
jemius
I had bit errors on certain sectors on a dying drive once.
-
jemius
Anyways, why provide a feature that is not useful
-
dch
jemius: a vdev is the lowest level of zfs building blocks - it can be a physical block device like a disk partition
-
dch
you can stripe 2 vdevs together, which yields a faster vdev but with less redundancy
-
dch
you can mirror 2 vdevs, which results in redundancy but slower writes, and faster reads
-
dch
etc etc
-
jemius
You're simply talking about how you can organize your pool. A RAID of mirrors and so on
-
ivy
jemius: the feature is useful, just only in very specific circumstances -- it's intended for the situation where you have, say, a laptop with a single disk, and you want to protect against some data being corrupted by bad blocks
-
ivy
"two striped vdevs" means a zfs pool configured as a stripe across two disks (akin to a raid0) -- some people think they can use copies=2 in this situation to provide redundancy, but you cannot
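
(A sketch of what the copies property does; the dataset name is hypothetical:)

```shell
# Store two copies of every block of this dataset -- protects against
# bad sectors on a single disk, but NOT against losing a whole vdev.
zfs set copies=2 zroot/usr/home/important
zfs get copies zroot/usr/home/important
```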
-
jemius
Ah. No no, I'm not doing that. I have a 2-mirror
-
asarch
I have four MBR partitions, in the first one a 36G-sized FreeBSD partition (created with Fedora's fdisk), then I created the BSD partition scheme in that partition with "gpart create -s bsd" but when I try to add the slices with "gpart -t freebsd -s 10G" I get "gpart: invalid argument", why?
-
mzar
asarch: what is the exact command that fails ?
-
asarch
Well, it is: "gpart -t freebsd -s 10G", but it seems it's because I did not add the MBR bootcode to the partition with "gpart bootcode -b /boot/mbr"
-
asarch
...first
-
mzar
how would gpart know where to add this slice ?
-
mzar
moreover there's no type "freebsd"
-
mzar
asarch: take a look at this nice gpart primer by W. Block
wonkity.com/~wblock/docs/html/disksetup.html
-
asarch
I mean, freebsd-ufs
-
mzar
"gpart -t freebsd -s 10G" - the syntax is wrong, please take a look at gpart(8)
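
(For the record, a sketch of the corrected command: the "add" verb was missing and the type should be freebsd-ufs; the slice name ada0s1 is hypothetical:)

```shell
gpart add -t freebsd-ufs -s 10G ada0s1
gpart show ada0s1
```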
-
asarch
Bingo!
-
asarch
Thank you!
-
dch
asarch: I would not normally use BSD partition type these days, just `-s gpt`
-
asarch
Thank you very much! :-)
-
asarch
I know, but actually it is an old PC
-
dch
theres nothing wrong with it per se, just gpt is basically standard on all OS now
-
dch
make sense
-
asarch
With no UEFI support at all
-
dch
asarch: UEFI and GPT partition types are IIRC orthogonal, but if it's a very old box then that makes sense
-
asarch
A very old HP box
-
asarch
It is still alive
-
mzar
yes, if you have no MS Windows there, using GPT will be best; both FreeBSD and Linux will boot fine
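
(A sketch of a minimal BIOS-bootable GPT layout for UFS, since the box has no UEFI; ada0 is hypothetical:)

```shell
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
gpart add -t freebsd-ufs -l rootfs ada0
```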
-
mzar
but if the disk already has an MBR layout and a place for one primary partition for FreeBSD, then follow the old MBR/BSD partitioning scheme
-
mzar
s/partition/slice