01:13:17 what cmd lets me hash the contents of a file? i wanna manually compare whether local file contents match the file on a remote server
01:14:49 sha256, sha512, and their friends.
01:21:50 ty!
02:52:22 <_xor> Good lord.
02:52:25 * _xor needs to take a breather
02:52:30 ?
02:53:24 <_xor> Writing a raw disk image to a drive. Need to do it with a live CD. Was looking for a generic, modern Linux- or BSD-based live CD that I can reuse for future purposes.
02:54:24 <_xor> Found Ventoy and got my USB drive prepared with it (it supposedly "supports" FreeBSD, but the build instructions were more time than I was willing to invest, so I rebooted into Windows and installed it from there).
02:55:18 <_xor> Copied the image (Home Assistant, which comes as a raw disk image apparently) to the drive, along with mfsBSD, mfsLinux, UBCD, TuxPE, and Lubuntu for good measure, just in case.
02:56:48 <_xor> Tried to boot TuxPE first, but it failed. Then said screw it and booted mfsBSD, which worked... but then I remembered that the partition on the USB drive is exFAT, and I didn't want to spend the extra time getting exFAT support installed in mfsBSD after it was live-booted.
02:58:42 <_xor> So I rebooted again, this time into Lubuntu, since I figured there was a greater chance of Linux-land shipping a bunch of extra tools with the distribution (so I wouldn't have to install them), but it turns out that's not the case. Installing them was pretty easy though, so I'm thankful for that. I could probably have installed them using mfsBSD, but a lot of customized BSD-based systems I've used tend to mess with the pkg repos, and I was getting annoyed at how long this was taking.
02:59:53 <_xor> So I'm now looking at a live-booted Lubuntu system with exFAT mounting available (I think). Now I need to remember / figure out how to list USB drives, mount the exFAT partition, and then dd (or whatever tool) the image to the system's hard drive.
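The steps just described can be sketched roughly as follows. The device names (`sdb1` for the USB stick's exFAT partition, `sda` for the target disk) and the image filename are hypothetical placeholders; verify with `lsblk` before writing anything. The destructive commands are shown as comments; the same `dd` invocation is then demonstrated harmlessly on ordinary files:

```shell
# From a Lubuntu live session (hypothetical device names -- check first!):
#
#   lsblk -o NAME,SIZE,TRAN,MODEL       # TRAN column shows "usb" for USB drives
#   sudo mount -t exfat /dev/sdb1 /mnt  # mount the exFAT partition
#   sudo dd if=/mnt/haos.img of=/dev/sda obs=4M conv=fsync
#
# The same dd invocation, demonstrated harmlessly on regular files:
head -c 1048576 /dev/urandom > source.img   # 1 MiB stand-in for the image
dd if=source.img of=target.img obs=4M conv=fsync 2>/dev/null
cmp source.img target.img && echo "copy verified"
```

`obs=` sets only the output block size, so however the input arrives, writes to the target are coalesced into 4 MiB chunks (this matters more later in the discussion).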
03:00:33 <_xor> dd the raw disk image on the USB drive to the local system hard drive, I mean.
03:01:06 <_xor> ...though I wonder how well just using curl + dd to stream the image and write it to disk would work out.
03:01:09 <_xor> Thoughts?
03:01:46 worth a try?
03:03:17 <_xor> I know this isn't the case, but for some strange (and illogical) reason my mind feels like that's not a good idea, because I'm reminded of the 90s, when CD writers needed a continuous stream from the buffer because the optical write heads couldn't really stop and resume (on first-generation hardware, anyway).
03:03:41 <_xor> Don't know why I'm associating it with that.
03:04:48 <_xor> Hmm, block size on dd only matters for the target, right? Is there any kind of input buffer setting for it, or would it have to be paired with mbuffer for that?
03:27:12 dd has three block-size parameters
03:27:27 if the input is slow, you do have to be a little careful
03:28:14 what you probably want is dd obs=128k or similar
03:34:52 <_xor> Ah, that's right, I forgot about the other two.
03:36:26 note that using bs= is usually wrong when the input reads might come up short, since it'll try to pass through each input block
04:12:23 <_xor> Hmm, interesting.
04:13:28 <_xor> I was just looking into optimal bs values a couple of days ago. Had to write a 6 GB image to a USB flash drive and wasn't sure whether I should omit bs or set it to 4m or 16m or something.
04:14:25 <_xor> But that was from a USB3 source, so I wasn't really thinking about input throughput. Was more wondering about write sizes that would be OK while also not taking forever.
04:17:21 <_xor> Is fuse included in GENERIC?
04:20:15 Hey all, had a quick question about virtualizing FreeBSD on Hyper-V. The VM has two disks; one is the boot disk, which is unencrypted and is where FreeBSD is installed and boots from. This works great. However, the other disk is a 60 GB disk that is mounted as a ZFS zpool within the OS and will contain email data, since this is going to be a mail server. This disk is encrypted, and any time I restart the server and type my password to decrypt it, I notice that the VHDX file size increases in Windows. Additionally, I wgetted a 40 GB dd file from the Internet and deleted it after the download. However, in Windows the VHDX is still showing up as a 42 GB file on my hard drive. Keep in mind that the upper limit for the VM is a 60 GB disk, so while FreeBSD shows it as 0% used, the actual file on the physical drive is still 42 GB. What should I do?
04:20:24 sorry for the long wall of text :)
04:24:34 <_xor> Check if the VHDX is sparse? (or whatever it's called in Hyper-V-land, can't remember anymore)
04:24:53 it's thinly allocated
04:25:01 meaning not all 60 GB is allocated at once
04:25:14 so when it hits 60 GB, it won't grow any more
04:25:35 I'd imagine that it would cause all kinds of issues when that happens
04:29:22 because then I'd have a ZFS drive that is 40% full with emails, and new mail will bounce because the underlying virtual disk can't take it
04:29:32 IDK if that makes sense?
04:40:26 hyper-v on what architecture?
04:41:42 Windows 11 Pro x64
04:41:53 you want the model of the CPU too?
04:41:56 no
04:42:06 yep, didn't think so
04:43:32 so what you're seeing is that the backing file size reflects the amount of non-erased data on the volume
04:43:49 I don't see why you think there is any issue here
05:06:40 RhodiumToad: that's the amount of non-wiped data... ok, that makes so much more sense
05:06:53 thank you for your help :)
05:07:48 so I'd be using the 'wipe' command to, I guess, write 0's over that data
05:09:07 but that's not necessary, I don't think, as even though the backing file size becomes 60 GB, it won't prevent new writes, I wouldn't imagine
05:13:24 trying to use the py39-ansible pkg. so i type `ansible-galaxy collection install community.general` and it says error: importlib_resources is not installed and is required. any clue how to fix that?
05:13:35 on 13.2
05:15:13 <_xor> Hmm
05:15:18 * _xor is confused
05:15:36 <_xor> I booted mfsBSD and it shows tmpfs is mounted at /rw, but /rw doesn't exist.
05:15:44 <_xor> `df -h` shows that, I mean.
05:17:24 <_xor> I was going to create a memory disk, but saw tmpfs was already created, and it looks like it's sized to use the remaining RAM (which is fine). Going to copy an image onto the disk.
05:17:59 * _xor wonders what the underlying differences are between mdfs and tmpfs (aside from the former using the latter as the backing storage, according to the man page)
05:24:52 RhodiumToad: sorry to be annoying, but if I download a 30 GB file and then rm -rf it, it's normal for that 30 GB gain not to be reflected on the underlying virtual drive, yes?
05:25:40 If you're looking to shrink the VHDX but it's encrypted, I don't think writing zeros would allow the hypervisor to shrink it.
05:26:34 If it's encrypted, I don't believe the hypervisor would be able to identify that any of the data is redundant, as it'll be using a block cipher.
05:27:26 I'm guessing VHDX just looks at what blocks get touched. If it can dynamically shrink from the hypervisor, I assume it'd look for contiguous regions of zeroes, but anything touched/written from the guest will appear to be random data, whether replaced with 0s from the guest or not.
05:28:09 There is likely no benefit to making a sparse file that backs an encrypted filesystem, since the hypervisor can't shrink it. I think it could only worsen performance in such a case.
10:22:52 Any idea why the guest does not get network on vm-bhyve if the vm switch is created and activated in ifconfig? No pf, ipfw, or ipfilter applied.
11:43:22 about the weechat version... the ports version was at 3.8 in january, but the 13.2 release was made in april and the included ports tree has 3.7
11:46:59 is that just part of how ports is maintained?
11:57:02 ports has a main branch, and every quarter a quarterly branch gets cut off; weechat 3.8 was committed to main in january, so q1 still had 3.7. no idea why q2 wasn't in there with freebsd 13.2 when it was in april (q2 should have weechat 3.8)
12:13:26 what jail helper framework do the cool kids use these days?
12:23:12 I play more and more with bastillebsd
12:26:39 ok, thanks
12:30:21 there are new alternatives too, https://github.com/illuria/jailer and https://github.com/DtxdF/AppJail
12:30:23 Title: GitHub - illuria/jailer: Minimal, flexible, and easy-to-expand FreeBSD jail manager.
12:31:06 but I didn't really try them yet
12:53:09 "Jailer is heavily under development and not yet ready for production use."
12:56:16 * gjn tunes in
13:58:17 i think the best approach is no jail manager at all: pure manual setup. the numerous jailers are just a mess, hiding the very guts of what jailing is. fast and handy, maybe automated, but meh.
14:44:06 signalblue: deleting files does not, on most filesystems by default, do anything that would cause the underlying storage to be released (it'll just be reused as applicable).
14:48:25 signalblue: on backing stores that support TRIM, which may or may not be the case in your scenario, there's usually a way to cause freed space to be trimmed. for zfs this is via 'zpool trim' or the autotrim property.
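The two zfs mechanisms just mentioned can be sketched as commands. The pool name `tank` is a hypothetical placeholder, and the backing device (virtual or physical) has to accept TRIM/UNMAP for any of this to actually shrink a thin image:

```shell
# One-off: issue TRIM for every block the pool is not currently using.
zpool trim tank
zpool status -t tank          # -t shows per-vdev trim progress

# Ongoing: issue TRIM automatically whenever blocks are freed.
zpool set autotrim=on tank
zpool get autotrim tank       # confirm the property took effect
```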
15:12:32 signalblue: from what i've seen before, you would need to zero the deleted space and then use a VM tool that trims the image
15:15:26 depends on whether the hypervisor supports TRIM
15:15:44 yeah, if it has the tool
15:16:07 no, I mean if it supports TRIM requests on the virtual block device
15:16:20 it's not a tool, it's a type of command to the disk
15:17:01 well, i'm not qualified to talk about it, but you need to zero the data because otherwise it doesn't know what's in use or not
15:17:12 no you don't, that's the point of TRIM
15:18:01 it's a request from the filesystem layer that effectively means "I don't need this data anymore"
15:18:10 the hypervisor would have to be specific to the file system
15:18:15 no
15:18:24 TRIM is a block device command
15:18:26 how would it know what it could trim?
15:18:39 changing the actual file size of the image
15:18:43 suppose you're using zfs with autotrim enabled,
15:19:11 then any time zfs knows that a given block is no longer needed, it issues a TRIM command to the underlying device
15:19:43 the hypervisor can, if it wants, detect that and do whatever it wants with it
15:19:43 it's through a VM image though, the VM doesn't know anything specific about zfs
15:19:56 so then it would have to know about the file system
15:20:18 it doesn't need to know about zfs or any file system, it only has to know about TRIM commands, which are filesystem-independent
15:22:14 well, how about this then
15:23:02 if i have a .img file that's 16G, and i fill it full, then delete everything and make a clone of it, the clone is 16G
15:23:54 what allows the TRIM command without knowing about the file system?
15:24:20 it's the file system which is responsible for issuing the TRIM command (if configured to do so)
15:24:59 TRIM is just a request to the underlying block device, just like READ or WRITE
15:25:15 so either way then
15:25:38 enable TRIM, or manually zero data
15:26:59 and the hypervisor would have to support both or either
15:29:07 there's no reason why the hypervisor would check for zeroed data.
15:36:19 only to do a 'manual trim'
15:36:25 to resize the image
15:39:40 i don't know, but i don't think the TRIM command is issued whenever anything gets deleted, for example
15:40:14 that depends on the filesystem and its configuration.
15:40:39 zfs has an autotrim property for zpools, which causes TRIM to be issued for blocks no longer needed
15:40:59 zfs also has a zpool trim command, which issues TRIM for all blocks not currently in use
15:41:18 ufs has a flag that sets whether TRIM is issued when deleting files
15:41:40 all i'm saying is there is a reason the hypervisor would check for zeroed data
15:41:58 no there isn't - it would be silly overhead given that TRIM exists
15:42:08 not all filesystems use TRIM
15:43:35 given the increased use of SSDs and assorted flash media, pretty much any filesystem that anyone cares about has the option to use it
15:44:37 well, hey, the ":" key centers the view
15:45:42 RhodiumToad: Thank you once again for that
15:46:20 basically I just didn't think this one through, it looks like :)
16:01:24 signalblue: I also don't see any obvious indication of whether the freebsd hyper-v disk driver supports trim.
16:01:56 There's no need for TRIM in my environment, as everything is running off of spinning disks
16:04:57 diskinfo -v says Yes for TRIM/UNMAP
16:06:01 my ":" comment is in reference to a chat from yesterday in #cataclysmdda, so i'm leaving the discussion
18:00:40 greetings! I created a raidz2 pool with 8 disks showing "512 bytes logical, 4096 bytes physical" sector size in smartctl, but I didn't set the ashift manually and it is now set to 0 for the pool. How can I make sure the 0 (autodetect) translates to ashift=12 in practice?
18:01:07 (I already have data on the pool, so I would love the ability to not have to recreate it)
18:22:19 veg: Did you zpool get yet?
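A sketch of how the autodetected ashift can be read back out, assuming a pool named `tank` (the property shows 0 when it was left to autodetect, so the cached configuration is the place to look):

```shell
# zdb prints the cached pool configuration, including the ashift actually
# chosen per top-level vdev. ashift=12 means 2^12 = 4096-byte allocation
# units, which is what you want on 4K-physical-sector disks.
zdb -C tank | grep ashift
```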
18:23:45 ted-ious: yes, I did, but it shows 0
18:24:03 I just figured out how to get the result of the autodetect through zdb though!
18:24:20 and I'm relieved, for I see ashift=12 when doing `zdb -C`
18:26:21 That's a relief. :)
18:44:53 yesss
20:07:58 from the docs, I've got ipnat.rules with: map dc0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto
20:08:44 I can't do outbound ping from a windows machine on the LAN
20:09:43 I have no firewall rules in place yet... but how would I enable outbound ping? is icmp a valid target in the map statement?
21:40:53 .vim
22:09:08 re jails: is unionfs for /usr and stuff still the way to go, or each with their own filesystem?
22:19:51 afaik unionfs is still tagged "beware of dog"
22:34:30 Is "Here be dragons" not in vogue any more?
22:36:21 bah, someone removed it from the text
22:36:26 it used to say this:
22:36:45 "THIS FILE SYSTEM TYPE IS NOT YET FULLY SUPPORTED (READ: IT DOESN'T WORK) AND USING IT MAY, IN FACT, DESTROY DATA ON YOUR SYSTEM. USE AT YOUR OWN RISK. BEWARE OF DOG. SLIPPERY WHEN WET. BATTERIES NOT INCLUDED."
22:37:02 now it just ends after "OWN RISK."
22:37:26 Ok, the warning sign is still there
22:37:55 fbsd 13.2, latest pkgs: when trying to use the py39-ansible pkg, i type `ansible-galaxy collection install community.general` and it says error: importlib_resources is not installed and is required. there's a py39-importlib-resources pkg that i assume i need to install, but i never needed to in the past. is that expected?
22:38:21 I'm vaguely interested in where the issues are in unionfs, I could have a bash at fixing them
23:16:16 it's very unprofessional to be removing valuable contributions like that from the documentation :P
23:17:14 But then, everyone knows dogs aren't slippery when wet, they're smelly. :-)
23:24:40 i can't wait to get so high later
23:24:56 doing a half dozen reinstalls to 13.2, then celebration time, my G's
23:35:19 k, doing a fresh 13.2 install. trying to finish up, but when i do pkg install pkg-provides it lists 2 pkgs, pcre and pkg-provides. i say `y` to continue and it fails. for both it says cached package missing or size mismatch
23:35:25 why does it do that on a fresh install?
23:46:24 does pkg update -f fix it?
23:46:41 ya, it did
23:49:34 first rule of pkg seems to be "if anything weird happens, try pkg update -f"
23:50:59 :/
23:59:09 RhodiumToad: nullfs maybe?
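The recovery that worked above, sketched as a sequence. The `pkg clean` step is my own assumption (clearing the stale cache that produced the "size mismatch" errors), not something stated in the thread:

```shell
pkg update -f            # force-refresh the repository catalogue (this was the fix)
pkg clean -ay            # assumption: also drop stale cached packages
pkg install pkg-provides # retry the install that failed
```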