00:53:03 im trying out rust. i got it working to build bins for freebsd, but now i'm trying to get it to build bins for friends on debian and i can't figure that out. https://termbin.com/llew thanks for any help figuring out cross compiling!
01:39:54 https://termbin.com/qhmh has new output, i tried forcing use of gcc with RUSTFLAGS. i think it got me further but now it's saying undefined reference to x... some extern functions couldn't be found
01:49:02 looks like you need to build lld with zlib support
01:50:35 gotta be another way
02:12:05 https://termbin.com/i31f has more info in it
02:14:13 which gcc native lib has "open64" in it? then i can see if my FS has it
02:14:32 and if so, i gotta pass the -l path in RUSTFLAGS, and if it doesn't exist i gotta find the pkg that has it and install it?
02:27:32 for example an error it gets is undefined reference to mmap64. i searched the web and it says that's defined in gnu c lib sysdeps/unix/sysv/linux/mmap64.c. so i searched my FS for that file and it doesn't exist. is there a pkg that i can install that has it?
02:27:43 i already installed the gcc pkg but i don't think that's enough?
03:23:25 can anyone get cross compiling for linux from freebsd working using rust? using any combination of targets and whatever else
03:24:49 You could always create a container (podman), jail (bastille), virtualisation (bhyve) session, or linux compat layer area for linux compilation
03:25:32 ya i just wanna get it working natively with rust's cargo tool. it works for other OSs so we should be able to use it too
03:26:18 my prob is i'm weak with cross compilation, fixing lib path issues, general debugging related to this
03:58:21 using the musl target i can get compilation to succeed, however the generated file doesn't run on linux. https://termbin.com/z8in has details
03:58:28 any clue how to fix that up?
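For reference, the musl-target route mentioned above looks roughly like this, assuming a rustup-managed toolchain (the binary name myapp is hypothetical):

```shell
# Add the fully static musl target so the binary needs no glibc
# symbols (mmap64, open64, ...) on the Debian side:
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
# Sanity-check: a statically linked ELF should run on any x86_64 linux box.
file target/x86_64-unknown-linux-musl/release/myapp
```

If the crate pulls in C dependencies, `cargo zigbuild --target x86_64-unknown-linux-musl` (from cargo-zigbuild, as in the log) takes the same target flag but hands compiling/linking the C bits to zig instead of the FreeBSD-native toolchain.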
04:28:10 nvm, installed cargo-zigbuild and got it working in .1 seconds
04:28:17 fuck me zig is badass
11:20:16 trying to clone an entire disk with "dd if=/dev/da0 of=/dev/da1 bs=1M", it's been running for over 10 hours, barely written 50GB and I'm not even seeing a partition table on da1… i feel like i'm missing something obvious.
11:22:09 If it has written so little in so much time, I'd start doubting the health of the source disk
11:22:35 But I prefer using ddrescue (on Linux, at least); that shows a lot more information on speed of syncing, problems, etc
11:23:07 Alver: it's complicated. this is on a hyperv vm i'm trying to migrate to a physical disk (which has been passed through to the vm).
11:25:06 i'm pretty sure the source disk (i.e. the vhdx file and the disk it lives on) is healthy, didn't have any problem with reading/writing the virtual disk, got 25MB/s IO even with AES+HMAC and only a single core and the disk the .vhdx lives on being connected through USB.
11:25:37 Oof, passthrough
11:25:52 but to make things even more incongruent, the target disk is an SSD connected via USB. my thought was that inside the vm both just look like any normal disk and a dumb clone should do away with the differences. :P
11:26:26 So VM with disk living on USB, writing to a passthrough disk which also happens to be on USB
11:26:37 Yeah, that's going to be slow as sucking a turd through a straw
11:27:39 Alver: yeah, but if it's not inherently broken, i should see a partition table after the first couple MB, right? writing 50GB and no partition table seems to indicate the entire process is somehow not working to me.
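The "no partition table after 50GB" symptom can be checked cheaply: if dd is really copying, the first sectors of source and target match almost immediately. A file-backed sketch (scratch files stand in for da0/da1 so this is safe to run anywhere):

```shell
# Scratch "disks":
dd if=/dev/zero of=src.img bs=1M count=4 2>/dev/null
printf 'FAKE-PARTITION-TABLE' | dd of=src.img conv=notrunc 2>/dev/null
# status=progress (GNU dd and FreeBSD 12+) shows live throughput;
# on FreeBSD, Ctrl+T (SIGINFO) also makes a running dd report progress:
dd if=src.img of=dst.img bs=1M status=progress
# A working clone reproduces the start of the disk right away:
cmp -n 512 src.img dst.img && echo "first sector matches"
```

On a real device pair, comparing the first 512 bytes of /dev/da0 and /dev/da1 the same way would have distinguished "slow but working" from "not writing at all" within seconds.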
11:28:11 That's what I would expect too, yes
11:28:19 Although you
11:28:40 Although you're technically using the same disk between two different OSes, so one may not be aware of the other until you force it to really look
11:29:15 (on Linux you would be doing a 'partprobe' so the OS checks what really is there, rather than rely on what it knows about the disk by its own actions)
11:29:23 mh? which one? the target disk is unmounted/offlined on the windows host.
11:29:35 Ah, my bad.
11:29:58 you can't select it for passthrough to the vm unless it is.
11:30:21 Well, even then. If I dd a disk on my laptop to another one, I also need a partprobe before the OS realises the dd has added something significant to the partition structure
11:30:51 so either the process isn't working and i'm wasting time waiting for it or the metadata is at the end of the disk for some damn reason (geli, gmirror, gpt, ssd, wtf?)
11:30:52 HyperV doesn't allow concurrent access? Silly. Probably possible in some tucked away setting. But irrelevant here
11:31:48 Alver: no idea, not an MS person. maybe it also allows passing through unmounted partitions, i don't particularly care ^^;
11:31:49 GPT has a primary header at the front of the disk and a secondary at the end, iirc, so yes
11:32:35 Any particular reason for the action in the first place? Because I Can (TM)?
11:34:06 Alver: spent all day yesterday creating a rather involved setup with gmirror and geli, didn't want to redo the entire thing on native hardware today.
11:34:41 phryk: what is the underlying filesystem on these drives?
11:34:55 voy4g3r2: ufs.
11:35:06 doh, was hoping zfs.. nevermind :)
11:35:14 zfs send / recv
11:35:30 which reminds me, that i just yesterday stumbled onto the dump command, but i'm not sure it can be leveraged to pull this off.
11:36:05 it should be able to.. i used gpart to prepare a drive
11:36:10 voy4g3r2: zfs on a 1-core vm with 2G memory?
yeah, the 1MB/s the dd is doing right now would seem fast then… 🙈
11:36:26 then did that zfs send / recv setup
11:36:54 yeah, but the gpart setup is like half the work.
11:37:20 or rather, gpart, gmirror and geli. and probably some other thing i forget.
11:37:47 my exposure to that is limited to local machines, all physical, so YMMV
11:38:08 but the network connection from one building to another does SUCK and only dd like 4mbit.. dang trees
11:38:48 i tried the dd route with cloning a zfs filesystem and then was told.. that is crazy, use zfs send / recv .. did not want you to have that same experience
11:40:17 Alver: did "camcontrol reprobe da1" which ought to be the equivalent of partprobe, i think. still no partition table. i think this is doomed ^^
11:41:05 voy4g3r2: why would it be crazy tho? seems like a straightforward way to skip the work of manually reproducing the exact same partitioning and geom setup.
11:41:57 i'm also pretty sure that i've used dd to clone entire disks before… tho that was physical hdds of the same build/size, if memory serves.
11:43:34 yeah well, in my situation they were not the same size and while it can achieve the same goal... the approach they take is different
11:43:44 why do assembly when you could do the same thing in c
11:43:48 type situation
11:47:00 not quite the same. assembly is usually more verbose, but here, the "high level" approach (i.e. zfs send/recv) is the more verbose one because you have to manually replicate the partitioning :P
11:47:24 anyhow, thanks for the feedback, at least i know i'm not completely off the mark in my thinking. :P
11:48:12 no you are not.. i just saw the hours and like.. maybe they did not try that
11:48:46 analogy may have been "off" in my mind. assembly is more verbose and more cumbersome (dd) and c is cumbersome but more abstracted away from those details (zfs)
11:48:55 but will work on that mental model more in the future.
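For completeness, the zfs send / recv route recommended above is roughly the following; pool and dataset names are hypothetical, and it obviously needs ZFS on both ends, which rules it out for the UFS setup in question:

```shell
# Snapshot recursively, then replicate the whole tree (properties and
# child datasets included) to a pool created on the prepared target disk:
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | zfs recv -F newpool/data
```

Unlike a raw dd clone, this copies only live data and is indifferent to disk sizes and partition layout, which is why it gets recommended so insistently.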
11:50:12 in my mind, one `dd` just looked way less involved than like 3 dozen gpart/gmirror/geli calls. :P
11:50:46 yup, same approach here
11:51:11 then i remember all the times i had to expand code and make more of it to be more efficient with less of it. it is a mind f*** at times
11:53:25 yeah, i probably should've gone the same route as when i last set up my homeserver: write a shellscript you run from the installer image that does the entire installation. if anything fails, readjust the script, re-execute.
11:53:45 not immediately faster but reproduction of that system sure works within less than 5 minutes :P
11:54:33 that is possible but then i would ask myself the question "How much time would it take to make the script? Then subsequently how often are you doing that?" If the answer is less than the time to do the work.. in a year duration.. manual it is.. write down the steps in a txt file
12:01:14 i do this playing around as a hobby and keeping my skillset up-to-date and this is not my career. It has a side effect of helping me call BS on technology people saying it is hard and you do not understand
12:31:12 hey all, if I had a usb pen mounted in /mnt/blah, with it still mounted is there any way I can get a directory listing for the folder below where it is mounted? I just want to rule out that I accidentally did a rsync to what I thought was the pen, then remounted it over it (I seem to have some disk space missing :))
12:33:45 no i guess
12:33:54 ok dokey
12:33:58 maybe nullfs mount it
12:34:00 ?
12:34:11 oh yeah because it won't pick up the mount
12:34:16 nice cheers ketas
12:35:07 :)
12:42:23 UNIX gurus I need your wisdom ...
12:42:42 I need to replace a WORD in an existing file with the *contents* of another file.
12:42:59 the other file is raw quoted JSON so things like sed get very confused
12:43:09 is there another tool that could do this?
12:44:27 i'm not aware of anything good
12:45:05 maybe some templating thing helps :/
12:45:38 yeah, trying very much to stay within base tools (this ends up as part of freebsd release generation tooling so we can't have nice things)
12:46:03 I have a sweet simple version already using textproc/jq but now I need to take it back to sed or perl or something
12:46:44 <[tj]> dch awk
12:47:21 [tj]: that is probably the right answer, I have gone 30+ years without reading the awk manpage but perhaps today is the day
12:47:24 thanks!
12:47:36 <[tj]> give me a mo
12:48:58 <[tj]> no actually I don't think I can be distracted writing awk for you today
12:49:35 np, I will have a dig and see what I can find
12:50:38 sed has \r i found
12:50:47 to read in a file
12:51:40 or r i mean
12:52:36 <[tj]> dch: it should be easy, but it's awk so it won't be
12:53:46 awk too
12:57:25 ketas: r in sed makes this almost easy
12:57:38 sed '/REPLACE_ME/r raw' < json
12:57:57 and today again I avoided learning awk by using sed and perl
12:58:02 the perl version is not nearly so short
12:59:05 but how to get rid of the original?
13:00:24 I will modify the json to put the REPLACE_ME on a single line
13:00:33 and then I can swap it in and remove it afterwards
13:00:37 -d ...
13:01:07 <[tj]> good work
13:01:14 yay team BSD++
13:01:17 <[tj]> awk would have been more fun
13:01:32 you only say that because there are nappies to be changed as an alternative
13:01:51 where to put the d?
13:02:04 i can't seem to get it to work
13:02:17 it inserts the file after that
13:06:26 sed -e '/1/r b' -e d < a
13:06:43 sucker replaces entire line
13:07:52 sed -e '/REPLACE_CAPABILITY_DATA/ {' -e 'r raw' -e 'd' -e '}'
13:08:56 just use a block
13:09:33 doesn't do it :/
13:11:19 ketas: which freebsd version are you trying this on?
13:11:25 echo raw > raw; printf 'before\nREPLACE:garbage,\nafter\n' | sed -e '/REPLACE/ {' -e 'r raw' -e 'd' -e '}'
13:11:33 works fine here on CURRENT
13:11:45 awk '/1/{while(getline<"b"){print}}' < a
13:11:46 same
13:11:48 13.4
13:11:59 aah that might be the reason
13:13:18 same in current
13:13:52 what
13:16:04 none of this works
13:16:14 in 13.4 nor current
13:16:46 :garbage is also gone
13:17:18 yeah that's fine, sed is line processed so i expect the garbage to go
13:17:30 ketas: what does `which sed` tell you?
13:17:45 I switched to /bin/sh and using /usr/bin/sed this works fine here
13:18:40 echo raw > raw; printf 'before\nREPLACE:garbage,\nafter\n' | sed -e '/REPLACE/ {' -e 'r raw' -e 'd' -e '}'
13:18:48 even works on a 13.0 (best I can get) here
13:19:19 it's the one from /usr/bin
13:20:50 echo raw > raw; printf 'before\nREPLACE:garbage,\nafter\n' | sed -e '/REPLACE/ {' -e 'r raw' -e 'd' -e '}'
13:20:50 before
13:20:50 raw
13:20:52 after
13:20:55 ?
13:21:22 you say it works there?
13:22:20 exactly like that, /bin/sh
13:24:09 same result in sh
13:24:38 you get raw:garbage?
13:24:45 that would be fun then
13:24:48 no, just raw
13:24:59 the garbage is dropped because we are operating on lines
13:25:13 that's what i was saying
13:25:25 https://www.irccloud.com/pastebin/11pZO18x/replace
13:25:31 how to replace within the line tho?
13:26:07 I will get the releng bit working, and then code-golf this evening on that bit. it will definitely come in handy in future.
13:26:34 file a has "-1-", file b has "333", how to get "-333-" ?
13:26:36 :}
13:26:41 :)
13:27:31 hah, code golf is an actual term
13:39:53 if perl is available, \Q$str\E
13:40:07 meh, this problem has no solution?
13:40:25 surely i could get it done but...
13:42:10 here we go.. signal-desktop has been eluding me...
15:19:04 what's the *clean* way of copying over an entire freebsd to another disk, mostly worried about special flags like noschg.
otherwise, cp -a seems fine…
15:24:30 when there's a non-unique gpt label, the partitions with those labels disappear from /dev/gpt – how do i refresh that after fixing the name collision? "camcontrol reprobe" on the affected disks doesn't work.
15:26:04 service devfs restart also doesn't do the trick…
15:26:28 cpio perhaps? Not sure if it retains everything
15:29:32 vkarlsen: if i read the man page of cp right, -a implies -p which *should* preserve "modification time, access time, file flags, file mode, ACL, user ID, and group ID, as allowed by permissions".
15:30:09 guess i'll just do a test copy with noschg
15:30:34 phryk: Yep, I'd test it first too :)
15:33:12 schg seems to be copied over fine. maybe my brain is just mixing up memories from trying to mv something. ^^
15:39:21 phryk: if you're on UFS (IIRC you are) use dump/restore, or `tar xvpf` so you get hardlinks as well. there's probably a cpdup or cpio too.
15:39:30 I don't think cp will do hardlinks
15:41:54 Yeah, to cp hardlinked files are just separate files.
15:42:50 yeah, man cp confirms. thanks dch. the -P of dump/restore looks a bit sus, but i guess i'll read up on that.
15:43:20 dump/restore is quite reliable and robust.
15:43:43 yeah, i'm kinda baffled i just stumbled onto it yesterday for the first time.
15:44:02 sounds supremely practical.
15:44:20 holy shit i am an idiot.
15:45:13 phryk: any issues with backups?
15:45:37 i already set the system up with gmirror. and geli is *on top* of the gmirrors. so i can just do the partitioning and add the partitions to the mirrors and wait for the sync to complete. how did i not see that?
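A sketch of the dump/restore route recommended above for cloning a UFS filesystem; device and mountpoint names are hypothetical, and -L snapshots a live filesystem so it can be copied while mounted:

```shell
# Target disk already gpart'd and newfs'd, mounted at /mnt;
# dump level 0 (full), -a auto-size, -f - writes the stream to stdout,
# and restore -r rebuilds the tree in the current directory:
dump -0 -L -a -f - /dev/gpt/oldroot | (cd /mnt && restore -r -f -)
```

Unlike cp, this preserves hardlinks, file flags like schg, and sparse files, which is why it keeps getting recommended for whole-system copies.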
:D
15:45:47 mzar: ironically while setting up a system for backups :P
15:46:05 phryk: https://docs.freebsd.org/en/books/handbook/book/#backup-basics will have complete examples
15:46:19 oh yeah gmirror would do it too
15:46:20 nice move
15:46:35 you could also do this with iscsi over the network
15:46:53 yeah, i already set it up with gmirror even tho it's just one disk because i'm very much advocating for disk redundancy for the official company backup server :P
15:49:34 dch: btw, a couple days ago i was already writing you a query and then answered my own question: prometheus belongs on the host and not in a jail so it can report jail failures. :P
15:49:54 lol
15:55:16 Generally I am a huge fan of rsync which is an excellent general purpose tool.
15:55:38 But on FreeBSD you /should/ be using zfs and in that case you have zfs send|recv available to you and that is the best.
15:57:52 +1 if you're serious about data integrity, then zfs is the only choice.
15:58:41 * dch that's from somebody who has been responsible for ~7PB of data across 10_000+ servers over 160 customers.
15:58:43 rwp: i actually (mostly, still have to redo one setup) migrated away from zfs. too much pain, too many issues.
15:58:55 I have the best data recovery stories btw
15:59:16 I have a data recovery story where zfs pulled me out of the fire too.
15:59:40 phryk: I still don't get how you ended up with this opinion, zfs is so much better than any filesystem I've used at scale that it's not funny.
15:59:42 i've done some crazy heart surgery too, both with and without zfs. i guess i'm a bit wasteful with just outright disk mirroring but it's saved me numerous times.
16:01:11 Definitely adding disk mirroring or any RAID redundancy level is significantly better than running on one storage device and counting on recovery from backup. That's no doubt.
16:01:16 dch: that might actually be the difference. i'm managing *counts in head*… 11 disks overall in my personal infra.
zfs just had entirely too many footguns for me.
16:01:31 And definitely gmirror for mirroring is a good thing. It's the default for swap partitions and works solid.
16:01:32 ("that" being scale)
16:02:08 But for archival data and for large arrays then zfs is another significant step up in reliability, scaling, and features.
16:02:51 rwp: interesting re swap. i haven't used swap in a long time, but this machine is probably gonna be so small it needs it, so i was wondering: two disks, two swap partitions: gmirror them for faster io or keep them as separate swap entries in /etc/fstab?
16:03:12 As far as footguns, zfs really tries hard to be safe. I have screwed around with things quite a bit and zfs has often avoided being a footgun for me.
16:03:59 swap is also useful for coredumps. phryk with that in mind I go for encrypted mirrored swap, under the assumption swap is rarely used
16:04:21 Regarding mirrored swap, if you want the system to keep running through a storage device failure then you must mirror swap. If you are always going to power down then it does not matter. But I want machines to keep running /through/ a storage failure. So I always mirror swap.
16:04:45 so performance is less important than integrity
16:04:48 There is almost no performance penalty on today's faster cpus for using .eli for swap, so always use ephemeral encrypted swap.
16:05:30 rwp: i have had around a dozen zfs setups over the years and the one i currently have at phryk.net is the first and only one that didn't completely suck. i've had sooo many performance issues. like ridiculous <1MB/s read speeds on fully mirrored disks and oh so many weird issues with automatic mounting not working or randomly stopping to work…
16:06:25 yeah, with aesni, everything except /boot is encrypted. and that will follow, as i have a vague memory that encrypted /boot has been possible for like a year now.
16:06:26 Maybe you are simply cursed? Have you pissed off any wizards or devils?
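The mirrored, ephemerally encrypted swap being discussed comes down to two steps (partition names are assumptions):

```shell
# Mirror the two swap partitions:
gmirror label -v swap /dev/ada0p2 /dev/ada1p2
# In /etc/fstab the .eli suffix makes the system attach one-time geli
# encryption with a random key on every boot:
echo '/dev/mirror/swap.eli none swap sw 0 0' >> /etc/fstab
```

This trades a little write bandwidth for surviving a disk failure with the system still up, matching the "performance is less important than integrity" point above.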
16:06:57 rwp: same disks worked extremely fine with ufs. wd reds mostly.
16:07:33 Definitely a curse then.
16:07:34 I wonder if you got those shitty shingled disks
16:07:35 good points about disk failures on non-mirrored swap tho. gonna mirror it, thanks for the hint.
16:07:44 they are complete pants with zfs
16:08:00 Even the WD Red disks slipped in some SMR versions before they got caught doing it.
16:08:10 shingled? you mean the ones where they cram more bits into the bits?
16:08:28 https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-disks-are-being-submarined-into-unexpected-channels/
16:08:38 i have all traditional write disks, 1 bit per bit, don't recall what it's being called.
16:09:01 CMR Conventional Magnetic Recording
16:09:03 ah yeah, CMR vs SMR.
16:09:51 yeah, my wd reds are all CMR. the ones i used when i still did zfs were older 2TB ones. got 2 newer 4TB ones installed a year or so ago and 2 more of the same laying around waiting to be installed.
16:10:28 It seems that the quarterly binary pkgs have updated today. Which includes the binary pkg radeon driver that I use on my desktop. I get nervous because I know those are decoupled from the kernel and I have tripped on that snag before.
16:11:47 Saying older 2TB makes me think of another snag which is an incorrectly matching ashift, because before the 2TB disks most were 512 byte physical sectors and after were 4K AF Advanced Format sectors.
16:12:09 Having an incorrect ashift=9 when ashift=12 is needed for 4K AF can be a huge performance hit.
16:12:45 And with new SSDs that ashift might need to be ashift=13 or perhaps even ashift=14 in order to avoid the write amplification problem.
16:12:58 rwp: yeah, went through the entire dance and reformatted one disk with the fixed ashift, let it resilver, then reformatted the other disk, no big improvement.
16:13:54 I haven't done the testing myself but I need to become familiar with that NAND flash benchmark tool (I forget the name) which can be used to deduce the internal block size of NAND flash storage.
16:13:56 also my disks definitely tend to get full and fragmentation is a bigger performance hit on zfs than other filesystems, but zfs people constantly dismiss this being a problem ("just buy more disks lol") and thus there isn't even a defrag tool.
16:16:21 i mean, i'm already a bit sad that ufs has no defrag tool, but i haven't noticed any performance hit from the disk just getting full.
16:17:53 <[tj]> ufs does defragmentation in the buffer cache
16:18:26 hmm, you can't have vdevs with different ashifts, so the only way is send|recv of the whole pool
16:18:31 i have no idea what that even means. fs internals have always been black magic to me.
16:19:11 <[tj]> phryk: then why make the first comment?
16:19:15 dch: yeah, might not have been a resilver. i vaguely remember figuring all this out was a week or two full of frustrations.
16:19:38 phryk: yes we have all been through this, it's very frustrating to hit it
16:19:41 [tj]: because i noticed the difference without knowing the internals?
16:20:11 and I still don't really know how to tell the actual blocksize of storage from vendor spec sheets. no idea why they make it so opaque.
16:20:29 [tj]: maybe i'm just fundamentally misunderstanding fragmentation. my understanding is that over time fragmentation accumulates and at some point will definitely affect performance.
16:22:26 "at some point" might take many years, but eventually the head will have to jump to a different position on the disk after every block, chunk, inode or whatever the right term is for a unit of read/write.
16:46:31 phryk, I've heard it often repeated that fragmentation on a multi-user OS can actually be a good thing, as it provides an artificial interleave to help spread disk access between processes.
16:48:35 CrtxReavr: I'm not sure I understand. What would a disk with 0% fragmentation mean in this context? That it takes longer for cpu-threads to be switched between processes?
19:27:36 mzar: thanks
19:30:08 for what?
19:30:42 mzar: audio/virtual_oss reference
19:31:18 has it solved your issue?
19:35:54 still working on it
19:39:22 Ober: some software like internet browsers tends to use pulseaudio
19:40:51 pulseaudio allows you to switch input/output with pacmd(1)
19:41:14 hey all, where can I get a list of all possible kernel options, devices etc for a stock 14.1 RELEASE? I thought LINT+NOTES but apparently not :) might have changed over the years
19:41:53 Ober: https://wiki.freebsd.org/Sound#virtual_oss_.28advanced.29
20:41:24 When I see something like this in the logs it makes me wonder if there is an atomic write problem. "Jan 10 09:24:12 madness kernel: <6n>fns fsse rsveerrv er hhyysstteerriiaa:://hhoommee:: nnoott rreessppoonnddiinngg"
21:05:58 if signal uses electron why does it even have an EOL clause
21:06:11 and man.. it only uses 1 cpu to compile in ports.. boo
22:00:27 voy4g3r2: I agree it's a pain as it breaks too often because of it
22:01:50 Mine is also broken right now
22:04:11 Periodically breaking is making me consider alternatives
23:04:18 phryk: one of my rootfs copy scripts does: chflags -vv noschg /dst/var/empty && tar -cf- -C /src . | tar -vxpf- --clear-nochange-fflags -C /dst
23:51:26 \sfqsl
23:51:39 oops. Wrong layout
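That tar pipe generalizes; here is a file-backed sketch that also shows the hardlink preservation cp lacks (--clear-nochange-fflags is bsdtar-specific, so it is dropped here to keep the sketch runnable with GNU tar too):

```shell
# Scratch stand-ins for /src and /dst:
mkdir -p src/etc dst
printf 'hostname="demo"\n' > src/etc/rc.conf
ln src/etc/rc.conf src/etc/rc.conf.bak   # a hardlink that cp would split into two files
# -p preserves modes and (when run as root) ownership and file flags;
# hardlinks come out the other side as hardlinks:
tar -cf - -C src . | tar -xpf - -C dst
ls -li dst/etc
```

The `ls -li` at the end shows both names sharing one inode, confirming the link survived the pipe.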