00:24:10 V_PauAmma_V: try it and see. i don't see why not
00:24:29 elliot@phil:~$ sysrc -f /tmp/blah foo.bar="baz"
00:24:29 sysrc: foo.bar: name contains characters not allowed in shell
00:27:28 bah
00:27:46 now that I recall, I already ran into that one, and obviously forgot about it
01:37:35 forget about it
02:00:37 bah, I'm trying to select an ISO image and it won't let me with cbsd bconstruct-tui
02:00:41 any idea why
02:01:40 I can run the command and the interface opens, but when I hit enter in the "vm_iso_path" field it attempts something and goes back without letting me change the ISO image
07:34:04 I can't do it myself, but it's easy! https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270530#c2
07:34:06 Title: 270530 – sysutils/iocage: missing jail parameters
09:25:02 https://freshbsd.org/freebsd/src/commit/e315351fc7af11 woo!
09:25:03 Title: FreeBSD / src / e315351 / Add the mfi(4) ioctl support to mrsas(4) - FreshBSD
10:13:19 Freaky: looking at freshbsd.org, the "committer" selection says "100+" while actually listing 1000+ (I understand that as a commit count?)
10:29:17 <_xor> Hmm, apparently there are restrictions on the name of a netif when attaching to a bhyve VM?
10:30:10 <_xor> bhyve kept giving me "device emulation initialization error: No such file or directory" when trying to use tap interface "vm_debian-11"
10:30:21 <_xor> Worked when I changed it to tap0, though. Wonder if it's the hyphen or something.
11:02:19 _xor: can you please submit that as a bug
11:08:43 zlei is currently working on making renaming interfaces more consistent and less… fraught
11:09:24 <_xor> Is there an ongoing issue related to this, or is it going to be a new one on Bugzilla?
11:09:55 try truss'ing it first and see what exactly happens
11:12:15 _xor: there's a bunch
11:13:24 might be good to collect them in a meta issue, actually
11:14:50 <_xor> Busy at the moment, but can do it later today or tomorrow.
11:14:51 <_xor> I have two issues I can raise: this one with tap, as well as one regarding the netgraph node name limitation (which was undocumented last I checked, so it should either be documented or the limit increased). I noticed that bhyve can use netgraph nodes, and since I already use netgraph, I might use that instead of tap interfaces.
11:50:12 yuripv: "100+" just means the facet is showing only the top 100, with everyone else grouped as "Other"
11:50:34 and yes, it's commits matching that facet
11:51:28 if you click on the heading the facet will expand, so your "100+" becomes "980" for all FreeBSD committers
12:21:42 that's a lot of committers
12:24:00 I reckon we may have a few more authors than committees
12:24:10 committers, even
12:24:40 I wish SVN and CVS had had that concept
12:24:47 contributors? yeah, there's a lot of them
12:25:11 https://docs.freebsd.org/en/articles/contributors/#contrib-additional here's a list that's been built up over the years
12:25:12 Title: Contributors to FreeBSD | FreeBSD Documentation Portal
14:42:35 good morning everyone
14:42:38 freebsd is so awesome
14:42:48 finally have bhyve working the way I want it
14:42:49 :)
14:55:06 I've got FreeBSD 13.1p7 running on an HP mini-PC type system. FreeBSD works fine, but even with no activity, the fan is constantly running. Is there a way to control that in FreeBSD?
14:56:40 neither htop nor btop shows any CPU-related activity nor any I/O activity
15:23:44 librehardwaremonitor could presumably be integrated into a kernel module, which could live in ports, that could present the devices for a userspace daemon like bsdfan by claudiozz - but I don't know of anyone who's working on that
15:40:52 trying to grasp something here. testing write speeds to disk, so doing a simple cat /dev/random to a file... how is it possible for the file to grow in size after I kill the process? I assume that is data getting written to disk from cache on ZFS?
15:44:15 are you trying to benchmark ZFS or the disk?
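The 00:24:29 sysrc error above comes from rc.conf variables having to be valid sh(1) variable names, which may only contain letters, digits, and underscores. A minimal sketch of such a check (the exact character set sysrc enforces is an assumption here, and this simplified version also ignores that names can't start with a digit):

```shell
# Hypothetical re-creation of the kind of name check sysrc(8) performs:
# sh(1) variable names may contain only letters, digits, and underscores.
check_name() {
    case "$1" in
        (*[!A-Za-z0-9_]*) echo "invalid";;
        (*)               echo "ok";;
    esac
}
check_name "foo.bar"   # the dot is rejected
check_name "foo_bar"   # underscores are fine
```

So `sysrc -f /tmp/blah foo_bar="baz"` would be accepted where `foo.bar` is not.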
15:46:13 your comment implies both, but that's not how benchmarking works; `diskinfo -cit` will benchmark individual hard disks, but you'll need something like benchmarks/fio in order to generate synthetic filesystem loads
15:47:11 synthetic filesystem workloads also aren't very useful for real-world performance, so it's doubtful you'll get much out of it
15:50:23 Hi, I'm trying to use mpd5 (as a client to a PPTP server) from a jail and I'm getting "MppcTestCap: can't create socket node: Operation not permitted". You can see my config here: https://forums.freebsd.org/threads/can-i-use-mpd5-in-jail.13925/
15:50:24 Title: Can i use mpd5 in jail? | The FreeBSD Forums
15:52:03 martinrame: you're probably missing some devfs.rules(5) combined with a ruleset in rc.conf(5)?
15:52:48 My best suggestion is to use truss (or dtruss from sysutils/dtrace-toolkit) to find out where it's failing and work your way from there.
15:56:30 debdrup: thanks. I'll try to see what happens.
16:11:17 debdrup: In the issue posted here: https://forums.freebsd.org/threads/can-i-use-mpd5-in-jail.13925/ I attached the output of truss; there are some "Operation not permitted" errors, but I cannot see how to fix those.
16:11:18 Title: Can i use mpd5 in jail? | The FreeBSD Forums
16:15:43 Looks to me like you haven't loaded all the relevant modules.
16:16:02 mpd5 normally does this, but since you're in a jail, it can't.
16:16:57 debdrup: I thought it should be possible in a VNET jail
16:17:09 There's no kernel in a jail.
16:18:33 Mm, I don't want to connet to that vpn from the host, any alternative?
16:18:48 s/connet/connect/
16:21:43 martinrame: find out what kernel modules are needed, load them, and present them to the jail via devfs.rules(5) like I described before.
16:22:24 debdrup: thanks!
16:41:20 among others, when I run truss mpd5 I get: kldload("ng_socket.ko") Operation Not Permitted
16:41:33 How can I add that module to my devfs.rules?
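Loading a kernel module isn't something devfs.rules(5) can do; modules have to be loaded on the host before the jail starts. A hedged sketch of the host-side configuration (every module name besides ng_socket, which is the one truss showed failing, is a guess at what mpd5's PPTP/MPPC setup may pull in):

```shell
# /etc/rc.conf on the HOST, not inside the jail.
# ng_socket is the module truss showed failing to load; the others are
# guesses at what mpd5 may need for a PPTP client with MPPC.
kld_list="netgraph ng_socket ng_ppp ng_mppc ng_pptpgre"
```

With the modules preloaded on the host, the in-jail kldload attempt becomes unnecessary.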
17:07:23 I'm not sure jails can load modules, you might have to preload it before the jail starts
17:10:27 Deja vu.
17:11:06 I didn't scroll enough
17:11:36 I think you need to speak in smol sentences
17:12:49 ˢᵐᵃˡˡᶜᵃᵖˢ ⁱᵗ ⁱˢ
17:14:35 martinrame: first: https://man.freebsd.org/rc.conf(5) kld_list
17:14:36 Title: rc.conf(5)
17:15:14 debdrup: yea, what I really am trying to do is test the write speed to my raidz1 setup.
17:15:38 so, yea, I'm trying to bench ZFS with my current disk setup.
17:16:21 drobban: what are you trying to accomplish with those benchmarks?
17:16:36 apart from figuring out write speeds?
17:16:55 What do you hope to learn from knowing the write speeds of a synthetic workload?
17:17:15 I'm trying to learn what the write speed is.
17:17:25 dude
17:17:29 Yes, and I'm asking what you think that'll help you with.
17:17:30 they'll probably be roughly the same as when you calculate it
17:18:01 meena: well, as of now, the calculated write speed and the actual write speed differ.
17:18:13 by how much?
17:18:47 80%
17:18:52 =)
17:19:09 80% difference or 80% of the calculated speed?
17:19:16 so I'm probably calculating it wrong or misunderstood how ZFS works with the drives
17:19:34 Also, what sort of technology is backing ZFS?
17:20:22 raidz1, FreeBSD, disks with 500 MB/s write speed
17:20:34 64 GB RAM, Ryzen 9.
17:20:44 Is that according to the specification sheet, or from something you measured on the disks?
17:20:59 from spec.
17:21:04 Right.
17:21:49 SATA III is 6 Gbps, which works out to ~550 MBps, but that doesn't account for 8b/10b encoding, so the real bandwidth isn't as high as all that.
17:22:35 I would recommend using `diskinfo -cit` on the actual raw devices before you create the pool, to get real-world speeds rather than specifications - since they're basically never true.
17:23:24 yea, is there a "correct" way to measure the pool?
17:23:37 benchmarks/fio, like I also mentioned before.
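For the fio suggestion above, a minimal job-file sketch for a synthetic sequential-write load on a mounted filesystem (the directory, sizes, and job name are placeholder assumptions):

```shell
; seqwrite.fio - hypothetical fio job; run with `fio seqwrite.fio`
[global]
directory=/pool/fiotest   ; assumed mountpoint of the pool under test
size=4g                   ; amount of data each job writes
bs=1m                     ; block size
ioengine=psync            ; plain pread/pwrite

[seqwrite]
rw=write                  ; sequential writes
numjobs=1
```

Larger `size` values help push past the ARC so the disks, not RAM, are what's being measured.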
17:23:53 debdrup: thanks, will take a look
17:23:55 The better way is to do custom tooling that matches your real-world workload.
17:24:19 debdrup: for now I'm just interested to see where the bottleneck is.
17:25:20 with a pool of three disks in raidz1, I would assume the load is split onto 2 disks, plus 1 disk for "parity".
17:25:50 but that isn't what I'm seeing. But will test fio
17:26:38 That's not how raidz works, no.
17:26:54 The stripes are written to all disks in a raidz vdev, and parity is distributed.
17:27:16 yea, but how does the calculation differ, do you mean?
17:27:18 Err.
17:27:42 The stripes are written to each disk in sequence in a raidz vdev, and parity is distributed across all of them.
17:28:03 It's done via an XOR; the details are here: https://people.freebsd.org/~gibbs/zfs_doxygenation/html/da/dc9/RaidZ.html
17:28:03 in sequence?
17:28:04 Title: FreeBSD ZFS:
17:28:19 debdrup: yea, I know how "XOR" works
17:28:38 I'm not sure what calculations you're talking about.
17:30:04 yea, the maximal bandwidth for one disk is one thing. But writing the total load onto multiple disks in parallel gives an entirely different number, and sure, one third of the data is the result of the XOR operation that can be used to restore "lost" data
17:33:53 I think you need to forget what's on the specs, and instead measure what you get out of it.
17:34:27 debdrup: yea, that is what I'm trying to figure out how to do x)
17:35:03 By measure what you get, I mean use diskinfo and fio.
17:35:34 drobban: do you have any data on that pool that's of value?
18:45:17 meena: yes =)
18:48:04 does zfs send | receive have the potential to show the maximum transfer capabilities of the pool?
18:57:53 I'm not sure what you're asking.
19:05:28 last1: Yes, send | receive should be the fastest way to transfer data. If the transfer is over the network, then of course network bandwidth limits apply.
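The calculation being discussed can be made concrete: for streaming writes to a raidz1 vdev, throughput is bounded by roughly (N - 1) times one disk's bandwidth, since one disk's worth of each stripe is parity. A back-of-envelope sketch using the spec-sheet figure quoted above (real per-disk numbers should come from `diskinfo -cit`, and real pools rarely reach the ceiling):

```shell
# Rough raidz1 streaming-write ceiling: (N - 1) * per-disk bandwidth.
disks=3
per_disk=500                        # MB/s, the spec-sheet figure above
echo $(( (disks - 1) * per_disk ))  # prints 1000 (MB/s, theoretical ceiling)
```

If the measured number is 80% below that, the gap is between this idealized model and the real pool, which is exactly what fio is for.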
19:12:48 I'm on a 10 Gbps network and I'm transferring from a 6 Gbps enclosure (72 x HDD) to another enclosure
19:12:57 hitting about 2.6 Gbps using multiple rsyncs
19:13:13 although the limiting factor there is the 6 Gbps backplane
19:13:25 and possibly the rsync method. I'll see if zfs send/receive is faster
19:18:46 The main difference is that rsync needs to deal with individual files. It will be open, read, close, open, write, close, file by file.
19:19:11 Whereas zfs deals with it at the storage block level and with the aggregate of the raw data.
19:19:17 also, rsync does a lot of other stuff depending on options.
19:19:41 That does not mean that zfs send|recv will always be faster than rsync. I can see some conditions where multiple parallel rsyncs can be faster.
19:20:05 My point is that they operate at very different layers of the system and will have different benefits.
19:22:03 With ~40 TB of data on SATA 3 disks in RAIDZ3 (forgot the number of disks or vdevs), transferring over a 1 Gb/s network took ~16 days with zfs-send | zfs-recv; rsync took ~17.5 days
19:22:26 So overall that was a wash
19:23:35 hmm
19:24:18 40 TB (assuming binary units) is about 35*10^14 bits
19:24:20 The transfer data -- size / hour -- followed a quadratic regression after 3 days
19:25:18 Sorry, cubic
19:26:23 oops, 35*10^13 bits
19:26:41 assuming 700 Mbps to make the calculation easy, that's about 5*10^5 seconds
19:27:10 which is just under 6 days, so the network wasn't the limiting factor
19:29:14 "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." --Andrew S. Tanenbaum
19:29:36 RhodiumToad, you made me check my fuzzy memory; will update if I find the numbers...
19:31:09 Oh, I guess that is a paraphrase of a previous saying that was already in common use at the time. I will amend my attribution of it in the future.
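The back-of-envelope above, written out in shell (assumes 40 TiB of data and a sustained 700 Mbit/s, per the log):

```shell
# 40 TiB pushed over a link sustaining 700 Mbit/s:
bytes=$(( 40 * 1024 * 1024 * 1024 * 1024 ))   # 40 TiB
secs=$(( bytes * 8 / 700000000 ))             # total bits / (bits per second)
echo "$secs seconds, about $(( secs / 86400 )) days"
# prints: 502633 seconds, about 5 days
```

That's roughly 5*10^5 seconds, just under 6 days, so a ~16-day transfer was not network-bound.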
19:31:51 now imagine a cargo plane full of microSD cards :-)
19:36:14 I started this transfer about 3 days ago, but it's not always at full speed @ 2.6 Gbps due to power constraints
19:36:29 the 72 HDDs draw a lot of current during business hours, so I keep it lower 9-5
19:36:34 but so far I've done 22 TB
19:36:42 another 100 TB to go
19:36:55 I'll stop being OT after this one: https://what-if.xkcd.com/31/
19:36:56 Title: FedEx Bandwidth
19:37:16 Found a ~2-year-old mail: 47.9 TB took 17.88 days (to a 2x vdev of RAIDZ3, 7 disks/vdev of Seagate Exos 14 TB disks): "47.9TB stream in 1545339 seconds (32.5MB/sec)"
19:39:35 that's slow
19:40:04 I'm transferring about 250 million small files, but it's going onto a 45 x HDD 12 Gbps enclosure
19:40:30 I wasn't sure how zfs send / receive would behave, especially since I need to throttle the speed during certain hours
19:41:33 mbuffer can speed up zfs send | receive a lot.
19:42:37 yeah, isn't ssh limited @ like 125-150 MB/sec due to encryption
19:43:03 I was going to use netcat
19:54:28 mbuffer can do networking too, but the primary advantage is that you can specify a sizable memory buffer that acts as an intermediary for the data between the compute and transfer steps of zfs send
19:55:08 those are done synchronously, so without a sizable memory buffer, it takes time to move things
20:47:05 debdrup: thanks, I'll look into that
22:59:39 Greetings. I have a PC and a laptop. Both are running the same Linux distribution. I have downloaded FreeBSD-13.2-RELEASE-amd64 and two CHECKSUM files. On my laptop it says the two files (CHECKSUM and the img file itself) are "OK". But on the PC, it says "OK" only for the img.xz file, and for the img file: "FAILED".
23:00:55 I mean, on the PC -> 1) FreeBSD-13.2-RELEASE-amd64-memstick.img: "FAILED" 2) FreeBSD-13.2-RELEASE-amd64-memstick.img.xz: "OK".
23:01:37 on the laptop -> 1) FreeBSD-13.2-RELEASE-amd64-memstick.img: "OK" 2) FreeBSD-13.2-RELEASE-amd64-memstick.img.xz: "OK".
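The mbuffer suggestion above can be sketched as a plain-TCP pipeline; this is an untested sketch, and the pool names, snapshot name, host, port, and buffer/rate figures are all placeholder assumptions:

```shell
# Untested sketch. On the receiving host: listen on a port, buffer up to
# 2 GB in RAM, feed the stream into zfs receive without mounting it.
mbuffer -I 9090 -s 128k -m 2G | zfs receive -u tank2/backup

# On the sending host: -r caps the read rate, which is one way to keep
# the disk shelves quiet during business hours.
zfs send -R tank/data@snap | mbuffer -s 128k -m 2G -O recvhost:9090 -r 300M
```

Unlike piping through ssh, this sends the stream unencrypted, which is the same trade-off as the netcat idea mentioned above.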
23:01:54 I do not know why; any help/advice would be appreciated.
23:06:22 what is saying OK or FAILED?
23:10:21 the sha256sum command.
23:10:26 I'm doing a checksum.
23:10:36 sha256sum and sha512sum.
23:11:19 what exact outputs do you get?
23:11:26 just show the sha512
23:12:01 Output of sha512sum? OK.
23:12:51 RhodiumToad: https://bsd.to/ok6i This is the output from my PC.
23:12:52 Title: dpaste/ok6i (Plain Text)
23:13:26 On my laptop everything is OK.
23:13:35 But I do not know what's wrong with my PC.
23:14:38 any chance you truncated the file due to lack of disk space or whatever?
23:14:52 what's the actual length and actual sha512 of the .img file on both?
23:15:50 any chance you... -> I do have enough space. I just downloaded and did the checksum on my PC.
23:16:03 On both systems?
23:16:09 yes
23:17:16 File sizes are the same, but the sha512 signatures are different.
23:18:11 if you uncompress the .xz again, is it correct or not?
23:19:03 What do you mean? I have already uncompressed the xz files on both systems.
23:19:38 you still have the xz file on both, though, going from your output?
23:20:15 the question is whether you get the same problem if you uncompress it a second time; e.g. is it a deterministic error, or is it some random memory or disk corruption?
23:20:24 Yes, I still have the xz file on both systems.
23:20:34 the question is... -> Uncompressing again on my PC?
23:21:51 yes, move the .img file aside to .bad or whatever, and uncompress it again with xz -d -k ...
23:23:20 It said OK
23:23:21 :|
23:23:23 Why?
23:23:52 transient error, then
23:24:06 how confident are you in the condition of your hardware?
23:24:32 maybe compare the good and bad images to see how different they are?
23:25:12 I'm not even good at software :-(
23:25:20 How to compare?
23:25:25 With diff(1)?
23:25:38 not diff, that assumes text
23:26:29 RhodiumToad: One important thing
23:26:50 The bad image shows a different signature every time I checksum it.
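The reasoning above can be demonstrated directly: a file nobody is writing to must produce the same digest every time, so a digest that changes between runs implicates hardware (or the OS), not sha512sum. And for comparing the good and bad images, cmp(1) handles binary files where diff(1) does not (paths below are placeholders):

```shell
# Re-hashing an unchanged file must be deterministic.
printf 'FreeBSD memstick test' > /tmp/hash.test
h1=$(sha512sum /tmp/hash.test | awk '{print $1}')
h2=$(sha512sum /tmp/hash.test | awk '{print $1}')
[ "$h1" = "$h2" ] && echo "stable"

# To locate differing bytes between two images (placeholder paths),
# cmp -l lists each differing byte offset:
#   cmp -l memstick.img.bad memstick.img | head
```

On FreeBSD itself the equivalent tool is sha512(1); sha512sum is what the Linux systems in this exchange use.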
23:26:56 Why is it like this?
23:27:09 uh
23:27:38 disk or memory problem
23:28:10 or some Linux bug; I don't use Linux myself, so I wouldn't know
23:28:21 But the computer is working fine. How can I know if something has happened to my disk or memory?
23:29:37 run memtest86 on it?
23:30:03 Do we have it on FreeBSD?
23:30:17 Let me check the ports.
23:30:28 it's independent of the OS; it's a bootable image
23:31:17 Thank you, RhodiumToad.
23:31:31 I hope this time FreeBSD will be installed on my PC again.
23:31:50 there is a port for it, but it just downloads the image
23:31:53 > The bad image shows a different signature every time I checksum it.
23:32:00 looks like a hardware issue
23:32:41 VVD: But everything is OK. Only this SHA512/SHA256 signature was like this.
23:33:26 if nothing is writing to the file, and the checksum changes, then it's either a hardware problem or a very strange OS bug
23:33:41 and checking the hardware is probably the better first step
23:33:57 I should use disk health tools?
23:34:09 or memory health tools?
23:34:12 how big is the image and how much RAM do you have?
23:35:27 this kind of error from the actual disk isn't common, though if it's a dodgy SSD that might make it more likely
23:36:01 I have 4 GB RAM. And the bad image size is 1.1 GB (1048712), the good one: 1.1 GB (1048712)
23:37:08 No, I do not have an SSD; it is an HDD.
23:37:13 I'd check the RAM first.
23:42:03 RhodiumToad: I have no knowledge of the memtest86 program. Let me test it.
23:44:20 RhodiumToad: Oh, it is not a program. I have to boot it?
23:47:31 yup, it needs to get at the hardware without an OS in the way
23:47:57 you run it from a bootable USB drive (or CD-ROM or ...)
23:48:02 By the way, my PC is old; it uses legacy boot (MBR).
23:48:15 that should be fine
23:48:32 RhodiumToad: So what's the package memtest86+ in Void Linux?
23:49:45 I believe it checks for corrupted memory
23:50:21 memtest86 and memtest86+ do the same basic things in the same way; I don't recall what the differences are
23:50:38 it's probably best to use memtest86plus, not memtest86
23:50:42 but either'll do.
23:51:26 Do I need a USB? I installed the memtest86+ package, and I have it under the /boot directory.
23:51:29 they're unrelated projects; one of them (memtest86) is commercial with a free version
23:51:38 pr-asadi: Should be able to fire it up from GRUB.
23:52:12 Great. Thank you.
23:52:31 it looks like the FreeBSD port of memtest86+ can be run from the loader
23:53:22 so if you got it from Linux, run it from GRUB; if you got it from FreeBSD, run it from the loader
23:53:52 (but I don't use Linux myself, so I can't help you with the details of that)
23:55:11 I have booted into memtest86+
23:55:47 It is showing a box with red color at the bottom of the screen.
23:56:02 saying what?
23:56:58 It is showing many things.
23:57:38 can you take a photograph
23:57:52 Yes. Let me.