01:12:06 freebsd13 enable ipv6, could get ipv6 address, can't ping6 ipv6.com (no route to host), any idea?
01:17:57 kyonsalt: ipv6.com has no ipv6 address in the dns
01:18:53 try "ping6 www.kame.net"
01:22:31 :), same response: "no route to host". "netstat -nr" shows an ipv6 gateway and I can ping that ipv6 gateway address. Confused...
01:24:02 what does "traceroute6 -n www.kame.net" say?
01:25:45 $ traceroute6 -n www.kame.net
01:25:45 traceroute6: Warning: mango.itojun.org has multiple addresses; using 2001:2f0:0:8800:226:2dff:fe0b:4311
01:25:45 connect: No route to host
01:26:30 any fw active?
01:27:05 also what says route -n6 get 2001:2f0:0:8800:226:2dff:fe0b:4311
01:28:28 $ route -n6 get 2001:2f0:0:8800:226:2dff:fe0b:4311
01:28:29 route: route has not been found
01:29:04 can you paste the output of netstat -6nr
01:29:05 no firewall
01:33:23 too long, but there are 2 ipv6 global addresses with "link#4 U/UHS em3"
01:33:44 use a paste service like https://bsd.to
01:33:45 Title: dpaste
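The thread trails off here, but the symptoms described (global addresses present, gateway pingable, yet "no route to host" beyond it) typically point at a missing IPv6 default route. A minimal sketch of the checks and two possible rc.conf fixes, assuming em3 is the interface from the netstat output above; the router address shown is hypothetical:

    # is there an IPv6 default route at all?
    $ netstat -6rn | grep default
    $ route -6n get default

    # /etc/rc.conf: either accept router advertisements (SLAAC)...
    ifconfig_em3_ipv6="inet6 accept_rtadv"
    # ...or set a static default router (hypothetical address):
    ipv6_defaultrouter="fe80::1%em3"

    $ service netif restart em3 && service routing restart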
03:55:48 I have a server with one SSD and 40x hdds configured with zfs in 20 mirrored vdevs
03:56:10 when I do rsync from/to the SSD, I can hit at most 400MB/sec
03:56:28 the hdds are on a sas3 backplane rated @ 12gbps
03:56:47 shouldn't I be seeing close to 1GB/sec or more ?
03:57:10 rsync with what options?
03:58:40 ah it's a local copy, so just rsync --info=progress2 src dst
03:59:22 to destination files that don't already exist?
03:59:32 it's a single 38GB file
03:59:35 which doesn't exist, yes
03:59:48 25,682,083,840 66% 416.80MB/s 0:00:30
03:59:55 ok, and if you run gstat -p while it's running, what do the busy percentages look like?
04:00:36 ah, hang on a second
04:00:41 SSD limit is 400MB/sec lol
04:00:47 not sure why I was expecting more
04:00:51 well there you go
04:02:07 however, I also have a zfs ssd pool made up of 5 x 2 mirrored vdevs
04:02:25 and there it's also 400MB/sec to/from the 20 x hdd vdevs
04:02:52 ok, and gstat -p shows what?
04:05:29 bursts of 11% busy on all drives, but it's not constant
04:06:06 https://bsd.to/Qwic
04:06:08 Title: dpaste/Qwic (Plain Text)
04:06:32 https://bpa.st/MFFWS
04:06:33 Title: View paste MFFWS
04:06:35 this is better
04:07:29 the drives don't seem to be all that busy
04:07:58 I tried disabling sync on the target zfs pool, it didn't make any difference
04:08:07 now do top -CSH -s10 -c2 (it'll sleep 10 secs and redisplay and exit, paste the final output)
04:08:46 top: invalid option -- c
04:09:46 -d2 sorry
04:10:12 interestingly enough I'm seeing more activity on the ssd pool than on the hdd pool
04:10:31 https://bpa.st/B7I54
04:10:33 Title: View paste B7I54
04:11:22 a smaller pool will generate proportionally more activity for a given load
04:11:51 this is the top output: https://bpa.st/NCWEE
04:11:52 Title: View paste NCWEE
04:12:22 er
04:12:39 true, but I was expecting 5 x 400MB/sec = 2GB/sec output for this ssd pool
04:12:43 that's a lot of cpus, can you do it on a larger screen?
04:12:55 or maybe add -I to the flags
04:13:07 no, that won't help
04:13:43 want to see system threads in the output, but the per-cpu idle threads are getting in the way
04:14:58 Why could "idle" threads not be filtered out?
04:15:29 What controller is driving these devices? Might it be the bandwidth limit?
04:16:14 ah, -z is the flag needed
04:16:18 It's standard LSI 9308-8i + 9308-8e HBAs
04:16:35 the 8i drives the internal SSD pool, the 8e drives the external HDD expansion enclosure
04:16:36 Never mind
04:16:36 last1: top -CSHz -s10 -d2
04:16:46 it's all supposed to be 12gbps
04:17:03 Should be good then. It's good that you are looking into this performance side.
04:17:51 https://bpa.st/C5HVS
04:17:52 Title: View paste C5HVS
04:18:23 rsync busy raking the disks
04:18:37 * kevans chuckles at a message in a non-freebsd list: "I'd suggest not writing in all caps, as that is typically interpreted as angry shouting." ... less than 5% of the e-mail being responded to was in all caps, and it went on to explain the warranted anger
04:19:31 haha
04:19:42 * RhodiumToad was responsible for the all caps
04:19:53 the cpus are CPU: Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz, 18 cores each, 36 for both, I'm guessing with HT it shows as 72 total cores
04:20:53 I am not a fan of HT and have benchmarked tasks where things are faster without HT than with it. But not file system tasks. So, don't know here.
04:21:15 rsync is using enough cpu that it might be the limiting factor, depending on the exact pattern of system calls it's doing
04:22:02 not in terms of total cpu usage obviously, since the box is >96% idle, but in terms of single thread usage
04:22:33 interesting
04:22:50 so if I start two in parallel it might show total transfer of 800MB/sec but it will be bound on different cpus
04:23:01 and then I can tell if the limit is IO or cpu
04:23:01 might be interesting to try a few completely independent rsyncs in parallel to see what the aggregate i/o is like
04:23:11 cool, let me try that
04:24:45 kevans: also I was especially peeved since this is the second time for lua 5.4 alone, though the first time I was keeping up enough to catch the problem in a release candidate and argue them out of actually releasing an ABI break
04:25:35 this time I was just too late :-(
04:25:50 https://bpa.st/JOXQS
04:25:51 Title: View paste JOXQS
04:25:55 yep, it seems the CPU is the limiting factor
04:26:00 two in parallel I hit 800MB/sec
04:26:21 well, either CPU or rsync single thread performance
04:27:11 last1, another thing to try is the --whole-file option (& --sockopts)
04:27:11 rsync isn't necessarily going to be the fastest tool for this, since it's written for a much more general use-case
04:27:51 parv: when the destination file doesn't already exist that should make no difference
04:28:02 since it'll do whole file mode then anyway
04:28:19 Right
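A sketch of the parallel experiment proposed above, with hypothetical file and pool names; gstat -p in a second terminal shows whether the aggregate pushes the disks any harder than a single copy does:

    # two completely independent rsyncs, one file each
    $ rsync --info=progress2 /ssdpool/big1 /hddpool/ &
    $ rsync --info=progress2 /ssdpool/big2 /hddpool/ &

    # in another terminal: per-disk busy% and throughput
    $ gstat -p

If each copy holds its single-stream rate, the bottleneck is per-process CPU rather than the disks.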
04:28:43 so what's the fastest way to copy a file then ?
04:29:24 weirdly enough, from another server I can hit 500MB/sec using scp
04:29:37 with: -c aes256-gcm@openssh.com
04:30:00 no I can't
04:31:08 Replace all the disks? I read about "Infiniband"
04:32:00 ha, weird stuff, check this out
04:32:04 (replacing the disks in terms of moving a set of disks from one enclosure to another)
04:32:19 from the backup server, I start a copy from remote, I get 300MB/sec using scp
04:32:34 from the remote server, I start a copy to the backup server, also using scp, I get 500MB/sec
04:33:12 why would it matter where I initiate the transfer from
04:33:35 So rsync is good for *incremental* transfer. And also for plain copying. But for plain copying just plain cp is also okay.
04:33:46 it may affect the distribution of encryption vs. read/write workload between threads
04:34:10 I might try a test using "pv" (a utility I like with nice progress indication) and then try a test without rsync.
04:34:25 I'm not sure how scp divides things up on the initiating end, but on the destination end I think it'll end up using separate processes for decrypt and write
04:34:27 RhodiumToad: that may indeed be the case, the source server has a more powerful cpu
04:34:30 last1, When you would chance, also try a "zfs send | mbuffer | zfs recv"
04:35:42 last1: again, if you investigate using top -CSHz you may be able to tell whether certain specific threads are a possible bottleneck
04:35:46 .oO( Heh that works even without "have" :-)
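A sketch of the suggested send/recv pipeline, with hypothetical pool and dataset names, assuming sysutils/mbuffer is installed; the buffer decouples the bursty producer (zfs send) from the consumer (zfs recv):

    # snapshot the source, then stream it block-by-block through a 1GB buffer
    $ zfs snapshot ssdpool/data@xfer
    $ zfs send ssdpool/data@xfer | mbuffer -s 128k -m 1G | zfs recv hddpool/data

Unlike cp or rsync, this streams the dataset at the block level rather than file by file, so it sidesteps the per-file syscall overhead discussed above.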
04:40:32 I guess what's needed is something like torrents
04:40:43 split the file automatically into x chunks and transfer those in parallel
04:42:33 RhodiumToad: oh yeah, totally get it... and roberto's responses have left a lot to be desired
04:43:24 very "no one cares" vibes, when it's the promise they made. you can't just de-weight some APIs like that because it's convenient
04:43:41 bad wording, some parts of the API
05:06:03 silly question, when copying a file from a different pool, even with cp, why does it show 99% cpu usage ?
05:06:14 isn't copying mostly IO ? what's the cpu doing ?
05:08:01 often the answer to that is "figuring out what to write where"
05:08:12 at least if the device has no DMA, then the processor needs to handle the transfer of each byte of I/O
05:08:32 as the raw i/o becomes faster, the software overheads become more of an issue
05:08:51 well, cp is about 3x faster than rsync
05:08:59 completes in 27 seconds
05:09:04 so roughly 1GB/sec I think
05:11:01 yeah that makes sense
05:11:03 I think in recent versions, cp uses copy_file_range
05:11:33 which may bypass some of the CPU cost of doing the obvious read/write loops
05:11:44 on ZFS, it is also dependent on fragmentation
05:11:45 yeah, and iirc rsync always forks an rsync receiver to copy to, even for a local copy; can't really optimize it in that way
05:12:11 would the zfs compression show up under the cp cpu usage ?
05:12:27 I don't think so, I'd expect to see that in a zfs thread
05:12:29 it is more disk iops
05:12:37 but my zfs knowledge is weak
05:12:50 check it with gstat
05:13:07 nerozero: last1's case involves a lot of disks and a high-speed controller
05:13:10 see disk "busy" status and ops/s
05:13:31 yeah, we already determined the issue is the cpu
05:13:40 RhodiumToad, sorry, just joined, don't see the previous conversation
05:13:43 maybe a faster cpu can do faster copies than 1GB/sec
05:14:01 ok
05:14:15 what are the CPU specs then ?
05:14:37 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
05:14:51 yeah
05:15:24 thing is, 36 cores is a lot of aggregate CPU power, but you're still limited in how much speed you can deliver to a single thread
05:16:09 so I'm hitting a pretty basic limit here, I can't imagine even with more powerful CPUs hitting more than 2-3GB/sec
05:16:27 probably not when you're bottlenecked by 1 thread, no
05:16:32 There are 18 cores, the rest is hyperthreading, whose performance is iffy
05:16:40 but still it is a lot
05:16:45 2 packages, I think?
05:16:49 it's 2 cpus, with HT it's 72 'cores'
05:17:06 and yeah, HT won't be helping at all in this context
05:17:42 hard to tell whether it's just not helping or whether it's hurting at all... I would guess it probably isn't hurting, but that'll depend on other factors
05:17:55 still thinking this is an issue of disk iops; in the case of zfs raid, I had the same issue after having a lot of volume-type zfs entries
05:18:13 we ran two rsyncs in parallel, got twice the speed
05:18:18 it's really a single thread speed issue
05:18:33 nerozero: we got gstat -p output before, the disks are a long way from being saturated on iops
05:18:47 so I wonder, how do big data companies transfer giant files that are like 100-200TB
05:18:53 if it's sequential it will take forever
05:19:24 by switching hard drives :D
05:19:30 heh, true
05:19:48 alas, it's been a good many years since I worked with anything considered "big"
05:19:53 like mirror pool - take half, you got a second copy :D
05:20:07 ;->
05:20:17 how do you know which to take though ?
05:20:33 it is a mirror, don't care :D
05:20:35 this time I'm taking photographs of each disk's serial # before putting it into its final bay for production
05:20:39 Mirror!
05:20:58 yeah, but the order they give you after isn't related to their position
05:21:13 though then I often had to transfer significant data volumes between San Jose and Amsterdam, and part of how that was done involved avoiding having single huge files and instead using larger numbers of moderate size files
05:21:35 https://bpa.st/NFCGC
05:21:36 Title: View paste NFCGC
05:22:01 ^^ now that is impressive
05:22:51 cheap man's backup server. 280TB total capacity
05:22:55 at this many drives the RAID controller itself can cause performance issues, bus speed limitations
05:24:33 my experience is limited to 10 drives max so... I'm a bad advisor here :(
05:24:56 the raid controller is fine, it can do 12gbps
05:25:09 it's an hba actually
05:25:27 ah wait, another click: 12gbps = 1.5GB/sec, which is pretty close to cp's speed
05:26:01 possible
05:26:24 plus some overhead
05:27:09 guess I have to look @ 24Gbps sas
05:27:32 Imho moving data by replacing a part of the drives and taking them offline is still a solution ...
05:28:28 this is an ongoing backup, I can't shuffle physical disks 10 times per day
05:28:39 but I'm happy with my answers
05:28:47 3-way mirror - not losing redundancy
05:28:48 rsync is cpu limited, cp is io limited
05:29:03 thank you everyone for your inputs & help
05:29:17 good luck !!!
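A back-of-the-envelope check on that last calculation, assuming SAS-3's 8b/10b line encoding (which the in-channel 1.5GB/sec figure ignores, since it simply divides by 8):

    12 Gbit/s line rate / 10 line bits per data byte  ~= 1.2 GB/s per SAS-3 lane
    typical x4 wide port between HBA and expander     ~= 4 x 1.2 = 4.8 GB/s

So ~1GB/s from cp is close to a single lane's worth of bandwidth, which is consistent with the single-thread bottleneck diagnosed above rather than a saturated wide port.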
12:02:55 In regular RAID 5, I know that drives can be hot swapped, which means a failed HDD can be removed and replaced without downtime. Is this also valid for ZFS/raidz1?
12:03:01 as RAIDZ1 = RAID 5
12:04:14 Is hot swapping possible with ZFS and raidz1? If so, would simply replacing the failed disk with the new one be enough? Or is some set of commands needed?
12:05:33 https://docs.freebsd.org/en/books/handbook/zfs/#zfs-quickstart
12:05:34 21.2.3
12:05:35 Title: Chapter 21. The Z File System (ZFS) | FreeBSD Documentation Portal
12:05:48 tercaL: ^
12:06:48 hotswapping needs zfsd(8), that you enable the autoreplace zpool property, and that the disks have an identifiable enclosure path
12:07:07 depending on what you mean by hotswapping, I guess
12:07:24 RAIDZ is roughly RAID5
12:07:38 debdrup: In cases like, getting a drive error, shutting the server down, replacing the disk and re-starting the OS.
12:07:38 if you mean actual electrical hotswapping, that needs a backplane with capacitors to handle the inrush current
12:08:00 tercaL: it's not just a question of software, though
12:08:11 Hmmm
12:08:49 electrically, there's a risk of arcing due to inrush when hotswapping
12:09:04 that's why you need a capacitor
12:09:38 as far as software goes, you can use zpool offline and zpool online to remove and re-add a disk
12:09:57 that's what zfsd can do automatically, using enclosure identifiers
12:12:01 as an example, in my SAS enclosure, if I pull a disk, and insert a new disk (usually bigger, but at least the same number of LBAs of the same size), my zpool will automatically begin the resilver process
12:12:37 this isn't really a good idea if you're booting from the pool though, because boot pools require partitioning
12:13:51 but yes, you can probably do what you want, it's just a question of how
12:14:23 if you can schedule downtime, that's absolutely the best idea, unless you're working with a backplane that's made for drive hotswapping
12:15:33 but zfs itself doesn't really care, so long as there are sufficient blocks, which is a question of array layout
12:27:22 auto resilver also will not work when you use geli encryption
12:28:00 true
12:28:03 but yeah, I have replaced 6 out of my 8 disks in the last year, without taking down my nas
12:28:56 using zfsd with autoreplace and autoexpand enabled does feel like magic, though
12:29:16 yeah, I should move to zfs encryption instead of geli on my home nas
12:29:37 i wouldn't.
12:29:51 https://docs.google.com/spreadsheets/d/1OfRSXibZ2nIE9DGK6swwBZXgXwdCPKgp4SbPZwTexCg/view
12:29:52 Title: OpenZFS open encryption bugs (public RO) - Google Sheets
12:30:17 ok, maybe I should stick with geli
12:30:19 thanks :)
12:32:58 debdrup: Very strong and super cool info, thanks. Final question: when I created a pool with "zpool create tank raidz1 da0 da1 da2 da3 da4 da5", the pool was created but the console had error output like "GEOM da0: the primary gpt table is corrupt or invalid." It printed the same messages for all the da1, da2, ... disks. Should I worry about this one?
12:33:39 and it continues: "using the secondary instead -- recovery strongly advised"
12:39:43 This helped: https://forums.freebsd.org/threads/corrupt-gpt-table.77585/post-483348
12:39:44 Title: Solved - Corrupt GPT Table | The FreeBSD Forums
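For reference, a sketch of both replacement paths discussed above, with hypothetical pool and device names (tank, da3):

    # manual route: take the disk out of service, swap it, resilver
    $ zpool offline tank da3
    # ... physically replace the disk ...
    $ zpool replace tank da3

    # automatic route: zfsd(8) reacts to enclosure slot events
    $ sysrc zfsd_enable="YES"
    $ service zfsd start
    $ zpool set autoreplace=on tank
    $ zpool set autoexpand=on tank   # optional: grow the pool once all disks are larger

With autoreplace on and zfsd running, inserting a new disk into the failed disk's slot should kick off the resilver without any commands, as described above.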
15:05:19 tercaL, since you have six drives I recommend raidz2 (effectively raid6) instead of raidz1 (raid5) as it reacts better in the presence of fault conditions.
15:24:28 Speaking of 13.1-RELEASE to 13.2-RELEASE, would you sooner update a base jail so that its respective thin jails update "automatically," or would you rather bootstrap a fresh base jail and edit the `fstab`s of the thin jails to mount the filesystem from the fresh base jail?
15:25:38 With constrained resources on a VPS, the latter is easier for me. It has worked thus far
15:28:09 I don't use base jails because storage space isn't the premium it used to be.
15:28:16 Base jails and thin jails, rather.
15:32:59 True. I appreciate the response. It's much better to hear that than "the two are not equivalent and you are asking for trouble."
15:34:21 There's another option, too.
15:34:39 VPSes tend to be overprovisioned (YMMV, I suppose). My issue is compute. I forgot to reboot prior to upgrading last night, and it took forever. Rebooting tends to get cores with much more headroom initially
15:35:13 Another option?
15:37:19 If you set it up so that third-party software configuration is handled via unionfs(5), instead of updating each jail sequentially, you can install a new jail, zfs snapshot then zfs clone that snapshot into a new dataset that's used for yet another jail where you install third-party software, then mount the configuration files into the jail.
15:38:05 That way, you're only spending diskspace for the third-party software, as zfs snapshot+clone will cause all the unmodified data to simply exist as referenced data.
15:38:59 It needs ZFS to work, and is fairly labour-intensive compared to simply running freebsd-update, but saves a couple of GB (or however much diskspace the base system takes up).
15:39:37 I suspect some/most of it could also be done via poudriere-image(8) using the zfs snapshot+boot environment feature, but I haven't looked into that yet to make sure.
15:40:28 It also requires careful control over the jails, so maybe it's more effort than it's worth.
15:41:38 It's mostly just something I've been thinking of playing around with, if I ever get the time/energy.
15:43:45 As far as tinkering goes, I like the idea. Skinning a cat in a new way helps to either cement prior knowledge or realize shortcomings. But, yeah, I think I lack the time (and need) for this type of setup.
15:44:12 Not sure I'm comfortable with that phrasing.
15:47:56 There's more than one way to knit a sweater...
15:48:21 Hi, if I allowlisted a dynamic IP via a dns name that's auto-updated, will pf still allow the new IP in?
15:49:19 like does it resolve my dns often?
15:49:47 Nameserver lookups should never be part of firewall configuration.
15:50:10 That risks a catch-22.
15:51:01 ok, what would be the best way to whitelist a dynamic IP that has a dns entry?
15:51:12 (I'm new to freebsd)
15:51:40 o/
15:52:01 how can I find the CPU and / or motherboard temperature ?
15:53:38 the only thing so far I have found is: sysctl hw.acpi.thermal.tz0.temperature
15:55:25 ha ... kldload coretemp, then sysctl -a | grep -ie temp
16:32:11 esselfe: ip range with cidr notation
16:32:44 do a whois on your ip to find out what range it's in, then figure out the vlsm
16:33:11 ok, thanks
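To expand on the pf question: pf resolves hostnames only when the ruleset is loaded, so a DNS name pointing at a dynamic IP will not track updates by itself. One commonly seen workaround, sketched here with a hypothetical hostname and accepting the DNS-in-firewall tradeoff flagged above, is a persistent table refreshed from cron:

    # /etc/pf.conf
    table <trusted> persist
    pass in quick proto tcp from <trusted> to any port ssh

    # root crontab: re-resolve every 5 minutes and swap the table contents
    */5 * * * * /sbin/pfctl -t trusted -T replace $(drill -Q home.example.net)

drill(1) ships in the FreeBSD base system; -Q is its quiet mode, printing only the answer rdata, i.e. the bare address.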
17:43:54 how can I install this? https://man.freebsd.org/cgi/man.cgi?query=ag&sektion=1&manpath=FreeBSD+13.2-RELEASE+and+Ports
17:43:55 Title: ag(1)
17:45:16 the "ag" command is not available on the system ...
17:45:53 freshports doesn't seem to have it listed either
17:47:23 found it under textproc/the_silver_searcher
17:47:27 https://www.freshports.org/textproc/the_silver_searcher/?branch=2022Q4
17:47:29 Title: FreshPorts -- textproc/the_silver_searcher: Code-searching tool similar to ack but faster
17:47:53 dvl :)
17:47:55 thanks
17:47:58 ^ ignore the branch, I blindly copy/pasted after searching for Silver Searcher
17:49:53 i'd be curious how ag performs compared to ripgrep
17:54:31 otis needs it for vim
18:46:27 they are all compatible.
19:22:41 Hej! Might anyone have ideas for this boot issue on a Ryzen 7 Pro-based ThinkPad? https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270707
19:22:44 Title: 270707 – Installer media doesn't boot on Thinkpad T14s Gen 3 (Ryzen 7 Pro 6850U)
19:23:00 Hi VimDiesel! I've missed you!
19:23:13 <3 <3
19:31:53 heh
20:00:47 VimDiesel, have you tried with ACPI disabled?
20:01:25 CrtxReavr: you mean michaeldexter
20:01:39 Um. . .. yeah.
20:01:44 whoops
20:04:57 I remember a painful period in the transition from PS/2 to USB devices. . . had to side-load a kernel with fdc(4) support removed.
20:16:03 ugh, ok the root issue of scli not working wasn't my jail or freebsd, it's that signal-cli is a year out of date and the ssl certs expired. signal is such a pita.
20:18:41 michaeldexter: try: set debug.acpi.disabled="acad cmbat" from the loader prompt
20:19:56 Trying that now... Thanks.
20:23:06 yuripv: That now stops earlier at psm0: model Generic PS/2 mouse, device ID 0
20:24:02 what if only cmbat in that line?
20:25:54 yuripv: That now stops at the next message, acpi_acad0...
20:29:54 yuripv: I tried including only "acad" but got the same result.
20:30:51 yuripv: One tap of the space bar gives: AcpiOsExecute: task queue not started
20:30:55 usually when boot hangs it's not the fault of whatever printed the last message, but rather the thing _after_ that, which of course you never see
20:40:35 hrmmm
20:40:47 do we have a fixed boot order?
20:40:55 no
20:41:01 well
20:42:28 maybe try disabling all the acpi modules, or just enabling them one at a time?
20:53:55 Trying that...
20:55:28 there's also a debug.acpi.avoid option to tell it not to try and parse parts of the acpi data, but figuring that out would probably require getting a dump of the acpi data from another OS
21:00:51 RhodiumToad: set hint.acpi.0.disabled="1" gives a panic: APIC: Could not find any APICs.
21:01:43 Has anyone had luck getting a FreeBSD desktop working on proxmox? With all the defaults, I can only achieve a maximum screen resolution of 1280x1024. Wondering if anyone has had success getting 1920x1080
21:04:49 michaeldexter: debug.acpi.disabled="all"
21:07:42 RhodiumToad thank you we got some progress! :D
21:08:33 RhodiumToad: Indeed! Now I get mountroot> with no disks, but I'll investigate further. Thank you both!
21:11:24 maybe try disabling "acad button cmbat cpu ec lid mwait quirks thermal timer video" and if you get anywhere with that, try selectively removing those until it breaks
21:14:02 Will do!
21:22:07 RhodiumToad: That stops at whatever is after psm0
21:22:10 But thank you!
21:40:25 michaeldexter: there are also debug output options in acpi(4)
21:41:07 could try that with ACPI_ALL_DRIVERS and ACPI_LV_ALL_EXCEPTIONS
22:07:29 I'll try that yuripv
22:22:01 yuripv: No luck with ACPI_ALL_DRIVERS and ACPI_LV_ALL_EXCEPTIONS together or individually, plus a few more from the manual page, but huge thanks!
22:45:20 michaeldexter: it does not print any additional information?
22:50:18 looks like ACPI_DEBUG is needed in the kernel config for that..
22:51:29 or there's also debug.acpi.enable_debug_objects (try setting this to 1 with the previous debug settings?)
23:35:12 yuripv: No, that did not provide any more debug information. I'll try debug.acpi.enable_debug_objects
23:37:19 Same behavior.
23:39:50 michaeldexter: add sysresource to the list I suggested
23:40:18 michaeldexter: if that doesn't help, try isa
23:40:46 the problem is that you likely need bus, children, pci and pci_link to get at important hardware
23:40:59 probably need isa too for that matter, but try it
23:43:31 Will do.
23:48:28 Just to confirm RhodiumToad: At the loader prompt: set debug.acpi.disabled="acad button cmbat cpu ec lid mwait quirks thermal timer video sysresource isa"
23:48:30 Same behavior
23:49:37 I guess I can build options ACPI_DEBUG into a kernel.
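A sketch of that final step: a custom kernel config with ACPI_DEBUG, then selecting output at the loader prompt with the acpi(4) debug tunables already tried above (the KERNCONF name ACPIDEBUG is arbitrary):

    # /usr/src/sys/amd64/conf/ACPIDEBUG
    include GENERIC
    ident   ACPIDEBUG
    options ACPI_DEBUG

    # build + install in one step
    $ cd /usr/src && make kernel KERNCONF=ACPIDEBUG

    # then at the loader prompt:
    set debug.acpi.layer="ACPI_ALL_DRIVERS"
    set debug.acpi.level="ACPI_LV_ALL_EXCEPTIONS"
    boot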