08:34:22 recently I have installed an ARM64 vServer and I am seeing ping times to localhost in the range of >20ms ... whereas on a virtualized AMD64 it's more like 0.020ms
08:34:40 does anyone else see that problem? could it be specific to ARM64?
08:35:36 gustik: i've never seen that on native arm64, but i've never used virtualised arm64
08:37:58 yes, that's the clue
08:38:12 which virtualization technology?
08:38:20 maybe this virtualization is not that ready... I did not try this on Linux before I reinstalled
08:38:24 it seems to be KVM
08:38:30 NetCup is the provider
08:39:07 https://www.netcup.com/de/server/arm-server
08:39:12 they say it's KVM
08:39:40 and I see VirtIO (RedHat) on all devices, so my guess is it's that
08:40:45 looks like this:
08:40:49 ping localhost
08:40:49 PING(56=40+8+8 bytes) ::1 --> ::1
08:40:49 16 bytes from ::1, icmp_seq=0 hlim=64 time=0.137 ms
08:40:49 16 bytes from ::1, icmp_seq=1 hlim=64 time=26.078 ms
08:40:49 16 bytes from ::1, icmp_seq=2 hlim=64 time=10.273 ms
08:40:51 16 bytes from ::1, icmp_seq=3 hlim=64 time=14.480 ms
08:41:13 same on IPv4 localhost
08:41:44 I have tried to disable PF and run it as bare as possible now and it's still there, so it was there from the start I suppose
08:47:43 since it is not that stable, always jumping up and down, it may not be a bug but may be correlated to the load on the host
08:48:44 the second thing that came to my mind is, since this has 18 cores, it may be that it tries to distribute the ping/pongs among the cores and that the communication between them takes variable time
08:49:08 <[tj]> I don't think that should be a problem
08:54:50 me neither
08:54:57 that's why it is very strange
09:30:02 OK I SOLVED THE ISSUE
09:30:43 I rebooted with a control panel setting of ONE CORE ... effectively disabling SMP and it came up with one core (1CPU) and now the ping times to localhost are stable at 0.05 ms
09:31:03 <[tj]> huh, can you create a bug?
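A quick way to quantify the jitter described above is a small sketch like the following (not from the chat; it assumes a ping(8) whose replies contain `time=<ms> ms`, as in the output pasted earlier):

```shell
#!/bin/sh
# Sketch: ping localhost N times and summarize latency, to compare the
# SMP vs single-core configurations discussed above. The count default
# and the stats pipeline are illustrative choices, not from the chat.
COUNT=${1:-10}
ping -c "$COUNT" ::1 2>/dev/null |
  sed -n 's/.*time=\([0-9.]*\).*/\1/p' |
  awk '{ sum += $1; if ($1 > max) max = $1; n++ }
       END { if (n) printf "n=%d avg=%.3fms max=%.3fms\n", n, sum/n, max }'
```

Run once with all cores and once with one core; a large gap between `avg` and `max` on the SMP run matches the behaviour reported above.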
09:31:38 no
09:32:02 I think it has something to do with the speed it is transferring among the sockets
09:32:14 I will then set it back to 18 cores, but ALL on one socket
09:32:27 <[tj]> 14ms is a long time
09:32:55 <[tj]> ping 1.1.1.1
09:32:55 <[tj]> PING 1.1.1.1 (1.1.1.1): 56 data bytes
09:32:55 <[tj]> 64 bytes from 1.1.1.1: icmp_seq=0 ttl=56 time=543.336 ms
09:32:55 <[tj]> 64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=815.054 ms
09:32:55 <[tj]> 64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=19.799 ms
09:32:55 <[tj]> 64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=91.905 ms
09:33:13 <[tj]> I was about to say it's almost as long as my rtt to the internet, but there's something up with my ethernet here
09:33:25 because it seems that these hold-offs which the server is suffering (unexpected stop-and-goes) are due to the context switching among the packages
09:33:48 now with one core enabled I have a localhost worst case of 0.2 ms
09:34:09 OK, maybe it is a bug, but not a freebsd specific one
09:34:18 this seems to be a KVM or even hardware issue
09:34:27 with one core it works fine
09:34:31 <[tj]> ideally we would have a tool users could run
09:34:38 <[tj]> so you could point the finger at netcup
09:34:45 I am going to eat now
09:34:54 I will remember to check on it later
09:35:00 <[tj]> bon essen
09:35:03 yes
09:56:45 i really wish make update-packages could be faster
09:57:05 my last build: 52 seconds buildworld, 25 seconds buildkernel, 455 seconds update-packages
09:58:21 that's on 2x SATA SSD in a zpool mirror
11:05:28 is gcc for linux amd64 gone?
11:05:35 i remember using it in the past for cross compile
11:08:19 sopparus, amd64-*-* is an alias for x86_64-*-*
11:08:31 https://gcc.gnu.org/install/specific.html#x86-64-x-x
11:08:51 yes, but amd64-gcc14-14.1.0 Cross GNU Compiler 14 for FreeBSD/amd64 is the only thing i found
11:08:58 that is not for making linux binaries
11:08:59 right
11:09:00 ?
11:09:12 in the past there was this gnu-gcc-linux thingy
11:11:36 not sure, haven't cross compiled in decades.
11:12:08 a container with linux is easier
11:12:18 or a virtual machine
12:56:03 I have spotted an interesting glitch: the clock was set to the year 2225, and in this situation the autoboot delay during boot doesn't work, the counter will not decrease
12:56:37 tsoome_: ^^^^ - it's probably not any bug, and I am not going to submit any PR
12:57:16 aww.
12:57:27 bios or uefi boot btw?
12:57:36 UEFI
12:58:02 the counter is really about the time we get from firmware.
13:00:13 I wondered why one of 16 machines behaves this way until it had to update packages; https and certs will let you know when the clock is set 200 years in the future - so the mystery of the non-decreasing counter got demystified
13:10:01 Would it make sense for bectl to set a property indicating the current zfs/zpool version, and for zpool/zfs upgrade commands to check and warn if you have boot environments from older versions? Since a zpool upgrade will make older BEs useless..
13:12:06 you can still use them, especially data from them
13:12:14 Data yes, but you can't boot
13:12:28 Howdy Friends
13:12:40 sometimes you can rewind and boot, sometimes not
13:12:44 XFCE is a neat DE but it won't let me change the wallpaper
13:13:11 XFCE is one of the favourite DEs of FreeBSD users
13:13:25 if you run zpool upgrade, you also need to update your boot blocks, as simple as that
13:13:42 Well, zpool upgrade could be smart enough to see if the different features have been enabled/used, thereby rendering boot impossible.
13:13:46 i want to rebuild all noauto packages installed, but from ports, to suit my needs. I *know* poudriere is the de-facto standard tool for this but i do not want to use it. I just want to go through all the steps. What would be a way - maybe also building them in a dedicated jail?
13:14:22 No you don't, necessarily. And new bootblocks do not prevent booting older kernels.
Upgrading the zpool and enabling/using new features can.
13:14:44 Ltning: that's the sysadmin, who could be smart or not
13:15:56 we do not support booting with only a few features (draid for example). so, in general, you want to make sure your boot blocks are updated.
13:16:02 Ltning: you can always send this data to the other pool and boot it, or rewind to a checkpoint, which should allow you to boot
13:17:06 tsoome_: The boot blocks are not the issue here. Those are backwards compatible.
13:18:22 But I guess the consensus is that having that additional safety net is not worth it - whoever ends up making their pool incompatible with existing BEs only has themselves to blame and has to deal with it.
13:18:37 yep
13:20:15 I find that a bit harsh - we tend to have the safety on by default for most foot-cannons in the system, and this would seem like a sensible thing to check for given how actively BEs are touted as a rollback capability above and beyond mere snapshots.
13:20:50 the thing is, there is no way to know if you have somewhere a kernel with old zfs or not. Also, in general, there is little point to keep very old BEs around. So if you have *decided* you are going to upgrade your pool, you want to be sure you do not want to *boot* those old BEs any more.
13:22:10 I just proposed a way to know - set a property when a BE is created, indicating the current version/feature flags.
13:25:33 well, there is a compatibility property on the pool. if you want to keep yourself from shooting yourself in the leg, you set it.
13:26:02 see zpool-upgrade(8)
13:50:59 Ltning: can you clarify 'Since a zpool upgrade will make older BEs useless'?
13:51:49 it's not yet coffee o'clock so I'm not properly caffeinated, but that doesn't sound like something that's necessarily true
13:55:52 Yea, it was a bit of an absolutist statement. What I meant was that once you zpool upgrade, it becomes very easy to render old BEs useless for booting.
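The compatibility property mentioned above (see zpool-upgrade(8)) can act as exactly that seatbelt. A minimal sketch, illustrative only: `tank` is a placeholder pool name, and the feature-set name is one of the files shipped under /usr/share/zfs/compatibility.d, which may differ per release:

```shell
# Illustrative only: pin the pool to a known feature set so a later
# "zpool upgrade" cannot enable features that older boot environments'
# kernels (or the boot blocks) do not understand.
zpool set compatibility=openzfs-2.1-freebsd tank
zpool get compatibility tank
zpool upgrade tank   # now only enables features allowed by the setting
```

With the property set, `zpool upgrade` refuses to enable anything outside the named feature set, which keeps older BEs bootable until you deliberately lift the restriction.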
I'm not sure if zpool upgrade in recent times (since feature flags) has immediately rendered BEs with older kernels unbootable.
13:57:01 And while it is true that a sysadmin "should" know not to do this, it is also very easy to do inadvertently. And for a new or inexperienced sysadmin - or for someone playing with FreeBSD for the first time - it can be unexpected and unpleasant.
13:57:46 It would also allow for some extra safety in situations where upgrades are automated. We were just worried about whether opnsense upgrades automatically ran zpool upgrade; turns out they don't seem to be doing that, but it triggered the discussion.
14:00:33 Also, when is it not coffee o'clock? If it's irc o'clock it's coffee o'clock. Unless it's beer o'clock, of course.
14:09:28 well it starts when I finally get around to making some :-)
14:12:08 kevans: making some beer?
14:12:52 i also thought that "zpool upgrade will stop older BEs booting" seemed wrong because you'd have to update the loader, but it's true that older kernels might not be able to mount it then
14:13:05 which is different from the problem we usually run into, where people update the kernel without updating the loader
14:13:27 I, er, well, er
14:15:12 my specific problem with the statement was that it didn't occur to me what might be special about those older BEs that would make them specifically unbootable, but I get it now
14:15:37 the problem with encoding it as a prop is that it relies on a specific model of BE management that we don't really enforce
14:17:03 one where the kernel/zfs version is effectively immutable in that BE, but some folks also choose the other way of spinning off a new BE and upgrading the contents of *that*, then bootonce'ing it
14:21:49 Yea, that's the kind of use-case feedback I was hoping for
16:14:39 Ltning: don't get me wrong, though, I'm not opposed to some more seatbelts there - I'm just not sure of the best shape
16:15:28 i'd hate to add unnecessary roadblocks to one or the other management style when things are working as intended
16:41:55 i have installed python but there is no way to run it
16:42:14 anyone work with python on BSD, let me know
16:43:42 what do you mean "no way to run it"?
16:44:14 have you installed python?
16:44:39 yes, but how does that matter?
16:45:08 type python3 --version and give me the output, if that's cool with you
16:45:56 I don't see how that matters to your problem but Python 3.11.11
16:46:17 can you solve my problem? you can't even issue the command I asked you to
16:46:30 lol you're a real asshole
16:46:31 don't try to be a hero okay
16:47:02 <[tj]> Python 3.11.10
16:47:11 how about you read my message again?
16:47:13 okay
16:47:15 cool
16:47:37 I get this
16:47:38 root@yuki:~ # python3 --version
16:47:39 -su: python3: not found
16:47:39 root@yuki:~ #
16:47:58 <[tj]> I'm gonna say you haven't installed python3
16:48:08 okay
16:48:21 <[tj]> it might have been pulled in as a dependency, but you need to install it to use it
16:48:32 root@willow:/usr/local/etc # netstat -rn|wc -l
16:48:33 1199268
16:48:39 the routes, we has them. all of them
16:49:07 okay let me run it in spyder
16:50:09 spyder gives me the wrong result
16:50:21 2^3 should be 8 and it gives me 1
16:50:48 python3 is a package that basically installs the python3 symlink to the current default version of python3 (which is 3.11 currently in the ports tree); if you had stated your problem from the start I would have suggested trying python3.11 --version
16:51:56 okay nimaje thanks
16:52:03 it's working now I think
16:53:08 thanks, it's working
16:53:16 both in spyder and the command prompt
16:53:25 sorry I got a bit agitated
16:53:32 there's just too much on my plate
16:53:38 no hard feelings my dudes
16:54:40 I need to get PostgreSQL and MariaDB configured correctly, but maybe some other time
20:56:07 Anyone else seen OpenVPN creating a new tun device each time it runs?
21:01:34 I'm trying to copy a bhyve vm (done with vm-bhyve) to a different host.
I've tried creating (but not installing) the guest on the new host, then rsyncing over vm/, but it's not working. From what I've seen web searching, that seems to be the usual way to do it. I tried the vm-bhyve migrate command but got an error that it was unable to get the config from server2.
21:04:20 by not working, I mean I start it on the new host (it's a Rocky 9.x VM) and I get a bunch of error messages about services failing to start, and then it hangs on the Account service failing to start
21:10:07 smr54: so same rocky.conf on both hosts?
21:11:45 smr54 is the vm disk on zfs or a file?
21:11:53 hmm, tarfs could be the tech we need to have our own AppImage/App Bundle style distribution of executables for FreeBSD
21:12:53 How much of a performance hit is it?
21:12:58 why would we want that
21:13:20 Why wouldn't we? :)
21:13:42 got my answer, I'm running OpenVPN as non-root.
21:14:41 dvl: can the ending process not delete the old tun device?
21:15:50 rtprio: that is precisely the problem. https://dpaste.org/bbaAi - however, I've been running with this configuration for years, apart from changing from topology=Point-to-point to subnet.
21:45:35 rtprio yes
21:45:49 getz both are on zfs
21:47:55 Are you perhaps running on an amd zen3 processor?
21:49:58 getz yes. that is what started this. I had vms on an Intel box and got an SER5 Beelink. Wanted to move some vms to it. Found they wouldn't install on the AMD, so thought I could install on Intel then move the guest to the AMD
21:50:18 there was a glibc bug recently where running bhyve would cause an issue
21:50:27 are you able to upgrade the vm on the other host?
21:50:43 let me get you the bugzilla link
21:50:51 Yes, all works fine on the intel. I know about that bug. I don't need the link, I have it
21:51:31 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275760
21:52:10 Ok! Then you know how to fix it
21:52:54 I know the bug, but not how to fix it.
Unless this is a different bug
21:53:21 I thought kib's patch fixed it, let me have a look
21:53:49 otherwise you just need a newer glibc in the vm
21:54:18 I think so. And it might be that I have to wait for various Linux distributions to update it.
21:54:59 For example, I can install rocky 9.4 and successfully update it to 9.5, but if I try to install 9.5, it won't install
21:55:03 Florian merged the fix on the glibc patch a while ago and backported it, I thought it was fixed by now
21:56:04 Yeah I know, been there, done that :)
21:57:03 It's not a disaster, I have plenty of room on the old machine, I just wanted to take advantage of the greater memory on the new machine. But thank you for the input
22:03:34 hm, I can't find it upstream. Might just need a ping
22:03:59 Here's the patch https://inbox.sourceware.org/libc-alpha/87seqmrz3b.fsf⊙osrc/
22:08:23 smr54: I realized that we were talking about different bugs, I was referencing 279901
22:22:39 my apologies. I was going on something I got from the vm-bhyve site.
22:26:07 Yes, I had that bug bookmarked too (the one that you just gave me).
22:26:26 thanks again
22:26:45 time to eat so I'll bid you a goodnight
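On the bhyve copy question earlier in the log: since both hosts keep the guests on zfs, a zfs send/recv of the guest dataset is a commonly suggested alternative to rsync, so the disk arrives block-identical. A sketch, with the dataset path `zroot/vm/rocky9`, guest name `rocky9`, and host `server2` all as placeholder names (not from the chat):

```shell
# Illustrative only: copy a vm-bhyve guest between hosts with zfs
# send/recv instead of rsync. Stop the guest first so the snapshot
# captures a consistent on-disk state.
vm stop rocky9
zfs snapshot -r zroot/vm/rocky9@migrate
zfs send -R zroot/vm/rocky9@migrate | ssh server2 zfs recv -u zroot/vm/rocky9
# With a zfs-backed vm_dir, the guest's .conf lives inside its dataset,
# so the configuration travels with the disk image.
ssh server2 vm start rocky9
```

This sidesteps the partial-copy problems rsync can produce on a live guest, though it does not help with the separate glibc/Zen 3 bug discussed above.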