09:42:52 SKull, on the topic of Contabo, I'm curious --- which host did you go to from them?
09:43:31 I'm considering going back to Hetzner dedicated (despite having mixed experience with those ... but granted, the previous ones were from the server auction, not their "usual" offer), but I want to weigh my options first ^^
13:30:25 is there a reason to use freebsd?
13:30:59 yeah, better documentation
13:31:11 but I mean like is it good for a casual gamer?
13:32:16 I don't play games so I don't know
13:33:56 does someone play cs2 on freebsd?
13:38:40 which freebsd version is the most recent?
13:39:19 14.2 is the most recent release
13:39:44 https://www.freebsd.org/where/
13:40:26 <[tj]> sdr++ doesn't seem to be giving me any audio; is there some way to debug a libportaudio application and verify it thinks I have a sound interface?
14:57:14 does anyone use sixel graphics in tmux?
14:57:23 or sixel graphics at all?
15:10:14 "but I mean like is it good for casual gamer" -> the big game changer for this would be Steam client support (Proton). but Steam itself already has their own OS, and that would mean a truckload of developers porting daily changes...
15:24:32 I *LOVE* rebuilding cmake.
15:24:44 just wait until you get to rebuild rust
15:24:58 Oh, that's in the queue as well.
15:25:47 My hate for that isn't as well established though.
15:26:06 i suppose that depends on how many cores you have devoted to this
15:26:13 but a follow-up question: what's the rush?
15:26:25 just leave it running in tmux and check on it in a few hours
15:26:30 Today's box is quad core, with 'make -j6'
15:42:30 Any help for this lang/rust build failure?: https://bpa.st/I2JQ
15:42:51 Fresh portsnap.
16:25:11 Okay, so I have a src repo clone in /usr/src_head
16:25:28 How do I get a 13.4 tree in /usr/src/ ?
16:26:14 why do you want to do that? but you use git to checkout the tag
16:27:01 git checkout release/13.4.0
16:27:12 while in /usr/src?
16:29:44 CrtxReavr: Is there any reason you don't just clone the whole tree there?
16:30:13 My git chops are limited. . .
16:31:11 I don't entirely understand the "whole tree" vs. "branch" thing.
16:31:26 move /usr/src to /usr/src_ and move /usr/src_head to /usr/src. in /usr/src, git checkout release/13.4.0
16:31:30 Nor why I'd want the whole tree for one branch.
16:31:38 I'd guess /usr/src is empty right now?
16:31:56 I'm currently re-cloning to /usr/src
16:32:00 (I think.)
16:32:14 CrtxReavr: Well, that's a reasonable question to ask... because the whole thing is large with all the branches.
16:32:32 63%
16:32:57 78%
16:33:03 You can clone or pull just a single branch. But for the most part, if you have the disk space and this isn't a temp thing, it's easier to just pull the whole thing.
16:34:10 Now deltas...
16:34:30 * CrtxReavr is having a hate-freebsd day.
16:34:58 It's Friday.
16:35:23 I've been using it since 3.0, and so many things about it have become a pain in the ass.
16:35:52 Such as using git?
16:36:23 Specifically to manage src & ports, yes.
16:36:50 It's the same thing, just a different name.
16:36:53 And how more and more ports won't build outside of poudriere.
16:37:08 Well that's just smart.
16:37:14 I think the OS version churn has gotten a bit out of hand too.
16:40:47 11:37 < skered> Well that's just smart.
16:40:59 It's called broken.
16:41:18 Are you using poudriere-devel?
17:34:48 Hi. Quick question: is download.freebsd.org having problems at the moment in certain regions? Just trying to diagnose my own server problems. I suspect they're on the server, but figured I'd check.
17:42:31 it can happen, but i don't think i saw any status update about it
17:45:52 Yeah, I figured. My host says the (physical) server my vserver is on is overburdened; they said they'd move it, but nothing so far ...
17:46:19 Having all sorts of weird errors.
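The single-branch clone mentioned at 16:33:03 can be sketched as below. To keep the sketch self-contained it uses a throwaway local repository as the "upstream"; for real FreeBSD src you would clone https://git.freebsd.org/src.git, where releng/13.4 is the 13.4 maintenance branch (all paths here are hypothetical):

```shell
set -e
tmp=$(mktemp -d)

# Fake "upstream" with a main branch and a release branch
# (stands in for https://git.freebsd.org/src.git)
git init -q -b main "$tmp/upstream"
cd "$tmp/upstream"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch releng/13.4

# Clone only the release branch, not the whole tree
git clone -q --single-branch --branch releng/13.4 \
    "$tmp/upstream" "$tmp/src"

cd "$tmp/src"
git rev-parse --abbrev-ref HEAD   # prints: releng/13.4
```

A `--single-branch` clone downloads far less history than the full tree, which is the trade-off being discussed: the full clone is larger but lets you check out any branch or tag later without re-fetching.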
At some point, a jail reverted to a previous state (and *not* a read-only jail!), and the OS went back to 14.0 for a few moments?!
17:46:32 And yet, zpool scrub says there are no errors.
18:04:31 Did you (or they) boot into a previous BE?
18:19:24 Not on purpose. I did do a `reboot` without params, though.
18:32:22 `.
18:32:28 oops :)
18:32:33 A very good point.
18:32:59 was meant to kill an ssh connection, but i pressed it in the wrong order
18:48:47 ...
18:49:08 I'm starting to suspect more than 1 server is running on the same IP. I literally shut down nginx, but the webpage is still working *and* it's showing the wrong version of it.
18:49:38 Also explains all the network errors (IP --- and possibly MAC --- conflict)
18:59:23 YES. ACTUALLY.
18:59:35 I did `ssh <...>` twice, same exact username/IP.
18:59:39 One's reporting 14.1, the other 14.2.
19:00:04 *My host somehow managed to run two instances of my VPS on the same IP.*
19:03:27 Oh boy, it gets better. I suspect I might have *three* servers running. Same MAC, too.
19:03:32 This is actually impressive.
19:06:45 Damn buildworld takes a long-ass time these days.
19:07:53 yuripv: FTR ^
19:08:18 I suspect shutting one down myself might help things, but at this point, I'm going to wait and see what the host does. For science and entertainment. And I'm going to *actually* go grab popcorn.
19:14:36 Also SKull: Remember how you said you were on Contabo, and I mentioned I was too?
19:14:41 NOT FOR LONG.
19:30:07 how so
19:34:10 HER: They're the ones who did this fuckup.
19:34:19 The VPS host.
19:39:57 eh
19:48:20 hah, 3 vms at the same mac?
19:48:48 i bet if you have a proxy at the front you can do it internally
19:48:53 but meh
20:03:57 clang takes a long-ass time to build.
20:05:02 i thought so too until i had to build chromium :)
20:05:02 CrtxReavr: what?
20:05:07 CrtxReavr: do you add everything?
20:05:36 CrtxReavr: wait. building clang, or building by clang?
20:07:33 Both, I would assume.
20:10:40 CrtxReavr: starting from the build of llvm, you can cut build time by removing features you won't use
20:11:56 Definitely going to use the build chain.
20:16:30 ketas: 3 VMs with distinct filesystems running interactive services like forums. This is gonna be fun when they sort it out.
20:17:08 To be clear, I'm only renting a VPS from them; the actual physical server is not my own.
20:17:15 So this is definitely a fuckup on their end.
20:18:13 if they virtualize l2, i actually see no issues with 3 macs
20:18:29 I don't know how they virtualize, but it's also 1 IP leading to all of them.
20:18:40 1 mac 3 times, that is
20:18:56 And whatever they're doing, the IP and/or MAC issue is causing outgoing TCP to fail to connect properly, etc. Presumably because of a confused router.
20:19:01 why 1 ip?
20:19:09 Because it's supposed to be a single server.
20:19:19 I'm not renting 3, I'm renting *1* that they somehow cloned twice.
20:19:29 And all 3 are running.
20:19:33 FTR, this has been going on for days by now. But it's only today that I realized what's happening.
20:20:07 port 22 on 1 ip connecting to 3 different machines is easy to do in pf if one wants
20:20:10 :p
20:20:27 Yeah, it's kinda what's happening.
20:20:45 I did `ssh <...>` twice, ended up on two machines. And I have reason to believe there's a third (or was; I can't seem to reach it anymore via ssh)
20:20:48 how the vps provider ended up with this, no idea
20:20:50 (but it could just be luck of the draw)
20:21:43 i mean if everything in your host is virtual, it's just up to the kernel where the data goes
20:22:13 I think they wanted to transfer my server to a different physical machine (because they said they would), and someone fucked up royally.
20:22:17 just strange
20:22:29 BTW, I informed them of the situation.
20:22:51 All I got back is basically a "your domain only has a single A record pointing to , it's a single IP, you're wrong"
20:22:57 Yes, I know it's a single IP. That's my bloody point!
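The LLVM trimming suggested at 20:10:40 is normally done with build knobs in /etc/src.conf rather than by patching the port or tree; a minimal sketch (knob availability varies by release, so verify each name against src.conf(5) before relying on it):

```
# /etc/src.conf -- sketch; verify these knobs against src.conf(5)
WITHOUT_LLVM_TARGET_ALL=yes   # build only the native LLVM backend
WITHOUT_LLDB=yes              # skip the lldb debugger
WITHOUT_TESTS=yes             # skip building the test suite
```

Dropping the non-native LLVM backends alone removes a large fraction of the compile time, which matters on a quad-core box like the one doing 'make -j6' above.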
20:23:08 hahaha, it would be fun if they accidentally transferred your vm onto the same physical machine
20:23:24 and fun if their backend allows it
20:23:28 That was my last interaction with their support. I gave up trying to help at that point.
20:23:50 And it's definitely two different machines; I'm still SSH'd into both ^^
20:23:51 and the hypervisor thinks it's 3 load-balanced hosts
20:23:55 :)
20:24:08 Yeah, or similar.
20:24:14 it's a valid setup
20:24:20 But I don't think it's that. I think the hypervisor is almost 100% confused.
20:24:30 Because a hypervisor that is just doing load balancing would properly handle outgoing connections, methinks.
20:24:49 connection is not going out?
20:25:04 Not reliably. Anything external is super-unreliable, presumably because of a confused router somewhere.
20:25:23 I'll do `ping <...>` and it'll work without a hiccup ... I'll stop it, try it again, and it'll fail.
20:25:59 Same for DNS lookups, downloads (e.g. `pkg` and whatnot), all of it.
20:26:01 can't split icmp echo reply
20:26:08 well you could
20:26:16 just send it to every vm
20:26:28 it's virtual after all
20:26:39 I just tried `host google.com`, 1 second apart. First failed, second succeeded.
20:26:44 lol
20:26:51 that'll cause ping to crap out i think
20:26:58 "fuck i'm outta here"
20:27:15 It's ... I think this is the worst network-related fuckup I've ever seen. At least as a customer.
20:27:28 it failed because it was rr'd to the wrong vm!
20:27:31 :p
20:27:33 Fortunately it's only a personal server. Well, mostly (I host a UT2004 mod and a few other things)
20:27:47 So I'm just sitting here, being ... both amused and bewildered.
20:27:53 And, uh, looking at other hosts' offerings <_<
20:28:07 well, ask, or put it onto another host?
20:28:16 or ask for another ip
20:28:32 Asking for another IP is pointless; they don't even realize there's 2+ servers running.
20:28:36 (despite me telling them)
20:28:43 i mean if you pay for 3
20:28:55 well they should check it
20:28:58 I pay for 1.
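For reference, the pf trick mentioned at 20:20:07 (one public port 22 fanned out round-robin to several backends, which would also explain connections "rr'd to the wrong vm") would look roughly like this in pf.conf; the interface name and backend addresses are made-up placeholders:

```
# /etc/pf.conf sketch -- interface and addresses are hypothetical
ext_if = "vtnet0"
table <ssh_backends> { 10.0.0.11, 10.0.0.12, 10.0.0.13 }

# Redirect inbound ssh on the external IP to one of three
# backends, rotating between them per connection
rdr pass on $ext_if proto tcp from any to ($ext_if) port 22 \
    -> <ssh_backends> round-robin
```

This is a deliberate load-balancing setup, though; per the conversation, what the host appears to have is the accidental version of it, with return traffic from three identical VMs confusing the routing.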
20:29:03 Like I said, they cloned it twice by accident.
20:29:10 3 for the price of 1, I guess.
20:29:47 that should be a helpdesk issue, or else forwarded to admins to fix and disallow in future
20:30:03 I had *nothing* to do with this (except reporting the connection issue [I didn't realize the root cause at the time, I just reported a problem with outgoing connections])
20:30:19 free 2 extra machines
20:30:28 Yeah. I told them exactly what's happening, and they're ... incompetent. That's why I gave up.
20:30:36 just takes 100 tries to ssh into 3
20:30:39 :)
20:30:52 Told them there's 2+ machines with the same IP and mac, and that `ssh foo⊙ec<...>`
20:31:08 ... sorry, premature enterkeyation. And that `ssh foo⊙ec` connected me to two different ones.
20:31:11 unsure why they don't believe you
20:31:21 vms do it
20:31:32 Response (quote+redaction): "I have checked domain <...> multiple times and the only IP address that was returned was your <...>."
20:31:49 Gee, thanks. Yes, running multiple VMs by accident is totally a DNS problem.
20:32:00 Not sure if I mentioned, but --- this has been going on for *DAYS* by now.
20:32:07 that should be escalated for admins to check
20:32:12 It should be, yeah.
20:32:45 But at this point, I'm just ... I wasn't joking when I said I was gonna get popcorn. I was eating it while looking at my two ssh sessions.
20:33:20 besides, a vps service helpdesk should be aware that hypervisors can easily do this, and not say it's impossible
20:33:29 i would be curious at least
20:34:28 They didn't so much tell me it's impossible as they ... completely misunderstood my statement (which was pretty clear, I think) of IP & MAC conflicts as a DNS issue with my domain setup (which isn't even hosted by them)
20:34:35 set up outgoing ssh tunnels to get reliable access
20:34:44 Well, that, and basically dismissed me and told me to wait while they work on it.
20:34:47 and enjoy the free 3 in 1
20:34:52 :)
20:35:03 Free 3 in 1 is neat and all, but this was mainly for webservers!
20:36:49 oh, vms are fun; you could have a machine move between hosts and even run on two machines at the same time
20:37:03 just this case is a bug
20:37:19 I'm not sure what it is, except hilariously bad.
20:38:19 depends on config; they might have put it onto another physical machine too
20:38:27 it's just the same ip
20:38:40 Yeah.
20:40:32 that's also not really new, it's called anycast
20:40:49 I mean, I know. But, uh. Well.
20:40:56 Not what I signed up for :D
20:41:02 cheap vps eh
20:41:10 no support, no price
20:41:17 "perfect"
20:42:17 lol
21:01:46 Anyway, gonna go for now. Will be back once the server's up & running <_<
21:02:04 (it also happens to be my ZNC host ..... on a side note, this explains why I couldn't get `DarkUranium` when I connected to znc there)
21:02:33 (which I just killed on both >_<)
21:34:06 >>> World build completed on Fri Dec 20 16:32:52 EST 2024
21:34:06 >>> World built in 17560 seconds, ncpu: 4, make -j6
21:36:30 4 hours, 52 minutes, & 40 seconds. . . I forget how old that box is sometimes.
21:36:43 For most things, it's plenty fast.
23:33:49 It usually takes around an hour here, give or take a few. On low-power hardware I try to stick to binary updates.
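As a quick check of the arithmetic above, the 17560 seconds reported by buildworld does match the quoted "4 hours, 52 minutes, & 40 seconds"; a minimal conversion:

```python
def hms(seconds: int) -> str:
    """Convert a second count to an h/m/s string."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m}m {s}s"

# Timing from the buildworld log above
print(hms(17560))  # 4h 52m 40s
```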