00:50:54 f451: patience young skywalker 00:57:50 heh 00:58:19 well semi seriously it's a lot of juice building rust over and over again 00:58:40 its very not-green 00:59:27 it takes 45min-1hr30 to build, 450W @240V 00:59:47 that's offset by how efficient its software is 00:59:54 total fake news 01:00:02 just for that one thing, and it's over and over and over again 01:00:16 other OSes don't do it, so why us? 01:00:53 when pkg distributes binaries and they're rust and super efficient, it's 1000x more green than the same node.js or python interpreted junk being run over and over and over x millions of users 01:01:02 how efficient it is is a strawman. it has to be built in the first place, even for one software that needs it 01:02:02 its not so much it needs to be built, it has to be built every time anything else is built. unlike python or llvm 01:02:20 its the number of times it needs to be built 01:02:43 multiplied by the energy required to build it 01:02:52 what are you saying has to be rebuilt? 01:03:06 im talking about rust 01:03:22 ya and what about it? 01:04:18 it takes a lot of energy to build, and for some reason the implementation on freebsd requires it to be rebuilt very frequently 01:05:28 i dont see either of those. it's a compiled lang, it has build time sure like any other. what does it mean rebuilt very frequently? 01:05:52 i have some rust that i build when i set up a new major version like 14.x then never recompile 01:08:02 well, if you build ports (not build -a, build -f portslist) every few days, because you need to keep on top of, say, updates, because you're running internet exposed services, you'll find that in almost every build, even if the rust version doesn't change, rust is rebuilt 01:08:28 observe for yourself 01:11:15 have a look at some posts about it in @hackers 01:11:23 k 01:13:00 doesn't mean anything to me though other than fbsd build infra needs more donations. 
whatever we can do to get more compiled rust pkgs into ppl's hardware, the bigger the power savings 01:13:23 the majority of power is in millions of users running software, not dozens of machines building it 01:15:46 you're forgetting those of us who need to build locally because the default options are not what is required, You're missing my point, also - that it requires itself to be rebuilt while it has not in itself changed version 01:16:09 or has its options changed 01:16:23 nothing else, llvm, gcc - requires this 01:16:29 problem in search of solution, easy as 01:17:03 i build locally too fwiw 01:22:53 is it not using makefile? 01:23:11 externalized header guards? 01:23:38 pragma once equivalent, whatever? 01:24:24 is the problem that rust doesn't have a stable abi yet? 01:25:00 rust? no idea. in a month, with poudriere builds running every 3 days, i can almost guarantee it'll need to be rebuilt every time, llvm maybe twice 01:25:11 if rebuilding the same thing results in changes to anything, you're doing it wrong 01:25:37 stable or not 01:25:43 so my question is really- why no stable abi under freebsd - yet other OSes don't seem to need this? 01:26:38 remove the me from it - others have reported similar. we're all wrong? 01:27:18 is that not a poudriere thing? where it is rather conservative with guessing at means to save rebuilds 01:27:41 traditionally ccache is used to workaround that wrt poudriere 01:28:25 i dunno if it's a poudriere thing or not. my poudriere uses ccache and sccache and sccache-overlay 01:29:16 all i know is rust rebuilds all the time with the same options and the same -f portslist 01:30:00 if it changed version number, something like that, i'd understand 01:30:01 im sure there's a solution. 
ill try to learn more about it 01:31:21 there was a thread in hackers@ about it a couple of months ago 01:32:21 hey, the zfs recordsize property on a dataset just limits at which point larger blobs are split across multiple records, but it doesn't result in files smaller than this limit having to fully allocate a single record of that size, right? 01:33:08 foxxx0: i thought it did, might be wrong though 01:33:10 f451: I mean, what branch of FreeBSD are you building for? 01:33:16 all 01:33:20 e.g. even with recordsize=128k a single 4k file won't allocate a full 128k record? (just 4k+metadata?) 01:33:27 if we're thinking of the same thread on -hackers@, that was a different issue 01:33:58 I'm pondering what recordsize to choose for my datasets. I've done a quick look to estimate distribution of filesizes and what not ... 01:34:00 "stable ABI" has nothing to do with how often we have to rebuild it 01:34:39 we have to rebuild it because either it was updated, or perhaps one of its dependencies, or maybe __FreeBSD_version bumped, etc 01:34:54 unfortunately all of my recordsize fio benchmarking this evening was a complete waste, since my cpu/geli is one heck of a bottleneck :D 01:35:48 kevans well pkgs aren't always fast to update. like grafana pkg is v10 but v11 been out for months. so just pin rust to a max frequency of updates like no more often than once a week? 01:35:55 i used to build compilers in freebsd. it was pretty fun. 
01:36:30 l00py: right now I look at lang/rust and it hasn't seen an update since late August, so it's not exactly updating itself very often 01:36:32 kevans: i'd like to map that cos ive seen too many times where the version is the same the os hasnt changed 01:36:34 probably one or more of its dependencies 01:36:50 i guess curl is a dependency of rust 01:36:56 yeah, curl's probably a big one 01:37:09 lemme check the last few builds 01:37:51 curl gets bumped at least once a month, but there's also cmake, ninja, pkgconf, and python 01:37:59 you're probably just getting really unlucky with a set like this 01:38:28 origin: lang/rust reason: textproc/bat 01:39:35 f451: it appears that statement indeed holds true. "If you want to store a 500KiB file on a dataset with recordsize=1M, it goes in a single 512KiB block. If you then store a 3KiB file in the same dataset, it gets stored in a single 4KiB block (assuming ashift<=12)." 01:39:48 yes 01:40:07 maybe this will motivate you to develop a feature such as suppression periods for ports and their dependencies such that they won't rebuild until a threshold d/t period is passed 01:40:10 so it matters more for regular changes to existing files, as each modification *then* needs to rewrite a *whole* record 01:40:10 postgresql is known for wanting 16k recordsize 01:40:33 can't rebuild what you didn't rebuild in the first place 01:40:39 foxxx0: yeah 01:41:13 since probably ~80% of my data are mostly "write once, read a couple of times later", with very little modifications, I should be fine just picking something between 128k and 1m recordsize 01:41:58 the_oz: ok if a port xyz needs gcc you're saying that gcc needs to be *rebuilt* in order to facilitate making xyz? 01:42:41 if you already built gcc previously... 
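[editor's note] The recordsize behaviour quoted above (a small file still only allocates a small block, even on a dataset with a large recordsize) can be checked directly. A sketch, with the dataset name tank/data as a placeholder:

```shell
# Show the recordsize property on a dataset ("tank/data" is a made-up name):
zfs get recordsize tank/data

# Write a 3 KiB file; even with recordsize=1M this should allocate only a
# small block (~4 KiB with ashift<=12), not a full record:
dd if=/dev/random of=/tank/data/small.bin bs=3k count=1
sync; sleep 5                     # give the ZFS txg a moment to commit

du -h  /tank/data/small.bin       # size actually allocated on disk
du -Ah /tank/data/small.bin       # apparent (logical) size, FreeBSD du -A
```

If the allocated size from the first `du` is close to the apparent size rather than a full record, the statement holds on your pool.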
01:43:09 remember that rust is already known, and in poudriere 01:43:09 I do not know the particulars of the problem, it isn't mine 01:44:09 which is my point - why did rust need to be rebuilt to facilitate updating bat? 01:45:32 foxxx0: if you're using postgres you'll need recordsize 16k 01:45:49 i dont know about other db 01:46:15 mysql/innodb wants 16k and postgres 8k from what I could find 01:46:32 something about write amplification etc, others know a lot more 01:46:42 I'm more curious now why nvme-cli doesn't find my drives 01:46:48 is nvme-cli not supported on freebsd? 01:47:01 what is it ;) 01:47:16 at the end of unattended bsdinstall, the debug log pops up and it says at the bottom "cannot unmount /mnt/tmp: pool or dataset is busy". so i added 'zfs unmount -f zroot/tmp' to my installerconfig, and now the debug log says "rm: /mnt/tmp/installscript: no such file or dir" so what's the right solution pls? 01:47:35 generic utility for interacting with nvmes, get+set controller/namespace properties, update firmware, etc. 
01:47:53 i dunno, ill look in ports 01:47:57 https://man.freebsd.org/cgi/man.cgi?query=nvmecontrol&sektion=8&format=html 01:48:05 it was available in package-ng but "nvme list" doesn't return anything :D 01:48:31 /usr/ports/sysutils/nvme-cli 01:49:20 ah well, the "proper" freebsd utility would likely be "nvmecontrol" 01:49:40 since that nvme-cli appears to be ported from linux 01:50:18 ive never used it, on my builder have ssds on pci 01:50:50 2x cards, zfs stripe 01:51:14 I've just put 4x 7.68tb micron 7300 pro into my new NAS 01:51:24 generic kernel has nvme stuff in 01:51:36 nice 01:51:55 but I'm struggling with geli+raidz1 performance, I'm *severely* cpu-constrained 01:52:18 geli will do that 01:52:22 I was fully aware of that, but I'd still have thought I would be able to push more than ~1GiB/s through to a raidz1 01:52:28 particularly geli swap 01:52:48 unfortunately not, it seems geli maxes out around 4x ~600MiB/s streams for me 01:53:29 have crypto extensions enabled in hw? 01:53:34 and loaded in fs 01:53:54 geli reports accelerated-software correctly, dmesg states AESNI available 01:53:57 so I'm guessing yes 01:54:16 f451: sounds more like you're looking at the reason it's needing to build rust in the first place, not the reason that it needs to rebuild rust for this build to continue 01:55:11 I have aesni_load="YES" and geom_eli_load="YES" early in my /boot/loader.conf, it appears to be working as intended, it's just not ... performing all that well 01:56:44 on my storage cluster i get 120-200MB/s with zfs but thats also with a hp-something card in a mirror config 01:57:23 well *each* of these 4 NVMEs is capable of ~3.4 GiB/s read and ~2.5 GiB/s writes. 01:59:35 kevans: rust is already there - by 'there' i mean in its pkg collection. 
sometimes poudriere checks shlibs and doesn't have to build it - but thats a recent development and it's afaict only with latest poudriere-devel 02:00:00 foxxx0, sounds like either aesni isn't being used properly, or something else in the setup is chewing up the cpu. I've got 2 arrays of 4x drives raid-z2, and I can get 500-600MB/s on each one, on a 10 year old opteron CPU at 1.4GHz 02:00:17 f451: yes, and it will get removed if any of its dependencies, or their dependencies, so on and so forth, get a version bump 02:00:18 with maybe 30-40% cpu usage? 02:00:45 well I can get 500-600 MiB/s per drive, which, when combined into a 4-drive raidz1, results in ~1GiB/s usable write performance 02:00:58 f451: your real complaint here is that poudriere is overly aggressive on this, but there is a very good reason for that 02:01:13 yeah? 02:01:18 reads on that array go up to ~2.4 GiB/s, that is okay-ish 02:01:36 the very good reason is that we suck at handling PORTREVISION bumps of reverse deps when we need to :-) 02:01:37 just the writes appear notably slower 02:01:50 kevans: LOL 02:02:02 it could be less aggressive, but then we'd run into problems with shit going weird because people suck 02:02:37 some of it could be fixed by pkg handling shlib dependencies better, I think, but that's only part of the story 02:02:53 just trying to stop rust aggressively chewing my leccy bill 02:02:56 there are non-shlib ways to wreck your reverse deps' day 02:02:56 foxxx0, out of curiosity, are you using GELI above or below zfs? 02:03:11 blockdev -> geli -> raidz1 02:03:17 so zfs *on top* of geli 02:03:27 ok cool 02:03:32 i have one geli provider per blockdev 02:03:41 that seems fine then 02:03:43 which appears to be running .... 6 threads each? 02:03:53 (judging by ps -auxH | grep g_eli) 02:04:19 I haven't used nvme on freebsd yet, so I'm wondering if something is up with the kernel support of those? 
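[editor's note] A quick sanity-check sketch for the AES-NI/geli situation being debugged above. The expected "accelerated software" string is taken from the conversation itself; exact output will vary by release:

```shell
# Is the AES-NI kernel module actually loaded?
kldstat -m aesni

# Does each geli provider report accelerated crypto? Expect a line like
# "Crypto: accelerated software" when aesni is in use.
geli list | grep -E 'Geom name|Crypto'

# Count geli worker threads (roughly one per core per provider);
# the [g] trick keeps grep from matching itself:
ps -auxH | grep -c '[g]_eli'
```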
02:04:43 though if you've found it is the geli threads using up cpu cycles, maybe not 02:05:44 what CPU + memory config is this using? 02:06:06 3cores/6threads of a ryzen 5600 with 24G memory 02:06:46 casual spreadsheet drop: https://docs.google.com/spreadsheets/d/1pdu_X2tR4ztF6_HLtJ-Dc4ZcwUdt6fkCjpnXxAEFlyA/edit?usp=sharing 02:06:58 it is a VM though. I've tried my best to isolate the memory and cpu cores for this VM, but there might still be some interference 02:07:15 yeah I wonder if theres some weird CPU pinning issue going on? 02:07:27 (V)CPUs are pinned 02:07:43 and I'm able to see the cores are only utilized by this VM, not even interrupts arrive on them 02:07:47 maybe try allocating physical cores to the VM, even if you drop it to 3 02:08:07 mh, excluding the hyperthreading siblings you mean? 02:08:09 otherwise the scheduler in the VM might be mixing up the SMT with the host 02:08:27 that's an easy test, let me check 02:12:17 now I only have 3 threads per geli provider, which matches the new cpu-corecount, so it does automatically scale that to all available cores on SMT systems 02:16:25 nope, didn't help 02:21:42 at the end of unattended bsdinstall, the debug log pops up and it says at the bottom "cannot unmount /mnt/tmp: pool or dataset is busy". so i added 'zfs unmount -f zroot/tmp' to my installerconfig, and now the debug log says "rm: /mnt/tmp/installscript: no such file or dir" so what's the right solution pls? 03:00:01 hrmpf, I've recreated the zpool without any geli involvement, still only getting ~1GiB/s throughput on writes 03:00:26 the bottleneck is actually not geli. 03:08:38 ah well, the NVMEs were stuck in the lowest power state m( 03:09:21 5.2 GiB/s writes, that's more like it. let's see what happens when I re-add geli inbetween 03:09:46 how'd you get them in a higher power state? 
03:10:09 nvmecontrol power -p 0 nvme0 ; repeat for nvme1,2,3 03:10:25 unfortunately it doesn't really indicate what state they are in 03:10:53 how did you benchmark their 5.2g/s write speed? i wanna test mine now 03:11:27 my work-in-progress thingy: https://paste.foxxx0.de/vrY/ 03:11:31 take what you need 03:12:08 i'm still editing a bunch of stuff ondemand, currently log output to file is commented out as you might notice 03:22:11 foxxx0: 5.2GiB/s ? are those on raid ? 03:31:55 HER: yup, raidz1 with 4 drives 03:32:53 just slightly overkill for a personal NAS :D 03:33:36 I'm gonna re-run my benchmarks without and with geli now 03:34:06 foxxx0: how fast can you write to a single one ? 03:34:13 without raid 03:34:19 5.2/4 ? 03:34:21 but I'm probably just going to put the NVMEs back into the lowest powerstate afterwards, I only have 10G network anyways, so 1 GiB/s writes and ~3 GiB/s reads are already more than enough 03:34:33 single drive is ~3.4 GiB/s read, ~2.5 GiB/s write 03:34:46 foxxx0: ok 03:34:52 I'm pretty sure I'm cpu-limited now 03:35:20 foxxx0: strange, i cant seem to pass +- 400mb writes in a single one 03:35:54 HER: what make/model/size of drive? 03:38:19 foxxx0: its a corsair supposed to be 3000MB/s but it never passes 400MB/s =p 03:39:16 it all depends on the parameters, blocksize, queue-depth, sequential/random, etc. 03:42:41 ahh 03:42:58 foxxx0: what do you use ? 
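[editor's note] The power-state fix above, sketched out for all four drives. The flag spelling is per nvmecontrol(8) as I understand it and may differ on older FreeBSD releases:

```shell
# Force each drive into power state 0 (highest performance), as done above:
for d in nvme0 nvme1 nvme2 nvme3; do
    nvmecontrol power -p 0 "$d"
done

# Read the current power state back, and list the states the controller
# supports with their wattage:
nvmecontrol power nvme0
nvmecontrol power -l nvme0
```

Note that state 0 draws the most power; dropping back to a lower state afterwards (as foxxx0 plans below) is just `nvmecontrol power -p N` with a higher N.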
i used 4k 03:44:16 the numbers above just now were on a dataset with 128k recordsize and 1M blocksize sequential for fio 03:45:04 nice, thanks for the info foxxx0 03:46:52 foxxx0, so it looks like the cpu "usage" of geli was actually in a state of iowait then 03:51:14 edenist: yep, I was also a bit irritated by the swings in throughput, occasionally looked like it would hiccup and almost stall but then crawl right back with a huge spike in throughput 03:51:38 something definitely looked off, which is precisely why I'm doing all these benchmarks to figure out what's happening 03:53:04 once I've collected all plaintext benchmarks with the highest powerstate, I'll probably repeat those for 2 or 3 additional (lower) power states. and then re-run everything with geli enabled 07:59:48 at the end of unattended bsdinstall, the debug log pops up and it says at the bottom "cannot unmount /mnt/tmp: pool or dataset is busy". so i added 'zfs unmount -f zroot/tmp' to my installerconfig, and now the debug log says "rm: /mnt/tmp/installscript: no such file or dir" so what's the right solution pls? 08:42:21 why is your installscript in /mnt/tmp and not in /tmp? 08:43:03 I mean, if your install is created in /mnt, you do not want to run anything from /mnt 08:44:44 i didn't put it there so if there's a bug it's in bsdinstall 09:02:58 btw i tried at end of installerconfig ls -la /tmp then ls -la /mnt/tmp. /tmp has installscript and a few other things in it but /mnt/tmp says no such file or dir?? 09:03:25 so if there is no /mnt/tmp to ls -la, why do i get the error "cannot unmount /mnt/tmp: pool or dataset is busy? 09:20:05 fs is busy if it has open files from it -- either it is current working directory for some program (so the dir itself is open), or there are currently open files from it, the files can be already unlinked - in that case ls will return empty, but the files are still there and removed when they are closed. 10:00:01 ok so what's the solution for me? 
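[editor's note] For reference, a fio invocation roughly matching the parameters foxxx0 mentioned earlier (1M sequential blocks against a 128k-recordsize dataset) might look like the following. The directory path, file size, and job count are placeholders, not taken from the linked paste:

```shell
# Sequential 1M writes, four parallel jobs to roughly load a 4-drive raidz1;
# run against a directory on the dataset under test:
fio --name=seqwrite --directory=/tank/bench \
    --rw=write --bs=1M --size=4g --numjobs=4 \
    --ioengine=posixaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting
```

Swap `--rw=write` for `--rw=read` or `--rw=randwrite` (with a smaller `--bs`) to probe the other cases discussed above.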
10:42:15 back, so what's the solution? 10:48:10 it can't be normal to just have bsdinstalls fail at the end with unable to unmount errors 11:33:41 no it is not 11:44:19 so what can i do? 11:59:27 make sure you file the bug, and rest depends on how much you can investigate the problem yourself. 12:05:30 how can i check if /mnt/tmp has been unmounted or not? i want to try running zfs unmount zroot/tmp and sleep # combos 12:48:50 from /etc/mnttab (or just df command;) 13:06:20 Question: while I know that FreeBSD supports Intel integrated graphics really well, how good is AMD integrated graphics? Like with Ryzen xxxxG chips? 13:12:15 Hi; does wg interface completely bypass the pf firewall? 13:14:13 if you want it to, if you configure pf to manage traffic on it I would assume not 13:16:44 I only have skip on lo0 but any traffic coming from wg0 is accepted if the server is listening on e.g., *:22 13:17:21 And I don't even see the connection on the PF states list 13:17:38 imm_: do you have "block in" ? 13:18:09 I have a block drop at the beginning and pass out at the end 13:19:49 But even if I have PF misconfigured and it passes the connection, I would've seen it in the states list in pfctl -s, right? 13:20:08 fastest thing might be putting your pf.conf on dpaste.org so we can see :) 13:30:10 imm_: the traffic is somehow bypassed, not tight rules 13:30:10 daemon: https://dpaste.org/dbCuT 13:30:10 And I can do this: nc -l 3333 and get a connection on wg 13:30:10 what is "block drop" 13:30:10 are you blocking host named "drop" ? 13:30:10 Isn't that the block drop all rule? 
13:30:10 drop being don't send ICMP reply 13:30:10 OK, it could work, but "drop in" is common solution 13:32:27 drop in works the same 13:32:28 I can access nc -l 3333 from wg remote 13:32:28 My assumption was that it should be blocked by PF or at least shown in states 13:32:28 nice, your rules are now more clear 13:32:28 if it's ignored then it won't be shown 13:32:30 Question: while I know that FreeBSD supports Intel integrated graphics really well, how good is AMD integrated graphics? Like with Ryzen xxxxG chips? 13:32:30 imm_: can you see and log this traffic with tcpdump ? 13:32:32 mzar: Yes, on wg, 14:31:08.707151 IP 10.13.0.33.22249 > 10.13.0.3.3333: Flags [P.], seq 1:5, ack 1, win 1035, options [nop,nop,TS val 3896215339 ecr 228481065], length 4 13:32:32 10.13* are wg IPs 13:32:32 OK, but I can't reproduce it 13:33:04 mzar: Maybe 13.3 is the reason? 13:38:05 mzar: The rules are now very simple, block in, pass ssh, pass wg, pass out; I see 3333 in sockstat but not in PF 13:39:23 imm_: I have no access to FreeBSD 13 to test it 13:42:29 mzar: I must be doing something wrong, I have no 'skip on lo0' but I can connect to any service on localhost 13:43:23 Is maybe skip on lo0 persistent even if I do pfctl -f? 13:44:54 imm_: put your rules in /etc/pf.conf and begin with "service pf onestart" then proceed with mastering and learning pfctl(8) syntax 13:47:36 mzar: My bad, 'skip on *' is persistent even if you remove it from /etc/pf.conf and reload with pfctl -f 13:48:50 I put skip on wg0 when I had problems establishing the wg connection 13:49:01 So that's why it skipped the wg completely 13:49:26 I had to pfctl -F all, then it reloads properly 14:06:06 imm_: you should read manual and progress step by step 14:11:24 mzar: Thanks for the help 14:20:04 Question: while I know that FreeBSD supports Intel integrated graphics really well, how good is AMD integrated graphics? Like with Ryzen xxxxG chips? 
14:25:45 i got a 2080 working fine 14:27:10 l00py90: Just to make sure, we're talking about integrated graphics on Ryzen chips, not discrete graphics on Radeon chips, right? 14:27:52 Because 2080 doesn't sound like a Ryzen CPU. 14:38:35 I'm trying to use egrep to match upper & lowercase letters, numbers, the dash and the period and running into syntax issues: 14:38:40 $ jq -M . ucrm.clean.json | egrep -o [a-zA-Z0-9\-\.]*@ | sort -u 14:38:41 egrep: invalid character range 14:40:40 CrtxReavr: Is the shell trying to expand the * as a glob? 14:41:35 vkarlsen, don't think so - if I drop the ``\.`` portion, it works (minus matching the .) 14:42:17 probably the shell is escaping \- -> - before it gets to egrep 14:42:34 CrtxReavr: I get no matches found unless I put the whole regex inside ' '. I don't have your json data though 14:42:54 quoting aside, you probably want [a-zA-Z0-9.-] which is how you normally include - in a char range 14:43:18 * kevans prefers his dash at the beginning 14:44:55 [-a-zA-Z0-9.]* seems to work. 14:45:14 Also not supposed to be escaped inside of []s apparently. 14:45:31 I didn't know order was important 14:46:03 I was just told in #linux that to match - it must be first or last. 14:46:07 order's not important, it's just ambiguous 14:46:27 Since - is also a range character (a-z) 14:46:34 Ah, makes sense 15:36:39 Keep getting ignored... 15:39:08 i would argue that we probably just don't have as many folks around with that experience using amd igpu on freebsd 16:13:20 hmm, managed to make p9fs return 'invalid argument' for a particular directory for no apparent reason 16:14:11 ivy: congrats 16:16:21 Reading back and I cringe at the unquoted regular expression that included shell meta-characters. Then cringed again at the attempt to backslash quote characters inside of a bracketed character class, as character classes already quote characters and therefore those backslashes become part of the character set. 
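[editor's note] The dash-placement rule worked out above can be demonstrated with a small self-contained example (the sample input lines are made up, not from ucrm.clean.json):

```shell
# '-' sits first in the bracket expression so it is literal rather than a
# range operator, and the whole pattern is single-quoted so the shell never
# glob-expands the brackets or eats the backslashes:
printf 'alice@example.com\nbob.smith@host\n' | grep -Eo '[-a-zA-Z0-9.]*@'
# prints:
#   alice@
#   bob.smith@
```

Putting the dash last (`[a-zA-Z0-9.-]`) works equally well; only a dash between two other characters is ambiguous, because there it reads as a range like `a-z`.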
16:16:40 mzar: probably not a very impressive achievement considering this code was merged in somewhat unfinished state :-) i will file a bug though 17:10:52 remiliascarlet: in principle it should be the same as it uses the same architecture and driver 18:31:55 What's the best way to debug this freebsd-update error: "Fetching 2 metadata files... failed."? 19:04:42 turbo23: Sharing the whole command line might help. 19:05:06 tuaris: Sharing the whole command line might help. (Tabfailed that first one.) 19:12:21 Just `freebsd-update fetch` 19:13:42 truss shows these last system calls: fstatat(AT_FDCWD,"bc21c7281db44e7afade49a7aae9050f4603a2ed71629fbb3d452b9decbaf54d.gz",0x22ebb1ee8f8,0x0) ERR#2 'No such file or directory' 20:14:39 o 20:14:42 err 20:14:52 ld: error: undefined symbol: testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, testing::internal::CodeLocation, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*) 20:14:53 >>> referenced by zfsd_unittest.cc:236 (/data/build/src/freebsd/lf/main/cddl/usr.sbin/zfsd/tests/zfsd_unittest.cc:236) 20:15:03 i just saw someone else reporting this earlier and now i can't remember where 20:20:04 ah, https://lists.freebsd.org/archives/freebsd-current/2024-October/006591.html 21:42:50 at the end of unattended bsdinstall, the debug log pops up and it says at the bottom "cannot unmount /mnt/tmp: pool or dataset is busy". so i added 'zfs unmount -f zroot/tmp' to my installerconfig, and now the debug log says "rm: /mnt/tmp/installscript: no such file or dir" so what's the right solution pls? at end of installerconfig i tried ls -la 21:42:50 /tmp then ls -la /mnt/tmp. /tmp has installscript and a few other things in it but /mnt/tmp says no such file or dir. any ideas? 22:41:27 the unmount problem doesn't happen when i don't do some config that temporarily puts files in tmp. 
so maybe i need to rm that dir first 23:19:41 a hacky, ugly thing that sometimes works is just putting a sleep 3 into the script before the command that fails, as though there's some contention that has a moment to clear up 23:39:44 i have an old x201, it boots fine under nomadbsd 14.1... but something triggers terrible coil whine or something? when i boot linux it's more silent. any ideas? 23:41:08 I have an x201. Last booted FreeBSD 13. Over on the shelf and not booted since 14. I don't recall any whine from it. 23:41:42 Other than the fan I can't imagine what would be making a whining noise. Is the fan running full speed on yours? 23:42:21 it's not the fan spinning 23:42:30 it's a very electrical chirping 23:42:31 One problem I never worked out on my x201 in 13 was the suspend-resume regarding graphics. It resumed but upon resume graphics was a black screen. I could log into it again with ssh and it was alive but I never figured out how to wake up the graphics. 23:42:46 Do you have a spinning hard drive or an SSD in it? 23:42:47 ah, didnt try that yet 23:42:53 only usb currently 23:43:06 There is no internal drive installed in it? 23:43:27 only arrives tomorrow 23:43:29 A spinning hard drive might be head parking repeatedly and producing what might be a clicking or chirping noise. 23:44:04 it could be a low fi noise on the speakers but it seems to come from a different area 23:44:12 no hdd for sure ^^ 23:47:35 I pulled my x201 off the shelf and booted it up and unfortunately found that I have since then (Sep 2023) installed a different OS on it. 23:48:13 i just put latest nomadbsd on a stick, if you're willing to do that ^^ 23:48:14 It's good to boot things up every year or so however. 23:48:39 when i boot with acpi on graphics is garbled hrm 23:48:41 Sure. I can try nomadbsd on it. But you said it is okay with nomadbsd though, right? 23:48:52 no, linux is okay 23:49:08 so it's not the hardware per se 23:49:16 With nomadbsd you get a whining noise from it. 
Correct? 23:49:34 yes 23:49:44 as soon as it uses the stick? 23:50:00 but linux runs from a same model of stick 23:50:23 it sounds like an old line printer :D 23:50:26 really weird 23:50:57 Literally I have about 10 minutes as at the top of the hour I have to AFK for something. But I am always in this channel and you can reach me here later. Later might have to be tomorrow though. 23:51:28 i hope to get a ssd by tomorrow and will try a regular install 23:53:21 I don't recall it making noise with FreeBSD 13 kernel. Also other people reported having the suspend-resume work with the graphics okay. So I know that works. I just didn't have it working for me. 23:54:48 I am decompressing the current nomad image now. 23:55:01 coil whine is IO bound 23:55:35 I get it on SSDs when writing a lot to disc, at least on those laptops. 23:56:10 currently just a usb stick tho... 23:56:26 Also there is no x201 page but here is the x201i page: https://wiki.freebsd.org/Laptops/Thinkpad_X201i 23:59:03 That de-compress step took quite a few minutes. Writing it to a USB SD card now. Will try a quick boot of it.