01:56:54 how does bhyve handle it if we oversubscribe cpu cores? let's say i have 16 cores and i give 8 cores to 3 vms. what happens?
02:40:30 dch: pretty good
02:40:45 demido: probably run into a spinlock issue between instances against CPU cores
02:42:11 varsis ya that's what i was thinking. like a contention fight. what about if i have 16 cores, can i give 8 to 1 vm and 8 to another vm, leaving 0 for the host system, and be ok?
02:43:50 na, you'll want to go n - 1
02:44:11 leave one core for the host which isn't contended with at all by guests
02:46:47 hm dang. ok i don't really know how to apportion cores varsis because when i divvy up cores, the system sits mostly idle except when a guest is doing lots of work. not sure how to word it better. so most of the time my vms sit idle so i could give them just a few cores. but then sometimes they get a surge of traffic and peg those few cores
02:47:21 so i either give vms few cores so there's no overlap but when they get work they peg their cores, or i give them more cores and there is overlap so they have more cores to use when they have work to chew through
03:25:03 is there anything wrong with giving an odd number of cores to bhyve vms?
03:25:16 demido: yeah, that's the issue with hard allocation of resources. Depending on your wants and needs, using alternative solutions (i.e. Jails, using bastille's optional resource limiting) would allow for a more dynamic allocation of resources (with limits).
03:26:43 demido: Nope. You may want to look at cores, sockets and threads, not just cores. I'm not a bhyve expert at all, though have used it a bit
03:27:41 so if i have a system with 64 cores and i want 8 equal vms, i can't give them each 8 cores even though they're rarely using even half of those 8 cores, because it would leave the host system with 0 dedicated cores?
03:28:45 Yes, there's a potential scenario of spinlock contention; you would need to leave one core for the host to avoid this scenario
03:29:26 ok, so if i wanted equal sized vms (doing the same workloads) i'd need to give 7 cores to each of the 8 vms, so 56 cores given to vms, and 8 cores left dedicated to the host system?
03:31:45 There's probably some finer tweaks/combinations using the other options (sockets, threads AND cores) that may allow for resource allocation closer to the 63-core limit you're looking at, though you would need to delve into the documentation around how those config options work. Otherwise, easy mode -> you leave resources to the host (as you outlined above)
03:32:15 Your guest VMs lose out on some resources though
03:32:35 ya i read the bhyve man page but it didn't elaborate too much on the package : socket : core : thread topic a whole lot
03:33:24 ok well i'll do what you suggest and assign <= n-1 to vms
03:33:27 ty
03:33:33 nps
03:33:51 feels weird to use an odd number of cores :P
03:33:51 there is a bhyve channel on this irc network btw
03:33:59 ya it's always dead tho seems like
03:34:04 ah, bummer :|
03:35:23 so when ppl use bhyve to run a vps company, and i buy a vps with 2 cores, that's 2 dedicated cores and they don't/can't oversubscribe?
03:37:11 From my understanding (which may be completely without merit), yes :D
03:37:40 have you oversubscribed vms before and actually seen it cause trouble?
03:38:23 Nope, that's my understanding of vCPU allocation and hypervisors in general
03:38:32 "Don't do it"
03:40:05 You can hang the host system completely
04:26:54 if i back a bhyve vm with a sparse file and write 100GB into the vm, it'll show du -hs on the host of like 120GB for the 100 + OS data. then if i delete that 100GB, and write another 100GB into the vm, it shows more than 120GB in the host system being used. does freed space in a sparse file not get reused/reclaimed and it just grows forever or?
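The socket/core/thread topology discussed above is set with bhyve's -c flag. A minimal sketch of one of the eight 7-vCPU guests on the 64-core host; the VM name, memory size, and disk path are illustrative placeholders, not from the conversation:

```shell
# Hypothetical invocation: one 7-vCPU guest, presented to the guest OS
# as 1 socket x 7 cores x 1 thread. With 8 such guests, 56 vCPUs are
# allocated and 8 host cores remain uncontended.
bhyve -c sockets=1,cores=7,threads=1 \
      -m 8G -H -P \
      -s 0,hostbridge \
      -s 3,virtio-blk,/vm/vm0/disk.img \
      -s 31,lpc -l com1,stdio \
      vm0
```

The -H flag (yield the vCPU on guest HLT) matters for the oversubscription concern above: without it, an idle guest vCPU busy-spins and eats a host core even when the VM has no work.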
07:44:10 demido: I would assume it would be dependent on the FS being used (UFS vs ZFS vs x), there may be some sort of backup/snapshotting behaviour, cleanup await or similar. Not sure tbh
07:44:26 ufs
07:46:20 Sparse files are files that have been extended without having data written to the middle of the file. Therefore they don't consume block storage space for the empty sections in the middle.
07:46:46 When data is written to those empty regions then they are no longer empty and the file consumes space.
07:46:54 but i check their size with du -hs so isn't that real usage?
07:47:04 so empty spaces are filled in first?
07:47:20 First? No. Filled in as they are written.
07:47:51 Let's assume you have a file of 5 blocks just for an example to work through. I create it sparse, allocating 5 blocks but not writing anything. It consumes zero blocks of actual storage.
07:48:07 ya but did you read what i wrote? i write 100GB into vm, then delete it, then write 100GB into it again, and seems like usage grows but maybe i'm wrong?
07:48:24 Let's number the blocks 1, 2, 3, 3, 4. I seek to block 3 and write one block. It now consumes one block of disk space.
07:48:35 Sorry: 0, 1, 2, 3, 4
07:49:02 Now that file consumes the block. If I write to block 2 then it consumes 2 blocks of storage.
07:49:38 If in the VM you delete the file from the file system, that does NOT recreate the sparseness of the backing store file under the VM.
07:49:58 Deleting a file in a VM file system simply marks those blocks as free in the VM file system.
07:50:18 so in the vm i delete the 100GB file, then write the 100GB file again, and the host system might now have 200GB used right?
07:50:19 Now what I don't know is if TRIM can be used as it is in other VM systems to re-sparse out those files.
07:51:02 No. Why would it do that? If the VM file system is 100GB and you write 100GB then it will consume 100GB. Then free it. Then write it again and it would rewrite on top of the 100GB that is available.
07:51:30 well i give a 1TB sparse file to the vm so it thinks it has a big disk
07:51:36 The file system in the VM thinks it is writing to a disk block storage device. Because it is all a virtual world. It's on a holodeck.
07:52:22 If you allocate 1TB as a sparse file then the VM thinks it has 1TB to use for the file system. It might allocate blocks from any part of it at any time.
07:52:46 ya that's why i think it keeps growing (up to that 1TB)
07:53:14 so i should size the sparse file as close to what i want to limit the VM to without being too tight
07:53:57 For spinning disks the file system usually allocates an interleave across the entire platter so as to leave blocks in between available for a quick allocation later in other files. I don't think that is modified when the file system knows it is on an SSD, since that just doesn't matter there.
07:54:12 Yes.
07:55:00 what % is safe to fill up to of allocated disk space? like if i give a 1TB sparse file i shouldn't have the vm using more than how much at any given time? 900GB?
07:55:15 And remember that the main advantage of sparse files is that they are created immediately without waiting to actually write the blocks. Creating a 1TB sparse file is almost instantaneous. That's the advantage. It is not to save disk space. Because at any point in time the VM might balloon to the full size and the hosting system must be able to supply the space.
07:56:16 If you allocate space to a VM you must expect that the VM will use it at some point in time. Otherwise why allocate it to it?
07:57:04 If you only want a VM to consume 100GB then only allocate 100GB to it.
08:09:35 but isn't it bad to fill 100% of a disk?
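The apparent-size vs. allocated-size distinction in the exchange above can be seen with a small sparse file. A minimal sketch, assuming a filesystem with sparse-file support (UFS, ZFS, ext4, etc.); the filename is arbitrary:

```shell
# Create a 100 MiB sparse file: the apparent size (ls, wc -c) is
# 100 MiB immediately, but almost no blocks are allocated yet (du).
truncate -s 100M disk.img
ls -l disk.img        # apparent size: 104857600 bytes
du -k disk.img        # allocated blocks: near 0 KiB

# Write 10 MiB somewhere in the middle, as a guest filesystem would.
# Only the written region gets real blocks; the apparent size is unchanged.
dd if=/dev/urandom of=disk.img bs=1M count=10 seek=50 conv=notrunc status=none
du -k disk.img        # allocated blocks: now roughly 10240 KiB
ls -l disk.img        # apparent size still 104857600 bytes
```

This also illustrates the one-way ratchet discussed above: once a region has been written, deleting the data inside the guest does not deallocate the host blocks, so `du` on the image only grows toward the apparent size unless the image is explicitly re-sparsed (e.g. via TRIM/hole-punching where supported).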
08:09:46 the disk in this case is the sparse file backing the vm
08:40:59 in all cases running out of disk is bad
08:41:31 you just have to decide if it's worse for a VM to consume all space on a server & render every VM and jail and daemon inoperable
08:41:39 or if you prefer to have a single VM full
08:42:03 with zfs you can do a reservation that prevents the user from doing this, and leave it to root to relax the reservation
08:43:14 I prefer to not over-commit VMs on disk, filesystems run better when kept under 75% in general
08:43:34 but I have accidentally run right up to the limit, then you risk downtime to clean up
08:45:15 this is a good summary, but the web page needs an adblocker https://linuxhaxor.net/code/setup-zfs-quotas-reservations.html
08:47:46 ok i'll make sure to make the sparse file size such that the vm can use the space i want it to have + 25% more ty dch. i'm not using zfs but i bookmarked that for when i do
08:47:53 i'm running all this on ufs on hardware raid rn
09:13:12 feel free to come back here when you're ready and ditch that h/w raid
09:13:36 ya i'll get just an hba card in it right?
09:13:49 there would be very few storage people I know, probably none, who would recommend it, if you have zfs or ufs+gmirror available on FreeBSD
09:14:14 demido: one thing at a time. but basically you can flip most h/w raid cards into JBOD (just a bunch of disks) mode.
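The ZFS quota/reservation approach mentioned at 08:42:03 looks roughly like this; the pool and dataset names are hypothetical, not from the conversation:

```shell
# A quota caps how much a dataset may consume; a reservation fences
# space off for it, so one runaway VM image can't starve the rest of
# the pool. Root can later relax either property.
zfs create tank/vm0
zfs set quota=1T tank/vm0           # vm0's backing file can't grow past 1T
zfs set reservation=100G tank/vm0   # 100G is guaranteed to vm0 regardless
zfs get quota,reservation tank/vm0  # verify both properties
```

This is the dynamic alternative to sizing the sparse file itself as the limit: the file can stay large while the dataset enforces the real ceiling.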
09:14:29 then FreeBSD will see 2 disks, and you can then mirror your data across to them
09:14:52 i'll get a new box soon so i can choose if i want a raid card or hba card
09:14:59 with hot swap sas drives
09:15:02 fair warning, I have no idea if this is possible without erasing/losing all data on the raid card when it switches
09:15:21 demido: how much data are you talking here roughly
09:15:48 not sure a few tb or so
09:15:53 maybe 5ish
09:15:55 usable
09:16:14 just wanna get a 2u for 2.5" sas drives
09:16:16 ok thats quite practical to copy/mirror/sync easily then
09:16:18 used tho
09:16:36 I've dealt with arrays that take over a month for a full mirror to complete
09:21:24 wtf like raid 6 or something? i run raid 10 rn
09:35:46 a little bigger .. you need a special datacentre floor and custom power .. HP StorageWorks EVA, XP24000 and similar.
09:35:54 in my day they were smaller, now they run to 300+TB per rack, up to 250PB for a maxed out install.
09:36:28 damn. now do enterprise scales like that also use zfs instead of hardware raid or?
10:00:12 they provide the vdevs to use zfs on top, but they have special ways to slice them to provide more robust performance guarantees
10:01:24 wow
10:13:49 EVAs are midrange. Also not very good anymore, unfortunately
10:14:07 The XPs are pretty good. Mainly because HP doesn't make them. :°)
10:14:55 They're rebadged Hitachis with customized firmware and management sauce, basically.
10:44:32 Mmmm, old school Hitachi drives were noice
11:12:06 is it bad to give a vm an odd amount of ram just like an odd number of cores? like give a bhyve guest 3 cpu cores and 15gb ram
11:34:02 demido: bad in what sense? if you are trying to balance the number of bhyve vms with the number of physical cores, it would require some understanding of how much utilization is occurring in each of the vms for how long.
As for the numbers mentioned, i am a personal fan of powers of 2
11:37:07 just the numbers
11:37:24 ya i am too but some say odd numbers of cpu cores and mem are ok
11:38:15 no, it doesn't matter really
14:15:47 hi, I upgraded FreeBSD from 14.1 to 14.2, and now the machine won't boot anymore. I can connect to a serial console and I see a kernel panic: http://okturing.com/src/22685/body
14:15:50 1. is this a known issue/can I do some workaround?
14:15:52 2. how can I boot off the old boot environment? when I boot, I see the new kernel booting directly, no prompt. I can interrupt it and then I am in some sort of console that says "OK". Can I boot the old BE from here?
14:33:50 I made it boot.
14:33:50 it's this bug: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=281177
16:40:25 i see about 10W of power associated with "uncore" using powermon, regardless of activity. with the same processor (Xeon E-2388G) on a linux system, uncore is almost always 0W unless i'm using the integrated GPU, and even then is rarely over 3W. any guess as to what might be "on" and consuming power?
16:44:56 so i'm prepping to upgrade a 13.2 server to 14.2, which has thick jails. any tips, warnings, or a good task sheet?
16:47:44 I would make a stop at 13.4 first.
16:48:01 I wonder, is there anyone who works on Podman on FreeBSD here in this channel?
16:48:04 Not sure which people are involved.
16:49:17 $ cd /usr/ports/sysutils/podman && make maintainer
16:49:17 dfr⊙Fo
16:50:02 make -C /usr/ports/sysutils/podman maintainer
16:50:04 :p
16:50:12 CrtxReavr: not sure, can't do package upgrades anymore... i could try to stop all the jails from starting and do 13.4 first
16:50:15 Right-o :P
16:50:23 Though there are a bunch of other components involved, it's not just the maintainer
16:51:01 The maintainer could probably answer your questions though.
16:51:03 I take it dfr isn't in here?
16:51:17 many aren't
16:51:22 #freebsd-ports is a thing.
16:51:24 I'm well aware, just checking ^^
16:51:29 As is #bsdports on EFnet
16:51:38 CrtxReavr: was about to join the former. Forgot about the latter, TBH.
16:51:48 CrtxReavr: so upgrade/migration issues are more likely to be addressed in 13.4 than shipping with 14.2?
16:52:24 Demosthenex, as a general rule, yes. . . though I've been advised there are some specific issues with the 13.x to 14.x jump.
16:53:17 it's not really a good idea to jump so far
16:53:47 I've not had issues with big version jumps in the past.
16:54:06 I think I've done src upgrades from 6.x to 10.x
16:54:11 i mean, for the base os, barring any hardware compatibility changes, i'd expect it to be pretty smooth
16:54:20 well you could in theory do 2 majors
16:54:21 the thick jails... i'm going to have to look at
16:54:26 or 3
16:54:33 Honestly, it really just boils down to what parts of the OS are being changed in the version delta.
16:54:35 or in your case hmm more
16:54:38 yes
16:54:45 generally 2 is ok
16:54:52 Though. . . RELENG works very hard to make upgrades feasible.
16:54:56 does it stand to reason that the thick jails at 13.2 will operate when i upgrade the native to 14?
16:55:11 i figured 13 is getting old, may as well go to 14
16:55:21 well 14 runs 13 binaries so
16:55:30 but who knows what happens
16:55:31 Though as always, read /usr/src/UPDATING
16:55:55 the binary compat goes quite far back
16:56:08 and you can extend it iirc
16:56:26 Raise your hand if you mostly just do the upgrade, and only look at UPDATING if there's a problem. . .
16:56:33 _o/
16:56:35 hah
16:56:45 it depends
16:56:50 What? I would never do such a thing! What psycho maniac would not read UPDATING, am I right?
16:56:54 *eyes around suspiciously*
16:56:56 it's also the same in ports
16:57:14 can't be arsed to look at changelogs
16:57:28 but (sometimes) you should
16:57:49 ideally if there were a problem, i can just roll back to my bectl snapshot...
16:57:55 I do for some things like DBMS' (e.g. postgresql-server)
17:38:27 any ocaml coders think they can make a relatively simple package compile on FreeBSD? https://github.com/savonet/ocaml-mem_usage/issues/8
17:43:14 we can get liquidsoap building on freebsd again
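The boot-environment rollback that comes up twice above (the 14.15:52 question about booting the old BE from the loader's "OK" prompt, and the 16:57:49 plan to roll back after a bad upgrade) looks roughly like this, assuming ZFS-on-root with the default pool name zroot; the BE name 14.1-pre-upgrade is illustrative:

```shell
# From a booted (or single-user) system: list boot environments,
# activate the old one, and reboot into it.
bectl list
bectl activate 14.1-pre-upgrade
shutdown -r now

# From the loader's "OK" prompt, for a one-off boot of the old BE,
# point currdev at its dataset directly:
#   OK set currdev=zfs:zroot/ROOT/14.1-pre-upgrade:
#   OK boot
# The loader menu also offers a "Boot Environments" submenu that does
# the same selection interactively.
```

After the old BE boots cleanly, `bectl activate` makes the rollback persistent; a one-off `currdev` boot does not.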