00:22:36 OMG, the quarterly update removed my installed vscode and doesn't provide it anymore 00:23:38 I'm new to freebsd but this seems wrong to me. I really need that vscode ! 00:33:13 doug713705[m]: There should be a changelog that says why. 00:35:31 I didn't see that changelog before applying the upgrade. 00:35:31 I knew that new versions of vscode were unavailable for some reason related to a blacklisted electron version in the build environment. 00:35:50 But I did not expect to have vscode uninstalled ! 00:36:40 (electron): https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270565 00:36:42 Title: 270565 – electron* ports are blacklisted from the build 00:37:51 the signal-desktop app has also been uninstalled, I guess for the same reason. Those 2 apps, vscode and signal-desktop, are electron based 00:38:32 I thought someone noted Chromium also going but I haven't verified that myself. 00:38:39 but for now, I need to learn neovim from scratch so I can do my work :( 00:38:47 vim is pleasant 00:39:08 doug713705[m]: IMHO, it'll be time well spent and a skill you'll use for a lifetime. 00:39:10 I only looked at 'pkg upgrade -n' but I didn't see chromium being uninstalled. 00:39:15 I use vim but it's not as complete and feature rich as vscode 00:39:25 I just use vi (as in nvi) 00:39:34 but I should probably learn other things too 00:40:11 so neovim will be a life saver but I would have preferred to plan this learning ! 00:40:15 nvi doesn't let you do things like four-space tabs. 00:40:38 What's neovim as compared with vim? 00:41:01 s/neovim/& offer/ 00:41:33 neovim has some "modern look and feel" + plugins and modern features like one can find in vscode (or so) 00:41:46 mason: :set ts=4 ? 00:42:11 xtile: That's still hard tabs though, and then anyone without that setting will see a very funny version of my files.
00:42:14 sort of vim on steroids: https://github.com/LazyVim/LazyVim 00:42:15 Title: GitHub - LazyVim/LazyVim: Neovim config for the lazy 00:42:19 ohhhh 00:42:27 I see what you mean. 00:42:48 xtile: Of course, I turn that off depending on the precedent set by a file I'm editing if it differs. 00:43:01 * xtile nods. 00:44:56 Hmm. I just tried... 00:45:07 :set expandtab alongside :set ts=4 00:45:13 I get 4-space indenting with this 00:46:14 xtile: Is this -current? I heard something was going to happen in -current with both /bin/sh and nvi that'd be interesting, unless I'm making things up. 00:46:29 Nope, it's in 13.1-RELEASE-p7 00:46:31 (I think it was some sort of tab completion thing with sh) 00:46:34 hm hm 00:47:19 Sure enough. 00:47:35 Popped those in my .exrc and it did as you say. 00:48:20 \o/ 00:49:06 Hm, backspace doesn't delete a full tabsworth of autoindent, but ^d is probably enough. 00:50:42 Hm, I think I've found my second-ever bug in nvi. set expandtab, set ts=4, set ai and then tab in and ^d back out. If you tab in three times, the first ^d moves you back four, but the second one erases everything. 00:51:23 interesting 00:52:06 The first bug I found, decades ago now, was similar. Delete (dd) an empty line and if it's next to the last line in a file it'd delete that too. 00:52:26 That one's long since fixed. 00:54:07 xtile: Thank you, though. This is interesting and might break my "dependence" on vim. 00:54:13 \o/ 00:54:21 I'm fine with people using whatever works best for them. 00:54:37 I use Emacs sometimes too. 00:54:44 And sometimes I use viper-mode inside Emacs. 00:54:50 Though I do get mildly salty when people say "Don't you mean vim?" when I say I use vi :^) 00:54:58 Yes. 00:54:59 very mildly so 00:55:12 At least vim isn't objectionable. 01:02:13 is there any good solution for zfs high-availability on multiple nodes ? 01:02:20 commercial or open source 01:03:23 last1: ZFS, I'm not sure. 
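The nvi settings worked out above fit in a three-line ~/.exrc. A sketch that stages them in a scratch file so the demo stays side-effect free (the real target would be ~/.exrc):

```shell
# Write the soft-tab settings discussed above into a scratch .exrc.
# In real use, append these three lines to ~/.exrc instead.
exrc=$(mktemp)
cat > "$exrc" <<'EOF'
set expandtab
set ts=4
set autoindent
EOF
cat "$exrc"
```

With these set, Tab inserts four spaces and ^d backs out one indent level, modulo the ^d autoindent bug described above.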
That's what OneFS does, and that's FreeBSD based but not, I think, ZFS. 01:03:41 It's for Isilons: https://en.wikipedia.org/wiki/OneFS_distributed_file_system 01:03:42 Title: OneFS distributed file system - Wikipedia 01:03:52 yeah, I know what it is 01:03:56 costs like a million $ 01:04:06 It's cheap compared to, say, NetApp. 01:04:10 there are some other options, like drbd, rsf-1 01:04:15 for zfs HA 01:04:36 but I want to know what people here thought the best way was 01:04:46 Yeah, compose your pool out of iscsi from multiple hosts or something. Unsure if there's anything packaged that does it. 01:04:55 Rolling your own wouldn't be terrible. 03:42:44 * _xor just read scrollback 03:43:15 <_xor> Oh yeah, good point. Forgot about FIPS certification in terms of BoringSSL vs. OpenSSL, although... 03:43:28 <_xor> https://boringssl.googlesource.com/boringssl/+/master/crypto/fipsmodule/FIPS.md 03:43:29 Title: FIPS 140-2 03:43:33 <_xor> https://www.openssl.org/blog/blog/2022/08/24/FIPS-validation-certificate-issued/ 03:43:34 Title: OpenSSL FIPS 140-2 Validation Certificate Issued - OpenSSL Blog 03:43:57 <_xor> Not sure how current the one for BoringSSL is, but only the "core" is FIPS certified, whereas it appears OpenSSL as a whole is fully certified. 03:45:09 <_xor> meena: Normally I would agree with the sentiment of forking vs. contributing upstream, except when it comes to crypto, I personally defer to consensus "domain experts". 03:45:50 <_xor> I remember reading that the OpenBSD guys forked OpenSSL into LibreSSL because the codebase as a whole for OpenSSL was so terrible and because it would require breaking changes to fix anyway, so forking apparently made more sense in that specific case. 03:50:16 <_xor> Hmm, LibreSSL support was initially added and then dropped on Alpine Linux, Gentoo Linux, & Python 3.10+. I wonder if it broke too much stuff & OpenSSL was considered (audit?) to be "ok" now or if there was some kind of licensing/culture issue clash.
03:53:25 libressl broke API a few times, and many found it not worth the effort to follow that 03:54:49 <_xor> Apparently, according to this at least, OpenSSL broke back-compat and LibreSSL had back-compat as a critical goal, and so OpenSSL 1.1.x changes broke on LibreSSL... 03:54:52 <_xor> https://old.reddit.com/r/openbsd/comments/luqn6y/what_do_you_think_about_the_recent_drop_in/gp93jvd/ 03:54:53 Title: williewillus comments on What do you think about the recent drop in LibreSSL in many Linux distros? 03:56:19 <_xor> Which is kind of lame, I guess, but I don't know enough to have any real opinion. I also remember LibreSSL having an issue with OpenSSL bugs not being announced in advance to them (but it was announced to others?), not sure if they wouldn't sign the disclosure embargo, though apparently Google decided early on to play nice with LibreSSL. 03:58:28 <_xor> Ah, 1.1.1k fixed the issue linked by /u/joshhatesusernames (CVE-2021-3450). 06:41:36 any recommendation for a pci sata expansion card? (running 13.1) 07:27:21 morning! is PkgBase still a viable option? i've heard rumors of its demise now and then. i currently have some non-tier-1 boxes on 12 that i use PkgBase for. should i keep doing that for (the upgrade to) 13 as well, or are there other recommendations? 07:29:14 dk: 13 worked very well for me on PkgBase 07:38:36 that's great to hear, ty 07:39:40 dk: i used to run a repo: https://github.com/freebsd/freebsd-doc/pull/143/files 07:39:41 Title: PkgBase.live: add an un-update by igalic · Pull Request #143 · freebsd/freebsd-doc · GitHub 07:48:06 <_xor> meena: What kind of hardware is required for it? 07:57:01 _xor: https://codeberg.org/pkgbase/website/src/branch/main/howto/howdo.md#prerequisites that's the last it ran on 07:57:02 Title: website/howdo.md at main - website - Codeberg.org 07:58:35 but I've had more CPU and less CPU, and more storage and less storage than that; it's all workable.
but right now, i think, it needs to be amd64, because https://github.com/freebsd/poudriere/issues/1048 07:58:37 Title: PkgBase: why does poudriere require qemu to cross build FreeBSD? · Issue #1048 · freebsd/poudriere · GitHub 08:22:13 is anyone working on remaking the www.freebsd.org website to look more modern? 08:27:58 ugh 09:01:55 any idea why with a DEFAULT_VERSIONS+= python2=2.7 python3=3.11 python=3.11 in my make.conf I'm getting tons of: "Ignored: Unknown flavor 'py39', possible flavors: py311" when building with Poudriere ? 09:03:26 I don't understand: 1) why it tries to build @py39 flavors when the default python version is 3.11 ? 2) why @py39 flavors fail when it is the default Python version? 09:04:55 should I set PYTHON{2,3}_DEFAULT too in make.conf? 09:25:51 maybe you need to purge @py39 flavors, firstly 09:31:05 what do you mean by purge ..? 09:42:58 pkg delete 09:47:57 my server is still frozen 09:48:05 trying to delete zfs datasets 09:48:21 it's been over 24 hours 09:48:57 should i give it another night 09:49:10 would i risk anything forcefully powercycling it? 09:49:24 how could i fix it assuming i can regain a shell 10:13:53 angry_vincent: it's with poudriere 10:16:34 do you have some example port for which that happens? 10:18:03 Little unofficial news item: 13.2-RELEASE images are already available for most architectures, e.g.
at https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/13.2/ for amd64 10:18:05 Title: Index of /releases/amd64/amd64/ISO-IMAGES/13.2/ 10:18:49 nimaje: https://gist.github.com/silenius/ea5f202ae151e063952da8cae8b423f6 10:18:50 Title: gist:ea5f202ae151e063952da8cae8b423f6 · GitHub 10:34:15 ok, no idea where that should come from for devel/py-pycparser except for explicitly listing devel/py-pycparser@py39 hm and except for a bulk -a the better question would be which port pulled them in 11:48:53 meena: as qemu not being built for aarch64 is a problem with the ports tree, is there a report on bugs.f.o? and shouldn't there be an ONLY_FOR_ARCHS_REASON for every ONLY_FOR_ARCHS (or something like that)? 11:52:04 nimaje: it's been there since the beginning 11:57:24 but is there a problem report for the missing ONLY_FOR_ARCHS_REASON? the port either unnecessarily restricts archs or is missing _REASON, maybe both, and that upstream problem report reads like it should be supported 12:02:37 nimaje: i haven't had time to test it 12:02:41 I'll check 12:04:35 https://bugs.freebsd.org/bugzilla/buglist.cgi?quicksearch=qemu-user-static doesn't look like it 12:04:37 Title: Bug List 12:10:38 nimaje: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270685 12:10:41 Title: 270685 – emulators/qemu-user-static: missing ONLY_FOR_ARCHS_REASON 12:27:49 seems there are quite some ports missing _REASON, if I did my check correctly: for f in */*/Makefile; do awk '$1 == "ONLY_FOR_ARCHS_REASON=" { has_reason=1; } $1 == "ONLY_FOR_ARCHS=" { has_archs=1; } END { if (has_archs && !has_reason) { print(FILENAME " is missing ONLY_FOR_ARCHS_REASON=") } }' $f; done 12:58:57 nimaje: you wanna open hugs for all of them? 12:59:34 you said the forbidden word! 13:57:39 * debdrup pretends to gasp, shrugs, and goes on with whatever 14:01:29 Oh right, I was reading a blog post about generative algorithms and copyright. 14:04:21 any consensus on that topic yet?
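nimaje's one-liner for finding ports that set ONLY_FOR_ARCHS without a _REASON can be tried outside a real ports tree; here it runs against two made-up Makefiles (the port names and scratch directory are invented for the demo):

```shell
# Recreate the check from the log against a fake two-port tree.
tree=$(mktemp -d)
mkdir -p "$tree/emulators/qemu-user-static" "$tree/misc/dummy"
# One port missing the _REASON, one with it present.
printf 'ONLY_FOR_ARCHS=\tamd64\n' > "$tree/emulators/qemu-user-static/Makefile"
printf 'ONLY_FOR_ARCHS=\tamd64\nONLY_FOR_ARCHS_REASON=\tuses amd64 asm\n' \
    > "$tree/misc/dummy/Makefile"
cd "$tree"
for f in */*/Makefile; do
  awk '$1 == "ONLY_FOR_ARCHS_REASON=" { has_reason=1 }
       $1 == "ONLY_FOR_ARCHS=" { has_archs=1 }
       END { if (has_archs && !has_reason)
               print FILENAME " is missing ONLY_FOR_ARCHS_REASON=" }' "$f"
done
```

Only the port lacking the variable is reported, which is exactly how the check flagged emulators/qemu-user-static in the log.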
14:06:58 meena: this is just the opinion of one person, of course - but I think she's got it right, that at present nobody can risk using it for commercial purposes, since it's an entirely open question and the worst outcome is that you get sued for more money than you can reasonably expect to make off it 14:08:29 debdrup: are you aware of any viable zfs ha solutions for FreeBSD ? 14:08:50 I was considering drbd or rsf-1 but I think there might be more 14:08:52 last1: please don't ask me. 14:09:30 If I know, or think I know, the answer to a question posted, I'll answer - but don't expect me to know the answer. 14:10:58 https://fosstodon.org/@RL_Dane/110156008726592753 this is kinda impressive. 14:10:59 Title: R. L. Dane: "Oh #FreeBSD, you just took the negative space del…" - Fosstodon 14:12:17 if you uninstall chromium in that process, you'd free up double that 14:12:59 I wonder how badly ZFS performs on Ceph… 14:14:13 Also: don't use drbd if you've only got two nodes 14:15:48 why not ? 14:16:08 because of split-network issues ? 14:16:44 last1: what we do here is: we have two "big" machines, on each of them we create a zvol and we export them through iscsi to $servers, and on the $servers we create a zpool mirror over the two iscsi blocks 14:18:20 the advantage is that you have only two LUNs (although there are many disks) and you don't have to resync the whole volume when one of the two zvols disappears (reboot, upgrade os, etc) 14:18:58 so when the disconnected zvol rejoins, it doesn't have to resync ? how come ? 14:19:24 last1: yeah, you're pretty much guaranteed to lose data in a split brain situation.
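mage's two-LUN layout, sketched as commands. Every hostname, IQN, zvol size, pool, and device name below is a placeholder, and the ctl.conf fragment is illustrative rather than a drop-in config; this is not something mage posted, just one plausible reading of the description:

```shell
# On each of the two storage boxes: carve out a zvol and export it via iSCSI.
zfs create -V 2T tank/lun0              # backing zvol; size is arbitrary here

# Illustrative /etc/ctl.conf fragment for FreeBSD's ctld:
#   target iqn.2023-04.org.example:lun0 {
#       lun 0 {
#           path /dev/zvol/tank/lun0
#       }
#   }
service ctld start

# On the consuming server: attach both LUNs and mirror across them.
iscsictl -A -p storage1 -t iqn.2023-04.org.example:lun0
iscsictl -A -p storage2 -t iqn.2023-04.org.example:lun0
zpool create data mirror da10 da11      # da10/da11: whatever devices appear
```

The resync point in the log follows from this shape: when one iSCSI-backed side drops and rejoins, ZFS resilvers only the blocks written in the meantime rather than the whole volume.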
you need a majority quorum, which requires at least three nodes 14:19:52 meena: I guess this is why that rsf-1 solution requires a serial cable between the two nods 14:19:54 *nodes 14:20:17 but I haven't had a server with a serial port in quite a while lol 14:21:57 * meena has almost always just used serial over usb 14:23:52 last1: because zfs knows exactly which blocks have been modified so it only writes the delta 14:23:57 last1: mage's solution sounds pretty cool. so perhaps the question should actually be: what kind of hardware, network and price restrictions do you have? 14:24:18 mage: that's not a bad solution, however we'd rather export the data via NFS 14:24:30 so then the problem lies in creating yet another HA floating-ip NFS server-cluster 14:24:34 otherwise there is also minio, but you need 3 nodes 14:25:20 last1: yeah, I must admit that the NFS server here is the only SPOF and I haven't found a good replacement 14:25:33 maybe Minio, but I haven't tested it yet 14:25:48 I think you'd have to pay for minio for more than one node to be usable 14:25:53 maybe: export serverA blocks to serverB, serverB to serverA, have NFS on each with pacemaker, floating IP, etc 14:27:25 right now my hardware is dual intel sp2 cpus, hbas and 6 x 7.68TB Intel S4520 ssds per node 14:27:45 each node is connected via lacp to vpc-linked switches 14:27:55 @10gbps 14:30:05 another solution is to use something like zrepl, snapshot every minute, and use CARP with devd scripts 14:30:17 but.. beware of split-brain 14:31:10 mage: how about that idea to export the drives to each other ? 14:31:49 and run nfs locally on each node directly 14:33:28 we used something similar in the past (a pool over x local disks and x iscsi luns) 14:33:43 you've to test it carefully 14:43:33 but you stopped using it because of issues ? 14:43:45 also, how do you export the iscsi luns ? drbd ? 15:14:54 I have also seen some references to HAST on some older mailing lists 15:14:59 is anyone using that ?
15:21:29 last1: again, if this compares to drbd, you need three boxes 15:22:17 these solutions sound like potential pitfalls everywhere, I might just do what everyone does on the forums. sending snapshots and manual failover 15:22:30 basically, with almost any clustering solution you need three or five or seven etc nodes. 15:23:46 and the ones that work with two nodes are probably selling snake oil, or are lying about one or more parts or CAP 15:25:16 *of CAP 15:26:41 I was actually considering linbit, they do sell a 2-node solution but I had lots of questions that didn't sit right 15:26:46 hence, here I am 15:29:17 last1: they also sell Disaster Recovery, so maybe that's related 15:29:36 I've installed FreeBSD 1.0 on an emulated 486 with 12MB of RAM 15:29:41 Just got X11 working yesterday 15:30:01 Hoping I'll be able to get networking working too, today 15:30:50 FreeFull: exciting 15:33:39 I'm a bit surprised that the tar command included with FreeBSD 1.0 is GNU tar 15:47:45 * paulf is a newbie, only starting using FreeBSD with 2.1 15:50:55 meena: that's funny :) 15:51:00 Ooh, good sign, got an IP address from DHCP 15:52:54 Just gotta figure out how to configure routed 15:55:19 Seems like the answer is not to use routed, and instead just set a route manually 16:12:36 It's working 16:48:00 mason, I figured out why I couldn't get internet access from my jail last night 16:48:27 I was testing jails on my virtual machine (which is on my lan) and on my jail I assigned a new IP address from my lan (which worked) 16:48:51 last night I was trying to configure the jail on a VPS and I assigned the jail its own local ip number (which didn't work) 16:49:14 so today I assigned the jail on the vps its external ip number and voila, it worked 16:49:49 I ended up buying FreeBSD Jail Mastery and will get to Chapter 9 on networking to figure out the rest 16:49:53 Thanks for your assistance 17:35:27 interesting, unplugged my keyboard, replugged, layout is set to standard xorg 17:45:22 
FreeFull: libarchive didn't exist until the mid-2000s 17:46:18 I see, so up until then GNU tar was the only real option? 17:47:04 Well, as the HISTORY subheader for tar(1) mentions, GNU tar wasn't invented wholesale. 17:49:17 Before it was called GNU tar it was called pdtar and was developed on SunOS. 17:50:27 Oh, my apologies. Apparently it originated on 4.2BSD. 17:50:55 have you used it with tapes? 17:51:00 At least according to https://archive.org/details/PDTAR-1.21-src which is probably about as authoritative as it gets, considering John wrote pdtar and uploaded that. 17:51:02 Title: Public Domain (PD)TAR 1.21 Source Code : John Gilmore : Free Download, Borrow, and Streaming : Internet Archive 17:51:10 Interesting 17:51:17 la_mettrie: pdtar? No, before my time. 18:29:21 Kinda surprising, I can X forward mpv into the FreeBSD 1.0 guest and it works 18:29:29 Very slowly though, but that's to be expected 18:37:05 It's not that surprising; X hasn't really changed for... a VERY long time. 18:37:44 This reminds me, I wanted to see how Wayland would work for me on FreeBSD. 18:38:09 X11 itself dates back to the 1980s. 18:38:32 msiism: when I occasionally use it for things that aren't alacritty and Firefox, it seems to work fine. 18:38:40 Good to know. 18:42:24 X11 from 1993 lacks a lot of the extensions that newer programs depend on 18:43:54 That's not exactly surprising. 18:44:23 Backwards compatibility is a fair bit easier than forwards compatibility. 18:52:30 it's pretty cool tho, that you can just send a video thirty years into the past 18:59:51 no notes about FreeBSD 13.2 RELEASE just yet, anyone? 19:06:02 FreeFull: can you xforward anything and it not be slow? 19:08:45 xterm performance is ok 19:08:51 When xforwarded, that is 19:09:00 xlinks runs ok too 19:28:05 RoyalYork: Glad to hear it! As a general role of thumb, anything you can do on the host, the jail can do too. Likewise, if something isn't valid for the host, it won't work for a jail either.
19:28:10 rule of thumb* 19:29:14 is an ashift of 12 a good value for intel enterprise ssds on zfs ? 19:29:41 or can I find the optimal value 19:29:49 *how 19:30:38 last1: There's some useful discussion here: https://forum.proxmox.com/threads/samsung-ssds-a-right-ashift-size-for-zfs-pool.71627/ 19:30:40 Title: Samsung SSDs a right ashift size for ZFS pool? | Proxmox Support Forum 19:31:05 TL;DR you want to accommodate your device's actual block size, which may or may not be visible. 19:31:13 yep, it's not visible :| 19:32:14 last1: is it nvme? 19:32:42 nvmecontrol identify should list the optimal lba format, which includes the blocksize 19:32:46 no, intel ssd dc s45XX series 19:33:01 by default I see the installed sets an ashift of 0 19:33:02 the question was is it nvme or sata 19:33:05 *installer 19:33:15 s45xx are all sata drives 19:33:18 ok. 19:34:42 kraptv: it'll happen when it does. 19:34:53 so 0 means auto-detect, how can I know what it auto-detected ? 19:35:07 last1: It likely can't, and I'd expect 12. 19:35:33 last1: I'd tend to look online for specs if anyone's published them. 19:35:55 alright, but in the meantime, if it just shows 0, is there a way to see what it's working with ? 19:36:16 There's a couple of sysctls that control the minimum and maximum values for the automatic ashift adjustments. 19:36:32 last1: You can probably explicate 13. Not sure there's a way to say "tell me what you're going to guess" without actually doing it, which might be suboptimal in this case. 19:36:43 But I'd research first. 19:36:46 yep, min max shows 12 - 16 19:36:48 vfs.zfs.min_auto_ashift=12 19:36:52 There's never really a downside to having an ashift that's a bigger exponent than the equivalent number of bytes in a sector.
19:37:02 I was just worried it would pick something like 4 19:37:45 just file.physical_ashift is 9, as well as file.logical_ashift, also 9 19:38:17 It's only a problem if your ashift's exponent results in something smaller than the sector size of the disks you're running on (which can't really happen, because the default is 12 which works out to be 4096, ie. even drives that pretend they're 512 while being 4k in reality don't get cheated). 19:38:29 Where are those values from? 19:38:50 I just did: sysctl -a | grep -i ashift 19:39:18 I don't see those sysctls on my system. *shrug* 19:39:38 vfs.zfs.vdev.file.physical_ashift 19:40:00 that's for file based vdevs 19:40:01 Oh, you stripped part of the OID. ;) 19:40:14 not something you use in real life :) 19:40:31 yes :) 19:40:37 That only matters if you're creating files via truncate(1) to use ZFS on, ie. when you're testing ZFS for the very first time. 19:40:58 That's also what sysctl -d would've told you. 19:41:14 Not all sysctls are documented, but a reasonable number of them are - and it's quite useful ;) 19:51:03 mason, RoyalYork: NFS only works as of last week ;) 19:53:30 meena: In which context? I'm kind of interested in getting into NFS with built in TLS, although it's probably useless until we're all geared up for post-quantum crypto. 19:53:34 4096/12==341.33333333333 19:54:04 mason: you can now run an NFS server in a vnet jail 19:55:07 anyone else type bc when looking for a calculator on android? 19:56:30 meena: 2^12 19:56:34 Why're we dividing 4k with 12? 19:56:37 Oh, hm, I guess I hadn't tried it before. Interesting. 19:57:03 meena: I'm pretty sure I've been using NFS since before last week. :P 19:57:13 do you guys use autotrim on your pools ? 19:58:21 * debdrup eyes /etc/auto_master 19:58:26 last1: I do on my T480s.
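On the 4096/12 confusion above: ashift is an exponent, so the sector size in bytes is 2 raised to it, not 4096 divided by it. A quick sketch (the commented sysctl/zpool lines show how you would pin the value; the pool and device names there are placeholders):

```shell
# ashift is an exponent: sector size in bytes = 2^ashift.
for a in 9 12 13; do
  echo "ashift=$a -> $((1 << a)) byte sectors"
done
# To floor what auto-detection may pick (as mentioned in the log):
#   sysctl vfs.zfs.min_auto_ashift=12
# Or pin it per pool at creation time (OpenZFS):
#   zpool create -o ashift=12 tank mirror da0 da1
```

So ashift=9 is the legacy 512-byte sector size, and the default minimum of 12 covers 4K-native drives, including ones that still advertise 512.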
19:59:02 Before enabling it on anything, I'd recommend doing a cursory look into whether your SSD is one of the ones that has quirky behaviour regarding TRIM (it's more common than it has any right to be). 20:00:15 FreeBSD has a list of devices with that known quirk, but that's best-effort since manufacturers don't exactly go out of their way to inform anyone (least of all any FreeBSD folks) of when they send out something that can end up making life miserable for people. 20:01:44 ok, so if it's unknown status it's safe to run trim every x days or not at all ? 20:02:05 If you're a hyper-scaler or a direct purchaser from the manufacturer, they may send out a PCN - but that requires a considerable amount of volume to get to that point, and even then they're more likely to just say "hey here's an update, maybe it's a good idea to update" (which, I might add, we've known about since the SSD reliability study that was published at FAST '20). 20:03:06 Well, one way to check would presumably be to use trim(8) on a disk that you're intending to use, and see if ZFS reports any errors - because ZFS is designed to report errors, even if the disk won't admit to them. 20:05:34 and if it doesn't report any errors, could that still lead to premature wear out ? I keep on reading about this behavior on various pages/forums 20:07:52 TRIM is meant to negate premature wear-out, and badly implementing it usually leads to data loss (silent or otherwise). 20:08:19 I'm not sure I see the workload where TRIM leads to premature wear-out. 20:08:53 If a SSD can wear out by enabling TRIM, it's probably QLC and therefore not meant to be used more than once anyhow. 20:11:02 reading the Intel docs, they have this phrase: TRIM is only supported on RAID 0. Beginning with the Intel® 7 Series chipset, the driver supports TRIM on SSDs in a RAID 0 configuration. 20:11:13 I thought it was an individual drive setting 20:13:04 I've no clue what that means.
20:13:43 I think it's referring to the softraid implemented by graid(8), so I don't think it's relevant in either case. 20:14:06 It's also talking about a decade-old chipset. 20:14:20 https://community.intel.com/t5/Rapid-Storage-Technology/SSD-D3-S4510-series-with-RAID1-and-trim-function/td-p/1359131 20:14:23 Title: SSD D3-S4510 series with RAID1 and trim function - Intel Communities 20:14:24 it's from last year... 20:14:42 It's still got nothing to do with ZFS. 20:14:53 It's entirely relating to Intel's softraid implementation. 20:15:06 It doesn't even have anything to do with gmirror(8). 20:16:44 hmm, ok, I guess it got me confused because I can't find other specs where they say whether they support trim or not 20:20:12 in any case, my 6 drive raid 10 ssd setup does about 2GB/second, versus 500Mb in Debian's lvm 20:20:28 can't believe I even considered using that 20:24:09 Like I said, they're not likely to publish that kind of information. 20:41:17 Hi, I am new to freebsd and I wonder if I am required to use `pkg` as well as `freebsd-update` on a regular basis to make sure my system is up to date? 20:41:49 Yes, that's advisable. 20:43:36 Does `freebsd-update` manage/update some parts of my system that are not part of any package? 20:45:27 yes, kernel and userland 20:46:14 Oh ok, makes sense now. Thanks 21:01:11 And libraries and documentation (manual pages, examples, et cetera) 21:07:06 I'm coming from debian linux where there is no such distinction. Everything is managed by one package manager. So this separation confused me initially. But now I see for myself that e.g. `pkg info` does not list any kernel related packages. 21:09:01 there's PkgBase WIP 21:13:22 oo_miguel, In FreeBSD there is a division between "core" (base system) and "ports" (also binary packages). 21:13:28 Good to know. I personally have no problem using `freebsd-update` for now on my 13.1, once I learned this is required. No idea if it becomes more problematic on cutting-edge versions, i.e.
if binary patches are provided. 21:13:54 These are managed separately and each is updated using a different update process. 21:14:08 I think I used only `core` so far, unless the ports are used automagically as well 21:14:15 oh 21:14:25 Assuming you are using binary updates using freebsd-update and pkg installing binary packages (the alternative is a source compiled install, also good) then 21:14:34 or I did like: pkg install vim # does it mean I used ports? 21:15:11 need to read more of the documentation I guess 21:15:14 "freebsd-update fetch" will fetch the new core files and "freebsd-update install" will install the new core system files. Without touching ports in /usr/local. 21:15:35 "pkg upgrade" will upgrade the binary installed ports in /usr/local. 21:16:24 By default the 13.1-RELEASE (for example) install will set up a system with the "quarterly" release upgrades. That's fairly conservative and stable. 21:16:33 I believe I heard/read that using the source versions will let me fine-tune compile options to my preferences 21:16:36 There is also a daily and weekly. 21:16:44 sounds worth a try as well.. at some point 21:17:12 Yes. You can compile everything from source. 21:17:55 Also, "freebsd-version" shows your current core versions, plural. I suggest always using "freebsd-version -kru" to show all three of the versions in the pipeline. 21:18:21 I have to confess that for now I run it on my raspberryPI only ... so guess I will wait with the from-source compilation until I put it on my desk or lap 21:19:45 I probably would not want to run a source compilation on a Raspberry Pi. But for those who are about to try, I salute you! /o 21:21:19 I'll mention that one of the advantages of core being a fully cohesive unit installed all together is that it always works. There is never any problem with having the wrong initramfs tools or other mixed out-of-tree problems as sometimes hit by accident on other systems.
21:22:24 And the ports are separated off into the /usr/local tree, and so if the core is updated including shared libraries then ports might/possibly/likely need an upgrade to get new shared library linkages. But the core always boots, allowing one to upgrade the ports on the new core. Very reliable separation of powers. 21:23:12 I appreciate that. 21:24:19 Lastly I am one of the zfs proponents (it's truly awesome) and at times (such as from FreeBSD 12 to 13) new versions of ZFS become available. 21:24:26 rwp: That part is pernicious. /usr/local is for the local admin. 21:24:53 That's a separate upgrade to be done separately. Because holding off on that upgrade allows one to use the Boot Environments to boot the old kernel in the event of a problem. 21:25:07 zfs - also something I plan for the future. but again not on a raspberry ;) 21:25:12 This allows problems to be worked out before deciding to upgrade the ZFS file system under it. 21:25:18 oo_miguel: People run it on RPis. 21:26:05 mason, re /usr/local, Yes. That was a hard thing for me to accept, that FreeBSD takes over /usr/local when it is my dog given right to own it myself as the local admin! 21:26:32 But life is a compromise. And the overall result is very good. So we keep on keeping on! :-) 21:27:10 Last (for today) question. Tried the `freebsd-version -kru` and it seems my userland is 13.1-p7 while the installed & running kernel is 13.1-p6 21:27:15 Is this expected? 21:27:29 Yes. Expected. At that patch release there was no need for a new kernel. 21:28:24 That difference in versions caught me the first time too. Which is why I now suggest -kru all of the time. The differences can be important. But it is good to know regardless. 21:29:01 If the other two are different then it means a new kernel and core were installed but the system has not yet been rebooted to it. Needs a reboot. 21:29:40 So really, -kr is kernel and core and _should_ be in sync, unless pending a reboot, and -u is userland.
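rwp's base-vs-ports update routine, collected into one sequence. These are the stock FreeBSD commands from the discussion, shown for illustration; they need root on an actual RELEASE system, so treat this as a recipe rather than something to paste blindly:

```shell
# Base system ("core"): kernel + userland, managed by freebsd-update.
freebsd-update fetch     # download pending base-system patches
freebsd-update install   # install them (does not touch /usr/local)

# Third-party packages ("ports"), installed under /usr/local:
pkg upgrade

# Show installed kernel, running kernel, and userland versions.
# -k and -r should match unless a reboot is pending; -u may be ahead
# when a patch level shipped no new kernel (the 13.1-p7/p6 case above).
freebsd-version -kru
```

Reboot when the -k and -r versions differ, then re-run `pkg upgrade` if the base update bumped shared libraries.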
21:29:41 Right, checked before the reboot and this was the case indeed 21:30:20 Ok thanks a lot. Learned enough for today! 21:31:06 And most importantly updated my FreeBSD :) 21:31:30 Come back any time! We are here all week. Remember to tip the wait staff! :-) 21:32:15 Alright, just added the channel to my auto-join list. 21:32:22 mason, "pernicious"? /usr/local? I am still contemplating what you said there... 21:36:08 I would certainly agree with "contentious". The real problem is that there exists no absolutely correct solution. So it is always going to be a pragmatic compromise. 21:38:41 why would manufacturers hide the trim settings ? Is it possible they have firmware that manages that automatically behind the scenes ? 22:07:02 rwp: Yeah, it's a compromise, and hardly the worst one in the world. I prefer what pkgsrc does, using /usr/pkg. I guess FreeBSD ports are moving towards the flexibility to do that kind of thing in, say, Poudriere. 22:29:13 So here is my current problem. ZFS: "errors: Permanent errors have been detected in the following files: <0x4c>:<0x8e328>..." https://bsd.to/2ztq/raw 22:29:14 Title: 2ztq 22:30:18 I had TWO hard crashes in the last few hours. Which left things in this state. 22:30:49 I think my workstation hardware is failing. I swapped the drives into a different workstation and booted it. 22:31:15 We will see if the problem follows the hardware or follows the OS. Meanwhile... Is it possible to clean up the above pasted problem? 22:33:51 I note that everything is running okay and with the exception of losing two rather large files that I had just created (grr...) everything is otherwise working okay. 22:45:12 I have also performed two scrubs already. A scrub runs in about 1h15m start to finish. 22:45:39 I am guessing that I should "zpool clear zroot" and then "zpool scrub zroot" again to see what results. 22:58:38 Running "zpool clear zroot" did not change anything.
23:08:39 Is there software that manages ZFS snapshots and send/receive for backup purposes?
23:08:59 I'd like to back up pool1 every 5 minutes, for example,
23:09:09 and send it to another host.
23:13:27 last1: I don't know if there is a tool, but it sounds like a job for a script run with cron.
23:15:56 I also have no idea if there is a tool, but agree that a script from cron seems reasonable.
23:16:16 Meanwhile... I followed the suggestion from 4oo4 here: https://serverfault.com/questions/576898/clear-a-permanent-zfs-error-in-a-healthy-pool
23:16:17 Title: zfsonlinux - Clear a permanent ZFS error in a healthy pool - Server Fault
23:16:36 The suggestion was to start a scrub and then "zpool scrub -s" to stop the scrub after a moment of it running.
23:16:43 And that worked!
23:16:53 last1: It looks like there are some: https://forums.freebsd.org/threads/looking-for-a-zfs-snapshot-management-tool.85848/
23:16:55 Title: ZFS - Looking for a ZFS snapshot management tool | The FreeBSD Forums
23:17:25 However, now I am going to start a full scrub again and let it run to completion. Hoping the errors are not re-discovered.
23:17:55 If errors are discovered again then I am going to grab a couple of spare disks and pour the data off and on again.
23:18:07 rwp: gz :)
23:18:33 souji: Thanks for using Google for me. Weirdly enough, Google didn't point out the FreeBSD forum result first.
23:18:46 On a side note, I do believe that Google is getting suckier.
23:19:37 Let me check these out. There is one big feature I'm looking for that I haven't seen any package have so far.
23:19:44 souji, I feel that I am going to need the luck. But backups are current and I haven't lost anything. So, good! I just need to get things solid so I can go back to using it heavily again.
23:20:04 I want to initiate the send|recv from the backup server,
23:20:10 not the live data server.
23:21:40 rwp: Yeah, backups are always good to have!
23:23:43 last1: I would be surprised if any tool is able to do that...
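The cron-plus-script idea suggested above could look something like this minimal sketch. The pool, dataset, host, and state-file names are placeholders; dedicated tools such as the ones in the linked forum thread handle incremental bookkeeping and snapshot pruning far more robustly:

```shell
#!/bin/sh
# zfs-backup.sh -- hypothetical 5-minute snapshot-and-send sketch.
# Records the last replicated snapshot name in a state file so the
# next run can send an incremental stream.
POOL=pool1
REMOTE=root@backuphost
STATE=/var/db/zfs-backup.last

NEW="${POOL}@auto-$(date -u +%Y%m%d-%H%M%S)"
zfs snapshot -r "$NEW"

if [ -s "$STATE" ]; then
    # Incremental send from the last replicated snapshot.
    zfs send -R -i "$(cat "$STATE")" "$NEW" | ssh "$REMOTE" zfs recv -F backup/pool1
else
    # First run: full send.
    zfs send -R "$NEW" | ssh "$REMOTE" zfs recv -F backup/pool1
fi
echo "$NEW" > "$STATE"
```

Scheduled from the live server's crontab with a line like `*/5 * * * * /usr/local/sbin/zfs-backup.sh` — though note this pushes from the live side, the opposite of the pull direction last1 goes on to ask about.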
23:27:20 I'm trying to do that now.
23:27:45 The bkp server issues: ssh root@live zfs send | ssh root@bkp zfs recv
23:28:03 Hope it doesn't create some sort of loop :)
23:28:21 Looks good to me.
23:28:45 I should just execute the command on the remote host.
23:32:21 However, you would need an SSH key on the live server for the backup server if you do not already have one.
23:32:23 Yeah, that worked.
23:32:30 Just had to use quotes.
23:32:33 Nice :)
23:33:07 ssh root@112 'zfs send -i zfs/testindex@now4 zfs/testindex@now5 | ssh root@112 zfs recv zfs/testindex'
23:33:27 I don't want the live server to hold keys to the backup server, only the other way around.
23:33:42 This way, if live gets compromised, hackers can't jump to backup and erase my backups or encrypt them.
23:34:03 I'm surprised most backup tools don't use this method of thinking as default.
23:35:02 Without the key, how do you connect back to your backup server?
23:35:46 Son of a...
23:35:48 lol
23:36:32 I can probably program around it with perl/expect and actually use a password.
23:36:56 But that's not ideal either, because a hacker could compromise sshd and read all that I enter.
23:37:59 I guess the backup system could rotate passwords at every interaction; this way each password is valid just for that session.
23:39:34 Maybe you can use sshpass on the server machine, so you could use a password.
23:40:25 Yeah, was just reading about that.
23:41:28 But then the password would show in the process tree, which would not be as big of a problem if you limit process visibility to only the owning user.
23:42:34 True, but the password would also be changed as soon as the transfer finishes.
23:42:57 So the bkp system: generates a new pass, sshs in using the new pass, generates a new pass.
23:43:17 It would be vulnerable for the duration of the transfer.
23:43:24 Ugh, not ideal.
23:43:36 There has to be another way.
23:44:01 In my opinion, an SSH key would be better for that.
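A pull-style variant of the command quoted above, run on the backup server so that only it holds a key. The hostname is a placeholder; the dataset and snapshot names are taken from the log:

```shell
# Run ON the backup server. The live server never needs a key to the
# backup server: the incremental stream is received locally instead of
# being piped through a second ssh hop back (which is what raised the
# "loop" concern above).
ssh root@live 'zfs send -i zfs/testindex@now4 zfs/testindex@now5' \
    | zfs recv zfs/testindex
```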
23:44:21 But how would that prevent a hacker from ssh-ing into my backup server?
23:45:08 They would only be able to if they have root access on your server machine.
23:45:23 Well yeah, that's what I'm assuming: the worst.
23:45:42 They gain full privilege on production and want to encrypt everything. Live & backup.
23:45:54 And if they have root access, they can just wait for the next connect from the backup server, so yeah...
23:46:42 Is it possible to get the backups with an unprivileged account?
23:51:11 It wouldn't matter.
23:51:26 If that account has snapshot access, then they can just use that to connect over and destroy the snapshots.
23:52:52 Commercial systems have something called immutable storage,
23:53:07 where no matter what, snapshots can't be removed except through time policies.
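One partial mitigation for the threat model above, sketched under the assumption that the backup server connects as a dedicated unprivileged user: delegate only send-related rights with `zfs allow` and pin the SSH key to a single command, so a compromised live box still cannot destroy anything on the backup side with that account. User name, key, and snapshot names are hypothetical, and this does not help once the attacker has full root on the live machine itself:

```shell
# On the live server: create an unprivileged user for backups and
# delegate read-only send/snapshot rights instead of using root.
pw useradd zfsbackup -m
zfs allow zfsbackup send,snapshot,hold zfs/testindex

# In ~zfsbackup/.ssh/authorized_keys, restrict the backup server's key
# to one forced command (OpenSSH "command=" and "restrict" options):
# command="zfs send -i zfs/testindex@now4 zfs/testindex@now5",restrict ssh-ed25519 AAAA... backup@bkp
```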