02:46:22 can zfs on geli do trim on a ssd?
02:51:33 Hiya! If I want to install Android in a bhyve VM (I've seen the video that suggests it's possible), what image/ROM should I install? Is there a lineageos build that isn't tied to hardware, or do I build an AOSP on a Linux box?
06:05:17 anyone know what the major differences between this and zram are?
06:05:18 https://man.freebsd.org/cgi/man.cgi?query=md&sektion=4&format=html
06:05:18 https://man.freebsd.org/cgi/man.cgi?query=mdconfig&sektion=8&apropos=0&manpath=FreeBSD+14.2-RELEASE+and+Ports
06:05:18 also i am pretty sure there are differences, else this wouldn't exist
06:05:18 https://wiki.freebsd.org/SummerOfCode2019Projects/VirtualMemoryCompression
11:47:11 having a stupid day.. I made a zvol, and was expecting to be able to
11:47:36 `gpart create -s gpt /dev/zvol/embiggen/iscsi/straylight` and then add partitions inside it
11:50:16 but I get `gpart: arg0 'zvol/embiggen/iscsi/straylight': Invalid argument`
11:52:26 aha. I need to create the zvol with `volmode=full`, not `volmode=dev`. TIL.
20:45:55 How come deduplication only takes effect for new data? I suppose it's because of the implementation effort
21:31:26 because it would mean a massive IO workload if it applied retroactively; you would need to read everything and write it again. if you want that, you can do it manually by making a new dataset with deduplication enabled, copying your stuff there, and later renaming the new dataset and removing the old one
21:34:07 (and as you mentioned deduplication, note that only a few use cases profit from it, and you should thoroughly evaluate whether yours does before enabling it)
21:52:53 nimaje, yeah, I noticed that it only gives me 1%. Still 2GB on my small drive ^^'
21:53:34 I guess ZFS is smart enough to store blocks redundantly anyway if you set copies > 1
22:02:53 compression will probably help more and uses fewer resources
22:04:08 Who is it who benefits from dedup?
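[The zvol fix from the 11:47-11:52 exchange above, written out as a command sequence. The pool/zvol names come from the log; the zvol size and the partition type are made up for illustration, and this sketch is untested.]

```sh
# Create a zvol whose full GEOM provider is exposed, so partitioning
# tools can see it (volmode=dev exposes only a bare /dev/zvol node):
zfs create -V 10G -o volmode=full embiggen/iscsi/straylight

# Now the gpart call from the log succeeds:
gpart create -s gpt /dev/zvol/embiggen/iscsi/straylight
gpart add -t freebsd-zfs /dev/zvol/embiggen/iscsi/straylight

# An existing zvol can be switched in place, though the change may not
# take effect until the zvol is renamed or the pool is reimported:
zfs set volmode=full embiggen/iscsi/straylight
```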
I mean, even for files containing many zeroes, compression already handles that
22:05:53 if for example you have a large dataset where a group of different users are storing the same movie, it might benefit - think cloud provider scale.
22:06:50 They only store the same movie if they pirated it, since their private movies are all distinct :D
22:06:51 but as mentioned, the cost of using dedupe is real. i can see using it for a backup destination, but not for "primary storage"
22:07:36 the movie example was just an example; any large file which was compressed by someone else and shared many times
22:07:55 * jemius is just toying around with a private machine and switched on every cool ZFS switch he could find
22:08:38 i used early de-dupe in production on a netapp (not ZFS but WAFL), and I got bit by a bug which affected cifs shares
22:08:58 learned my lesson. only lost some ACLs, but it could have been much worse :-)
22:10:00 de-dupe was also used at a company i worked at, for a sun storage array running ZFS that received backups; it does provide some non-trivial % savings at scale
22:10:58 another use case / dream case is where you have say ... 900 virtual machine images on one deduped volume. if each VM shares a large number of blocks (like OS images) then there can be benefit
22:11:37 but it's risky because of what happens when the IO bottleneck comes from the de-dupe engine/code instead of just the disks; unpredictable
22:13:05 I guess dedup makes all writes significantly slower, because each new block has to be compared against the table of existing block hashes
22:13:26 I think mostly write-once storage, where you expect many files to share some blocks for some reason, benefits from de-dup (and I think that is mostly back-ups)
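[The manual retroactive-dedup workaround described at 21:31:26 might look like the following; `tank/stuff` is a hypothetical pool/dataset and this sketch is untested.]

```sh
# Dedup only applies to blocks written after it is enabled, so to dedup
# existing data you rewrite it into a dataset that has dedup on:
zfs create -o dedup=on tank/stuff.new    # hypothetical names

# Copy everything so each block passes through the dedup table:
rsync -a /tank/stuff/ /tank/stuff.new/

# Once the copy is verified, swap the datasets and drop the old one:
zfs rename tank/stuff tank/stuff.old
zfs rename tank/stuff.new tank/stuff
zfs destroy -r tank/stuff.old
```

Note this rewrite is exactly the "read everything and write it again" IO cost mentioned in the channel, just paid once up front.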