-
nomia
can zfs on geli do trim on a ssd?
-
gh00p
Hiya! If I want to install Android in a bhyve VM (I've seen the video that suggests it's possible), what image/ROM should I install? Is there a lineageos build that isn't tied to hardware, or do I build an AOSP on a Linux box?
-
polyduekes
anyone know what the major differences between this and zram are?
-
polyduekes
also i am pretty sure there are differences else this wouldn't exist
-
dch
having a stupid day.. I made a zvol, and was expecting to be able to
-
dch
`gpart create -s gpt /dev/zvol/embiggen/iscsi/straylight` and then add partitions inside it
-
dch
but I get `gpart: arg0 'zvol/embiggen/iscsi/straylight': Invalid argument`
-
dch
aha. I need to create the zvol with `volmode=full` not `volmode=dev`. TIL.
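A minimal sketch of the fix dch describes, reusing the zvol path from the log (the size and partition type are illustrative):

```shell
# Create the zvol with volmode=full so GEOM exposes it as a full
# provider that gpart can partition (volmode=dev exposes only the
# raw device node, so gpart rejects it with "Invalid argument").
zfs create -V 32G -o volmode=full embiggen/iscsi/straylight

# Now the zvol behaves like a disk:
gpart create -s gpt zvol/embiggen/iscsi/straylight
gpart add -t freebsd-zfs zvol/embiggen/iscsi/straylight

# For an existing zvol you can set the property instead, but note
# that a volmode change only takes effect once the zvol is re-tasted
# by GEOM (e.g. after a pool export/import or reboot):
# zfs set volmode=full embiggen/iscsi/straylight
```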
-
jemius
How come deduplication only takes effect for new data? I suppose it's because of the implementation effort
-
nimaje
because it would mean a massive IO workload if it applied retroactively; you would need to read everything and write it again. If you want that, you can do it manually: make a new dataset with deduplication enabled, copy your stuff there, and later rename the new dataset and remove the old one
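nimaje's manual approach could look roughly like this; the pool/dataset names are hypothetical, and a local `zfs send | zfs recv` is used so every block gets rewritten through the dedup-enabled target:

```shell
# Hypothetical names; adjust to your pool. Rewriting the data is the
# only way existing blocks end up in the dedup table.
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs recv -o dedup=on tank/data.new

# Swap the datasets and clean up once you've verified the copy.
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data
zfs destroy -r tank/data.old
```

(`cp` or `rsync` into a dedup-enabled dataset works too; send/recv has the advantage of carrying snapshots along.)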
-
nimaje
(and as you mentioned deduplication, note that only a few use cases profit from it, and you should thoroughly evaluate whether yours does before enabling it)
-
jemius
nimaje, yeah, I noticed that it only gives me 1%. Still 2GB on my small drive ^^'
-
jemius
I guess ZFS is smart enough to store blocks redundantly anyways if you set copies > 1
-
nimaje
compression will probably help more and uses less resources
-
jemius
Who actually benefits from dedup? I mean, even for files containing many zeroes, compression already helps with that
-
aquamo4k
if for example you have a large dataset where a group of different users are storing the same movie, it might benefit - think cloud provider scale.
-
jemius
They only store the same movie if they pirated it, since their private movies are all distinct :D
-
aquamo4k
but as mentioned, the cost of using dedupe is real, i can see using it for a backup destination but not for "primary storage"
-
aquamo4k
the movie example was just an example, any large file which was compressed by someone else and shared many times
-
» jemius is just toying around with a private machine and switched every cool ZFS switch he could find on
-
aquamo4k
i used early de-dupe in production on a netapp (not ZFS but WAFL), and I got bit by a bug which affected cifs shares
-
aquamo4k
learned my lesson, only lost some ACLs but it could have been much worse :-)
-
aquamo4k
de-dupe was also used at a company i worked at for a sun storage array running ZFS that received backups, it does provide some non-trivial % savings at scale
-
aquamo4k
another use case / dream case is where you have say ... 900 virtual machine images on one deduped volume. if each VM shares a large number of blocks (like OS images) then there can be benefit
-
aquamo4k
but it's risky because of what happens when the IO bottleneck comes from the de-dupe engine/code instead of just the disks; it gets unpredictable
-
jemius
I guess dedup makes all writes significantly slower because each new block has to be compared to the hashlist
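jemius's intuition can be sketched with a toy model: every block write pays for one lookup in a table keyed by the block's hash (in ZFS this is the DDT, with sha256 as the default checksum for dedup), and only previously unseen blocks are actually stored. All names below are illustrative, not ZFS internals.

```python
import hashlib

class ToyDedupStore:
    """Toy block store: one hash lookup per written block, unique blocks only."""

    def __init__(self, blocksize=128 * 1024):
        self.blocksize = blocksize
        self.table = {}    # hash -> (block, refcount); stands in for the DDT
        self.lookups = 0   # every single write pays for one of these

    def write(self, data):
        refs = []
        for off in range(0, len(data), self.blocksize):
            block = data[off:off + self.blocksize]
            key = hashlib.sha256(block).digest()
            self.lookups += 1                 # the per-write cost jemius means
            if key in self.table:             # duplicate: bump refcount, store nothing
                blk, rc = self.table[key]
                self.table[key] = (blk, rc + 1)
            else:                             # new block: store it
                self.table[key] = (block, 1)
            refs.append(key)
        return refs

    def stored_bytes(self):
        return sum(len(blk) for blk, _ in self.table.values())

store = ToyDedupStore(blocksize=4)
store.write(b"aaaabbbb")
store.write(b"aaaacccc")     # the "aaaa" block is deduplicated
print(store.stored_bytes())  # 12: only three unique 4-byte blocks kept
print(store.lookups)         # 4: one table lookup per block written
```

The real cost ZFS pays is that the DDT may not fit in RAM, at which point each of those lookups can become a disk read.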
-
nimaje
I think mostly write-once storage, where you expect many files to share some block for some reason, benefits from de-dup (and I think that is mostly back-ups)