06:25:10 I wonder again why, when I select a fixed IP for the external interface on SmartOS in /usbkey/config, that IP is not used at all; the external interface is left with no IP assigned in the global zone.
08:46:26 I was thinking not to use a DMZ from the main router on the external network, but to do per-VM port forwarding as needed.
08:47:02 Again, not sure if it is OK for the external interface to simply not have an IP address in the global zone.
10:50:37 hello all, a simple question about filesystem mounting: I have an NVMe SSD and formatted it as UFS in the global zone on SmartOS. How can I mount it in a local zone under a directory, so that even after the local zone reboots it mounts automatically? Is there an example?
10:51:13 best wishes
10:54:23 and does UFS support the TRIM feature on illumos/SmartOS?
11:07:55 I think the cleanest way to handle that is to just mount it in the GZ as normal and then add it as a lofs mount to the zone
12:46:56 I plan to add a physical NTFS drive to a Linux bhyve HVM to share it over SMB. If the disk is NTFS, would that be possible?
14:16:03 jperkin: Thank you
14:33:21 how do I link an etherstub to a physical network link? e.g. how to link etherstub0 to ixgbe0? best wishes
15:12:59 pjustice: if you still have a GZ that's seeing the upgrade issue, could you grab https://us-central.manta.mnx.io/pkgsrc/public/tmp/check-pkgdb/pkg_admin and run ./pkg_admin rebuild-tree? I have a feeling you will see errors.
15:18:23 jperkin: see pm
15:36:23 tozhu: So for example, ixgbe0 has x.y.z.A/24 and you want to create an etherstub that is also on x.y.z.A/24? Or do you want an etherstub that is w.x.y.A/24?
15:37:31 If you want link-level connectivity, you shouldn't need an etherstub (which only serves same-machine traffic anyway); you can just use VNICs over ixgbe0 that are also x.y.z.A/24.
15:38:28 If you want a distinct IP network on your etherstub, you'll have to route between the etherstub's network and your physical (ixgbe0) network.
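The lofs approach jperkin suggests at 11:07:55 can be sketched roughly as follows. Device path, mount points, and the zone name "myzone" are all placeholders, not values from the discussion:

```shell
# Global zone: mount the UFS filesystem (device path is a placeholder).
mkdir -p /opt/data
mount -F ufs /dev/dsk/c1t0d0s0 /opt/data

# Note: the SmartOS GZ root is rebuilt from the boot image on every boot,
# so /etc/vfstab edits do not persist; a persistent GZ mount is usually
# arranged via a script or SMF service under /opt/custom.

# Loop the directory into a native zone as a lofs mount. The fs entry is
# stored in the zone configuration, so it is mounted automatically every
# time the zone boots.
zonecfg -z myzone <<'EOF'
add fs
set dir=/data
set special=/opt/data
set type=lofs
end
EOF
```

On SmartOS specifically, zones are normally managed through vmadm, which accepts an equivalent `filesystems` list (`type`, `source`, `target`) in the VM payload; check the vmadm man page for the exact form.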
You can do that with a zone that has ixgbe0 (or a vnic over ixgbe0) AND a vnic on top of the etherstub.
15:38:31 danmcd: not sure if I could create this: ixgbe0 — etherstub0 — vnic0, vnic1, vnic2 … with ixgbe0 linked to a physical switch
15:39:45 An etherstub serves as an on-machine ethernet switch. If you're sharing the same IP network with ixgbe0, you can just put more vnics over ixgbe0; you wouldn't need an etherstub.
15:40:14 If you want a distinct IP network, then you need to route. I don't understand the precise problem you're trying to solve.
15:42:31 danmcd: I'm not clear on the concept of an etherstub, and tried to find some docs, but all of them describe how to use it; I haven't found docs describing what exactly it is.
15:43:12 danmcd: thank you for the explanation
15:43:34 I got it, thank you
15:44:20 YW.
15:46:25 another question regarding ZFS: ZFS on illumos/SmartOS does not currently support dRAID, but OpenZFS supports it. Is there any reason not to support it, or is there a plan to port it from Linux/FreeBSD into the illumos kernel? Does anyone know the plan or status?
15:48:45 I believe dRAID is a very useful case when there is a JBOD with 60*22T disks; replacing a failed disk is much quicker.
16:06:44 I don't know of any short-term plans for dRAID, but that is a question better asked to the wider audience on #illumos IMHO.
16:11:29 thanks danmcd
16:50:39 I would like to try Manta; what is the best available guide for installation?
16:51:30 xmerlin: Start here: https://github.com/TritonDataCenter/manta/blob/master/docs/operator-guide/README.md
16:52:59 xmerlin: Manta requires a lot more up-front consideration for deployment. You need to consider both your current need and potential future growth to make sure that it's scaled correctly for your needs.
16:54:40 Do we need to dedicate entire machines to Manta, or is it possible to utilize parts of existing SmartOS servers for the various zones?
16:54:50 It's best if you do.
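The etherstub model danmcd describes above can be sketched with dladm; the link names here are illustrative:

```shell
# An etherstub is a virtual, machine-internal ethernet switch.
# Create one and hang several VNICs off it:
dladm create-etherstub etherstub0
dladm create-vnic -l etherstub0 vnic0
dladm create-vnic -l etherstub0 vnic1

# vnic0 and vnic1 can now talk to each other, but the etherstub itself
# has no connection to ixgbe0 or the physical switch.

# If the zones should share ixgbe0's IP network, skip the etherstub and
# create the VNICs directly over the physical link instead:
dladm create-vnic -l ixgbe0 vnic2

# If the etherstub carries a distinct IP network, a router zone needs a
# VNIC on each side (names are hypothetical):
dladm create-vnic -l ixgbe0 routerext0       # physical-network side
dladm create-vnic -l etherstub0 routerint0   # etherstub-network side
```

The key point is that `dladm create-vnic -l <link>` attaches a VNIC to whatever datalink it names, and an etherstub is just another datalink with no physical port behind it.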
16:55:15 In particular, storage nodes are best dedicated to a single storage instance so that all available storage can be used.
16:55:56 Usually storage zones are deployed with no quota, so that they *can* consume the entire pool. The downside of that is that if any significant space is used up otherwise, it reduces the available Manta capacity.
16:56:32 Surely it's better, but is it possible to start with servers containing other VMs and then switch to dedicated servers as the setup grows?
16:56:38 But you *can* deploy it with general compute. You'd be best off adding a quota to the storage zones in that case.
16:56:49 ok
16:57:26 Is there a comparison between Minio and Manta?
16:57:49 Moving the metadata tier is easy, but moving storage zones might be trickier. I'm not sure if the built-in migration tools will handle Manta at all.
16:57:58 There's not a doc with feature comparisons.
16:58:19 Has anyone used Manta as the object storage for Dovecot?
16:58:20 But my understanding is that minio is really just the external interface, and relies on other systems for back-end storage?
16:58:54 Not that I'm aware of, and you'd probably need to write a specific driver for it.
16:58:57 minio creates many xfs filesystems, one fs on each disk
16:59:14 dovecot is compatible with s3
16:59:17 That's *if* you use xfs as the backend.
16:59:36 Then in theory it can work, with an appropriate driver.
17:00:24 minio uses a custom erasure code parity to distribute objects across local and remote disks
17:01:31 The important thing is whether Manta covers the features of Amazon S3; in that case, the Dovecot S3 driver works perfectly.
17:01:45 Yeah, with Manta erasure coding is done at the zfs layer.
17:02:01 The Manta and S3 APIs are different, so you can't just treat it like an S3 endpoint.
17:03:31 as far as I can see there is an s3-manta-bridge, but I cannot see any recent release
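The quota advice above (capping a Manta storage zone that shares a box with general compute) can be applied at the ZFS layer; the UUID and size below are placeholders:

```shell
# Each SmartOS zone's root dataset lives under the "zones" pool, named
# by the zone's UUID. Setting a quota keeps a Manta storage zone from
# consuming the whole pool:
zfs set quota=4T zones/3f57eb1a-0000-0000-0000-000000000000

# Verify the limit and remaining space the zone can see:
zfs get quota,used,available zones/3f57eb1a-0000-0000-0000-000000000000
```

vmadm also exposes a `quota` property on the VM payload, which sets the same ZFS quota through the normal SmartOS tooling; see vmadm(1M) for the units and exact behavior.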