We're seeing what I think are reasonable failure rates: the 6TB Reds are between
2 and 2.5% per year. The 8s are slightly higher, but we're still running
numbers on those. Overall, the "slowness" of the volume is the drives, not
gluster or zfs. We toss a 100GB-ish ZIL drive in the box as well, so that
helps, and run the OS on a single drive. If it fails and a box drops out, oh
well, there's plenty of redundancy built in. Re-syncs can take a while,
depending on how long that node was "missing", but we haven't had two
completely fail at the same time, yet... Speaking of, we're running
InfiniBand interconnects between all the nodes, so that part is fast. I
didn't include the cost of that in the $70K figure though, but 10GbE works
as well and is cheap.
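If you want to keep an eye on a re-sync while it runs, the heal info output
is the thing to poll. Rough sketch, with the volume name as a placeholder
(ours is obviously called something else):

  sudo gluster volume heal gv0 info

That lists the entries each brick still needs to heal, so once it comes back
empty, the node that was "missing" has caught up again.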
Anyway, in the interests of keeping things simple, this is working pretty
well for us. Turning off atime helps, as does the ZIL. And not running
dedup, of course, since we know our data will never, ever, be duplicated.
sudo zfs create -o atime=off -o compression=lz4 -o exec=off \
    -o xattr=sa -o acltype=posixacl tank/gluster
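The ZIL drive mentioned above just gets attached as a separate log vdev on
the pool, roughly like so (device path is only an example, substitute your
own):

  sudo zpool add tank log /dev/disk/by-id/nvme-EXAMPLE-zil

If you'd rather have the log mirrored, it's the same command with
"log mirror <dev1> <dev2>" instead of a single device.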
I've got a basic example howto mostly written up that I can send to anybody
interested, minus some of the site specific stuff we do.
And we're using these, at least this year:
https://www.supermicro.com/products/system/4u/6048/ssg-6048r-e1cr24h.cfm
(.......and, if you order right now, it'll cost less than $70K/pb. Call
1-800-netapp-sux, that's 1-800-netapp-sux, 800-netapp-sux!)
kw
Interesting... how are you finding gluster stability, and any gotchas with
zfs underneath it?
To the OP, if you can switch to ZFS, it might be your best option
Post by Ken Woods
Just as a curiosity, what's your base data size, and how much change do
you have per month?
We are standing up zfs under gluster for less than $70k/PB.
It can be done for $60k if you stuff drives into the middle of a chassis.
It'd really make my day if you're using netapps. Please tell me you are.
Post by Tim Coote
Hullo
is there any work/success in using rsnapshot with cloud based storage?
I've noticed that I'm chewing through quite a lot of cash using owned
storage and would prefer an approach that used, say, Google Nearline, or
AWS glacier storage. However, neither of these supports hard links, so the
size of the backups would be huge.
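(For anyone wondering why the hard links matter so much: rsnapshot keeps
unchanged files as hard links between snapshots, so each file is stored once
no matter how many snapshots reference it. A quick way to see that on an
existing backup tree, paths purely illustrative:

  du -sh /backups/daily.0
  du -shc /backups/daily.0 /backups/daily.1

du only counts each hard-linked inode once per run, so the "total" line of
the second command is barely bigger than daily.0 alone. Object storage that
can't do hard links would charge for every snapshot at full size.)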
Post by Tim Coote
tc