[tech] 10GbE upgrade for loveday

Felix von Perger frekk at ucc.asn.au
Fri Nov 16 22:40:53 AWST 2018


Hi tech,

I've looked into configuring ceph distributed storage for VM disks 
(http://docs.ceph.com/docs/master/releases/luminous/) on the Proxmox 
cluster using the 3 existing 500GB SSDs. To ensure failover is possible 
in case one of the 3 hosts goes offline, ceph requires a minimum 
replication factor of 3, leaving a usable capacity of around 500GB 
(from the total raw storage of 1.5TB). The idea is to have at least our 
core VMs and filesystems (i.e. /services) on SSD-backed storage to make 
things more snappy.
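
For concreteness, the Proxmox side of the setup would look roughly like 
the following (a minimal sketch assuming the PVE 5.x pveceph tooling 
and that each SSD shows up as /dev/sdb - device names are placeholders):

    # on each of the 3 hosts: install the ceph packages and a monitor
    pveceph install --version luminous
    pveceph createmon

    # turn the 500GB SSD into an OSD (device name is a placeholder)
    pveceph createosd /dev/sdb

    # replicated pool: size 3 keeps a copy on every host,
    # min_size 2 keeps the pool writable with one host down
    pveceph createpool vm-ssd --size 3 --min_size 2

With size 3 across 3 hosts the usable capacity is raw/3, which is where 
the ~500GB figure above comes from.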

As per the hardware recommendations 
(http://docs.ceph.com/docs/master/start/hardware-recommendations/), 
ceph replication is limited by the bandwidth of the slowest network 
link, and even a single SATA SSD can saturate 1Gbps (~125MB/s), so 
upgrading loveday from its current 1Gbps bottleneck to 10Gbps should 
give a noticeable improvement.
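
If the 10GbE links end up as a dedicated storage segment, ceph can be 
told to keep replication traffic off the general-purpose network via 
ceph.conf (the subnets below are made-up examples, not our actual 
addressing):

    [global]
        # client <-> cluster traffic
        public network = 10.0.0.0/24
        # OSD <-> OSD replication, over the 10GbE links
        cluster network = 10.0.1.0/24

and the raw link speed can be sanity-checked with iperf3 once the 
cards are in:

    iperf3 -s                  # on loveday
    iperf3 -c loveday -t 30    # from another node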

Hardware-wise, the cheapest option seems to be the Mellanox ConnectX-2 
(such as https://www.ebay.com.au/itm/192421526775) for around $50 each. 
SFP+ cabling could either be a passive direct-attach cable (such as 
https://www.fs.com/au/products/30856.html for $17) or a somewhat 
fancier active setup over fibre (such as 2 * 
https://www.fs.com/au/products/74668.html at $22 each, plus a fibre 
patch lead between the transceivers).
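
Assuming loveday boots with the card installed, whether it is detected 
and linked at full speed can be checked along these lines (the 
interface name below is a guess; ConnectX-2 cards use the mlx4 driver):

    # card visible on the PCIe bus?
    lspci | grep -i mellanox

    # link up at 10Gbps? (interface name will vary)
    ethtool enp3s0 | grep -E 'Speed|Link detected'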

It seems that loveday is fussy about booting when certain types of PCIe 
cards are installed (BIOS/firmware upgrades may help). Should this 
prove to be an issue and the above-mentioned hardware be effectively 
unusable in loveday, the ceph cluster could instead be configured using 
the other machines that already have 10GbE (i.e. 
murasoi/medico/maltair), albeit with the loss of the convenient Proxmox 
ceph configuration UI, and the spare 10GbE card could be put to use 
elsewhere.
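
Without the Proxmox UI, setup on those machines would be by hand, e.g. 
with the ceph-deploy tool from the Luminous quick-start (a sketch only; 
device names are placeholders):

    # from an admin node with ceph-deploy installed
    ceph-deploy new murasoi medico maltair
    ceph-deploy install --release luminous murasoi medico maltair
    ceph-deploy mon create-initial

    # repeat per host/SSD (device name is a placeholder)
    ceph-deploy osd create --data /dev/sdb murasoi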

Let me know if you have any thoughts about this.

Best regards,

Felix von Perger
UCC President & Wheel member


