[tech] 10GbE upgrade for loveday :: Downtime on Saturday 12/01/2019
Felix von Perger
frekk at ucc.asn.au
Sun Jan 6 14:48:27 AWST 2019
Hi again,
A quick follow-up - we now have 4 additional "generic" SFP+ modules
<https://www.fs.com/au/products/74668.html> from FS.com and 2 HP
NC523SFP dual-port PCIe cards
<https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c02694717>
to play with. Ceph has also been configured across the 3 Proxmox hosts,
giving a total of around 400GB of SSD-backed, redundant storage.
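For anyone curious, the new pool can be sanity-checked from any of the
Proxmox nodes with something like the following (the pool name "vm-ssd"
is only a placeholder, not necessarily what it is actually called):

    ceph status                          # overall cluster health
    ceph osd tree                        # OSD layout across the 3 hosts
    ceph df                              # raw vs. usable capacity per pool
    ceph osd pool get vm-ssd size        # replica count, expect 3
    ceph osd pool get vm-ssd min_size    # writes allowed with one host down, expect 2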
Now that we have the parts, time to do the upgrade! [CFE] and I will be
coming in next Saturday to attempt to connect loveday and motsugo to the
10G network, and if anyone else is interested please feel free to meet
us in the clubroom around 10am.
As a result of installing the cards and (re)configuring networking,
expect downtime for motsugo (including email access via IMAP/POP3 and
ssh.ucc.asn.au, plus unexpected termination of all running user
sessions), and allow for the possibility of (temporary) total
catastrophic network failure on 2019-01-12 between 10:00 and 18:00 AWST.
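Roughly speaking, the change on each host boils down to moving its
existing addresses onto the new 10G port. A sketch of the ifupdown
config (the interface name enp3s0f0 and the addresses below are
placeholders, since the NC523SFP ports won't enumerate under those names
until the cards are actually installed):

    # /etc/network/interfaces (sketch only)
    auto enp3s0f0
    iface enp3s0f0 inet static
        address 192.0.2.10          # placeholder - reuse the host's existing address
        netmask 255.255.255.0       # placeholder
        gateway 192.0.2.1           # placeholder

    # On the Proxmox hosts the 10G port would instead be enslaved to the
    # existing bridge, e.g.:
    # auto vmbr0
    # iface vmbr0 inet static
    #     ...
    #     bridge_ports enp3s0f0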
Thanks to the failover capabilities of our Proxmox cluster, it is
unlikely that there will be any noticeable interruption to our VM
hosting and storage services during this time, except in the event of
the total network failure mentioned above.
Best regards,
Felix von Perger [FVP]
UCC President & Wheel member
On 16/11/18 10:40 pm, Felix von Perger wrote:
> Hi tech,
>
> I've looked into configuring ceph distributed storage for VM disks
> (http://docs.ceph.com/docs/master/releases/luminous/) on the Proxmox
> cluster using the 3 existing 500GB SSDs. In order to ensure failover is
> possible in case one of the 3 hosts goes offline, ceph requires a
> minimum data redundancy of 3, leaving a total usable capacity of around
> 500GB (from the total raw storage space of 1.5TB). The idea is to have
> at least our core VMs and filesystems (i.e. /services) on SSD-backed
> storage to make things more snappy.
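>
> As a concrete sketch (using "rbd" as an example pool name), that
> redundancy corresponds to something like:
>
>     ceph osd pool set rbd size 3       # keep 3 copies of every object
>     ceph osd pool set rbd min_size 2   # stay writable with one host down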
>
> As per the documentation
> (http://docs.ceph.com/docs/master/start/hardware-recommendations/) ceph
> is limited to the bandwidth of the slowest network link, and given that
> we are using SSDs there would be a noticeable improvement upgrading to
> 10Gbps from the current bottleneck of 1Gbps on loveday.
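>
> An easy way to put numbers on that bottleneck (and to verify the new
> links afterwards) would be iperf3, roughly:
>
>     iperf3 -s                 # on loveday
>     iperf3 -c loveday -t 30   # on one of the other hosts
>
> which should report a little under 1 Gbit/s now and hopefully close to
> 10 Gbit/s once the upgrade is done.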
>
> Hardware-wise, the cheapest option seems to be the Mellanox ConnectX-2
> (such as https://www.ebay.com.au/itm/192421526775) for around $50 each.
> SFP+ cabling could either be passive (such as
> https://www.fs.com/au/products/30856.html for $17) or a somewhat fancier
> active setup using fibre (such as 2 *
> https://www.fs.com/au/products/74668.html for $22 each).
>
> It seems that loveday is fussy about booting when certain types of
> PCIe cards are installed. Should this be an issue and the
> above-mentioned hardware prove effectively unusable (barring a fix via
> a BIOS/firmware upgrade on loveday), the ceph cluster could instead be
> configured using the other machines that already have 10GbE (i.e.
> murasoi/medico/maltair), albeit with the loss of the convenient Proxmox
> ceph configuration UI, and the spare 10GbE card could be put to use
> elsewhere.
>
> Let me know if you have any thoughts about this.
>
> Best regards,
>
> Felix von Perger
> UCC President & Wheel member
>