[tech] Server Donation; use as bare-metal Staging "backup" storage host
Nick Bannon
nick at ucc.gu.uwa.edu.au
Wed Mar 24 13:08:37 AWST 2021
On Tue, Mar 23, 2021 at 01:45:54PM +0800, David Adams wrote:
> During one of the last UWA IT chuck outs I managed to come in possession of
> two Dell PowerEdge r710 servers.
[...]
> I have no real expectations around my own use of the server (hence donation)
> but I would appreciate being able to be a part of the setup process as I
> am interested in it.
Lovely! I think those drive bays are right-sized for a couple of things we
want to do right now. I saw another group pick up a bunch of R710s in
late 2018, and I even bought a couple of drive sleds on eBay, but lining up
enough enthusiast build time to make finishing the job an efficient use
of club money... has been tricky. That's what made me pause before the
final ACTION(s).
1. Staging backups for in-clubroom server
rebuilds/upgrades/rearrangements, separate from the offsite legacy
backups. We've already budgeted for the bulk HDD storage we need to
start this plan. Until now I'd been aiming to start with an old IBM/Lenovo
x-series server and replace the drive controller/HBA with a secondhand
LSI9207-8i I picked up.
2. Minor upgrades/DR of our very oldest machines like
motsugo/mooneye/murasoi - we've virtualised a lot, but we still need some
bare metal. [333] was planning to fill in some more detailed plans here
for motsugo; this could add an option.
vs:
3. Eventually, or if the total costs start to add up, we need to weigh
all this against major upgrades to post-2018 new/secondhand equipment with:
- better hardware mitigations for Spectre/Meltdown and Rowhammer
- M.2 / U.2 slots/ports
- less-terrible firmware processes and management consoles, faster boot,
care of https://linuxboot.org/ / NERF / https://fwupd.org/ , etc.
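(As a taste of that last point: on gear with LVFS support, firmware
updates can come straight from the running OS. A minimal sketch, assuming
fwupd is installed - these are stock fwupdmgr commands, nothing
UCC-specific:

  $ fwupdmgr refresh        # fetch current update metadata from the LVFS
  $ fwupdmgr get-updates    # list devices with pending firmware updates
  $ fwupdmgr update         # apply them, rebooting where a device needs it
)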
So, for 1. all we need is:
- at least two 10TB+ 3.5" HDDs, more like 14-18 TB at current prices:
- Take care to avoid SMR drives - so far only the smallest, and now
the very largest, capacities have been SMR
- https://staticice.com.au/cgi-bin/search.cgi?q=14TB+exos
- https://www.ozbargain.com.au/node/612624 shuckables and the like
are almost irresistible; put the savings into spares
- room for a spare for mirror rebuilding, and/or expanding with
another pair
- No RAID-5 https://www.baarf.dk/
- An SSD or two for boot/L2ARC/ZIL/SLOG/bcache (see the pool sketch
after this list)
- Mounting for the above
(Looks like we can print the drive sleds/trays/caddies!
https://www.thingiverse.com/thing:2168447
)
- Access through the Dell PERC6/i/H200/H700/H800... or the LSI9207-8i
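(To make the mirror/spare/SSD plan concrete, a minimal ZFS sketch - the
pool name and device names below are placeholders, not a decision:

  # Two big HDDs as a plain mirror - no raidz1/RAID-5:
  $ zpool create staging mirror \
      /dev/disk/by-id/ata-BIGDISK-A /dev/disk/by-id/ata-BIGDISK-B
  # Expanding later with another pair:
  $ zpool add staging mirror \
      /dev/disk/by-id/ata-BIGDISK-C /dev/disk/by-id/ata-BIGDISK-D
  # Swapping in the shelf spare when a disk dies:
  $ zpool replace staging ata-BIGDISK-A /dev/disk/by-id/ata-SPARE
  # SSD partitions: a small SLOG for sync writes, the rest as L2ARC:
  $ zpool add staging log /dev/disk/by-id/nvme-SSD-part1
  $ zpool add staging cache /dev/disk/by-id/nvme-SSD-part2
)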
Most kinds of upgrades to motsugo or molmol (our fileservers), or to
our larger VMs, are disruptive. To help, we need a target we can
send whole snapshot filesystems and system images to and from at
gigabit/dual-gigabit/10G sorts of speeds. Some flexibility is needed;
we'll probably want to at least try most of the following at
multi-terabyte scale:
- dd if=/dev/...
- zfs/btrfs send/receive (sketch after this list)
- borg/borgmatic (aggregates all the little files, making an offsite
rclone much happier)
- a target for https://pbs.proxmox.com/ ?
- a https://tracker.debian.org/pkg/moosefs chunkserver ?
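(For the zfs case, roughly what a staging run could look like - "tank",
"staging" and the dataset names are made up for illustration; piping
through mbuffer or pv is worth trying to keep the gigabit links full:

  # Snapshot, then seed the staging box with a full replication stream:
  $ zfs snapshot -r tank/vms@pre-upgrade
  $ zfs send -R tank/vms@pre-upgrade | ssh staging zfs receive -u backup/vms
  # Just before cutover, a quick incremental catch-up
  # (-F rolls the target back to the last common snapshot first):
  $ zfs snapshot -r tank/vms@final
  $ zfs send -R -I @pre-upgrade tank/vms@final | \
      ssh staging zfs receive -uF backup/vms
)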
From outside, this is going to look a lot like molmol with extra space,
but it's time we tried to pin down why molmol's latency goes up so much
when one client starts a single big boring copy or VM clone.
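(A starting point for that: OpenZFS can show per-vdev latency live, so
we can watch it while a big clone runs. "tank" below stands in for
molmol's actual pool name:

  # Per-vdev latency columns, refreshed every 5 seconds:
  $ zpool iostat -vl tank 5
  # Full request-latency histograms (OpenZFS 0.8+):
  $ zpool iostat -w tank 5
  # Cross-check against per-disk utilisation from sysstat:
  $ iostat -xz 5
)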
Nick.
--
Nick Bannon | "I made this letter longer than usual because
nick-sig at rcpt.to | I lack the time to make it shorter." - Pascal