Hardware Evaluation: ASRock 1U Avoton Storage Server - 1U12LW-C2750

In the late 2000s, one of my various tech geek friends, Mike Horwath, convinced me to take another look at the new generation of storage appliances based on ZFS. We had been using UNIX boxes as fileservers since the '80s, with Sun gear like the Sun 3/260, but had traditionally used DAS ("Direct Attached Storage") or SAN ("Storage Area Network") for server storage. The advent of virtualization changed the equation: shared storage is very attractive in a virtualization environment, and suddenly, instead of fifty servers each with its own local direct-attached disk and varying levels of redundancy, it became desirable to make that a shareable resource with high redundancy.

I experimented with Nexenta for a while, but ran into several problems, including a fundamental one: it constantly clashed with years of UNIX admin experience in strange ways.

I eventually moved on to FreeNAS. While I've been very pleased with FreeNAS as NAS storage for our normal file storage needs, and have slowly replaced legacy FreeBSD-based servers with FreeNAS, I had significant problems getting ZFS and FreeNAS to be sufficiently responsive and to maintain performance under pressure. I have commented several times on the FreeNAS forums that I expect ZFS has the potential to be a fantastic way to store VM data, but that the resources required to do this effectively might be mind-numbing.

I had spent the past few years slimming our VMs to be more virtualization-friendly, taking steps to avoid pointless IOPS where possible by eliminating trivial metadata updates and the like. So a new requirement eventually evolved, one well-suited to L2ARC: reads of the working set ought to be fulfilled from ARC/L2ARC where possible.
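
For a sense of the scale involved, here is a quick back-of-the-envelope sketch in Python. The numbers are placeholders I made up for illustration, and the per-record header cost in particular is an assumption that varies by ZFS version; the point is only that indexing a large L2ARC eats a surprising amount of RAM.

# Back-of-the-envelope: RAM cost of indexing a VM working set held in L2ARC.
# Every figure below is an assumption for illustration, not a measurement.

working_set_gib = 400        # hypothetical VM working set to keep warm
volblocksize = 16 * 1024     # assumed 16K zvol block size
l2arc_header_bytes = 180     # rough per-record ARC header cost for L2ARC
                             # entries; varies by ZFS version

records = working_set_gib * 2**30 // volblocksize
header_ram_gib = records * l2arc_header_bytes / 2**30

print(f"records to index: {records:,}")
print(f"approximate RAM consumed by L2ARC headers: {header_ram_gib:.1f} GiB")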

To complicate matters, I am no stranger to having servers live in environments with power and cooling budgets, and both rack space and power are expensive. My ideal storage is low-power, very fast for reads, pretty fast for writes, and small in footprint. These are, sadly, rather contradictory requirements.

I had pretty much resigned myself to the idea that the solution would be to go Haswell, outfit it with some 2.5" drives in a 1U chassis, add some L2ARC, and I'd have a solution.

When the ASRock Avoton boards first came out, I was intrigued. While slower than Haswell, they had a few things going for them, including the ability to take up to 64GB of RAM. ASRock had also blessed the board with 12 SATA ports, making for a potentially powerful storage platform. The main problem is ZFS itself; it has a very heavy footprint on the system and requires lots of CPU to work well. I've predicted that the Avoton won't do so well with ZFS and CIFS to a single client, because it probably lacks the single-core horsepower to drive a Samba connection at full speed. However, thinking about VM storage, multiple instances of istgt, and pools of mirrors, I found myself wondering whether it might perform well as an iSCSI SAN running FreeNAS.
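
As a rough illustration of why pools of mirrors are attractive for VM storage, here is a small Python sketch comparing twelve drives arranged as six two-way mirrors against a single twelve-disk RAIDZ2 vdev. The per-disk figures are hypothetical rule-of-thumb numbers, not benchmark results.

# Rough comparison of two layouts for 12 disks. All per-disk numbers are
# assumptions, not measurements.

disks = 12
disk_tb = 4                  # hypothetical drive size
disk_read_iops = 150         # ballpark random-read IOPS for a 7200rpm disk

# Six 2-way mirrors: half the raw capacity, but each vdev adds IOPS and
# reads can be serviced from either side of a mirror.
mirror_capacity_tb = (disks // 2) * disk_tb
mirror_read_iops = disks * disk_read_iops

# One 12-disk RAIDZ2 vdev: better capacity, but random IOPS of roughly a
# single disk, since each block is striped across the members.
raidz2_capacity_tb = (disks - 2) * disk_tb
raidz2_read_iops = disk_read_iops

print(f"6 x 2-way mirrors : ~{mirror_capacity_tb} TB usable, ~{mirror_read_iops} random-read IOPS")
print(f"1 x 12-disk RAIDZ2: ~{raidz2_capacity_tb} TB usable, ~{raidz2_read_iops} random-read IOPS")

The numbers are crude, but the shape of the tradeoff is why mirrors keep coming up whenever VM storage is discussed.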

But I'd been busy these past few months, and hadn't gotten around to finding out.

Then I noticed ASRock had released a very intriguing product, one that took the Avoton board and crammed it into a 1U chassis holding twelve 3.5" and two 2.5" drives. This creates a rather innovative combination of qualities. Even for conventional NAS storage purposes, there are plenty of environments where lots of storage behind a pair of gigabit Ethernet ports would be useful: backups, nearline storage, and so on.

So I set about laying my hands on one. The folks over at ASRock were quite good about it, and so a big box arrived. Complete with a hole (gee, thanks, FedEx!).

[Unboxing image]

Well packed, dual boxed. Two double-layer corrugated cardboard boxes. FedEx still managed to puncture them both.

[Unboxing image]

Great interior padding saves the chassis.

[Unboxing image]

Accessory kit.

[Unboxing image]

Damage. Plenty of foam saves the day.

[Unboxing image]

At 32 inches long, this thing may have some trouble fitting into some racks.

[Unboxing image]

But thankfully someone who has worked with servers designed the packaging. The bag opens to the side, making unpacking reasonably easy.

[Unboxing image]

Finally unpacked and on the bench. Notice the ample ventilation and a UID button!

[Unboxing image]

Wow. Unit #52! Those 5's sure look like 6's.

[Unboxing image]

Screws attaching the back of the cover (in addition to screws in the front). Not my favorite, but a pragmatic solution to the inevitable problems inherent in a large 1U chassis.

[Unboxing image]

Cover off.

[Unboxing image]

Power supply. 80 PLUS, good. Dual 12V rails at 18A, perfect! 250 watts... wait, what? I'm going to have to check up on that. The typical start current for an HDD is around 2 amps at 12V, so a storage server with 12 drives ought to allow for 24A just to spin up the drives. With more recent ATX versions, mainboards have typically also drawn much of their power from the 12V rails, so 36A strikes me as a very reasonable spec. I'm going to have to investigate this further, since 36 amps at 12 volts would be 432 watts if all 12 drives were spun up together.
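
For what it's worth, here is that arithmetic spelled out as a tiny Python sketch; the 2 amp spin-up figure is the same rule of thumb quoted above, and real drives vary, so check the datasheets.

# Sanity check on the 12V budget for a 12-bay chassis.
# 2A @ 12V per drive at spin-up is a rule of thumb, not a datasheet value.

drives = 12
spinup_amps_per_drive = 2.0
rail_volts = 12

drive_amps = drives * spinup_amps_per_drive    # 24 A just for the drives
drive_watts = drive_amps * rail_volts          # 288 W

budget_amps = 36                               # drives plus mainboard headroom
budget_watts = budget_amps * rail_volts        # 432 W

print(f"drive spin-up alone: {drive_amps:.0f} A / {drive_watts:.0f} W at 12V")
print(f"comfortable budget : {budget_amps} A / {budget_watts} W, versus a 250 W supply")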

[Unboxing image]

Kind of crowded, but neatly done. New style of DIMM socket that only click-locks on one side.

[Unboxing image]

Dual 2.5" bays and power underneath PCIe slot.

[Unboxing image]

View of the back half, PCIe slot, etc. It is also easy to see the interesting mounting arrangements for the 3.5" drives. In this design, there is space both above and below, which should be about as good as you could ask for in the cooling department. The drives are supposed to be hot-swappable, though of course this would require some planning and design care.

[Unboxing image]

Closeup of my only significant complaint so far. In a 1U server, airflow is critical. The fan bulkhead serves as the ultimate separation between the front (cold aisle) and back (hot aisle) of the server in a data center. Internally, there should be no possibility of air backflow between the lower-pressure zone at the front of the chassis and the higher-pressure zone where warm air is being exhausted. Looking carefully at the bulkhead, there are numerous gaps that could allow significant amounts of air to recirculate. It is obviously necessary to have cable channels in the bulkhead, but they should be sealed against backflow, for example with foam. For servers we build here in the shop, we've been cheating and using Sugru, which you can mold around the cords.

[Unboxing image]

Closeup of one of the SATA ports on the backplane. Whoever installed the jumper should try to avoid bending pins.

[Unboxing image]

PCIe slot riser. Interestingly, the board comes with 12 SATA ports but the chassis has 14 bays. This is a slight weakness: using all the bays would require a storage controller in the PCIe slot, so for those of us who might instead add more network ports (quad gigE or 10GbE), some of the bays are off the table.

[Unboxing image]

32GB of Kingston ECC for install. Sadly, 16GB modules are still made of unobtainium.

[Unboxing image]

Is it just me, or should they have named it ASRack? NMI, reset, UID, and power buttons. Two network activity LEDs, one HDD activity LED!

[Unboxing image]

Memory installed.

[Unboxing image]

Plugged in and ready to play!

[Unboxing image]

Arriving back at my desk, I was a bit puzzled not to find any new devices registered on the bench network. I had plugged both the management port and the lower Ethernet port into the bench switch, but nothing was DHCP'ing. I tried turning the unit on manually. That kind of worked, but the bench KVM (which is all PS/2), when combined with a PS/2-to-USB adapter, displayed a BIOS screen but wouldn't take any keystrokes. A little further examination showed that the LAN2 LED was lit, so the bottom port on the back must be LAN2. I tried plugging into LAN1 instead, and suddenly the IPMI DHCP'd. So apparently the IPMI is set by default to share LAN1, despite the board having a dedicated port for IPMI.

Pulling up the IPMI, I found the default login and password to be "admin"/"admin". Unlike some of the other IPMI solutions out there, this one actually looks like it was designed by someone who learned web design this decade.

Unfortunately, I still can't get at the console, because Windows 8.1 doesn't come with Java for IE11. So that brings this to a temporary close.

This is a Work In Progress. Please check back.

Last Modified: Monday, 10 February 2014 11:54:06 AM CST