Given the recent news of NetApp purchasing SolidFire, it struck me that this was a storage company I had heard of but had never explored in any depth. I had heard things around Service Providers and so on, but nothing in more detail, so I wanted to get to grips with what it is all about and how it can be leveraged within NetApp’s product portfolio alongside the FAS range running Clustered ONTAP and the E-Series platform.

I am sure there are many NetApp Admins, Consultants and Pre-Sales Engineers out there who do not yet know what SolidFire is and does, so I have put together this 101, or fundamentals, of the offering. My next plan is to look at where the SolidFire product set fits within the NetApp portfolio as a complementary product to the existing FAS, ONTAP, AltaVault and E-Series platforms.

As a vendor I couldn’t gain access to the SolidFire training material, so I was left to search online for my own resources. To be honest there is a lot out there, mostly on YouTube, presented by Amy Lewis (@CommsNinja) and Josh Atwell (@Josh_Atwell), as well as a good blog full of resources from @arob_uk.

The other valuable resource was the Tech Field Day YouTube channel, albeit from 2014 and covering the Carbon release of the OS, so it is quite dated, but the fundamental building blocks are still there.

My initial takeaway from the above resources was that this is a SAN-based all-flash array offering FC and iSCSI connectivity, designed for large-scale infrastructure with a “scale-out, high-performance storage” mantra in mind.

So why is the above a good fit for Service Providers, the target vertical I had heard SolidFire was hitting hard in the industry? That question was answered first in my research: a multi-tenancy design with Quality of Service capabilities, alongside some deep integration with OpenStack, CloudStack and VMware.

As with many tech companies these days, they have a “5 Capabilities” or “5 Elements” pitch, which looks like the following.

5 Elements (capabilities / unique areas)

Scale Out – Capacity and Performance

Node-based scale-out architecture built on 1U x86 servers

Linear scalability


*Graphic taken from TFD presentation in 2014.

  • New nodes are added as demand dictates
  • Performance and capacity instantly available to all volumes
  • Nodes added on the fly without downtime

Also able to scale down to redistribute storage.

A minimum configuration of three nodes is required to sustain quorum.
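To make the scale-out story a bit more concrete, here is a minimal sketch of what adding nodes through the Element API over HTTPS might look like. I have not had hands-on access yet, so the endpoint, credentials and the ListPendingNodes / AddNodes method and field names are assumptions to be verified against the API reference.

```python
# Hypothetical sketch: join pending nodes to a SolidFire cluster via the
# Element JSON-RPC-over-HTTPS API. Endpoint, API version, credentials and
# field names are placeholders/assumptions.
import requests

MVIP_URL = "https://mvip.example.com/json-rpc/8.0"  # management virtual IP (placeholder)
AUTH = ("admin", "password")                        # cluster admin credentials (placeholder)

def rpc(method, params=None):
    """POST a single API call to the cluster management endpoint."""
    resp = requests.post(MVIP_URL,
                         json={"method": method, "params": params or {}, "id": 1},
                         auth=AUTH,
                         verify=False)  # lab cluster with a self-signed certificate
    resp.raise_for_status()
    return resp.json()["result"]

# Nodes that are on the management network but not yet members of the cluster
# show up as "pending"; adding them makes their capacity and performance
# available to every volume without downtime.
pending = rpc("ListPendingNodes").get("pendingNodes", [])
if pending:
    rpc("AddNodes", {"pendingNodes": [n["pendingNodeID"] for n in pending]})
```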

Guaranteed Performance – QoS
Performance Virtualisation: unified global pools of capacity and performance.

  • Allocate: storage performance independently of capacity
  • Manage: performance in real time without impacting other volumes
  • Guarantee: performance to every volume with fine-grained QoS settings

*Traditional storage QoS has been based on capacity or performance; I need to find out more on this. It seems very similar to NetApp’s QoS, but I need to check.
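The QoS model itself is simple to drive: each volume carries a minimum, maximum and burst IOPS setting. Below is a hypothetical sketch of pinning those values on a single volume with ModifyVolume; the endpoint, credentials, volume ID, IOPS figures and exact field names are my assumptions rather than confirmed values.

```python
# Hypothetical sketch: set min/max/burst IOPS on one volume via ModifyVolume.
# Endpoint, credentials, volume ID and QoS field names are assumptions.
import requests

resp = requests.post(
    "https://mvip.example.com/json-rpc/8.0",      # management VIP (placeholder)
    json={
        "method": "ModifyVolume",
        "params": {
            "volumeID": 42,                        # the tenant volume to pin
            "qos": {
                "minIOPS": 1000,                   # floor the volume is guaranteed
                "maxIOPS": 5000,                   # sustained ceiling
                "burstIOPS": 8000,                 # short-term burst allowance
            },
        },
        "id": 1,
    },
    auth=("admin", "password"),                    # cluster admin (placeholder)
    verify=False,                                  # lab / self-signed certificate
)
resp.raise_for_status()
```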

Shared Nothing – High Availability

  • Cluster-wide RAID-less data protection (SolidFire Helix)
  • No single points of failure
  • Automatic self-healing – restores redundancy after failure
  • Maintains all QoS settings regardless of failure condition
  • Non-disruptive hardware and software upgrades
  • Validated in carrier-class telecommunication and service provider datacenters

Rebuild times are short as a result, and the bigger the cluster and disk pool, the faster the rebuild, as the rough illustration below shows.
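To illustrate why (with made-up numbers, not SolidFire specs): because copies of a failed drive’s blocks are spread across every remaining drive, rebuild bandwidth scales with the size of the cluster.

```python
# Back-of-envelope illustration with assumed figures (not SolidFire specs):
# every surviving drive contributes rebuild bandwidth, so bigger clusters
# restore redundancy faster.
drive_capacity_gb = 480        # data to re-protect from one failed drive
per_drive_rebuild_mbps = 50    # bandwidth each surviving drive contributes

for drives in (10, 40, 100):
    seconds = (drive_capacity_gb * 1024) / (per_drive_rebuild_mbps * (drives - 1))
    print(f"{drives} drives -> ~{seconds / 60:.1f} minutes to restore redundancy")
```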

In-Line Efficiency

Always-On in-line data reduction

  • Deduplication
  • Compression
  • Thin Provisioning

Executed across the entire data store without performance impact
Space-efficient snapshots and clones
Delivers a drastic reduction in power, cooling and floor space
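SolidFire does not publish the internals of its reduction pipeline in these talks, but the general idea behind inline deduplication plus compression is easy to illustrate with a toy example: hash each block on ingest, store unique content once (compressed), and have the volume metadata point at the shared blocks.

```python
# Toy illustration of content-addressed dedup + compression.
# This is NOT SolidFire's implementation, just the general concept.
import hashlib
import zlib

BLOCK_SIZE = 4096
block_store = {}   # content hash -> compressed block ("physical" data, stored once)
volume_map = []    # logical block index -> content hash (thin metadata)

def write_block(data: bytes):
    key = hashlib.sha256(data).hexdigest()
    if key not in block_store:                  # only new, unique content is stored
        block_store[key] = zlib.compress(data)  # inline compression on the unique copy
    volume_map.append(key)                      # metadata points at the shared block

# Writing the same block twice consumes physical space once.
write_block(b"A" * BLOCK_SIZE)
write_block(b"A" * BLOCK_SIZE)
print(len(volume_map), "logical blocks,", len(block_store), "unique block stored")
```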

Operational Efficiency

  • Reduce rack space
  • Consume less power
  • Increase Performance

*Graphic taken from TFD presentation in 2014.

Architecture

  • Global unified pools of capacity and performance
  • Automatic load distribution across entire cluster
  • Single-click provisioning
  • Self-healing

Eliminates

  • Separate pools of SATA, SAS and SSD capacity (no aggregates)
  • RAID levels, aggregates and volume groups
  • Forklift controller and storage system upgrades
  • *Fire drills on hardware failure?

Automated Management

Lots of integration with different management stacks, all driven through the API.
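As an example of what that automation might look like end to end, here is a hypothetical sketch of provisioning a tenant volume, with QoS applied at creation time, assuming a JSON-RPC style endpoint over HTTPS. The endpoint, credentials, account ID, sizes and the CreateVolume parameter names are all assumptions on my part.

```python
# Hypothetical sketch: provision a tenant volume with QoS in a single API call,
# the sort of thing an orchestration stack (OpenStack, CloudStack, VMware
# tooling, etc.) would automate. Endpoint, credentials, account ID and
# parameter names are assumptions.
import requests

def element_call(method, params):
    resp = requests.post(
        "https://mvip.example.com/json-rpc/8.0",   # management VIP (placeholder)
        json={"method": method, "params": params, "id": 1},
        auth=("admin", "password"),                # cluster admin (placeholder)
        verify=False,                              # lab / self-signed certificate
    )
    resp.raise_for_status()
    return resp.json()["result"]

# 100 GB thin-provisioned volume for tenant account 7, QoS set at create time.
result = element_call("CreateVolume", {
    "name": "tenant7-vol01",
    "accountID": 7,
    "totalSize": 100 * 1024**3,
    "enable512e": True,
    "qos": {"minIOPS": 500, "maxIOPS": 2000, "burstIOPS": 4000},
})
print("CreateVolume returned:", result)
```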

Oxygen / Element OS 8


Oxygen is the latest OS release from SolidFire, released in June 2015. Following the periodic-table naming cycle, the next release should be named Fluorine: atomic number 9, “a highly toxic pale yellow diatomic gas”.

Press release – http://www.solidfire.com/press-releases/solidfires-new-element-os-release-deepens-data-assurance-capabilities-and-accelerates-transition-to-next-generation-data-center

Synchronous Replication – Maximizes data protection against a disaster.
Snapshot Replication – Replicates snapshots to a second site for rollback flexibility.
Scheduler / Retention Manager – Simplifies the scheduling and automation of snapshots, rollback points and retention duration.
Expanded VLAN Tagging – Delivers industry-leading support for up to 256 secure, logically isolated, per-tenant storage networks on a single storage platform.
LDAP Authentication – Simplifies management with centralized user accounts.

Replication in General

Synchronous Replication

Synchronous replication ensures all data written to the source storage is simultaneously written to the target storage, and waits for acknowledgement from both storage arrays before completing the operation. This relies on matching storage at the source and target, and on a low-latency (Fibre Channel class) link between the arrays, to minimize the performance overhead. Because of the potential for performance impact, synchronous replication should only ever be performed in the storage layer and not by a virtual appliance technology; for this reason, I will only compare storage-based synchronous replication technologies in this post.

Asynchronous Replication

Asynchronous replication does not write data to both the source and target storage simultaneously; instead it uses snapshots to take a point-in-time copy of the data that has changed and sends it to the recovery site on a schedule. The frequency is typically measured in hours, depending on the number and frequency of snapshots the storage and application can withstand. Asynchronous replication can be performed by the storage array or by a VM-level technology, but with storage-based replication being the most predominant, I will focus on that type of replication for comparison.

Near-Synchronous Replication

Near-synchronous replication is always on, constantly replicating only the changed data to the recovery site within seconds. Because it is always on it does not need to be scheduled, it does not use snapshots, and writes to the source storage do not have to wait for acknowledgement from the target storage.
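To summarise the three models, the practical difference is when the host write is acknowledged and how the changed data travels. The toy sketch below is a conceptual illustration only, not any vendor’s implementation.

```python
# Conceptual illustration of the three replication models (not any vendor's code):
# the difference is when the host write is acknowledged and how changes ship.

class Array:
    def __init__(self, name):
        self.name, self.blocks = name, []
    def write(self, block):
        self.blocks.append(block)

source, target = Array("source"), Array("target")
pending_snapshot, change_stream = [], []

def synchronous_write(block):
    source.write(block)
    target.write(block)               # remote write sits inside the I/O path,
    return "ack"                      # so the host waits on the replication link

def asynchronous_write(block):
    source.write(block)               # host is acknowledged immediately
    pending_snapshot.append(block)    # changes ship later, on a snapshot schedule
    return "ack"

def near_synchronous_write(block):
    source.write(block)               # host is acknowledged immediately
    change_stream.append(block)       # changed data streams continuously, no snapshots
    return "ack"
```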

Closing Comments

All in all, I am looking forward to seeing where NetApp and the existing SolidFire team can take this product. It seems to fit well into the NetApp portfolio, and I am sure there are features that can be ported between the products to enhance them further. SolidFire also brings a great rockstar team with it, and I do hope NetApp can keep those guys.

That’s it from me. Hopefully, as and when I can get access to some training, I can dive a little deeper into the technical aspects of the system. Lastly, best of luck to both parties; it should be an exciting year for both.
