Datrium Zero to Hero Series: A New Breed of Convergence – Part 1

One of the things that excited me about joining Datrium was the sheer amount of engineering prowess the company possesses. Having been on board for a few weeks now, I'll admit I haven't been this jazzed up about a technology in a long while. Let's dive into the Datrium DVX Open Converged Platform.

A new breed of convergence

Traditionally, IT procures its compute and storage separately. Each component operates in a silo with no insight into, or awareness of, what the other is doing. Virtualization talks in VMs, which are essentially a bunch of files, and each virtualization platform (vSphere, Hyper-V, RHEV, Docker containers) has its own language. This makes per-VM activities impossible for traditional storage arrays, since they speak in LUNs or volumes. Overcoming these storage performance challenges usually involves a controller upgrade, additional disk shelves, or a complete forklift upgrade of the storage array – sounds expensive and complex!

Datrium was founded in 2012 by several guys (really cool guys too, I must add). The founders include ex-Data Domain CTOs Brian Biles and Sazzala Reddy, Hugo Patterson from NetApp (who developed SnapVault), and Boris Weissman and Ganesh Venkitachalam from VMware (Principal Engineers for the VMware ESX hypervisor). This group set out to build the best open converged infrastructure: one that provides simplicity, scalability, high performance and integrated cloud data management, all through the familiar tools you're used to, like the vSphere Web Client.

Datrium DVX

The Datrium DVX system leverages either an existing virtual host environment (3rd party servers) or Datrium Compute Nodes for fast, local IO, while all durable data is persisted in a global pool of capacity storage. This design is fundamental to scaling performance and capacity independently, without a strict HCL to abide by. Each Compute Node is packed with NVMe or SSD drives, and idle CPU power is used to accelerate performance. All VM IO is serviced locally from each Compute Node, which is completely stateless; DVX ensures that all data is saved in the Data Pool before additional IO requests are serviced.
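To make the "stateless Compute Node" idea concrete, here is a minimal Python sketch of the write/read flow as I understand it. This is purely my own illustration, not Datrium code; `DataPoolStub` and `StatelessComputeNode` are hypothetical names. The point it shows: local flash is only a read cache, and a write is acknowledged only after the block is durable in the data pool, so losing a host loses no data.

```python
class DataPoolStub:
    """Stand-in for the durable capacity tier (Data Nodes)."""

    def __init__(self):
        self.blocks = {}

    def persist(self, address: int, block: bytes) -> None:
        # Durable write; a real system would stripe and erasure-code across nodes.
        self.blocks[address] = block


class StatelessComputeNode:
    """Stand-in for a Compute Node: local flash acts only as a read cache."""

    def __init__(self, data_pool: DataPoolStub):
        self.data_pool = data_pool
        self.local_flash = {}  # cache only; safe to lose along with the host

    def write(self, address: int, block: bytes) -> None:
        # Acknowledge the write only once the block is durable in the pool.
        self.data_pool.persist(address, block)
        self.local_flash[address] = block  # warm the local read cache

    def read(self, address: int) -> bytes:
        # Reads are serviced from local flash whenever possible.
        if address not in self.local_flash:
            self.local_flash[address] = self.data_pool.blocks[address]
        return self.local_flash[address]


pool = DataPoolStub()
node = StatelessComputeNode(pool)
node.write(0, b"hello DVX")
node.local_flash.clear()             # simulate losing the host's flash cache
assert node.read(0) == b"hello DVX"  # nothing lost: the pool holds the truth
```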

Scalability

The Datrium DVX platform can scale to 128 Compute Nodes and 10 Data Nodes. This means that as the environment scales it gets proportionally faster – not only for servicing applications and VM data but also for drive rebuilds. The founders know deduplication; they invented it. Datrium features an always-on compression and deduplication engine (it's not an optional check-box, it's on by default). For example, 20 Compute Nodes with six 2 TB SSD drives each adds up to 240 TB of usable Compute Node flash capacity. Add a conservative 2x deduplication ratio on top of that, and the Compute Node IO capacity and performance pool grows to an effective 480 TB of flash. Global deduplication is huge on-premises, but it gets even better when that global deduplication pool is extended over the wire to the public cloud (Cloud DVX). Very powerful technology!
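Here is that back-of-the-envelope math as a quick Python snippet. The node count, drive sizes and 2x dedupe ratio are just the example figures above, not fixed platform numbers – real data-reduction ratios vary by workload.

```python
compute_nodes = 20
ssds_per_node = 6
ssd_size_tb = 2.0
dedupe_ratio = 2.0  # conservative assumption for this example

raw_flash_tb = compute_nodes * ssds_per_node * ssd_size_tb  # 240 TB raw flash
effective_flash_tb = raw_flash_tb * dedupe_ratio            # 480 TB effective

print(f"Raw flash across Compute Nodes: {raw_flash_tb:.0f} TB")
print(f"Effective flash after {dedupe_ratio:.0f}x dedupe: {effective_flash_tb:.0f} TB")
```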

  • Increase Performance
    • Add additional Compute Nodes (Supports 1-128 Compute Nodes)
    • Add additional flash to Compute Nodes
  • Increase Capacity
    • Add additional Data Nodes to the DVX Platform to increase write throughput of the Data Pool (Supports 1-10 Data Nodes)
    • Add additional Data Nodes to the DVX Platform to increase capacity

Below is a basic architectural diagram. On the top are the Compute Nodes (vSphere, RHV, RHEL and Docker containers, both bare-metal and virtualized) hosting our business applications and services. On the bottom are the Data Nodes, which serve as the durable capacity tier for our data.

On the surface this might sound like a hardware solution, but really it's not. All of the intelligence lies in the Datrium DVX software – the DVX Hyperdriver. The Hyperdriver is delivered as a VMware Installation Bundle (VIB) installed in the user space of vSphere, which means no maintenance mode or host reboot is required. The Hyperdriver presents the Datrium DVX as an NFS datastore. I will dive into more details of the DVX Hyperdriver in a future post, but here's a quick teaser: the Hyperdriver is responsible for per-VM snapshots, cloning (VAAI) and replication, blanket end-to-end encryption (FIPS 140-2, AES-XTS-256), Protection Group policy-driven operations, compression, deduplication, erasure coding and space reclamation.
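To give a feel for what an always-on dedupe-and-compress write path does conceptually, here is a short, hypothetical Python sketch – my own illustration using standard library primitives, not the Hyperdriver's implementation. Each block is fingerprinted, duplicate fingerprints are stored only once, and unique blocks are compressed before landing in the capacity tier.

```python
import hashlib
import zlib

store = {}  # fingerprint -> compressed block (stand-in for the capacity tier)


def ingest(block: bytes) -> str:
    """Fingerprint a block; store it compressed only if it is new."""
    fingerprint = hashlib.sha256(block).hexdigest()
    if fingerprint not in store:   # global dedupe: identical blocks stored once
        store[fingerprint] = zlib.compress(block)
    return fingerprint


def restore(fingerprint: str) -> bytes:
    return zlib.decompress(store[fingerprint])


blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # third block is a duplicate
refs = [ingest(b) for b in blocks]

logical = sum(len(b) for b in blocks)
physical = sum(len(c) for c in store.values())
print(f"Logical {logical} bytes stored as {physical} physical bytes "
      f"({logical / physical:.0f}x data reduction in this toy example)")
assert restore(refs[2]) == blocks[2]
```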

Wrap-Up

Datrium DVX has taken the best pieces of converged infrastructure (3-tier) and hyperconverged infrastructure (HCI) to build a truly open converged infrastructure platform. This platform includes host compute, primary and secondary storage to provide: Simplicity, Scalability, High Performance and Integrated Cloud Data Management. Stay tuned for more details about the underpinnings of the DVX platform.
