148: GreyBeards talk software defined infrastructure with Anthony Cinelli and Brian Dean, Dell PowerFlex

Sponsored By:

This is one of a series of podcasts the GreyBeards are doing with Dell PowerFlex software defined infrastructure. Today, we talked with Anthony Cinelli, Sr. Director at Dell Technologies, and Brian Dean, Technical Marketing for PowerFlex. We have talked with Brian before, but this is the first time we've met Anthony. They were both very knowledgeable about PowerFlex and the challenges large enterprises have today with their storage environments.

The key to PowerFlex's software defined solution is its extreme flexibility, which comes mainly from an architecture that offers scale-out deployment options ranging from HCI to a fully disaggregated compute-storage environment, in seemingly any combination (see technical resources for more info). With this flexibility, PowerFlex can help consolidate enterprise storage across just about any environment, from virtualized workloads to standalone databases, big data analytics, containerized environments and, of course, the cloud. Listen to the podcast to learn more.

To support this extreme flexibility, PowerFlex uses both client and storage software that can be configured together on a server (HCI) or apart, across compute and storage nodes to offer block storage. PowerFlex client software runs on any modern bare-metal or virtualized environment.

Anthony mentioned that one problem common to enterprises today is storage sprawl. Most large customers have an IT environment with sizable hypervisor-based workloads, a dedicated database workload, a big data/analytics workload, a modern container-based workload stack, an AI/ML/DL workload and, more often than not, a vertical-specific workload.

Each workload usually has its own storage system. And the problem with 4-7 different storage systems is cost, e.g., the cost of underutilized storage. In these environments, each storage system might run at, say, 60% utilization on average, but utilization varies a lot between silos, leading to stranded capacity.
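
To make the stranded capacity problem concrete, here's a quick back-of-the-envelope sketch in Python. The silo names and numbers are made up purely for illustration, not taken from any customer.

```python
# Back-of-the-envelope estimate of stranded capacity across storage silos.
# Every number below is made up purely for illustration.
silos = {
    "virtualization": {"raw_tb": 500, "utilization": 0.70},
    "database":       {"raw_tb": 300, "utilization": 0.55},
    "big_data":       {"raw_tb": 800, "utilization": 0.65},
    "containers":     {"raw_tb": 200, "utilization": 0.40},
}

total_raw = sum(s["raw_tb"] for s in silos.values())
total_used = sum(s["raw_tb"] * s["utilization"] for s in silos.values())
stranded = total_raw - total_used

print(f"Raw capacity purchased: {total_raw:7.0f} TB")
print(f"Capacity actually used: {total_used:7.0f} TB")
print(f"Stranded capacity:      {stranded:7.0f} TB ({stranded / total_raw:.0%})")
```

With these made-up numbers, over a third of the capacity purchased sits stranded across the silos, which is exactly the kind of waste consolidation is meant to claw back.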

The main reason customers haven't consolidated yet is that each silo has different performance characteristics. As a result, they end up purchasing excess capacity, which increases cost and complexity, as a standard part of doing business.

Consolidating storage across these disparate environments requires a no-holds-barred approach to IO performance, which PowerFlex can deliver. The secret to its high levels of IO performance is RAID 10, deployed across a scale-out cluster. And PowerFlex clusters can range from 4 to 1000 or more nodes.

RAID 10 mirrors data and spreads the mirrored data across all drives and servers in a cluster (or some subset of it). As a result, as you add storage nodes, IO performance scales up almost linearly.
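
Here's a toy Python sketch of the idea: mirrored chunks scattered across every node so the load stays even as the cluster grows. To be clear, this is an illustration of the concept, not PowerFlex's actual placement algorithm.

```python
import random

# Toy sketch of mirrored (RAID 10-style) chunk placement across a scale-out
# cluster. NOT PowerFlex's actual placement algorithm -- just an illustration
# of why IO and capacity spread across every node as the cluster grows.
def place_chunks(num_chunks: int, nodes: list[str]) -> dict[int, tuple[str, str]]:
    placement = {}
    for chunk in range(num_chunks):
        # Two copies of every chunk, never on the same node.
        primary, mirror = random.sample(nodes, 2)
        placement[chunk] = (primary, mirror)
    return placement

nodes = [f"node-{i:02d}" for i in range(8)]
placement = place_chunks(10_000, nodes)

# Count chunk copies per node -- roughly even, so each added node adds
# proportionally more IO capability.
load = {n: 0 for n in nodes}
for primary, mirror in placement.values():
    load[primary] += 1
    load[mirror] += 1
print(load)
```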

Yes, there can be other bottlenecks in clusters like this, most often networking, but with PowerFlex storage, IO need not be one of them. Anthony mentioned that PowerFlex will perform as fast as your infrastructure will support. So if your environment has 25 Gig Ethernet, it will perform IO at that speed; if you use 100 Gig Ethernet, it will perform at that speed.

In addition, PowerFlex offers automated LifeCycle Management (LCM), which can make running a 1000-node PowerFlex cluster almost as easy as a 10-node cluster. However, to make use of this automated LCM, one must run the PowerFlex storage server software on Dell PowerEdge servers.

Brian said adding or decommissioning PowerFlex nodes is a painless process. Because data is always mirrored, customers can remove any node at any time and PowerFlex will automatically rebuild its data across the other nodes and drives. When you add nodes, those drives become immediately available to support more IO activity. Another item to note: because of RAID 10, PowerFlex mirror rebuilds happen very fast, as just about every other drive and node in the cluster (or subset) participates in the rebuild process.
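
A little made-up arithmetic shows why many-to-many rebuilds finish so quickly: rebuild time is roughly the failed node's data divided by the aggregate rate of every participating drive. The numbers below are purely illustrative.

```python
# Rough arithmetic for why many-to-many mirror rebuilds finish quickly.
# All numbers are made up for illustration; real rebuild rates depend on
# drive type, network bandwidth and rebuild throttling.
failed_node_data_tb = 20        # data that must be re-mirrored after a node loss
per_drive_rate_mb_s = 100       # sustainable rebuild rate per participating drive

def rebuild_hours(participating_drives: int) -> float:
    aggregate_mb_s = participating_drives * per_drive_rate_mb_s
    return failed_node_data_tb * 1_000_000 / aggregate_mb_s / 3600

for drives in (4, 40, 400):
    print(f"{drives:4d} drives participating -> ~{rebuild_hours(drives):5.1f} hours")
```

Ten times the participating drives means roughly one tenth the rebuild time, which is the whole point of spreading mirrors across the cluster.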

PowerFlex supports Storage Pools, which partition PowerFlex storage nodes and devices into multiple pools of storage used to host volume IO and data. Storage pools can be used to segregate higher performing storage nodes from lower performing ones, so that some volumes can reside exclusively on higher (or lower) performing hardware.

Although customers can configure PowerFlex to use all nodes and drives in a system or storage pool for volume data mirroring, PowerFlex offers other data placement alternatives to support high availability.

PowerFlex supports Protection Domains, which are subsets or collections of storage servers and drives in a cluster where volume data will reside. This allows one protection domain to go down while others continue to operate. Realize that, because volume data is mirrored across all devices in a protection domain, it would take many nodes or devices going down before a protection domain is out of action.

PowerFlex also uses Fault Sets, which are collections of storage servers and their devices within a Protection Domain that will contain one half of a volume's data mirror. PowerFlex will ensure that the primary and the mirror copy of a volume's data never both reside in the same fault set. A fault set could be a rack of servers, multiple racks, all PowerFlex storage servers in an AZ, etc. With fault sets, customer data will always reside across a minimum of two fault sets, and if any one goes down, data is still available.
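
Here's a toy sketch of fault-set-aware placement, again just to illustrate the concept rather than PowerFlex's real code: the two copies of any chunk always land in different fault sets.

```python
import random

# Toy sketch of fault-set-aware mirror placement: the two copies of any chunk
# must land in different fault sets (e.g., different racks). Illustration only,
# not PowerFlex's actual placement code.
fault_sets = {
    "rack-A": ["node-01", "node-02", "node-03"],
    "rack-B": ["node-04", "node-05", "node-06"],
    "rack-C": ["node-07", "node-08", "node-09"],
}

def place_mirrored_chunk() -> tuple[str, str]:
    fs_primary, fs_mirror = random.sample(list(fault_sets), 2)  # two different fault sets
    return random.choice(fault_sets[fs_primary]), random.choice(fault_sets[fs_mirror])

# If all of rack-A goes down, every chunk still has a surviving copy in rack-B or rack-C.
print([place_mirrored_chunk() for _ in range(5)])
```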

PowerFlex also operates in the cloud. In this case, customers bring their own PowerFlex software and deploy it over cloud compute and storage.

Brian mentioned that anything PowerFlex can do, such as reconfiguring servers, can be done through RESTful API calls. This can be particularly useful in cloud deployments as above, if customers want to scale IO performance up or down automatically.
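
As a sketch of what that kind of automation might look like, here's a minimal Python example using the requests library. The gateway address, endpoint paths, credentials and payload fields are placeholders of our own invention, not the documented PowerFlex REST API; consult the official API reference for the real names.

```python
import requests

# Minimal sketch of scripting storage operations over a REST API, the sort of
# thing that lets a cloud deployment scale IO up or down automatically.
# The gateway address, endpoints, credentials and fields below are placeholders.
GATEWAY = "https://powerflex-gateway.example.com"   # hypothetical address

session = requests.Session()

# 1. Authenticate and keep whatever token the gateway returns (illustrative flow).
resp = session.post(f"{GATEWAY}/api/login", auth=("admin", "password"))  # placeholder creds
resp.raise_for_status()
session.headers["Authorization"] = f"Bearer {resp.text.strip()}"

# 2. Example operation: create a volume (endpoint and field names are illustrative only).
payload = {"name": "analytics-vol-01", "sizeInGb": 1024, "storagePool": "fast-pool"}
resp = session.post(f"{GATEWAY}/api/volumes", json=payload)
resp.raise_for_status()
print(resp.json())
```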

Besides block services, PowerFlex also offers native NFS and CIFS/SMB file services using a File Node Controller, which frontends PowerFlex storage nodes to support customer NFS/SMB file access to PowerFlex data.

Anthony Cinelli, Sr. Director Global PowerFlex Software Defined & MultiCloud Solutions

Anthony Cinelli is a key leader for Dell Technologies helping drive the success of our software defined and multicloud solutions portfolio across the customer landscape. Anthony has been with Dell for 13 years and in that time has helped launch our HCI and Software Defined businesses from startup to the multi-billion dollar lines of business they now represent for Dell.

Anthony has a wealth of experience helping some of the largest organizations in the world achieve their IT transformation and multicloud initiatives through the use of software defined technologies.

Brian Dean, Dell PowerFlex Technical Marketing

Brian is a 16+ year veteran of the technology industry, and before that spent a decade in higher education. Brian has worked at EMC and Dell for 7 years, first as Solutions Architect and then as TME, focusing primarily on PowerFlex and software-defined storage ecosystems.

Prior to joining EMC, Brian was on the consumer/buyer side of large storage systems, directing operations for two Internet-based digital video surveillance startups.

When he’s not wrestling with computer systems, he might be found hiking and climbing in the mountains of North Carolina.

141: GreyBeards annual 2022 wrap-up podcast

Well, it has been another year and time for our annual year-end wrap-up. Since Covid hit, every year has certainly been interesting. This year we saw the return of in-person conferences, which was a welcome change from the Covid lockdowns. We are very glad to start seeing everybody again.

From the tech standpoint, the big news this year was CXL. As everyone should recall, CXL is a new-ish PCIe hardware and protocol standard that supports larger memory sitting out on a PCIe bus and, in the future, shared memory between servers. All this is to enable a new wave of memory-based computing. We spent probably half our time discussing CXL and its impact on IT.

The other major topic was the Cloud Native ecosystem. In the past all we talked about was K8s, but nowadays the ecosystem that surrounds it is almost as important as K8s itself. The final topic was a bit of a shock earlier this year and yes, it was Broadcom's acquisition of VMware. Jason and I spent our Explore podcast talking about it (see our 137: VMware Explore wrap-up). Keith has high hopes that the EU will shut it down, but the jury's still out on that one. Listen to the podcast to learn more.

As for CXL, it turns out that AMD has just released full support for CXL hardware and protocols with its latest round of CPU chips. But the new AMD CPUs only support DDR5 memory (something about there's only so much logic one can fit on a chip…), which means all those DDR4 DIMMs out in the wild need somewhere to land. CXL could supply a new lease on life for DDR4 DIMMs.

And it's not just about shared memory or increased memory sizes; CXL can also provide a tiered memory hierarchy, with gobs of flash behind memory DIMMs (see: 136: FMS2022 wrap up …). So, now it's no longer a TB or ten of server memory but potentially 100s of TBs. What this means for SAP HANA, AWS Aurora and other memory-heavy solutions has yet to play out.

Cloud Native won. We see this in the increasing adoption of containers and K8s in the enterprise, cloud and just about anywhere IT happens these days. But the ecosystem surrounding K8s is chaos.

Over time, many of these ecosystem solutions will die off, be purchased, or be consolidated, but in the meantime, it's entirely too confusing. Red Hat's OpenShift is one answer and VMware's Tanzu is another. And of course all the clouds have their own packaged K8s solution. But just to cover their bets, everyone also supports native K8s and just about every software package that works with it. So, K8s's ecosystem is in a state of flux and may take time to become a stable set of tools usable by enterprise IT.

Finally, Broadcom’s acquisition of VMware has everyone up in arms. Customers are concerned the R&D juggernaut that VMware has been, since its very beginning, will be jettisoned in favor of profits. And HCI vendors that always felt Dell EMC had an unfair advantage will all look at Broadcom in a similar light.

Keith says there's a major difference in how USA regulators view an acquisition and how EU regulators view one. According to Keith, the EU views acquisitions by how they help or hurt the customer, while USA regulators view acquisitions by how they help or hurt the competition. We'll have to wait and see how this all plays out for Broadcom-VMware.

On the other hand, speaking of competition, Nutanix seems to be feeling the heat as well. Rumors are it's up for sale. Who will want it and how the regulators view both of these acquisitions may be an interesting story for 2023.

2023 looks to be another year of transition for enterprise IT. The cloud players all seem to be coming around to the view that they can’t be all things to all (IT) people. And the enterprise vendors are finally seeing some modicum of staying power in the face of a relentless push to the cloud. How this plays out over the next few years will be of major interest to everybody.

Happy New Year from the GreyBeards!

Keith Townsend, The CTO Advisor

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

Jason Collier, Principal Member of Technical Staff, AMD

Jason Collier (@bocanuts) is a long-time friend, technical guru and innovator who has over 25 years of experience as a serial entrepreneur in technology. He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years. He's on LinkedIn.

140: Greybeards talk data orchestration with Matt Leib, Product Marketing Manager for IBM Spectrum Fusion

As our listeners should know, Matt Leib (@MBleib) was a GreyBeards co-host. But since then, Matt has joined IBM as Product Marketing Manager for IBM Spectrum Fusion, a data orchestration solution for Red Hat OpenShift environments. Matt's been in and around the storage and data management industry for many years, which is why we tapped him for GreyBeards co-host duties.

IBM Fusion, in its previous incarnation, came as OpenShift software defined storage or as an OpenShift (H)CI solution. But recently, Fusion has taken on more of a data orchestration role for stateful containerized applications running on OpenShift. Listen to the podcast to learn more.

Fusion can run in any OpenShift deployment, whether in the (currently AWS, Azure, & IBM) clouds, under VMware (wherever it runs), or on (x86 or IBM Z) bare metal. It supplies NFS file or S3-compatible object storage for container applications running under OpenShift. But it does more than just storage.

Beyond storage, Fusion includes backup/recovery, site-to-site DR and global (file & object) data access. It's almost like someone opened up the IBM Spectrum software pantry, took out the best available functionality and cooked it up into an OpenShift solution. IBM's current Spectrum Fusion website (linked to above (Dec. '22)) still refers only to the software defined storage and (H)CI solution, but today's Fusion includes all of the functions identified above.

All Fusion facilities run as containers under OpenShift. Customers can elect to run all Fusion services or pick and choose which ones they want for their environment. IBM Fusion supports an API, an API-backed GUI, and a CLI for its storage & data management, as well as REST access. Fusion is fully compatible with Red Hat Ansible.

IBM Fusion is intended to be storage agnostic, which means it can supply its data management services for any NFS file storage as well as anyone's S3-compatible object storage.

Now that Red Hat software defined CEPH and ODF are under IBM product management, CEPH and ODF options will become available under Fusion. And CEPH offers block as well as file and object. We’ve talked about CEPH before, packaged in a hardware appliance, see our SoftIron podcast.

One intriguing part of the Fusion solution is its global data access. With global access, any OpenShift application can access data from any Fusion data store, across clouds, across on prem installations, or just about anywhere OpenShift is running. Matt mentioned that compute could be on AWS OpenShift, Fusion’s data control plane could be running on prem OpenShift and the data storage could be running on Azure OpenShift. All this would be glued together by Fusion global access, so that AWS compute had access to data on Azure.

There’s some sophisticated caching magic to make global access happen seamlessly and with decent levels of performance, but customers no longer have to copy whole file systems over from one cloud to another in order to move compute or data. IBM Fusion would need to run in all those locations for global access.
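
As a rough illustration of the read-through caching idea (and only that; this is a generic sketch, not IBM Fusion's implementation), consider a cache that serves local hits and pulls individual objects from the remote site on a miss:

```python
# Toy read-through cache, to illustrate the general idea behind "global
# access" style data federation: reads are served locally when possible and
# fetched from the remote (authoritative) copy only on a miss.
class ReadThroughCache:
    def __init__(self, fetch_remote, capacity: int = 1024):
        self._fetch_remote = fetch_remote      # callable that pulls data from the remote site
        self._capacity = capacity
        self._cache: dict[str, bytes] = {}

    def read(self, key: str) -> bytes:
        if key in self._cache:                 # local hit: no cross-cloud transfer
            return self._cache[key]
        data = self._fetch_remote(key)         # miss: pull only this object, not a whole file system
        if len(self._cache) >= self._capacity:
            self._cache.pop(next(iter(self._cache)))   # crude eviction
        self._cache[key] = data
        return data

cache = ReadThroughCache(fetch_remote=lambda key: f"remote bytes for {key}".encode())
print(cache.read("bucket/object-1"))   # fetched from the remote site
print(cache.read("bucket/object-1"))   # served from the local cache
```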

Keith asked if it was directly available in the AWS marketplace. Matt said not yet but you can deploy OpenShift out of the marketplace and then deploy IBM Fusion onto that.

It took us some time to get our heads wrapped around what Fusion has to offer, and throughout it all, Keith and I had a bit of fun with Matt.

Matthew Leib, Product Marketing Manager, IBM Spectrum Fusion

Matt has spent years in IT, from Engineering to Architecture, from PreSales to analyst work, and finally to Product Marketing at IBM.

He's spent years building credibility in the space as a podcaster, blogger, and community member.

In his spare time, he's a dad, dog owner, and amateur guitar player.

117: GreyBeards talk HPC file systems with Frank Herold, CEO of ThinkParQ, makers of BeeGFS

We return to our storage thread with a discussion of HPC file systems with Frank Herold, (@BeeGFS) CEO of ThinkParQ GmbH, the makers of BeeGFS. I've seen BeeGFS start to show up in some IO500 top storage benchmark results, and as more and more data keeps coming online every day, we thought it was time to start finding out how our friends in the HPC world handle their data deluge.

Frank's a former rocket scientist who's been in and around the storage industry for years, and he was very knowledgeable about BeeGFS's software defined, parallel file system. He seemed to have a great grasp of the IO requirements in HPC, Life Sciences and other HPC-like applications. Listen to the podcast to learn more.

Turns out that ThinkParQ is a spinoff of the German research institute that originally developed the BeeGFS parallel file system. There are apparently two versions of their product: one publicly available (downloadable from their website) and another with commercial support. It's not quite 100% open source, but it's got a lot of open source in it, and their Git repository is available.

BeeGFS was primarily focused on HPC workloads but as this type of work has become more mainstream, they have moved beyond HPC and now have significant installations in Life Sciences, Oil&Gas and many other big data environments.

It runs on x86/AMD, OpenPower, and ARM CPUs. BeeGFS comes as a number of services, one of which is a storage service that uses a ZFS or XFS file system backend. It also uses (POSIX-compliant) host client software to access the system. There are also metadata and monitoring services. Most of the time these services run on separate servers, but BeeGFS also supports a "converged mode" where all these services run on a single server. And you can have multiple converged-mode servers in a cluster.

BeeGFS is a parallel file system. This means it intrinsically supports multiple metadata services/servers and multiple storage servers, which allows it to scale storage bandwidth and performance considerably beyond single-appliance systems. Data is automatically distributed across all the storage servers in the configuration, unless you specify that data reside on specific storage servers, say, all-flash ones. Similarly, metadata is automatically distributed across all metadata servers in the system.
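
A toy Python sketch of file striping shows why this scales: chunks of a file are spread round-robin across storage targets, so a single file's bandwidth is the sum of all the targets'. The chunk size and target names here are made up; BeeGFS's real striping is configurable per directory or file.

```python
# Toy illustration of how a parallel file system stripes a file across
# multiple storage servers/targets. Chunk size and target count are made up.
CHUNK_SIZE = 512 * 1024          # 512 KiB chunks (illustrative)
storage_targets = ["storage-01", "storage-02", "storage-03", "storage-04"]

def stripe(file_size: int) -> dict[str, list[int]]:
    """Map chunk indexes to targets round-robin, so reads/writes hit all servers."""
    layout: dict[str, list[int]] = {t: [] for t in storage_targets}
    num_chunks = -(-file_size // CHUNK_SIZE)        # ceiling division
    for chunk in range(num_chunks):
        target = storage_targets[chunk % len(storage_targets)]
        layout[target].append(chunk)
    return layout

# A 10 MiB file ends up spread evenly, so its IO is served by every target at once.
for target, chunks in stripe(10 * 1024 * 1024).items():
    print(f"{target}: {len(chunks)} chunks")
```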

They don't support any specific RAID protection other than mirroring, and that's really there to speed up read throughput. Rather, they depend on the underlying XFS/ZFS file system to provide drive failure protection (RAID 5/6).

One of BeeGFS’s selling points is that it has few tuning parameters that a customer needs to fiddle with. Frank said it runs quite well right out of the box.

BeeGFS offers a single name space that spans the cluster (of metadata servers/storage servers). But customers can elect to split this name space across a subset of these metadata and storage servers, and by doing so they create multiple BeeGFS clusters.

There's no inherent support for NFS or SMB, but customers can configure NFS or Samba servers that use BeeGFS as backend storage. Also, there's no data reduction built into BeeGFS and no automatic data tiering across the backend storage (file systems).

But as noted above, customers can direct which backend storage holds their data. And they do offer a CLI data movement primitive, which customers can use in conjunction with other software to implement storage tiering, or they can do it themselves.
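
As an example of the do-it-yourself option, a simple tiering script might demote files that haven't been touched in 90 days to a slower pool. The command name, flags and pool name below are placeholders, not actual BeeGFS CLI syntax; substitute the real data movement command for your installation.

```python
import subprocess
import time
from pathlib import Path

# Sketch of a do-it-yourself tiering policy built around a CLI data-movement
# primitive: move files untouched for 90+ days to a slower storage pool.
# "migrate-tool", its flags and the pool name are placeholders, not real
# BeeGFS commands -- swap in the actual data-movement CLI for your install.
AGE_LIMIT = 90 * 24 * 3600
SLOW_POOL = "capacity-pool"          # hypothetical pool/target group name

def demote_cold_files(root: str) -> None:
    now = time.time()
    for path in Path(root).rglob("*"):
        if path.is_file() and now - path.stat().st_atime > AGE_LIMIT:
            subprocess.run(
                ["migrate-tool", "--dest-pool", SLOW_POOL, str(path)],  # placeholder CLI
                check=True,
            )

demote_cold_files("/mnt/beegfs/projects")
```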

Metadata performance is extremely important for small files and for large, multi-billion object file systems. BeeGFS uses extensive metadata caching to provide faster access to this information.

Speaking of small file performance, we had a decent discussion on the tradeoffs involved between small and large file performance. And although BeeGFS has decent small file performance, it's not the answer for every small-file-intensive application. According to Frank, not every small file workload is optimal for BeeGFS.

They offer BeeOND, which is BeeGFS On Demand. This is an integration with the Slurm workload scheduler (an HPC job scheduler) that allows customers to spin up a scratch BeeGFS parallel file system across compute servers with storage.

Slurm's BeeOND integration brings up all the BeeGFS services and deploys them on the compute nodes you specify. At this point you have a fully installed BeeGFS (scratch) parallel file system. Customers may use this scratch file system to support any compute/data-intensive workload they need to run. When it's no longer needed, Slurm can be directed to automatically dismantle the BeeGFS file system.

We talked about BeeGFS partners. They have a number of regional partners that provide installation and onsite support and a number of technical partners, such as NetApp, Dell, HPE and INSPUR, that supply BeeGFS configured servers and systems for deployment/installation.

Frank Herold, CEO ThinkparQ

Frank Herold is the CEO of ThinkParQ GmbH – the company behind BeeGFS. He actively leads the company and the product strategy of BeeGFS as a global player for parallel high-performance file systems.

Prior to joining ThinkParQ, he held various senior management positions within ADIC and Quantum Corporation, responsible for market segments within the academic and scientific research, oil and gas, broadcast and video surveillance sectors, focusing on large scale, high-performance and enterprise accounts within EMEA. 

Frank has over 25 years of experience in the IT industry and holds a master’s degree in engineering (Dipl. -Ing.) in rocket science.

116: GreyBeards talk VCF on VxBlock 1000 with Martin Hayes, DMTS, Dell Technologies

Sponsored By:

This past week, we had a great talk with Martin Hayes (@hayes_martinf), Distinguished Member of Technical Staff at Dell Technologies, about running VMware Cloud Foundation (VCF) on VxBlock 1000 converged infrastructure (CI). It used to be that Cloud Foundation required VMware vSAN primary storage, but that changed a few years ago. When that happened, the Dell Technologies team saw it as a great opportunity to support VCF on VxBlock CI.

This is the first GreyBeards podcast for Martin, but he was extremely knowledgeable about VxBlock and Cloud Foundation technologies. He’s been a technical product manager on the VxBlock converged infrastructure at Dell Technologies for many years. He’s an expert on Cloud Foundation and he knows an awful lot more about VMware NSX-T networking than seems reasonable (good thing). In any case, Martin’s expertise covers the whole gamut of VCF services as well as VxBlock 1000 infrastructure. The podcast is a bit longer than our normal sponsored podcast but there was a lot of information to cover. Listen to the podcast to learn more.

With VCF enabling primary storage on networked storage systems, all the storage vendors in the world gave a mighty cheer. But VMware Cloud Foundation still requires vSAN servers to run its management domain. Late in 2020, Dell Technologies released a new software defined version of the VxBlock 1000's Advanced Management Platform (AMP) that runs on vSAN Ready Nodes. AMP is VxBlock's management platform, but it also runs the management domains for VCF and NSX-T.

For workload domains, VxBlock 1000 offers Cisco UCS M5 rack and blade servers that can be configured to support just about any workload needed by a data center.

Historically, VMware vSphere DR problems weren't so much storage replication issues as networking problems. But NSX-T and VCF seem to have solved that problem.

And with vRealize Automation plugins and NSX-T APIs, customers can have zero-touch network provisioning, which enables the use of IaaS or infrastructure as code for their data center.

VMware vVols are now available with Dell EMC PowerMax storage. So now, VxBlock 1000 customers can use vSphere storage policy-based management (SPBM) as well as automated vVol replication for data on PowerMax.

VMware NSX-T implements Application Virtual Networks (AVNs) using a GENEVE overlay network, which makes extensive use of encapsulation. But where there's encapsulation, de-encapsulation must follow to access outside networks. All this (encapsulation on ingress, de-encapsulation on egress) is done through NSX-T Edge clusters.
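
For the curious, here's a heavily simplified Python illustration of the encapsulation/de-encapsulation round trip: an inner Ethernet frame gets wrapped in an outer header carrying a virtual network identifier (VNI) and unwrapped again at the edge. It abbreviates real GENEVE header handling (options, outer UDP/IP headers, checksums) considerably and is not an NSX-T implementation.

```python
import struct

# Simplified GENEVE-style encapsulation: wrap an inner Ethernet frame in an
# 8-byte header carrying a VNI, then unwrap it at the edge. Illustration only;
# real GENEVE adds outer UDP/IP headers (UDP port 6081) and optional TLVs.
def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    ver_optlen = 0          # version 0, no options
    flags = 0
    protocol = 0x6558       # inner payload is an Ethernet frame (Transparent Ethernet Bridging)
    vni_field = (vni & 0xFFFFFF) << 8
    header = struct.pack("!BBHI", ver_optlen, flags, protocol, vni_field)
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    _, _, _, vni_field = struct.unpack("!BBHI", packet[:8])
    return vni_field >> 8, packet[8:]

wrapped = encapsulate(b"<inner ethernet frame>", vni=5001)
vni, inner = decapsulate(wrapped)
assert vni == 5001 and inner == b"<inner ethernet frame>"
print(f"VNI {vni}, inner frame restored intact")
```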

The net result of all this is that VMware customers have more choice, i.e., now they can run VCF on HCI or CI. And with VxBlock 1000 CI, VCF customers can select best-of-breed components for each level of their 3-tier infrastructure.

Martin Hayes, DMTS, Dell Technologies

Martin Hayes is a Technical Product Manager at Dell Technologies, where he develops and executes data center product strategies that incorporate virtualization, software-defined networking (SDN) and converged systems.

Previously, he served in network advisory and architect roles at Dell EMC, converged systems pioneer VCE and Irish broadband provider eircom.