121: GreyBeards talk Cloud NAS with Peter Thompson, CEO & George Dochev, CTO LucidLink

GreyBeards had an amazing discussion with Peter Thompson (@Lucid_Link), CEO & co-founder and George Dochev (@GDochev), CTO & co-founder of LucidLink. Both Peter and George were very knowledgeable and easy to talk with.

LucidLink’s Cloud NAS creates a NAS storage system out of cloud (any S3-compatible AND Azure Blob) object storage. LucidLink is made up of client software, the LucidLink SaaS (metadata service), and data on object storage. Their client software runs on any Linux, macOS, or Windows desktop/laptop. LucidLink provides streaming, collaborative access to remote users for (file) data on object storage.

Just when 90% of the workforce was sent home for the pandemic, LucidLink emerged to provide all those users secure file access to any and all corporate data in the cloud. Peter mentioned one M&E customer who had just sent 300 video editors home with laptops and a disk drive, which would last them all of 2 weeks. But they needed an ongoing solution for after that. The customer started with 300 users and ~100TB of file storage on LucidLink; a few months later, they had 1000 users with a PB+ of LucidLink data and were getting rid of all their NAS boxes. Listen to the podcast to learn more.

They are finding a lot of success in M&E, engineering design, Oil & Gas exploration, geo-spatial design firms, and just about anywhere user collaboration on file data is required outside a data center.

LucidLink constructs a FileSpace for customer file (object) data, which represents a drive letter or mount point that remote users can use to access files from the cloud. LucidLink supports a POSIX compliant file service for that data.

LucidLink data and user generated metadata is encrypted, using client owned/stored keys. So, data-at-rest (and -in-flight) can always be secure. They also support LDAP security and other standard SSO solutions to secure user access to data.

The LucidLink SaaS (metadata) service runs in a hyperscaler and links clients to file data on object storage. It also supports user distributed, byte range locking of file data.

One interesting nuance is that when a client locks a file, the system changes from an eventually consistent to a strongly consistent POSIX compliant file system. This ensures that the object storage is always the single source of truth.
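
For those who haven’t worked with byte-range locks, here’s a minimal sketch of what that looks like from a client on a mounted FileSpace, using plain POSIX locking from Python. The mount path is made up, and this isn’t LucidLink’s API — just the standard POSIX locking a compliant mount would honor.

```python
import fcntl
import os

# Hypothetical FileSpace mount point; LucidLink presents a drive letter or
# mount point, so standard POSIX byte-range locks apply to files under it.
path = "/mnt/filespace/project/scene42.mov"

fd = os.open(path, os.O_RDWR)
try:
    # Take an exclusive (write) lock on 4KB starting at offset 1MB. Per the
    # podcast, taking a lock is where the file system behaves strongly
    # consistent for that byte range.
    fcntl.lockf(fd, fcntl.LOCK_EX, 4096, 1024 * 1024, os.SEEK_SET)
    os.pwrite(fd, b"edited frame data", 1024 * 1024)
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN, 4096, 1024 * 1024, os.SEEK_SET)
    os.close(fd)
```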

What differentiates LucidLink from cloud gateways or file synch & share systems is that it 1) is not intended to operate in a data center (yes, the object storage can be located on prem, but the users are remote) and 2) doesn’t copy files from one user/access point to another.

George said latency is enemy number one. LucidLink’s secret is prefetching. Each client uses a customer-configured local persistent cache, which can range from 5GB to a TB or more. LucidLink maintains a data and (in the next version) metadata working set for the user in their local cache.
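
To make the working-set/prefetch idea concrete, here’s a toy read-ahead cache in Python. It’s a sketch of the general technique only; the chunk size, cache size, read-ahead depth, and fetch_chunk callable are all assumptions, not LucidLink’s implementation.

```python
from collections import OrderedDict

class ReadAheadCache:
    """Toy local cache with sequential read-ahead: keep a working set of
    chunks locally and prefetch ahead of the reader to hide latency."""

    def __init__(self, fetch_chunk, cache_bytes=5 * 2**30,
                 chunk_size=1 * 2**20, readahead_chunks=8):
        self.fetch_chunk = fetch_chunk        # callable(chunk_index) -> bytes
        self.max_chunks = cache_bytes // chunk_size
        self.readahead = readahead_chunks
        self.cache = OrderedDict()            # chunk_index -> bytes (LRU order)

    def read(self, chunk_index):
        # Fetch the requested chunk plus the next few sequential chunks.
        for idx in range(chunk_index, chunk_index + self.readahead + 1):
            if idx not in self.cache:
                self.cache[idx] = self.fetch_chunk(idx)
            self.cache.move_to_end(idx)       # mark as recently used
            while len(self.cache) > self.max_chunks:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[chunk_index]
```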

Customer file data is split across multiple objects, so LucidLink can stream data from all of them in parallel if needed. Doing so can supply extreme throughput when needed.
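
A rough sketch of what splitting a file across objects buys you: fetch the pieces concurrently and reassemble them in order. The get_object callable and the 16-thread pool are assumptions for illustration, not LucidLink internals.

```python
import concurrent.futures

def read_file_parallel(get_object, object_keys):
    """Fetch the objects that make up one file concurrently, then reassemble
    them in order. `get_object` is any callable returning an object's bytes,
    e.g. a thin wrapper around an S3-compatible client's get call."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
        chunks = list(pool.map(get_object, object_keys))  # preserves order
    return b"".join(chunks)
```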

As for GDPR and data compliance, the customer controls who has access to the LucidLink SaaS as well as encryption keys.

LucidLink considers their solution “fault tolerant” or DR ready, because customers can load client software on any device and access any LucidLink file data. They also consider themselves “highly available” because their metadata/LucidLink SaaS service runs in a hyperscaler and the object backing storage can be configured as highly available.

As mentioned earlier, LucidLink customers can use any S3 compatible or Azure Blob object storage, on prem or in the cloud. But when using cloud object storage, one pays egress charges. LucidLink’s local caching can minimize but cannot eliminate egress charges.

LucidLink offers two licensing models: 1) a BYO (bring your own) object storage model, where LucidLink provides the software to support your Cloud NAS, or 2) LucidLink supplies both the object storage and the LucidLink service that glues it all together. The latter is a combination of IBM COS and LucidLink that offers less expensive egress charges.

The LucidLink service is billed on a capacity-under-management and user-count basis. Capacity is billed on a GB/day basis, summed over a month. Their minimum solution is 5TB/5 users, but they have customers with 1000s of users and PB+ of data. They offer a free 2-week trial period where customers can try LucidLink out.
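
As a back-of-the-envelope illustration of GB/day billing (the capacities and rate below are made up for the arithmetic, not LucidLink’s actual prices):

```python
# Capacity billed in GB-days, summed over the month, then charged at a
# per-GB-month rate. All numbers here are hypothetical.
daily_capacity_gb = [100_000] * 10 + [150_000] * 20  # 100TB for 10 days, 150TB for 20
gb_days = sum(daily_capacity_gb)                     # 4,000,000 GB-days
rate_per_gb_month = 0.02                             # assumed $/GB-month
days_in_month = 30
charge = gb_days / days_in_month * rate_per_gb_month # ~ $2,666.67
print(f"GB-days: {gb_days:,}  capacity charge: ${charge:,.2f}")
```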

Peter Thompson, CEO and Co-founder

Peter Thompson, co-founder and CEO of LucidLink, is a passionate and experienced leader and business builder. Thompson has over 30 years of experience driving business expansion, key programs, and partnerships across regions such as APAC and the Americas, mostly in the storage and file system market.

With over 14 years at DataCore Software, most recently as VP of Emerging and Developing Markets, Thompson drove DataCore’s expansion into China, working with key industry partners, technology alliances, and global teams to develop programs and business focused on emerging markets. Thompson also held the role of Managing Director, APAC, responsible for the bottom line of all Asia operations. He also was President and Representative Director of DataCore Japan, acquiring the majority of ownership and running it as a standalone entity, building a beachhead of marquee customers in Japan.

Thompson studied Japanese, history, and economics at Kansai Gaidai and has a BA in International Management, Psychology, and Japanese from Gustavus Adolphus College. He is a graduate of Stanford University Business School’s MSx program, with a focus on entrepreneurial finance, design thinking, and the soft skills required to build and lead world-class, high performing teams.

George Dochev, CTO and Co-Founder

George Dochev, co-founder and CTO of LucidLink, is a storage and file system expert with extensive experience in bringing emerging technologies to market. Dochev has over 20 years of success leading the development of complex virtualization products for the storage industry. He specializes in research and development in the fields of high-performance distributed systems, storage infrastructure software, and cloud technologies. 

Dochev was co-founder and a principal member of the engineering team at DataCore Software for nearly 17 years. While at DataCore, Dochev helped transform the company from a start-up into a global leader in software-defined storage. Underscoring Dochev’s impact as an entrepreneur is the fact that DataCore Software now powers the data centers of 10,000+ large enterprises around the world.

Dochev holds a degree in Mathematics from Sofia University St. Kliment Ohridski in Bulgaria, and an MS in Computer Science from the University of National and World Economy, in Sofia, Bulgaria.

120: GreyBeards talk CEPH storage with Phil Straw, Co-Founder & CEO, SoftIron

GreyBeards talk universal CEPH storage solutions with Phil Straw (@SoftIronCEO), CEO of SoftIron. Phil’s been around IT and electronics technology for a long time and has gone from scuba diving electronics, to DARPA/DOD research, to networking, and is now doing storage. He’s also the company’s co-founder and former CTO. SoftIron makes hardware storage appliances for CEPH, an open source, software defined storage system.

CEPH storage includes file (CEPHFS, POSIX), object (S3) and block (RBD, RADOS block device, Kernel/librbd) services and has been out since 2006. CEPH storage also offers redundancy, mirroring, encryption, thin provisioning, snapshots, and a host of other storage options. CEPH is available as an open source solution, downloadable at CEPH.io, but it’s also offered as a licensed option from RedHat, SUSE and others. For SoftIron, it’s bundled into their HyperDrive storage appliances. Listen to the podcast to learn more.

SoftIron uses the open source version of CEPH and incorporates this into their own, HyperDrive storage appliances, purpose built to support CEPH storage.

There are two challenges to using open source solutions:

  • Support is generally non-existent. Yes, the open source community behind the (CEPH) project supplies bug fixes and can possibly answer some questions, but this is not considered enterprise support, where customers require 7x24x365 support for a product.
  • Usability is typically abysmal. Yes, open source systems can do anything that anyone could possibly want (if not, code it yourself), but trying to figure out how to use any of that often requires a PhD or two.

SoftIron has taken both of these on to offer a commercial CEPH product.

Take support: SoftIron offers enterprise level support that customers can contract for on their own, even if they don’t use SoftIron hardware. Phil said they would often get kudos for their expert support of CEPH and have often been asked to offer this as a standalone CEPH service. Needless to say, their support of SoftIron appliances is also excellent.

As for ease of operations, SoftIron makes the HyperDrive Storage Manager appliance, which offers a standalone GUI that takes the PhD out of managing CEPH. Anything one can do with the CEPH CLI can be done with SoftIron’s Storage Manager. It’s also a very popular offering with SoftIron customers. Similar to SoftIron’s CEPH support above, customers are requesting that the Storage Manager be offered as a standalone solution for CEPH users as well.

HyperDrive hardware appliances are storage media boxes that offer extremely low-power storage for CEPH. Their appliances range from high density (120TB/1U) to high performance NVMe SSDs (26TB/1U) to just about everything in between. On their website, I count 8 different storage appliance offerings with various spinning disk, hybrid (disk-SSD), SATA and NVMe SSDs (SSD only) systems.

SoftIron designs, develops, and manufactures all their own appliance hardware. Manufacturing is entirely in the US, and design and development take place in the US and Europe only. This provides a secure provenance for HyperDrive appliances that other storage companies can only dream about. Defense, intelligence, and other security conscious organizations/industries are increasingly concerned about where electronic systems come from and want assurances that there are no security compromises inside them. SoftIron puts this concern to rest.

Yes, they use CPUs, DRAM, and other standardized chips, as well as storage media manufactured by others, but SoftIron has gone out of their way to source all of these other parts and media from secure, trusted suppliers.

All other major storage companies use storage servers, shelves, and media that can come from anywhere, usually sourced from manufacturers all over the world.

Moreover, such off the shelf hardware usually comes with added hardware that increases cost and complexity, such as graphics memory/interfaces, cables, over-configured power supplies, etc., which aren’t required for storage. Phil mentioned that each HyperDrive appliance has been reduced to just what’s required to support CEPH storage.

Each appliance has a 6Tbps internal network that connects all the components, which means no cabling in the box. Also, each storage appliance has CPUs matched to its performance requirements: for low performance appliances, ARM cores; for high performance appliances, AMD EPYC CPUs. All HyperDrive appliances support wire speed IO, i.e., if a box is configured to support 1GbE or 100GbE, it transfers data at that speed across all ports connected to it.

Because of their minimalist hardware design approach, HyperDrive appliances run much cooler and use less power than other storage appliances. They consume only 100W, or 200W for high performance storage, per appliance, where most other storage systems come in at around 1500W or more.

In fact, SoftIron HyperDrive boxes run so cool that they don’t need fans for the CPUs; they just redirect airflow from the storage media over the CPUs. And running cooler improves the reliability of disk and SSD drives. Phil said they are seeing field results with 2X better reliability than the drives normally see in the field.

They also offer a HyperDrive Storage Router that provides an NFS/SMB/iSCSI gateway to CEPH. With their Storage Router, customers using VMware, Hyper-V, and other systems that depend on NFS/SMB/iSCSI for storage can just plug and play with SoftIron CEPH storage. With the Storage Router, the only storage interface HyperDrive appliances can’t support is FC.

Although we didn’t discuss this on the podcast, in addition to HyperDrive CEPH storage appliances, SoftIron also provides HyperCast, transcoding hardware designed for real-time transcoding of one or more video streams, and HyperSwitch networking hardware, which supplies a secure-provenance SONiC (Software for Open Networking in [the Azure] Cloud) SDN switch for 1GbE up to 100GbE networks.

Standing up PB of (CEPH) storage should always be this easy.

Phil Straw, Co-founder & CEO SoftIron

The technical visionary co-founder behind SoftIron, Phil Straw initially served as the company’s CTO before stepping into the role of CEO.

Previously, Phil served as CEO of Heliox Technologies, co-founder and CTO of dotFX, VP of Engineering at Securify, and worked in both technical and product roles at Cisco and 3Com.

Phil holds a degree in Computer Science from UMIST.

0101: Greybeards talk with Howard Marks, Technologist Extraordinary & Plenipotentiary at VAST

As most of you know, Howard Marks (@deepstoragenet), Technologist Extraordinary & Plenipotentiary at VAST Data, used to be a Greybeards co-host and is still on our roster as a co-host emeritus. When I started to schedule this podcast, it was going to be our 100th podcast and we wanted to invite Howard and the rest of the co-hosts to be on the call to discuss our podcast. But alas, the 100th Greybeards podcast came and went before we could get it done. So we decided to refocus this podcast back on VAST Data.

We talked with Howard last year about VAST and some of this podcast covers the same ground (see last year’s podcast with Howard on VAST Data) but I highlighted below different aspects of their product that we also discussed.

For starters, VAST just finalized a recent round of funding, which, if I recall correctly, valued them at over $1B USD, or yet another data storage unicorn.

VAST is a scale out, disaggregated, unstructured data platform that takes advantage of the economics of QLC SSD (from Intel) combined with the speed of 3D XPoint storage class memory (Optane SSD, also from Intel) to support customer data. Intel is an investor in VAST.

VAST uses multiple front end (controller) servers, with one or more HA NVMe drive module(s) connected via a dual InfiniBand or 100Gbps Ethernet RDMA cluster interconnect. The HA NVMe drive module has two adapter cards (IO modules), one for each connection, that take IO and data requests and transfer them across a PCIe bus which connects to the QLC and Optane SSDs. They also have a Mellanox (another investor) switch on their backend, with (round robin) DNS used to connect hosts to their (front-end) storage servers.

Each backend HA NVMe drive module has 12 1.5TB Optane U.2 SSDs and 44 15.4TB QLC SSDs, for a total of 56 drives. Customer data is first written to Optane and then destaged to QLC SSD.

QLC has the advantage of being 4 bits per cell (for a lower $/GB stored), but its endurance, or drive writes/day (dw/d), is significantly worse than TLC. So VAST has had to work to increase QLC endurance in their system.

Natively, QLC offers ~0.2 dw/d when doing random 4K writes. However, if your system does 128KB sequential writes, it offers 4.0 dw/d. VAST destages data from Optane SSDs to QLC in 1MB chunks which both optimizes endurance and reduces garbage collection write amplification within the drive.
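
Plugging those endurance numbers into a quick calculation shows why large sequential destaging matters; the 5 year warranty period here is an assumption for the arithmetic.

```python
# Back-of-the-envelope endurance math using the dw/d figures quoted above.
drive_tb = 15.4
warranty_days = 5 * 365          # assumed 5-year warranty period

dwpd_random_4k = 0.2             # quoted endurance for random 4K writes
dwpd_seq_128k = 4.0              # quoted endurance for 128KB sequential writes

# Total data the drive can absorb over the warranty period in each mode:
tbw_random = dwpd_random_4k * drive_tb * warranty_days  # ~5,621 TB written
tbw_seq = dwpd_seq_128k * drive_tb * warranty_days      # ~112,420 TB written
print(f"random 4K: {tbw_random:,.0f} TBW  vs  large sequential: {tbw_seq:,.0f} TBW")
```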

Howard mentioned their frontend servers are stateless, i.e., they maintain no state information about any IO activity going on. Any IO state information is maintained by their system in Optane SSDs. Each server maintains a work-log-like structure on Optane that describes what it is doing in support of host IO and other activities. That way, if one front end server goes down, another one can access its log and take over its activity.
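
Here’s a toy sketch of the work-log idea with invented record fields; it shows the shape of the technique (log the intent, let a peer replay unfinished records), not VAST’s actual on-Optane format.

```python
import json
import time

# A front-end server appends intent records to a shared, persistent log
# (on Optane in VAST's case); a peer that detects the server is down replays
# any unfinished records. Record fields here are made up for illustration.
def log_intent(logfile, server_id, op, args):
    record = {"server": server_id, "op": op, "args": args,
              "ts": time.time(), "done": False}
    logfile.write(json.dumps(record) + "\n")
    logfile.flush()

def take_over(log_lines, failed_server, handlers):
    for line in log_lines:
        rec = json.loads(line)
        if rec["server"] == failed_server and not rec["done"]:
            handlers[rec["op"]](**rec["args"])   # redo the unfinished work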

Metadata is also maintained only on Optane SSDs. Howard called their metadata structure a V-tree (B-tree). VAST mirrors all meta-data and customer data to two Optane SSDs. So if one Optane SSD goes down, its pair can be used to continue operations.

In last year’s podcast we talked at length about VAST data protection and data reduction capabilities, so we won’t discuss these any further here.

However, one thing worth noting is that VAST has a very large RAID (erasure code protection) stripe. Data is written to the QLC SSDs in a VAST designed, locally decodable erasure coding format.

One problem with large stripes is rebuild time. VAST’s locally decodable parity codes help with this but the other thing that helps is distributing rebuild IO activity to all front end servers in the system.

The other problem with large stripe sizes is garbage collection. VAST segregates customer data by “temporariness” based on their best guess. In this way all data in one stripe should have similar lifetimes. When it’s time for stripe garbage collection, having all temporary data allows VAST to jettison the whole stripe (or most of it) rather than having to collect and re-write old stripe data to another new stripe.
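
A minimal sketch of the segregation idea, assuming a classify_lifetime function that guesses how temporary a piece of data is; this illustrates the technique, not VAST’s code.

```python
from collections import defaultdict

def place_into_stripes(writes, classify_lifetime, stripe_size):
    """Group incoming data by expected lifetime so that a whole stripe tends
    to expire together. `classify_lifetime` is an assumed callable returning
    a bucket like 'short', 'medium', or 'long'."""
    open_stripes = defaultdict(list)   # lifetime bucket -> stripe being filled
    sealed = []
    for blob in writes:
        bucket = classify_lifetime(blob)
        stripe = open_stripes[bucket]
        stripe.append(blob)
        if sum(len(b) for b in stripe) >= stripe_size:
            sealed.append((bucket, stripe))     # seal the full stripe
            open_stripes[bucket] = []
    return sealed, open_stripes
```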

VAST came out supporting NFSv3 and S3 object storage protocols. Their next release adds support for SMB 2.2, data-at-rest encryption, and snapshotting to an external S3 store. As you may recall, SMB is a stateful protocol. In VAST’s home-grown SMB implementation, front end servers can take over SMB transactions from other failed servers, without having to fail the whole transaction and start over again.

VAST uses a fail-in-place maintenance policy. That is, failed SSDs are not normally replaced in customer deployments; rather, blocks, pages, or SSDs are marked as failed and the spare capacity available in the drive enclosure is used to provide space for any needed rebuilt data.

VAST offers a 10 year maintenance option where the customer keeps the same storage for 10 full years. That way customers don’t have to migrate data from one system to another until their 10 years are up.

The podcast runs a little under 44 minutes. Howard and I can talk forever. He is always a pleasure to talk with as well as extremely knowledgeable about (VAST) storage and other industry solutions.  The co-hosts and I had a great time talking with him again. Listen to the podcast to learn more.


Howard Marks, Technologist Extraordinary and Plenipotentiary, VAST Data, Inc.

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data’s Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST’s technologies for customers and customer requirements for VAST’s engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents including Comdex, Interop, and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-Host of the Greybeards-on-Storage Podcast.

098: GreyBeards talk data protection & visualization for massive unstructured data repositories with Christian Smith, VP Product at Igneous

Sponsored By:

Even before COVID-19 there was a lot of file data being created and mined, but with the advent of the pandemic, this has accelerated considerably. As such, it seemed an appropriate time to talk with Christian Smith, VP of Product at Igneous, (@IgneousIO) a company that targets the protection and visibility of massive quantities of unstructured data, on premise, in the cloud, or just about anywhere else it may live.

Let me state at the outset that my belief had always been that you don’t back up 10PB of data; rather, you bite the (big expense) bullet to replicate it and hope for the best. After talking with Christian and Igneous, I am going to have to modify that belief by a couple more orders of magnitude.

All this data is coming from LIDAR, RADAR, audio, video, pictures, medical film, MRI/CAT scans, etc., and as noted above, it’s exploding. Christian talked about one customer of theirs that supplies aerial photography/LIDAR/RADAR scans of areas on request. This can be used to better understand crop, forest, wildlife, and land health and use. One surprise Igneous found with this customer is that the data is typically archived after first use, but within a month or so it’s moved back online for some other purpose.

Igneous heritage

Many of the people who started up and currently work at Igneous have been around file storage for some time, primarily coming from (Dell EMC) Isilon, NetApp, Qumulo, and other industry heavyweights. When they started Igneous, they realized the world didn’t need another NAS box or file system. Rather, with the advent of 10-100PB unstructured data farms, what was needed was an effective way to protect and understand that data.

When they considered how to protect and visualize 100PB of unstructured data, the only way they found to do this was to build a scale-out solution that used on premise and cloud infrastructure and was offered as a service.

Igneous DataProtect solution

With 10PB or 100PB of files located across a gaggle of heterogeneous file servers, with billions of files across ~100s of servers, each of which has ~1K or more file shares, just scanning all the file servers would take weeks, if not longer, and then you need to move the data someplace to protect it. Seems like an impossible task.

Igneous immediately figured out that the first thing they needed was a radically new, scale-out architecture to rapidly scan the file servers. Thus was born ActiveScan. Christian said it was designed to scan a trillion files, and they have customers with a billion files using their service today. ActiveScan doesn’t use NFS/SMB/Object (S3) access protocols to talk with file servers; rather, it uses internal APIs to access file metadata. DataProtect currently supports APIs for NetApp, Dell EMC Isilon, Pure FlashBlade, Qumulo, Gluster, Lustre, & GPFS (IBM Spectrum Scale) file systems. They use ActiveScan to build a file index database.
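
To give a feel for the scan fan-out, here’s a toy scanner that walks shares in parallel and emits index records. The real ActiveScan uses the filers’ internal APIs rather than walking NFS/SMB mounts, so treat this only as an illustration of parallelizing the scan.

```python
import os
import concurrent.futures

def scan_share(root):
    """Toy stand-in for one scan worker: walk a share and emit
    (path, size, mtime) records for an index."""
    records = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            try:
                st = os.stat(p)
                records.append((p, st.st_size, st.st_mtime))
            except OSError:
                pass   # file vanished or is unreadable; skip it
    return records

def scan_all(shares, workers=32):
    """Fan the scan out across many shares at once."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for recs in pool.map(scan_share, shares):
            yield from recs
```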

Their other major concern was how to move PBs of data rapidly across to the cloud and other locations. Again they created a scale-out, multi-threaded service to do this and also made use of internal APIs rather than standard file or object protocols. This became IntelliMove. That same customer above, with billions of files, has 6PB of file data to protect.

Normal data movement is fine for largish files but bogs down with lots of small files or extremely large files to back up. DataProtect gathers small files together into large chunks and splits extremely large files into smaller chunks, then moves these chunks to secondary storage.
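
A rough sketch of that chunk-planning idea, with an assumed 64MB chunk target (the actual chunk size wasn’t discussed).

```python
CHUNK_SIZE = 64 * 2**20   # assumed 64MB target; not a DataProtect number

def plan_chunks(files):
    """Coalesce many small files into one chunk and split very large files
    across several chunks. `files` is a list of (path, size) tuples; output
    is a list of chunks, each a list of (path, offset, length) extents."""
    chunks, current, current_bytes = [], [], 0
    for path, size in files:
        offset = 0
        while size - offset > 0:
            take = min(size - offset, CHUNK_SIZE - current_bytes)
            current.append((path, offset, take))
            current_bytes += take
            offset += take
            if current_bytes >= CHUNK_SIZE:
                chunks.append(current)          # seal the full chunk
                current, current_bytes = [], 0
    if current:
        chunks.append(current)                  # last partial chunk
    return chunks
```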

Data expiration is another problem, especially when you chunk files together. Here they came up with an intelligent garbage collection algorithm which only reclaims free space when it makes the most sense, but removes access to the data at the time of expiration.

DataProtect uses a cloud-based, SaaS control plane that manages and coordinates its activities across data centers, sites, and cloud instances. It also has a client VM (an OVA with an 8 core CPU, 32GB DRAM, ~100MB) that runs in the customer’s infrastructure, on site, in colos, or in the cloud, and is used to scan-move-protect customer unstructured data. If more scan and data movement performance is needed, the VM can spawn additional threads automatically, and more VMs can be added to provide even more throughput.

DataDiscover solution

The other service that Igneous offers is DataDiscover, a data visualization tool. DataDiscover uses ActiveScan and its database to provide customers a way to understand the file data that resides in their massive unstructured data farms across the data center, cloud, or wherever else it resides.

We didn’t discuss this solution as much, but having a way to better understand the files in a 10-100PB unstructured data farm could be very useful and a great way to keep that 100PB from growing to 1EB faster than it has to.

As part of their outreach to the world, Igneous is giving away free DataProtect services to organizations that are focused on COVID-19 research. Check out their offer here.

The podcast ran ~24 minutes. Christian was extremely knowledgeable about the problems that happen with very large unstructured data farms and how Igneous solutions can provide a better way to protect and visualize that data. Matt and I had a fun time discussing Igneous’s approach with Christian. Listen to the podcast to learn more.


Christian Smith, VP Product at Igneous

Christian is VP of Product, responsible for product management, solutions, and customer success. Prior to Igneous, Christian spent 15 years running field engineering organizations at EMC, Isilon Systems, NetApp and Silicon Graphics.

Christian has been working with organizations that work with file data since his time at Silicon Graphics. Before that, Christian was co-founder of a small management consulting company focused on Y2K and deregulation.

Christian received dual bachelor’s degrees in Chemistry and Computer Science from the University of Missouri-Columbia. Christian is an avid camper, skier and traveler and has long since traveled through all of the continental 48 states.

095: GreyBeards talk file sync&share with S. Azam Ali, VP Customer Success at CentreStack

We haven’t talked with a file synch and share vendor in a while now and Matt was interested in the technology. He had been talking with CentreStack, and found that they had been making some inroads in the enterprise. So we contacted S. Azam Ali, VP of Customer Success at CentreStack and asked if he wanted to talk about their product on our podcast.

File synch and share is part collaboration tool, part productivity tool. With file synch & share, many users share the same files across many different environments and end point devices. It’s especially popular with road warriors who need access on the road to the same files that reside in corporate data centers. With this technology, files updated anywhere become available to all.

Most file synch&share systems require you to use their storage. But CentreStack just provides synch and share access to NFS and SMB storage that’s already in the data center.

CentreStack doesn’t use VPNs to access data, as many other vendors do. With CentreStack, one just logs into a website (with AD credentials) and has immediate browser access to files.

CentreStack uses a gateway VM that runs in the corporate data center, configured to share files/file directories/shares. We asked whether they were in the data path and Azam said no. However, the gateway does register for file system notifications (e.g., when files are updated outside CentreStack, it gets notified).

CentreStack does maintain meta-data on the files, directories, and shares that are under its control. Presumably, once an admin sets it up, it goes out and accesses the file systems that have shared files and populates its meta-data for those files.

CentreStack works with any NFS and SMB file system, as well as NAS servers that support these two. It’s unclear whether customers can have more than one gateway server in their data center supporting synch and share, but Azam did say that it isn’t unusual for customers with multiple data centers to have a gateway in each, to support synch&share requirements for each data center.

They use client software on end point devices, which presents the shared files as an external drive (on Mac), presumably a cloud drive for Windows PCs, and a similar service (in an app) for other systems (iOS and Android phones, iPads, etc.). We believe Azam said Linux support was coming soon.

The client software can be configured in cache mode or offline mode:

  • Cache mode – the admin can configure how much space to use on the endpoint device, and the software will cache the most recently used files in that space for faster access
  • Offline mode – the software moves all files that the endpoint login can access to the device.

In cache mode, when users open a file that is not in the most-recently-used cache, there will be some delay as the system retrieves data over the internet and copies it to the endpoint device. It’s unclear what the delay might be, but it’s probably a function of internet speed and load on the gateway, with possibly some overhead for the NFS/SMB/NAS system to supply the data. If there’s not enough space to hold the file, the oldest non-open file is erased from the cache.
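
Here’s a minimal sketch of that eviction rule, with assumed helpers for the set of open files and current free space; it isn’t CentreStack’s actual algorithm.

```python
import os

def evict_for(needed_bytes, cache_dir, open_paths, free_space):
    """Evict least-recently-accessed, non-open cached files until there is
    room for the incoming file. `open_paths` (a set of paths with open
    handles) and `free_space` (a callable returning free bytes) are assumed
    inputs for this sketch."""
    candidates = [
        os.path.join(cache_dir, name) for name in os.listdir(cache_dir)
        if os.path.join(cache_dir, name) not in open_paths
    ]
    candidates.sort(key=lambda p: os.stat(p).st_atime)  # oldest access first
    for path in candidates:
        if free_space() >= needed_bytes:
            break
        os.remove(path)
```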

In both modes, CentreStack supports cross domain locking. That is, if one client has a file open (for update), all other systems/endpoints may only access the file in read-only mode. After the file is closed, it can then be opened for update by other users.

When CentreStack clients are used to update files, the data is stored back in the original file systems with versioning. This way, if the data is corrupted, admins can easily revert to a known good version.

CentreStack also offers a cloud backup and DR service. Gateway admins can request that synch&share files be backed up to cloud storage (AWS S3, Azure Blob, and Wasabi). When CentreStack backs up file data to the cloud, it also includes metadata information about the files so they can be re-constituted anywhere.

A CentreStack cloud gateway VM can be activated in the cloud to supply access to backed up files. It’s unclear whether the CentreStack cloud backup has to be restored to block or file storage first or if it just accesses the data on cloud storage directly. But customers using CentreStack cloud DR would need to run the client software on the systems accessing these files.

Wasabi seemed an odd solution to have on their list of supported cloud storage providers, but Azam said for their market, the economics of Wasabi storage were hard to ignore. See our previous podcast with David Friend, Co-Founder & CEO of Wasabi, to learn more about Wasabi.

CentreStack is licensed on a per-user basis, not storage capacity, bucking industry trends. But they don’t actually own the storage, so it makes sense. For CentreStack cloud backup, customers also have to supply the cloud storage.

They also offer a 30-day free trial on their website with unlimited users. We assume this uses CentreStack’s cloud gateway and customers bring their own cloud storage to support it.

The podcast runs about 35 minutes. Azam was a bit more marketing than we are used to, but he warmed up once we started asking questions. Listen to the podcast to learn more.


S. Azam Ali, VP of Customer Success, CentreStack

S. Azam Ali is VP of Customer Success at CentreStack and is an executive with extensive experience in managing global teams including sales, support, and consulting services.

Azam’s channel experience includes on-boarding new partners, including the creation of marketing and training collateral for partners. Azam is an executive with a passion for customer success and establishing long term relationships and partnerships.

Azam is also an advisor to startups as well as established technology companies.