169: GreyBeards talk AgenticAI with Luke Norris, CEO & Co-Founder, Kamiwaza AI

Luke Norris (@COentrepreneur), CEO and Co-Founder, Kamiwaza AI, is a serial entrepreneur based in Silverthorne, CO, where the company is headquartered. They presented at AIFD6 a couple of weeks back, and the GreyBeards thought it would be interesting to learn more about what they were doing, especially since we are broadening the scope of the podcast to become GreyBeards on Systems.

Describing Kamiwaza AI is a bit of a challenge. They settled on “AI orchestration” for the enterprise, but it’s much more than that. One of their key capabilities is an inference mesh, which supports accessing data wherever it lives across an enterprise’s various data centers, inferencing against it locally, and then gathering the replies/responses and aggregating them into one combined response. All this without violating HIPAA, GDPR, or other data compliance regulations.

Kamiwaza AI offers an opinionated AI stack, consisting of 155 components today and growing, that supplies a single API to access any of their AI services. They support multi-node clusters and multiple clusters located in different data centers, as well as the cloud. For instance, they are in the Azure marketplace, with plans to be in AWS and GCP soon.

Most software vendors provide a proof of concept; Kamiwaza offers a pathway from PoC to production. Companies pre-pay to install the solution and can then apply those funds when they purchase a license.

And then there’s their (meta-)data catalogue. It resides in local databases (possibly replicated) throughout the clusters and is used to record metadata and location information about any data in the enterprise that’s been ingested into their system.

Data can be ingested for enterprise RAG databases and other services. As this is done, location affinity and metadata about that data are registered in the data catalogue. That way Kamiwaza knows where all of an organization’s data is located, which RAG or other database it’s been ingested into, and enough about the data to judge whether it might be pertinent to answering a customer or service query.
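As a rough sketch of what such a catalog entry might hold (the field names below are our invention for illustration, not Kamiwaza’s published schema):

```python
from dataclasses import dataclass, field

# Hypothetical shape of a data-catalog entry -- field names are our
# invention for illustration, not Kamiwaza's published schema.
@dataclass
class CatalogEntry:
    dataset_id: str
    location: str              # data center / cluster where the data lives
    store: str                 # which RAG or other database ingested it
    jurisdiction: str          # e.g., "GDPR", "HIPAA" -- governs movement
    summary: str               # enough metadata to judge query relevance
    tags: list[str] = field(default_factory=list)

catalog: list[CatalogEntry] = []

def register(entry: CatalogEntry) -> None:
    """Called at ingest time, so later prompts can be routed by location."""
    catalog.append(entry)

def relevant(query_tags: set[str]) -> list[CatalogEntry]:
    """Find entries whose tags overlap the query's topics."""
    return [e for e in catalog if query_tags & set(e.tags)]
```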

Maybe the easiest way to understand what Kamiwaza is would be to walk through a prompt.

  • A customer issues a prompt to a Kamiwaza endpoint, which triggers
  • A search through the data catalog to identify what data can be used to answer that prompt.
  • If all the data resides in one data center, the prompt can be handed off to the GenAI model and RAG services at that data center.
  • But if the prompt requires information from multiple data centers, separate prompts are distributed to each data center where RAG information germane to the prompt is located.
  • As each of these generates a reply, its response is sent back to the initiating/coordinating cluster.
  • All these responses are then combined into a single reply to the customer’s prompt or service query.

But the key point is that data located in each data center and used to answer the prompt is NOT moved to other data centers. All prompting is done locally, at the data center where the data resides. Only prompt replies/responses are sent to other data centers and then combined into one comprehensive answer. The sketch below illustrates this scatter-gather flow.
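Here’s a minimal sketch of that flow in Python (the function names and aggregation step are our own simplification of what Kamiwaza describes, not their code):

```python
from concurrent.futures import ThreadPoolExecutor

# Simplified scatter-gather over an inference mesh -- our own sketch, not
# Kamiwaza's code. Each site runs RAG + inference locally; only responses
# (never the underlying data) leave a data center.

def run_local(site: str, prompt: str) -> str:
    """Stand-in for RAG retrieval + model inference at the data's home site."""
    return f"[{site}] answer to: {prompt}"

def combine(replies: list) -> str:
    """Stand-in for the aggregation step (in practice, an LLM summarization)."""
    return " | ".join(replies)

def answer(prompt: str, relevant_sites: set) -> str:
    if len(relevant_sites) == 1:                   # single-site fast path
        return run_local(next(iter(relevant_sites)), prompt)
    with ThreadPoolExecutor() as pool:             # fan out, one prompt per site
        replies = list(pool.map(lambda s: run_local(s, prompt),
                                sorted(relevant_sites)))
    return combine(replies)                        # aggregate at the initiator

print(answer("How often does gene sequence X occur?",
             {"eu-dc", "us-dc", "apac-dc"}))
```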

Luke mentioned a BioPharma company that had genome sequences stored under various data regimes, some under GDPR, some under APAC equivalents, and others under USA HIPAA requirements. They wanted to know how frequently a particular gene sequence occurred. They were able to issue this as a prompt at a single location, which spun up separate, distributed prompts for each data center that held appropriate information. All those replies were then transmitted back to the originating location and combined/summarized.

Kamiwaza AI also has an AIaaS offering. Any paying customer is offered one (agentic AI) outcome per month per cluster license. Outcomes can effectively be any AI application they would like performed.

One outcome he mentioned:

  • A weather-risk researcher had tons of old weather data, in a multitude of formats and covering many locations, that had been recorded over time.
  • They wanted access to all this data so they could tell when extreme weather events had occurred in the past.
  • Kamiwaza AI assigned one of their partner AI experts to work with the researcher to have an AI agent comb through these archives and transform and clean all the old weather data into HTML data more amenable to analysis.
  • But that was just the start. They really wanted to understand the risk of damage due to extreme weather events. So the AI application/system was then directed to gather, from news and insurance archives, any information that identified the extent of the damage from those weather events.

He said that today’s agentic AI can execute a screen mouse click and perform any function that an application or a human could do on a screen. Agentic AI can also import an API and infer where an API call might be better to use than a screen GUI interaction.

He mentioned that Kamiwaza can be used to generate and replace a lot of what enterprises do today with Robotic Process Automation (RPA). Luke feels that anything an enterprise was doing with RPA can be done better with Kamiwaza AI agents.

SaaS solution tasks are also something agentic AI can easily displace. Luke said one customer went from using SAP APIs to feed information into SAP, to using APIs to extract information from SAP, to completely replacing the use of SAP for that task at the enterprise.

How much of this is fiction and how much is real is the subject of some debate in the industry. But Kamiwaza AI is pushing the envelope on what can and can’t be done. And with their AIaaS offering, customers are making use of AI like they never thought possible before.

Kamiwaza AI has a community edition, a free but functionally restricted download that provides a desktop experience of Kamiwaza AI’s stack. Luke sees this as something a developer could use to develop against Kamiwaza APIs and test functionality before loading code onto an enterprise cluster.

We asked where they were finding the most success. Luke mentioned anyone that’s heavily regulated, where data movement and access are strictly constrained. And they are focused on large, multi-data-center enterprises.

Luke mentioned that Kamiwaza AI has been doing a number of hackathons with AI Tinkerers around the world. He suggested prospects take a look at what they have done with them and perhaps join them in the next hackathon in their area.

Luke Norris, CEO & Co-Founder, Kamiwaza AI

Luke Norris is the co-founder of Kamiwaza.AI, driving enterprise AI innovation with a focus on secure, scalable GenAI deployments. He has extensive experience raising over $100M in venture capital and leading global AI/ML deployments for Fortune 500 companies.

Luke is passionate about enabling enterprises to unlock the full potential of AI with unmatched flexibility and efficiency.

161: GreyBeards talk AWS S3 storage with Andy Warfield, VP Distinguished Engineer, Amazon

We first talked with Andy Warfield (@AndyWarfield), VP Distinguished Engineer, Amazon, about 10 years ago, when he was at Coho Data (see our (005:) Greybeards talk scale out storage … podcast). Andy has been a good friend for a long time, and he’s been with Amazon S3 for over 5 years now. Given the recent S3 announcements at AWS re:Invent, we thought it a good time to have him back on the show. Andy has a great knack for explaining technology. I suppose that comes from his time as a professor, but whatever the reason, he was great to have on the show again.

Lately, Andy’s been working on S3 Express One Zone storage, announced last November, a new version of S3 object storage with lower response times. We talk about this later in the podcast, but first we touched on S3’s history and other advances. S3 and its ancillary services have advanced considerably over the years. Listen to the podcast to learn more.

S3 is ~18 years old now and was one of the first AWS offerings. It was originally intended to be the internet’s file system, which is why it was based on HTTP protocols.

Andy said that S3 was designed for 11-9s durability and high availability options. AWS constantly monitors server and storage failures/performance to ensure that it can maintain this level of durability. The problem with durability is that when a drive/server goes down, the data needs to be rebuilt onto another drive before yet another drive fails. One way to do this is to keep more replicas of the data. Another way is to speed up rebuild times. I’m sure AWS does both.
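To see why rebuild speed matters as much as replica count, here’s a back-of-envelope sketch (the failure rate, replica count, and independence assumptions are ours, not AWS’s actual failure model or architecture):

```python
# Back-of-envelope durability estimate -- illustrative numbers only,
# not AWS's actual failure model or architecture.

annual_drive_failure_rate = 0.02           # 2% AFR, a typical HDD figure
replicas = 3                               # copies of each object

def p_loss_per_year(rebuild_hours: float) -> float:
    """Rough probability that, after one copy fails, the remaining
    replicas all fail before the rebuild completes."""
    window = rebuild_hours / (24 * 365)    # rebuild window, in years
    p_fail_in_window = annual_drive_failure_rate * window
    # First failure happens at the drive's AFR; the other copies must
    # then all fail inside the rebuild window for data to be lost.
    return annual_drive_failure_rate * p_fail_in_window ** (replicas - 1)

for hours in (24, 1):
    print(f"rebuild in {hours:>2}h -> ~{p_loss_per_year(hours):.2e} loss/yr")
# Cutting rebuild time 24x improves durability ~576x here (quadratic,
# with two surviving copies) -- which is why both replica count and
# rebuild speed matter.
```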

S3 high availability requires replicas across availability zones (AZs). AWS availability zone data centers are carefully located so that they are power- and network-isolated from the other data centers in the region. Further, AZ site locations are deliberately selected with an eye toward ensuring they are not susceptible to the same physical disasters.

Andy discussed other AWS file data services such as their FSx systems (Amazon FSx for Lustre, for OpenZFS, for Windows File Server, & for NetApp ONTAP) as well as Elastic File System (EFS). Andy said they sped up one of these FSx services by 3-5X over the last year.

Andy mentioned that one of the guiding principles for a lot of AWS storage is to try to eliminate any hard decisions for enterprise developers. By offering FSx files, S3 objects, and their other storage and data services, customers already using similar systems in house can just migrate apps to AWS without having to modify code.

Andy said one thing that struck him as he came on the S3 team was the careful deliberation that occurred whenever they considered S3 API changes. He said the team is focused on the long term future of S3 and any API changes go through a long and deliberate review before implementation.

One workload that drove early S3 adoption was data analytics. Hadoop and BigTable have significant data requirements. Early on, someone wrote an HDFS interface to S3 and over time lots of data analytics activity moved to S3 object hosted data.

Databases have also changed over the last decade or so. Keith mentioned that many customers are foregoing traditional databases to use open source database solutions with S3 as their backend storage. It turns out that Open Table Format offerings such as Apache Iceberg, Apache Hudi, and Delta Lake are all available on AWS and use S3 objects as their storage.
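As a hedged illustration of that pattern, here’s roughly what an Iceberg table backed by S3 looks like from PySpark (the catalog name and bucket path are made up, and the iceberg-spark-runtime package plus S3 credentials are assumed to be configured):

```python
from pyspark.sql import SparkSession

# Minimal sketch: an Apache Iceberg catalog whose warehouse lives in S3.
# Assumes the iceberg-spark-runtime jar and S3 credentials are set up;
# "demo" and the bucket path are illustrative names.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3a://my-bucket/warehouse")
    .getOrCreate()
)

spark.sql("CREATE TABLE IF NOT EXISTS demo.db.events "
          "(id BIGINT, ts TIMESTAMP) USING iceberg")
spark.sql("INSERT INTO demo.db.events VALUES (1, current_timestamp())")
spark.table("demo.db.events").show()  # table data and metadata are S3 objects
```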

We talked a bit about Lambda server-less processing triggered by S3 objects. This was a new paradigm for computing when it came out, and many customers have adopted Lambda to reduce cloud compute spend.
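For those who haven’t seen the pattern, a minimal Lambda handler for an S3 event notification might look like the following (the processing step is a placeholder; the bucket-to-function notification wiring is configured separately):

```python
import urllib.parse

def handler(event, context):
    # Invoked by an S3 event notification; each record names one object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder: process the new object (resize, index, transform, ...)
        print(f"new object: s3://{bucket}/{key}")
```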

Recently Amazon introduced Mountpoint for Amazon S3, a file system mount for S3 storage. Customers can now mount any S3 bucket as a local file system (it’s a file client that translates file operations into S3 API requests, rather than an NFS server).

Amazon also supports the Registry of Open Data, which holds just about every canonical data set (stored as S3 objects) used for AI training.

At the last re:Invent, Amazon announced S3 Express One Zone, a high performance, low latency version of S3 storage. The goal for S3 Express was to get latency down from 40-60 msec to less than 10 msec.

They ended up making a number of changes to S3 for this, such as the ones below (a usage sketch follows the list):

  • Redesigned/redeveloped some S3 micro services to reduce latency
  • Restricted S3 Express storage to a single zone, reducing replication requirements, while maintaining 11-9s durability
  • Used higher performing storage
  • Redesigned the S3 API to move some authentication/verification work to a one-time step at the start of object access, rather than on every object access call.
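Here’s a hedged sketch of what using S3 Express One Zone looks like from boto3 (the directory-bucket name and availability zone are illustrative; recent boto3 versions handle the session-based authentication under the covers):

```python
import boto3

# S3 Express One Zone uses "directory buckets" whose names embed the
# availability zone; this bucket name and zone are illustrative.
s3 = boto3.client("s3", region_name="us-west-2")
bucket = "my-express-bucket--usw2-az1--x-s3"

s3.put_object(Bucket=bucket, Key="hot/data.bin", Body=b"low-latency payload")
obj = s3.get_object(Bucket=bucket, Key="hot/data.bin")
print(obj["Body"].read())  # same S3 API, single-digit-millisecond target
```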

Somewhere during our talk, Andy said that, in aggregate, S3 provides 100TBytes/sec of data bandwidth. How’s that for scale-out storage?

Andy Warfield, VP Distinguished Engineer, Amazon

Andy is a Vice President and Distinguished Engineer at Amazon Web Services. He focuses primarily on data storage and analytics.

Andy holds a PhD from the University of Cambridge, where he was one of the authors of the Xen hypervisor. Xen is an open source hypervisor that was used as the initial virtualization layer in AWS, among multiple other early cloud companies. Andy was a founder at XenSource, a startup based on Xen that was subsequently acquired by Citrix Systems for $500M.

Following XenSource, Andy was a professor at the University of British Columbia (UBC), where he was awarded a Canada Research Chair and a Sloan Research Fellowship. As a professor, Andy did systems research in areas including operating systems, networking, security, and storage.

Andy’s second startup, Coho Data, built a scale-out enterprise storage array that integrated NVMe SSDs with programmable networks. It raised over $80M in funding from VCs including Andreessen Horowitz, Intel Capital, and Ignition Partners.

160: GreyBeards talk data security with Jonathan Halstuch, Co-Founder & CTO, RackTop Systems

Sponsored By:

This is the last in this year’s GreyBeards-RackTop Systems podcast series, and once again we are talking with Jonathan Halstuch (@JAHGT), Co-Founder and CTO, RackTop Systems. This time we discuss why traditional security practices can’t cut it alone anymore. Listen to the podcast to learn more.

It turns out traditional security practices are about keeping the bad guys out, i.e., perimeter security and its networking equivalents. But the problem is that sometimes the bad guy is internal, and at other times the bad guys pretend to be good guys with valid credentials. Neither of these is something that networking or perimeter security can catch.

As a result, the enterprise needs both traditional security practices and something else: something that operates inside the network, at a more centralized place, that can detect bad behavior in real time.

Jonathan talked about a typical attack:

  • A phishing email link is clicked on ==> the attacker now owns the laptop/desktop user’s credentials
  • The attacker scans the laptop/desktop for admin credentials or one-time pass codes (which can be just as good, in some cases) ==> the attacker attempts to escalate privileges above the user and starts scanning customer data for anything worthwhile to steal, e.g., crypto wallets, passwords, client data, IP, etc.
  • The attacker copies data of interest and continues to scan for more data and to escalate privileges ==> by now, if not earlier, your data is compromised; either it’s in the hands of others that may want to harm you or extract money from you, or it’s been copied by a competitor or, worse, a nation state.
  • At some point the attacker has scanned and copied all data of interest ==> at this point, depending on the attacker, they may install malware that is easily detected, signaling to the IT organization that it’s been compromised.

By the time security systems detect the malware, the attacker has been in your systems and all over your network for months, and it’s way too late to stop them from doing anything they want with your data.

In the past, detection like this came from 3rd-party tools that scanned backups for malware, or from storage systems copying logs off to be assessed on a periodic basis.

The problem with such tools is that they always lag behind the time when the theft/corruption has occurred.

The need to detect in real time, at something like the storage system, is self-evident. Storage is the central point of access to data. If you could detect illegal or bad behavior there, and stop it before it caused more harm, that would be ideal.

In the past, storage system processors were extremely busy just doing IO. But with today’s modern, multi-core, NUMA CPUs, this is no longer the case.

Along with high performing IO, RackTop Systems supports user and admin behavioral analysis and activity assessors. These processes run continuously, monitoring user and admin IO and command activity, looking for known, bad or suspect behaviors.

When such behavior is detected, the storage system can prevent further access automatically, if so configured, or at a minimum, warn the security operations center (SOC) that suspicious behavior is happening and inform SOC of who is doing what. In this case, with a click of a link in the warning message, SOC admins can immediately stop the activity.
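RackTop hasn’t published its detection logic, but as a generic illustration of the kind of real-time heuristic a storage system could run in-line (the features and thresholds below are invented for the example, NOT RackTop’s actual algorithm):

```python
import time
from collections import defaultdict, deque

# Generic illustration of in-line behavioral detection at the storage
# layer -- not RackTop's actual algorithm. Thresholds are invented.
WINDOW_SECS = 60
MAX_DISTINCT_FILES = 500   # reading many distinct files/min suggests exfiltration
MAX_RENAME_RATE = 100      # mass renames often accompany ransomware encryption

events = defaultdict(deque)  # per-user sliding window of file operations

def record_event(user, op, path, now=None):
    """Called in-line on every file operation; returns True to block access."""
    now = now or time.time()
    window = events[user]
    window.append((now, op, path))
    while window and window[0][0] < now - WINDOW_SECS:
        window.popleft()           # drop events outside the sliding window

    reads = {p for (_, o, p) in window if o == "read"}
    renames = sum(1 for (_, o, _) in window if o == "rename")
    if len(reads) > MAX_DISTINCT_FILES or renames > MAX_RENAME_RATE:
        alert_soc(user, reads, renames)  # warn the SOC with specifics
        return True                       # and optionally deny further access
    return False

def alert_soc(user, reads, renames):
    print(f"SUSPECT {user}: {len(reads)} files read, {renames} renames/min")
```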

If it turns out the suspicious behavior was illegal, having the detection at the storage system can also provide SOC a list of files that have been accessed/changed/deleted by the user/admin. With these lists, SOC has a rapid assessment of what’s at risk or been lost.

Jonathan and I talked about RackTop Systems deployment options, which span physical appliances, SAN gateways, and virtual appliances. Jonathan mentioned that RackTop Systems has a free trial offer using their virtual appliance, which any customer can download to try them out.

Jonathan Halstuch, Co-Founder & CTO, RackTop Systems

Jonathan Halstuch is the Chief Technology Officer and Co-Founder of RackTop Systems. He holds a bachelor’s degree in computer engineering from Georgia Tech as well as a master’s degree in engineering and technology management from George Washington University.

With over 20 years of experience as an engineer, technologist, and manager for the federal government, he provides organizations with the most efficient and secure data management solutions to accelerate operations while reducing the burden on admins, users, and executives.

159: GreyBeards Year End 2023 Wrap Up

Jason and Keith joined Ray for our annual year end wrap up and look ahead to 2024. I planned to discuss infrastructure technical topics but was overruled. Once we started talking AI, we couldn’t stop.

It’s hard to realize that Generative AI and ChatGPT in particular, haven’t been around that long. We discussed some practical uses Keith and Jason had done with the technology.

Keith mentioned that its primary skill is language expertise. He has used it to help write up proposals. He often struggles to convince CTO Advisor non-sponsors of the value they could gain, and found that using GenAI has helped him make that case better.

Jason mentioned he uses it to create BASH, Perl, and PowerShell scripts. He says it’s not perfect but gets ~80% of the way there, and with a few tweaks he has something working a lot faster than if he had written it completely by hand. He also mentioned its skill at translating from one scripting language to another, and how well the code it generates is documented (that hurt).

I was the odd GreyBeard out, having not used any GenAI, proprietary or not. I’m still working to get a reinforcement learning task to work well and consistently. I figured once I mastered that, I’d train an LLM on my body of (text and code) work (assuming, of course, someone gifts me a gang of GPUs).

I agreed GenAI is good at (English) language tasks and some coding tasks (where lots of source code exists, such as Java, scripting languages, Python, etc.).

However, I was on an MLops Slack channel and someone asked if GenAI could help with IBM RPG II code. I answered, probably not. There’s just not a lot of RPG II code publicly accessible on the web, and the structure of RPG was never line-of-text/command oriented.

We had some heated discussion on where LLMs get the data to train with. Keith was fine with them using his data. I was not. Jason was neutral.

We then turned to what this means to the white collar workers who are coding and writing text. Keith made the point that this has been a concern throughout history, at least since the industrial revolution.

Machines come along, displace work that was done by hand, increase production immensely, reduce costs. Organizations benefit, but people doing those jobs need to up level their skills, to take advantage of the new capabilities.

Easy for us to say, as we (except for Jason, in his present job) are essentially entrepreneurs, and anything that helps us deliver more value faster, easier, or less expensively is a boon for our businesses.

Jason mentioned that Stephen Wolfram wrote a great blog post discussing LLM technology (see What is ChatGPT doing … and why does it work). Both Jason and Keith thought it did a great job of explaining the science and practice behind LLMs.

We moved on to a topic harder to discuss but of great relevance to our listeners, GenAI’s impact on the enterprise.

It reminds me of when the cloud first became prominent. Then, “C” suites tasked their staff to adopt “the cloud” any way they could. Today, “C” suites are tasking their staff to determine what their “AI strategy” is and when it will be implemented.

Keith mentioned that this is wrong-headed. The true path forward (for the enterprise) is to focus on what the business problems are and how (Gen)AI can address (some of) them.

AI is so varied, and its capabilities across so many fields are so good nowadays, that organizations should really look at AI as a new facility that can recognize patterns, index/analyze/transform images, summarize/understand/transform text/code, etc., in near real time, and see where in the enterprise that could help.

We talked about how enterprises can size AI infrastructure needed to perform these activities. And it’s more than just a gaggle of GPUs.

MLCommons’ MLPerf benchmarks can help show the way in some cases, but they are not exhaustive. Still, it’s a start.

The consensus was to maybe deploy in the cloud first and, once the workload is dialed in there, re-home it later, with the proviso that the hardware needed is available.

Our final topic was the Broadcom VMware acquisition. Keith mentioned their recent subscription pricing announcements vastly simplified VMware licensing, which had grown way too complex over the decades.

And although everyone hates the expense of VMware solutions, they often forget the real value VMware brings to enterprise IT.

Yes, hyperscalers and their clutch of coders can roll their own hypervisor services stacks using open source virtualization. But the enterprise has other needs for its developers. And the value of VMware virtualization services, now that 128-core CPUs are out, is even higher.

We mentioned the need for hybrid cloud and how VCF can get you part of the way there. Keith said that dev teams really want something like “AWS software” services running on GCP or Azure.

Keith mentioned that IBM Cloud is the closest he’s seen so far to doing what Dev wants in a hybrid cloud.

We all thought, when DNNs came out and became trainable and reinforcement learning started working well, that AI had turned a real corner. Turns out that was just a start. GenAI has taken DNNs to a whole other level, and DeepMind and others are doing the same with reinforcement learning.

This time AI may actually help advance mankind, if it doesn’t kill us first. On the latter topic, you may want to check out my RayOnStorage AGI series of blog posts (latest … AGI part-8).

Jason Collier, Principal Member Of Technical Staff at AMD, Data Center and Embedded Solutions Business Group

Jason Collier (@bocanuts) is a long time friend, technical guru and innovator who has over 25 years of experience as a serial entrepreneur in technology.

He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years.

He’s on LinkedIn. He’s currently working with AMD on new technology, and he has been a GreyBeards on Storage co-host since the beginning of 2022.

Keith Townsend, President of The CTO Advisor a Futurum Group Company

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations.

Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

157: GreyBeards talk commercial cloud computer with Bryan Cantrill, CTO, Oxide Computer

Bryan Cantrill (@bcantrill), CTO, Oxide Computer, was a hard man to interrupt once started, but the GreyBeards did their best to have a conversation. Nonetheless, this is a long podcast. Oxide is making a huge bet on rack scale computing and has done everything it can to make its rack easy to unbox, set up, and deploy VMs on.

They use commodity parts (AMD EPYC CPUs) and package them in their own-designed hardware (server) sleds, which blind mate to networking and power in the back of their own-designed rack. They use their own OS, Helios (an OpenSolaris derivative), with their own RTOS, Hubris, for system bring-up, monitoring, and the start of their hardware root of trust. And of course, to make it all connect easier, they designed and developed their own programmable networking switch. Listen to the podcast to learn more.

Oxide essentially provides rack hardware which supports EC2-like compute and EBS-like storage to customers. It also has Terraform plugins to support infrastructure as code. In addition, all their software is completely API driven.
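As a hedged sketch of what API-driven provisioning looks like (the host, token, endpoint path, and payload fields below are our assumptions for illustration; consult Oxide’s published API reference for the real shapes):

```python
import requests

# Hedged sketch of creating a VM instance on an API-driven rack.
# Host, token, project name, and payload fields are illustrative
# assumptions, not verified against Oxide's actual API.
OXIDE = "https://oxide.example.com"
HEADERS = {"Authorization": "Bearer <token>"}

resp = requests.post(
    f"{OXIDE}/v1/instances",
    params={"project": "demo"},
    headers=HEADERS,
    json={
        "name": "web-01",
        "description": "example instance",
        "ncpus": 4,
        "memory": 8 * 1024**3,   # bytes
        "hostname": "web-01",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # everything on the rack is reachable this way
```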

Bryan said time and time again that developing their own hardware and software made everything easier for them and their customers. Customers pay for hardware but there are absolutely NO SOFTWARE LICENSING FEES, because all their software is open source.

For example, the problem with AMI BIOS and UEFI is their opacity. There’s really no way to understand what packages are included in the root of trust because it’s proprietary. Bryan said one company’s UEFI they examined had URLs embedded in the firmware. It seemed odd to have another vendor’s web pages linked into your root of trust.

Bryan said they built their own switch to reduce integration and validation test time. The Oxide rack handles all internal networking, compute sled to compute sled and to the ToR switch (with no external cabling), and has 32 networking ports to connect the rack to the data center’s core network.

As for storage, Bryan said each of the 10 U.2 NVMe drives in a compute sled is a separate ZFS file system, and customer data is 3-way mirrored across them. ZFS also provides end-to-end checksumming of all customer data for IO integrity.

Bryan said Oxide Computer rack bring-up is: 1) plug it in to core networking and power, 2) power it on, 3) attach a laptop to the service processor, 4) SSH into it, and 5) run a configuration script, and you’re ready to assign VMs. He said the time from when an Oxide rack hits your dock until you are up and firing up VMs can be as short as an HOUR.

The Rust programming language is the other secret to Oxide’s success. More to the point, the company is named after rust (oxide, get it?). Apparently just about all the software they developed is written in Rust.

The question for Oxide, and every other computer and storage vendor, is whether on-premises computing will continue for the foreseeable future. The GreyBeards and Oxide believe it will, not just for compliance and better latency, but also because it often costs less.

Bryan mentioned they have their own podcast, Oxide and Friends. On their podcast, they did a board bring up series (Tales from the Bring-Up Lab) and a series on taking their rack through FCC compliance (Oxide and the Chamber of Mysteries).

Bryan Cantrill, CTO, Oxide Computers

Bryan Cantrill is a software engineer who has spent over a quarter of a century at the hardware/software interface. He is the co-founder and CTO of Oxide Computer Company, the creator of the world’s first commercial cloud computer.

Prior to Oxide, he spent nearly a decade at Joyent, a cloud computing pioneer; prior to Joyent, he spent 14 years at Sun Microsystems.

Bryan received his Sc.B. magna cum laude with honors in Computer Science from Brown University, and is an MIT Technology Review 35 Top Young Innovators alumnus.

You can learn more about his work with Oxide at oxide.computer, or listen in on their weekly live show, Oxide and Friends (link above), on Discord or anywhere you get your podcasts.