169: GreyBeards talk AgenticAI with Luke Norris, CEO&Co-founder, Kamiwaza AI

Luke Norris (@COentrepreneur), CEO and Co-Founder of Kamiwaza AI, is a serial entrepreneur in Silverthorne, CO, where the company is headquartered. Kamiwaza presented at AIFD6 a couple of weeks back, and the GreyBeards thought it would be interesting to learn more about what they were doing, especially since we are broadening the scope of the podcast to now be GreyBeards on Systems.

Describing Kamiwaza AI is a bit of a challenge. They settled on “AI orchestration” for the enterprise, but it’s much more than that. One of their key capabilities is an inference mesh, which supports accessing data in locations throughout an enterprise, across various data centers, to do inferencing, and then gathering the replies/responses together, aggregating them into one combined response. All this without violating HIPAA, GDPR or other data compliance regulations.

Kamiwaza AI offers an opinionated AI stack, consisting of 155 components today and growing, that supplies a single API to access any of their AI services. They support multi-node clusters and multiple clusters located in different data centers, as well as the cloud. For instance, they are in the Azure marketplace, and plans are to be in AWS and GCP soon.

Most software vendors provide a proof of concept; Kamiwaza offers a pathway from PoC to production. Companies pre-pay to install the solution and can then apply those funds when they purchase a license.

And then there’s their (meta-)data catalog. It resides in local databases (possibly replicated) throughout the clusters and is used to track metadata and location information about any data in the enterprise that’s been ingested into their system.

Data can be ingested for enterprise RAG databases and other services. As this is done, location affinity and metadata about that data are registered in the data catalog. That way Kamiwaza knows where all of an organization’s data is located, which RAG or other database it’s been ingested into, and enough about the data to understand whether it might be pertinent to answering a customer or service query.
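To make the catalog idea concrete, here is a minimal sketch of how such a registry could work. All names and fields here are invented for illustration; Kamiwaza’s actual catalog schema and API are not public in these notes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: at ingest time, each dataset's location affinity and
# metadata are registered; at query time, the catalog answers "which
# clusters hold data pertinent to this topic?" without touching the data.

@dataclass
class CatalogEntry:
    dataset: str
    cluster: str                  # data center/cluster where the data resides
    rag_store: str                # which RAG database it was ingested into
    tags: set = field(default_factory=set)   # descriptive metadata

class DataCatalog:
    def __init__(self):
        self.entries = []

    def register(self, entry: CatalogEntry):
        # Called during ingestion to record location affinity + metadata.
        self.entries.append(entry)

    def locate(self, topic: str):
        # Return the clusters holding data pertinent to a query topic.
        return sorted({e.cluster for e in self.entries if topic in e.tags})

catalog = DataCatalog()
catalog.register(CatalogEntry("claims-2023", "us-east", "rag-us", {"claims", "phi"}))
catalog.register(CatalogEntry("claims-eu", "eu-west", "rag-eu", {"claims", "gdpr"}))
print(catalog.locate("claims"))   # ['eu-west', 'us-east']
```

The key design point is that the catalog stores only metadata and location, never the data itself, which is what lets a coordinator route prompts without moving regulated data.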

Maybe the easiest way to understand what Kamiwaza is, is to walk through a prompt:

  • A customer issues a prompt to a Kamiwaza endpoint, which triggers
  • A search through their data catalog to identify what data can be used to answer that prompt.
  • If all the data resides in one data center, the prompt can be handed off to the GenAI model and RAG services at that data center.
  • But if the prompt requires information from multiple data centers:
  • Separate prompts are distributed to each data center where RAG information germane to the prompt is located.
  • As each of these generates a reply, the responses are sent back to an initiating/coordinating cluster.
  • Then all these responses are combined into a single reply to the customer’s prompt or service query.

But the key point is that the data in each data center used to answer the prompt is NOT moved to other data centers. All prompting is done locally, at the data center where the data resides. Only prompt replies/responses are sent to other data centers, where they are then combined into one comprehensive answer.
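The fan-out/aggregate flow described above can be sketched in a few lines. This is a toy model, not Kamiwaza’s implementation: `query_local()` stands in for RAG retrieval plus model inference inside one data center, and `summarize()` stands in for the coordinator’s combining step (in practice itself likely an LLM call).

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of distributed prompting: the prompt fans out to each site holding
# pertinent data; only generated replies (never the underlying data) cross
# site boundaries back to the coordinator.

def query_local(site: str, prompt: str) -> str:
    # In practice: run RAG retrieval + GenAI inference inside `site`.
    return f"[{site}] answer to '{prompt}'"

def summarize(replies: list) -> str:
    # In practice: an LLM merges the partial answers into one reply.
    return " | ".join(replies)

def distributed_prompt(prompt: str, sites: list) -> str:
    # Fan the prompt out to every pertinent site, in parallel.
    with ThreadPoolExecutor() as pool:
        replies = list(pool.map(lambda s: query_local(s, prompt), sites))
    # Only the replies travel back; the coordinator combines them.
    return summarize(replies)

print(distributed_prompt("gene frequency?", ["us-east", "eu-west", "apac"]))
```

This mirrors the BioPharma example below: one originating prompt, per-region local inference, and a single combined answer, with regulated data staying put.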

Luke mentioned a BioPharma company that had genome sequences located in various data regimes, some under GDPR, some under APAC equivalents, others under USA HIPAA requirements. They wanted to know how frequently a particular gene sequence occurred. They were able to issue this as a prompt at a single location, which spun up separate, distributed prompts for each data center that held appropriate information. All those replies were then transmitted back to the originating prompt location and combined/summarized.

Kamiwaza AI also has an AIaaS offering. Any paying customer is offered one (agentic AI) outcome per month per cluster license. An outcome can effectively be any AI application they would like performed.

One outcome he mentioned:

  • A weather-risk researcher had tons of old weather data, recorded over time in a multitude of formats and across many locations.
  • They wanted access to all this data so they could tell when extreme weather events had occurred in the past.
  • Kamiwaza AI assigned one of their partner AI experts to work with the researcher to have an AI agent comb through these archives and transform and clean all the old weather data into HTML more amenable to analysis.
  • But that was just the start. They really wanted to understand the risk of damage from those extreme weather events. So the AI application/system was then directed to gather, from news and insurance archives, any information that identified the extent of the damage from those weather events.

He said that today’s agentic AI can perform a screen mouse click and carry out any function that an application or a human could do on a screen. Agentic AI can also import an API and infer where an API call might be better to use than a screen GUI interaction.

He mentioned that Kamiwaza can be used to replace much of what enterprises do today with Robotic Process Automation (RPA). Luke feels that anything an enterprise was doing with RPA can be done better with Kamiwaza AI agents.

SaaS solution tasks are also something agentic AI can easily displace. Luke said one customer went from using SAP APIs to provide information to SAP, to using APIs to extract information from SAP, to completely replacing the use of SAP for that task at the enterprise.

How much of this is fiction and how much is real is the subject of some debate in the industry. But Kamiwaza AI is pushing the envelope on what can and can’t be done. And with their AIaaS offering, customers are making use of AI like they never thought possible before.

Kamiwaza AI has a community edition, a free, functionally restricted download that provides a desktop experience of Kamiwaza AI’s stack. Luke sees this as something a developer could use to develop against Kamiwaza APIs and test functionality before loading onto the enterprise cluster.

We asked where they were finding the most success. Luke mentioned anyone that’s heavily regulated, where data movement and access are strictly constrained. And they are focused on large, multi-data-center enterprises.

Luke mentioned that Kamiwaza AI has been doing a number of hackathons with AI Tinkerers around the world. He suggested prospects take a look at what they have done with them and perhaps join them in the next hackathon in their area.

Luke Norris, CEO & Co-Founder, Kamiwaza AI

Luke Norris is the co-founder of Kamiwaza.AI, driving enterprise AI innovation with a focus on secure, scalable GenAI deployments. He has extensive experience raising over $100M in venture capital and leading global AI/ML deployments for Fortune 500 companies.

Luke is passionate about enabling enterprises to unlock the full potential of AI with unmatched flexibility and efficiency.

168: GreyBeards Year End 2024 podcast

It’s time once again for our annual YE GBoS podcast. This year we have Howard back making a guest appearance, with our usual cast of Jason and Keith in attendance. And the topic du jour seemed to be AI rolling out to the enterprise and everywhere else in the IT world.

We led off where last year’s discussion ended, with AI (again). But last year it was all about new announcements, new capabilities and new functionality; this year it’s all about starting to take those AI tools and making them available to help optimize how organizations work.

We talked some about RAG and chatbots, but these seemed almost old school.

Agentic AI

Keith mentioned agentic AI, which purports to improve businesses by removing or optimizing intermediate steps in business processes. If one can improve human and business productivity by even 10%, the impact on the US and world economies would be staggering.

And we’re not just talking about knowledge summarization, curation, or discussion; agentic AI takes actions that would previously have been done by a human, if done at all.

Manufacturers could use AI agents to forecast sales, allowing the business to optimize inventory positioning to better address customer needs. 

Most, if not all, businesses have elaborate procedures that require a certain amount of human hand-holding. Reducing that hand-holding, even a little bit, with AI agents that never sleep and can occasionally be trained to do better, could seriously help the bottom and top lines of any organization.

We can see evidence of agentic AI proliferating in SaaS solutions: Salesforce, SAP, Oracle and all the others are spinning out agentic AI services.

I think it was Jason who mentioned that GEICO, a US insurance company, is re-factoring, re-designing and re-implementing all their applications to take advantage of agentic AI and other AI options.

AI’s impact on HW & SW infrastructure

The AI rollout is having dramatic impacts on both software and hardware infrastructure. For example, customers are building their own OpenStack clouds to support AI training and inferencing.

Keith mentioned that AWS just introduced S3 Tables, a fully managed service meant to store and analyze massive amounts of tabular data for analytics. Howard mentioned that AWS’s S3 Tables had to make a number of tradeoffs to use immutable S3 object storage. VAST’s Parquet database provides the same service without using immutable objects.

Software impacts are immense as AI becomes embedded in more and more applications and system infrastructure. But AI’s hardware impacts may be even more serious.

Howard made mention of the power zero-sum game, meaning that most data centers have a limited amount of power they can supply. Any power saved from other IT activities is immediately put to use supplying more power to AI training and inferencing.

Most IT racks today support equipment that consumes 10-20kW of power. AI servers will require much more.

Jason mentioned one 6U server with 8 GPUs that costs on the order of 1 Ferrari ($250K US) and draws 10kW of power, with each GPU having two 400GbE links, not to mention the server itself having two 400GbE links. So a single 6U (GPU) server has 18 400GbE links and could need 7.2Tbps of bandwidth.

It’s unclear how many of these one could put in a rack, but my guess is racks won’t be fully populated. Six of these servers would need over 43Tbps of bandwidth and over 60kW of power, and that’s not counting the networking and other infrastructure required to support all that bandwidth.
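The back-of-the-envelope arithmetic above is easy to check (figures as quoted on the podcast, not measured):

```python
# Back-of-the-envelope check of the bandwidth/power figures quoted above.
gbps_per_link = 400                   # 400GbE links
gpu_links = 8 * 2                     # 8 GPUs, two links each
server_links = 2                      # plus two links on the host itself
links_per_server = gpu_links + server_links

gbps_per_server = links_per_server * gbps_per_link
print(links_per_server, "links =", gbps_per_server / 1000, "Tbps per server")
# 18 links = 7.2 Tbps per server

servers_per_rack = 6
print(servers_per_rack * gbps_per_server / 1000, "Tbps for six servers")  # 43.2
print(servers_per_rack * 10, "kW of power, excluding networking gear")    # 60
```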

Speaking of other infrastructure, cooling is the other side of this power problem. It’s just thermodynamics: power use generates heat, and that heat needs to be disposed of. With 10kW servers we are talking a lot of heat. Jason mentioned that at this year’s SC24 conference, the whole floor was showing off liquid cooling. Liquid cooling was also prominent at OCP.

At the OCP summit this year, Microsoft was talking about deploying 150kW racks in the near term and, down the line, 1MW racks. AI’s power needs are why organizations around the world are building out new data centers in out-of-the-way places that just so happen to have power and cooling nearby.

Organizations have an insatiable appetite for AI training data. And good (training) data is getting harder to find. Solidigm’s latest 122TB SSD may be coming along just as the data needs for AI are starting to take off.

SCI is pivoting

We could have gone on for hours on AI’s impact on IT infrastructure, but I had an announcement to make.

Silverton Consulting will be pivoting away from storage to a new opportunity based in space. I discuss this on SCI’s website, but the opportunities for LEO-and-beyond services are just exploding these days and we want to be a part of that.

What that means for GBoS is TBD. But we may be transitioning to something broader than just storage. But heck, we have been doing that for years.

Stay tuned, it’s going to be one hell of a ride!

Jason Collier, Principal Member Of Technical Staff at AMD, Data Center and Embedded Solutions Business Group

Jason Collier (@bocanuts) is a long time friend, technical guru and innovator who has over 25 years of experience as a serial entrepreneur in technology.

He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years.

He’s on LinkedIn. He’s currently working with AMD on new technology and has been a GreyBeards on Storage co-host since the beginning of 2022.

Howard Marks, Technologist Extraordinary and Plenipotentiary at VAST Data

Howard Marks is Technologist Extraordinary and Plenipotentiary at VAST Data, where he explains engineering to customers and customer requirements to engineers.

Before joining VAST, Howard was an independent consultant, analyst, and journalist, writing three books and over 200 articles on network and storage topics since 1987 and, most significantly, a founding co-host of the Greybeards on Storage podcast.

Keith Townsend, President of The CTO Advisor, a Futurum Group Company

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.