
AI Talk at KubeCon

Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, talks about generative AI and how it consumes data through large language models (LLMs) at KubeCon in Amsterdam.
May 24th, 2023 1:36pm

What did engineers at KubeCon say about how AI is coming up in their work? That’s a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam.

Dolezal said AI did come up in conversation.

“I think that when it’s come to this, typically with KubeCons, and other CNCF and LF events, there’s always been one or two topics that have bubbled to the top,” Dolezal said. “But I think that this time around, it feels like there’s like five to seven. And that’s really interesting to me. I’m hearing WebAssembly, edge computing. I’m hearing internal development platforms…”

At its core, AI surfaces a data issue for users, one that correlates with data-sharing concerns, Dolezal said in this latest episode of The New Stack Makers.

“It’s what you’re putting in to create the end result,” Dolezal said. “It’s what you are putting into the function, and then its output. People are worried about putting proprietary source code in and that getting leaked. It sounds a lot more like a data issue to me personally.”

Dolezal: “What a time to be a lawyer.”

Generative AI is eating up data using large language models (LLMs). These LLMs are built on foundation models trained on data using unsupervised learning techniques, writes Janakiram MSV for The New Stack. The “foundation models form the base for multiple variations of the model fine-tuned for a specific use case or scenario.” LLMs fare well at word completion. Have you used GPT-3.5 or Meta’s LLaMA? These LLMs take an input string and then generate another string that generally follows from the original.
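To make that input-string-to-output-string idea concrete, here is a minimal sketch of LLM-style text completion. It uses the Hugging Face transformers library and the small gpt2 model purely as stand-ins; the article does not prescribe any particular library or model.

```python
# A minimal sketch of LLM-style text completion, using the Hugging Face
# transformers library with the small "gpt2" model as a stand-in for a
# larger model such as GPT-3.5 or LLaMA (both mentioned above).
from transformers import pipeline

# Load a text-generation pipeline; model weights download on first run.
generator = pipeline("text-generation", model="gpt2")

# The model takes an input string and generates a continuation that
# tends to follow from the original text.
prompt = "Kubernetes is an open source system for"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

print(result[0]["generated_text"])
```

The same pattern, prompt in and continuation out, holds for larger hosted models such as GPT-3.5; only the plumbing changes.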

Dolezal said developers told him at KubeCon that people use LLMs for tasks such as writing README files.
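As a hypothetical illustration of that README use case, the sketch below prompts a hosted model through the openai Python library (the v0.x interface current around this article’s publication). The prompt, model choice, and API-key handling are all assumptions for illustration, not details from the article.

```python
# Hypothetical sketch: asking a hosted LLM to draft a README.
# Assumes the openai Python library's v0.x interface (circa mid-2023)
# and an API key in the environment; none of this is from the article.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Draft a README for a CLI tool that validates "
                   "Kubernetes manifests. Include an overview, "
                   "install steps, and usage examples.",
    }],
)

# The generated draft still needs human review before it lands in a repo.
print(response.choices[0].message.content)
```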

LLMs also serve as tools for thought experiments, offering different ways to break the initial friction on a project, Dolezal said.

“Is this something that’s good to use kind of as a muse?” Dolezal said.

Or is its output good enough to put into a project or to approve a pull request (PR)? he asked. How that may affect maintainers becomes a question, because we want to avoid putting more burden on maintainers who already have so much to do.

“It’s really about what you’re putting in to create the end result, what you’re putting into the function, and then its output,” Dolezal said.

As for the legal questions, one of the biggest is this: if you are working with proprietary code, how do you use an LLM at all?
