Janakiram MSV

Janakiram MSV is the Principal Analyst at Janakiram & Associates and an adjunct faculty member at the International Institute of Information Technology. He is also a Google Qualified Cloud Developer, an Amazon Certified Solution Architect, an Amazon Certified Developer, an Amazon Certified SysOps Administrator, and a Microsoft Certified Azure Professional. Janakiram is an Ambassador for the Cloud Native Computing Foundation and one of the first Certified Kubernetes Administrators and Certified Kubernetes Application Developers. His previous experience includes Microsoft, AWS, Gigaom Research, and Alcatel-Lucent.

How to Reduce the Hallucinations from Large Language Models
Google’s Generative AI Stack: An In-Depth Analysis
Prompt Engineering: Get LLMs to Generate the Content You Want
Beyond ChatGPT: Exploring the OpenAI Platform
Tutorial: Deploy Acorn Apps on an Amazon EKS Cluster
Acorn from the Eyes of a Docker Compose User
Acorn, a Lightweight, Portable PaaS for Kubernetes
Zero Trust Network Security with Identity-Aware Proxies
Ondat’s Unlimited Nodes for Kubernetes Stateful Workloads
Tutorial: Real-Time Object Detection with DeepStream on Nvidia Jetson AGX Orin
Tutorial: Edge AI with Triton Inference Server, Kubernetes, Jetson Mate
Jetson Mate: A Compact Carrier Board for Jetson Nano/NX System-on-Modules
Serve TensorFlow Models with KServe on Google Kubernetes Engine
KServe: A Robust and Extensible Cloud Native Model Server
Model Server: The Critical Building Block of MLOps
Tutorial: Speed ML Training with the Intel oneAPI AI Analytics Toolkit
Intel oneAPI’s Unified Programming Model for Python Machine Learning
VMware Tanzu Application Platform: A Portable PaaS for Kubernetes
Tutorial: A GitOps Deployment with Flux on DigitalOcean Kubernetes
5 AI Trends to Watch out for in 2022
5 Cloud Native Trends to Watch out for in 2022
Review: Build a ML Model with Amazon SageMaker Canvas
Tutorial: Deploying TensorFlow Models with Amazon SageMaker Serverless Inference
Explore Amazon SageMaker Serverless Inference for Deploying ML Models
Take Amazon SageMaker Studio Lab for a Spin
Amazon SageMaker Studio Lab from the Eyes of an MLOps Engineer
Deploy Nvidia Triton Inference Server with MinIO as Model Store
Install and Configure MinIO as a Model Registry on RKE2