Real-Time Inferencing

AI Models, Ignite 2022, Real-Time Inferencing, Scalability

Simplifying and accelerating AI model development workflows is hugely valuable, whether you have an army of data scientists or just a few developers. From adapting a model to fit your use case to optimizing it for production deployment, the process is complex and iterative. In this session, we'll show how easy it is to train and optimize an object detection model with NVIDIA TAO, a low-code AI toolkit, and deploy it for inference using the NVIDIA Triton Inference Server on Azure ML.
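Once a model is served by Triton, clients talk to it over the KServe v2 inference protocol (HTTP `POST /v2/models/<model>/infer`). As a rough illustration of that request shape, here is a minimal stdlib-only sketch that builds such a request body; the input name, datatype, and tensor shape below are hypothetical placeholders, not values from the session's model.

```python
import json

def build_infer_request(input_name, datatype, shape, data):
    """Build a KServe v2 inference request body of the kind Triton's
    HTTP endpoint accepts (POST /v2/models/<model>/infer)."""
    return {
        "inputs": [
            {
                "name": input_name,      # must match the model's input name
                "datatype": datatype,    # e.g. "FP32", "INT64"
                "shape": shape,          # batch dimension first
                "data": data,            # flattened row-major values
            }
        ]
    }

# Hypothetical input: one tiny 3x2x2 image, flattened to 12 floats.
body = build_infer_request("input_1", "FP32", [1, 3, 2, 2], [0.0] * 12)
payload = json.dumps(body)
```

In practice you would send `payload` to the Azure ML endpoint hosting Triton (with your endpoint's auth headers), or use NVIDIA's `tritonclient` package, which wraps this protocol for you.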

