Summary

  • Nvidia and Microsoft have partnered to improve the performance of AI processing on Nvidia RTX-based AI PCs, with the aim of transforming PC software into ‘breakthrough experiences’ such as digital humans and writing assistants.
  • The PCs will use TensorRT for RTX, which has been reimagined for RTX AI PCs and combines TensorRT's industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for fast AI deployment.
  • TensorRT for RTX is supported natively by Windows ML, a new inference stack that gives app developers broad hardware compatibility and state-of-the-art performance; it is built on ONNX Runtime and works with any ONNX model.
  • Nvidia NIM comes pre-packaged with everything needed to run, making it easier for app developers to integrate AI features.
  • Top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz are releasing updates to unlock RTX AI features and acceleration.

By Dean Takahashi
