We’re excited to announce the release of MinDXNN, a library for multi-layer perceptron (MLP) inference built natively for HLSL and DirectX® 12.

Recent research has demonstrated the power of integrating small MLPs into real-time rendering pipelines, enabling techniques like neural radiance caching, neural texture compression, and neural intersection functions. However, most implementations rely on compute APIs like HIP, which require an interop layer when working with DirectX-based rendering engines. For game developers and graphics programmers working in DX12 environments, a native solution eliminates this friction entirely.
MinDXNN takes full advantage of AMD Radeon™ GPU matrix cores through cooperative vector APIs, delivering performance that rivals dedicated ML frameworks. While it’s technically possible to implement MLPs without these APIs, doing so would leave significant performance on the table. Cooperative vector APIs provide a developer-friendly path to hardware acceleration — simpler than writing raw WMMA instructions (available in HIP) while achieving similar performance gains.
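To make the workload concrete, here is a scalar C++ reference sketch of the computation a cooperative-vector MLP evaluation performs for each shader invocation: one fully connected layer, y = activation(W·x + b). The function name and signature are illustrative only, not MinDXNN's API; on the GPU, cooperative vector APIs map this per-thread matrix-vector product onto the matrix cores instead of scalar loops.

```cpp
#include <array>
#include <algorithm>
#include <cassert>

// Illustrative scalar reference (not MinDXNN's API): one fully
// connected MLP layer, y = activation(W * x + b), as evaluated
// conceptually per shader thread. Cooperative vector APIs express
// the same matrix-vector product in HLSL and let the driver map
// it onto the GPU's matrix cores.
template <int In, int Out>
std::array<float, Out> mlp_layer(const std::array<float, In>& x,
                                 const float (&W)[Out][In],
                                 const std::array<float, Out>& b,
                                 bool apply_relu) {
    std::array<float, Out> y{};
    for (int o = 0; o < Out; ++o) {
        float acc = b[o];                       // bias
        for (int i = 0; i < In; ++i)
            acc += W[o][i] * x[i];              // dot product row o
        y[o] = apply_relu ? std::max(acc, 0.0f) : acc;
    }
    return y;
}
```

A full MLP chains a few such layers (typically with FP16 weights to feed the matrix cores); the inner loops are exactly what the hardware-accelerated path replaces.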
This initial release includes:
Note: Cooperative vector support currently requires using AMD developer drivers and is supported on AMD Radeon RX 9000 Series graphics cards.
Training support is coming in our next release, enabling end-to-end workflows entirely within DirectX 12.
Visit GitHub to download MinDXNN and explore the samples. We'd love to see what you build with MinDXNN and matrix cores on supported Radeon GPUs; discuss and share your projects on the AMD Developer Community.
DirectX, Microsoft, and Windows are registered trademarks of Microsoft Corporation in the US and/or other countries.