
How do I become a graphics programmer? - A small guide from the AMD Game Engineering team


Introduction

Every now and then we get asked what a beginner-friendly website is for learning graphics programming. We’d love to recommend GPUOpen of course, but the truth is, the main target audience for GPUOpen is intermediate or advanced graphics programmers. For someone who just started to dive into the world of graphics, there are surely other websites more suitable for them.

As with so many things, there is no one right way to get into graphics. It mostly depends on potential pre-existing knowledge, how you like to learn, personal preference, available hardware, etc. Hence, this guide is more a collection of websites that we think are useful for beginners, and a small discussion weighing the pros and cons of those websites and what they teach.

Which programming language?

When we talk to students about our job, one of the questions we almost always get is: which programming language do you use? The short answer is: C++.

The long answer is … well, if you do graphics, you write code for the host CPU, usually in C++, but you also write code for the GPU in what the industry has come to call shader code, typically in a high-level shading language such as HLSL or GLSL. You also need a way to tie them both together, asking the CPU to ask the GPU to do something useful using shaders and the other data and metadata needed to make that happen. That’s where the graphics application programming interfaces (APIs) come in, which you drive usually as host CPU code written in C++.
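To make that split a little more concrete, here is a minimal sketch of the idea, using legacy OpenGL® and GLSL purely as an illustration (the loader and the helper function name are our assumptions, not a recipe): the GLSL string is GPU code, and the surrounding C++ is host code using the API to hand it to the driver.

```cpp
// Minimal sketch, not a complete program: host-side C++ handing GPU-side GLSL
// shader code to the driver through a graphics API (legacy OpenGL® here,
// purely as an illustration). Assumes a valid OpenGL context already exists
// and that a function loader such as GLAD provides the modern entry points.
#include <glad/glad.h>

// GPU code: a trivial GLSL fragment shader, stored as a C++ string on the host.
static const char* kFragmentSource = R"(
    #version 330 core
    out vec4 color;
    void main() { color = vec4(1.0, 0.5, 0.0, 1.0); }  // solid orange
)";

// Host code: API calls asking the driver to compile the shader for the GPU.
GLuint CompileFragmentShader()
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &kFragmentSource, nullptr);
    glCompileShader(shader);
    return shader;  // error checking omitted for brevity
}
```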

So the more controversial question is: which graphics API should you start with? And we don’t have an answer for you, just a bunch of different opinions and considerations. We’ll narrow in on the common graphics APIs used to write most PC games or 3D applications at this point, ignoring things like games consoles with their (mostly) proprietary APIs.

There are different graphics APIs out there, but in 2023 it essentially boils down to:

  • OpenGL® / Microsoft® DirectX®11 / WebGL™

  • Vulkan® / Microsoft® DirectX®12 / Metal / WebGPU

OpenGL®, DirectX®11 and WebGL™ are legacy APIs. They’re based on a historical approach to programming the GPU. Nonetheless, they are still widely used across academia and industry since they’re simpler from the programmer’s perspective and thus easier to learn. Their simplicity comes at a cost, however: the underlying implementation of those APIs in driver and runtime code grew more complex over time, causing host-side bottlenecks and leaving GPU vendors with very complicated drivers to write.

Between 2013 and 2016 things changed with AMD’s introduction of the Mantle graphics API on PC. Designed only for AMD GPUs in concert with EA DICE, Mantle shed the baggage of the legacy APIs to give the programmer much lower level GPU access and a thinner abstraction over how it works. The results — higher performance due to more efficient use of the GPU — opened the floodgates.

Metal in 2014 for Apple platforms, followed by DirectX®12 in 2015 and Vulkan® in 2016, all took a similar lower-level and more explicit approach to programming the GPU. All of those APIs put a higher burden on the programmer to be clear and explicit about what they’d like the GPU to do, giving the programmer more control.

This effectively moves more of what was traditionally the GPU driver’s job into the application. The tradeoff is more control for the programmer, and thus the opportunity for more performance and efficiency in what happens on the GPU.

Knowing how the GPU programming paradigm shifted, the decision about what API to pick might seem easy. If you want to learn graphics programming on Windows®, which is the primary target for PC games, then the instinct will be to pick Vulkan® or DirectX®12, as this is the new stuff, right?

However, not only are the legacy APIs still widely used, they are much simpler and easier to learn. So it might make sense to first learn the legacy APIs, and then later on move to the modern graphics APIs. Instead of immediately climbing Mount Everest, you first train at your local hill.

A simple comparison is how many lines of code are needed to draw a triangle on the screen, which is the “Hello, World!” equivalent in graphics programming. While OpenGL® or DirectX®11 can get you to the first triangle in a few dozen lines of code, Vulkan® and DirectX®12 require at least an order of magnitude more.
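To give a flavour of just how small the legacy-API version can be, here is a rough sketch of the classic immediate-mode OpenGL® triangle (the “ol’ glVertex” version quoted further down), using freeglut for the window. Treat it as an illustration of the line count rather than a recommended way to write graphics in 2023.

```cpp
// A sketch of the classic immediate-mode OpenGL® "hello triangle", using
// freeglut for windowing. Shown only to illustrate how little code the legacy
// path needs; immediate mode is long deprecated in modern OpenGL®.
#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);          // the "ol' glVertex" path
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("Hello, triangle!");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```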

On the other hand, you can also say that Vulkan® and DirectX®12 simply require you to hold the GPU’s hand more. You write a lot more code, but each line is clearer about what eventually happens on the GPU. With OpenGL® and DirectX®11, a lot of things are hidden and taken care of for you, so the API can easily feel like a magic box, and the few things you do have to write might not make much sense in terms of understanding how the GPU works.

Vulkan® and DirectX®12 force you to gain that lower-level knowledge and understanding early on, while OpenGL® and DirectX®11 operate at a higher level and let you focus on the more immediately productive aspects of graphics programming.
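To get a feel for what that explicitness looks like in practice, here is a sketch of just the very first step of a Vulkan® program: creating the instance. At this point there is still no window, no device, and certainly no triangle; every one of those is a further explicit step the application has to spell out itself.

```cpp
// A sketch of the very first step of a Vulkan® program: creating an instance.
// Everything else (window surface, physical and logical device, swapchain,
// pipeline, command buffers) still has to be set up explicitly afterwards.
#include <vulkan/vulkan.h>
#include <cstdio>

int main()
{
    VkApplicationInfo appInfo{};
    appInfo.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "Hello, triangle (eventually)";
    appInfo.apiVersion       = VK_API_VERSION_1_3;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS)
    {
        std::printf("Failed to create a Vulkan instance\n");
        return 1;
    }

    // ...many more lines of setup before the first triangle appears...

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```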

Let’s see what the AMD Game Engineering team have to say about whether one should start with a legacy API or with a modern API:

“Starting by learning graphics programming with DX12® or Vulkan® as a teenager would be rough.”

“We’re a long way away from the ol’ glVertex hello triangle.”

“Shame that OpenGL® is no longer super relevant tho.”

“I think it (OpenGL®) is still good enough to learn the basics imho.”

“I started out with DX10® which I think is not bad, but a lot of kids struggle with it if they are not proficient in C++.” – DX10® is the predecessor of DX11® and outdated – don’t learn DX10®!

“But there are many things in play. Learning the theory of rasterisation and ray tracing. Learning the traditional pipeline. Learning how to feed a GPU work etc… I don’t know in what order they should be learnt but I’m usually all-in C++ (there is no need to start in another language) so I should probably say just start with Vulkan®… but I’m not really sure to be honest.”

And of course, it’s also possible to learn graphics programming without OpenGL® and DirectX®11 or Vulkan® and DirectX®12. We mentioned WebGL™ and WebGPU in the broad classes of graphics APIs that are available, both of which operate in the web browser domain, which is a whole software ecosystem (and arguably a full operating system) in its own right.

You can do real-time graphics in the web browser via WebGL™ or WebGPU too! One advantage is that you don’t need C++ for the web (though you can technically still use it to drive graphics in the browser with something like WebAssembly). In the browser, JavaScript, the dominant programming language of the web, is perfectly fine!

Or, if you want to focus on higher-level stuff first and not bother with the lower-level details of a graphics API and hand-authoring shaders, maybe even a game engine is the right choice for you. Unreal Engine is the biggest for PC games, with Unity® the most common alternative, along with a boatload of smaller, simpler, open-source engines like Godot to play with too.

Here are some opinions from our AMD Game Engineering team:

“IMHO, getting newcomers to start out with DX12® or Vulkan® is rough, but maybe doable. I would say that starting out with WebGL™ just to understand the graphics pipeline is probably the easiest (and you focus on the right stuff).”

“WebGPU might give you concepts closer to modern graphics APIs without the extra low-level details.”

“WebGPU will probably get a lot better but right now I think you can expect a lot of kinks once you start writing nontrivial code. Still, it was very cool and it worked surprisingly well. I sincerely hope it gets better, but what exists right now is super promising. For prototyping stuff this is crazy good. And yeah, it would probably make a good API for learning / teaching too.”

“I know a few schools use Unity® for their first rendering courses which probably isn’t half bad, but there is a risk those students will never understand why they need to learn the lower-level stuff…”

“Starting with an explicit API is fine if you have a good handle on the modern graphics pipeline and the high-level of how GPU hardware implements it, or you want to pick up that GPU hardware understanding along the way. Otherwise start with something simpler to get going and then look at an explicit API later if you need or want to.”

“Personally, I feel that maybe acclimating to shaders through a popular game engine before moving to writing custom host code might be a good way of getting into it. It’s very hard to learn if you don’t have a clear view of what you want to achieve in a graphics API, otherwise you might just quit when you get to the triangle.”

Useful websites

So what do we recommend in terms of learning resources? These are some of the websites suggested by members of the AMD Game Engineering team:

Summary

So what now? If you haven’t made a decision by now about which programming language and graphics API ecosystem to start with, don’t worry. You can try out different approaches and then stick with the one that works best for you.

We’d also be very happy to hear your thoughts! How did you learn graphics programming? What can you recommend to fellow graphics programmers just getting started? We would love to add your contributions to this blog post to make it an even more useful starting point.

Please feel free to share your thoughts with us via Twitter/X either on this X thread, or you can DM us at @GPUOpen too. You can also reach us on Mastodon via DM or this Mastodon thread. If we use your contribution, we’ll reach out to confirm via DM that it’s okay to use the relevant social media handle in this blog.

Looking forward to hearing from you!

Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied. GD-5

Lou Kramer

Lou is part of AMD's European Game Engineering Team. She is focused on helping game developers get the most out of Radeon™ GPUs using Vulkan® and DirectX®12 technologies.

Rys Sommefeldt

Rys Sommefeldt is a GPU hardware architect at AMD.
