I see a lot of shader code on the internet where PI is simply defined as an approximate decimal literal near the top of the file. In general I try to let the computer do the math for me instead of approximating values by hand (e.g. writing (1./3.) instead of 0.333).

Pi seems like one of those things where it would be easier to just use an intrinsic function like radians().

I've heard that creating and destroying variables can be expensive in the shader world, and I'm curious what the tradeoffs might be between using radians(180.0) inline in code and putting

#define PI 3.1415926536

at the top and then using PI inline.
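
For concreteness, here is roughly what the two options look like side by side (a made-up fragment shader using the older-style gl_FragColor output; the values are just illustrative):

#define PI 3.1415926536

void main() {
    float t = 0.5;                  // placeholder input
    float a = PI * t;               // the define expands to the literal
    float b = radians(180.0) * t;   // may or may not be folded at compile time
    gl_FragColor = vec4(a, b, 0.0, 1.0);
}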

Thanks!


1 Answer

A define is plain textual substitution: as far as the compiler is concerned, it results in exactly the same thing as typing out the literal value in each place it is used. Very cheap, and no runtime cost.
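
To make that concrete, the preprocessor rewrites the source text before the compiler proper ever sees it, so a line like the one below compiles exactly as if the literal had been typed out:

#define PI 3.1415926536
float circumference = 2.0 * PI * r;
// after preprocessing, the compiler sees:
// float circumference = 2.0 * 3.1415926536 * r;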

When you use the radians() function, the best-case scenario is that the shader compiler recognises the possible optimisation and replaces the call with the actual value at compile time (which gives you essentially the same result as the define). The worst-case scenario is that the value gets calculated at every occurrence at runtime.
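
If you want the self-documenting spelling without betting on the optimiser, one middle ground worth knowing about (assuming a GLSL version where built-in functions with constant arguments count as constant expressions, as the desktop and ES specs state for non-texture built-ins) is a const initialised from radians():

// radians(180.0) is a constant expression here, so the compiler can
// evaluate it at compile time and PI costs nothing at runtime.
// (Assumption: your target GLSL version allows built-ins in constant
// expressions; very old or buggy compilers may not fold it.)
const float PI = radians(180.0);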

In the case of GLSL it is tricky to make assumptions about what the compiler will do, since you typically don't know which compiler you are targeting when you write the code. It is also hard to predict what runtime cost radians() will have without actually benchmarking on each GPU.

This is probably why the define, the safest option, is chosen most of the time: it guarantees the optimal result on every compiler.

Teimpz