More and more, I've noticed that people prefer decimal over double for various reasons. A decimal has 28-29 significant digits of precision, where a double has "only" 15-16 significant digits. I'm not saying don't use decimal; of course, use it when it's needed. For the rest of this article, I will be referring to those who use decimal when a double would work fine.


There are a few downsides to using decimal over double. One major downside is performance: decimal can be roughly 10x slower than double on some machines. Just think about that. If you're using decimals in a tight loop, they can cause a huge bottleneck in your code. A simple change from decimal to double may yield a nice performance boost.
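To see the gap on your own machine, a rough micro-benchmark along these lines can help. This is just a sketch using Stopwatch; the iteration count is arbitrary, and the exact ratio you see will vary by machine and runtime:

```csharp
using System;
using System.Diagnostics;

class PerfSketch
{
    static void Main()
    {
        const int iterations = 10_000_000;

        // Time the same accumulation loop with double...
        var sw = Stopwatch.StartNew();
        double d = 0;
        for (int i = 0; i < iterations; i++)
            d += 1.1;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

        // ...and again with decimal.
        sw = Stopwatch.StartNew();
        decimal m = 0;
        for (int i = 0; i < iterations; i++)
            m += 1.1m;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}
```

Doubles map directly to hardware floating-point instructions, while every decimal operation is implemented in software, which is where the difference comes from.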

On the other hand, double is bad for financial applications because doubles are represented internally in base 2 (binary), whereas decimal is represented internally in base 10 (decimal).
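The classic illustration: 0.1 has no exact base-2 representation, so even a tiny double sum drifts, while the same sum in decimal stays exact:

```csharp
using System;

class BaseTwoVsBaseTen
{
    static void Main()
    {
        // 0.1 and 0.2 cannot be represented exactly in base 2,
        // so the double sum is off by a tiny amount.
        Console.WriteLine( 0.1 + 0.2 == 0.3 );    // False

        // decimal stores base-10 digits, so the same sum is exact.
        Console.WriteLine( 0.1m + 0.2m == 0.3m ); // True
    }
}
```

That tiny drift is exactly what you don't want when adding up invoice lines, hence the usual advice to keep money in decimal.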

But double also has some additional members that are not available when using decimal. For instance, there are

**double.NaN**, **double.NegativeInfinity**, **double.PositiveInfinity**, and a few others.
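These members exist because double arithmetic never throws on division by zero; it produces one of these special values instead, whereas the same operation on decimal throws. A quick sketch:

```csharp
using System;

class SpecialValues
{
    static void Main()
    {
        double positive  = 1.0 / 0.0;   // double.PositiveInfinity
        double negative  = -1.0 / 0.0;  // double.NegativeInfinity
        double undefined = 0.0 / 0.0;   // double.NaN

        Console.WriteLine( positive == double.PositiveInfinity ); // True
        Console.WriteLine( negative == double.NegativeInfinity ); // True
        Console.WriteLine( double.IsNaN( undefined ) );           // True

        // decimal has no such special values; dividing by zero throws.
        decimal zero = 0m; // variable, so the compiler can't reject it as a constant expression
        try
        {
            decimal m = 1m / zero;
        }
        catch ( DivideByZeroException )
        {
            Console.WriteLine( "decimal division by zero throws" );
        }
    }
}
```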

NaN stands for "not a number". Try a divide-by-zero scenario in JavaScript and you'll see the wonderful NaN.

One caveat is that the expression (double.NaN == double.NaN) will result in false. But object.Equals( numerator / denominator, double.NaN ) will result in true.
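A minimal sketch of the three comparisons side by side:

```csharp
using System;

class NaNEquality
{
    static void Main()
    {
        double nan = 0.0 / 0.0;

        // IEEE 754: NaN compares unequal to everything, including itself.
        Console.WriteLine( nan == double.NaN );                // False

        // object.Equals boxes both values and calls Double.Equals,
        // which deliberately treats two NaNs as equal.
        Console.WriteLine( object.Equals( nan, double.NaN ) ); // True

        // The intended way to test for NaN:
        Console.WriteLine( double.IsNaN( nan ) );              // True
    }
}
```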

But when would you use these methods? A quick example would be:

```csharp
void WriteQuotient( double numerator, double denominator )
{
    // if( numerator / denominator == double.NaN ) <= will never work
    // if( object.Equals( numerator / denominator, double.NaN ) ) <= this WILL work, because of the inner workings of object.Equals
    if( !double.IsNaN( numerator / denominator ) )
        Console.WriteLine( numerator / denominator );
    else
    { ..... }
}
```

Of course, you can do this a dozen other ways. This was just meant as a means to show one of many deeply hidden gems in the .NET Framework.

Yes, this is a thing commonly seen; it's like nobody cares to study these cases.

Nice entry, Les.