

Floating points

Floating-point numbers are numbers that have fractional parts (usually expressed with a decimal point). You might wonder why there isn't just a single data type for dealing with numbers (fractions or no fractions), but that's because it's a lot faster for the computer to deal with whole numbers than with numbers containing fractions. Therefore it makes sense to distinguish between them: if you know that you will only be dealing with whole numbers, pick an integer data type. Otherwise, use one of the floating point data types described in this article.

Using a floating point value is just as easy as using an integer, even though there are quite a few more concerns with floating point values, which we'll discuss later. For now, let's see what it looks like to declare one of the most commonly used floating point data types: the double.

double number;

Just like an integer, you can of course assign a value to it at the same time as declaring it:

double number = 42.0;

The same goes for the float and decimal types, which we'll discuss in just a second, but here, the notation is slightly different:

double doubleVal = 42.0;
float floatVal = 42.0f;
decimal decimalVal = 42.0m;

Notice the "f" and "m" after the numbers - it tells the compiler that we are assigning a float and a decimal value. Without it, C# will interpret the numbers as double, which can't be automatically converted to either a float or decimal.

float, double or decimal?

Dealing with floating point values in programming has always caused a lot of questions and concerns. For instance, C# has at least three data types for dealing with non-whole/non-integer numbers:

  • float (an alias for System.Single)
  • double (an alias for System.Double)
  • decimal (an alias for System.Decimal)

The underlying difference might be a bit difficult to understand, unless you have a lot of knowledge about how a computer works internally, so let's stick to the more practical stuff here.

In general, the difference between the float, double and decimal data types lies in their precision and therefore also in how much memory is used to hold them. The float is the least expensive one - it can represent a number with up to 7 significant digits. The double is more precise, with up to 15-16 digits, while the decimal is the most precise, with a whopping maximum of 28-29 digits.

You might wonder what you need all that precision for, but the answer is "math stuff". The classic example for understanding the difference is dividing 10 by 3. Most of us will do that in our heads and say that the result is 3.33, but many people also know that this isn't entirely accurate. The real answer is 3.33 followed by more 3's - how many, when doing this calculation in C#, is determined by the data type. Check out this example:

float a = 10f / 3;
Console.WriteLine("a: " + a);
double b = 10d / 3;
Console.WriteLine("b: " + b);
decimal c = 10m / 3;
Console.WriteLine("c: " + c);

I do the exact same calculation, but with three different data types. The result will look like this (the exact number of digits printed for the float and double may vary slightly between .NET versions):

a: 3.333333
b: 3.33333333333333
c: 3.3333333333333333333333333333

The difference is quite clear, but how much precision do you really need for most tasks?

How to choose

First of all, you need to consider how many digits you need to store. A float can only hold about 7 significant digits, so if you need a bigger or more precise number than that, you may want to go with a double or a decimal instead.
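As a quick sketch of what that limit means in practice (the exact output formatting may vary between .NET versions), notice how a float silently loses digits that a double keeps:

float floatPopulation = 123456789f;    // 9 significant digits - more than a float can hold
Console.WriteLine(floatPopulation);    // prints an approximation, e.g. 1.234568E+08
double doublePopulation = 123456789d;  // comfortably within a double's 15-16 digits
Console.WriteLine(doublePopulation);   // prints 123456789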

Second of all, both the float and the double represent values as an approximation of the actual value - in other words, they might not be accurate down to the very last digit. That also means that as you do more and more calculations with these variables, they can become less and less precise, to the point where two values which should be equal suddenly aren't.
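The classic illustration of this is adding 0.1 and 0.2 - here's a minimal sketch (the printed value may differ slightly between runtimes):

double sum = 0.1 + 0.2;
Console.WriteLine(sum == 0.3);          // False - the result is only approximately 0.3
Console.WriteLine(sum);                 // e.g. 0.30000000000000004
decimal decimalSum = 0.1m + 0.2m;
Console.WriteLine(decimalSum == 0.3m);  // True - decimal represents these values exactly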

So, for situations where precision is the primary concern, you should go with the decimal type. A great example is representing financial numbers (money) - you don't want to add 10 amounts in your bookkeeping, just to find out that the result is not what you expected. On the other hand, if performance is more important, you should go with a float (for small numbers) or a double (for larger numbers). A decimal is, thanks to its extra precision, much slower than a float - some tests show that it's up to 20 times slower!
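To make the bookkeeping example concrete, here's a small sketch (the amounts are made up for illustration) that adds the same amount ten times with a double and with a decimal:

double doubleTotal = 0;
decimal decimalTotal = 0;
for (int i = 0; i < 10; i++)
{
    doubleTotal += 0.1;    // each addition accumulates a tiny rounding error
    decimalTotal += 0.1m;  // decimal stores 0.1 exactly
}
Console.WriteLine(doubleTotal);   // e.g. 0.9999999999999999 - not quite 1
Console.WriteLine(decimalTotal);  // 1.0 - exactly what you'd expect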

Summary

When dealing with floating point values, you should use a float or a double data type when precision is less important than performance. On the other hand, if you want the maximum amount of precision and you are willing to accept a lower level of performance, you should go with the decimal data type - especially when dealing with financial numbers.

If you want to know more about the underlying differences between these data types, at a much more detailed level, you should have a look at this classic paper: What Every Computer Scientist Should Know About Floating-Point Arithmetic

