Software Engineering

Why 0.1 + 0.2 ≠ 0.3 and What It Means for Your Money

Ilya Nixan
March 14, 2026

March 14th is an unofficial holiday in the programming world: Pi Day (3.14). It's the perfect occasion to talk about how computers handle fractional numbers, why arithmetic results sometimes look unexpected, and what consequences this can have in real projects — especially when money is involved.

The experiment that surprises every beginner

Open an interactive console in any programming language and evaluate the simplest expression:

0.1 + 0.2

Dart, JavaScript, Python, Go, C++, Ruby — the result is the same: 0.30000000000000004. Not 0.3, as common sense would suggest, but a number with a long tail of zeros and a four at the end.

This is not a bug in any particular language, nor an error in your code. It is a fundamental property of how modern hardware represents fractional numbers in memory.
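A quick check in Python (any language that uses IEEE 754 doubles behaves the same way):

```python
# The classic floating-point surprise: neither 0.1 nor 0.2 has an exact
# binary representation, so their sum is not exactly 0.3.
result = 0.1 + 0.2

print(result)         # 0.30000000000000004
print(result == 0.3)  # False

# Comparisons should therefore use a tolerance, not strict equality:
print(abs(result - 0.3) < 1e-9)  # True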

How computers store fractional numbers

All information in a computer is stored in binary — sequences of zeros and ones. For integers, this works flawlessly: 5 in binary is 101, 10 is 1010. Every integer has an exact binary representation.

Fractional numbers are a fundamentally different story. To convert a decimal fraction to binary, we repeatedly multiply by 2 and take the integer part. Let's see what happens with 0.1:

0.1 × 2 = 0.2 → 0
0.2 × 2 = 0.4 → 0  ← cycle starts
0.4 × 2 = 0.8 → 0
0.8 × 2 = 1.6 → 1
0.6 × 2 = 1.2 → 1  ← cycle ends
0.2 × 2 = 0.4 → 0  ← cycle repeats
...

The result is 0.0(0011) — an infinite repeating binary fraction. This is analogous to how 1/3 becomes the infinite 0.3333... in decimal — it is impossible to write it exactly in a finite number of digits.

The number 0.2 faces the same problem. Both numbers involved in our addition already contain a rounding error the moment they are stored in memory.
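You can see the stored error directly: Python's `decimal.Decimal`, when constructed from a float, reveals the exact decimal value of the binary number that actually sits in memory.

```python
from decimal import Decimal

# Passing a float to Decimal exposes the exact value the double holds.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125
```

Both stored values are slightly above their decimal targets, which is why adding them overshoots 0.3.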

The IEEE 754 standard: a trade-off between precision and performance

The vast majority of modern processors use the IEEE 754 standard for floating-point arithmetic. The double type (64-bit double precision), which is the default in most programming languages, is structured as follows:

  • 1 bit for the sign (positive or negative)
  • 11 bits for the exponent (the order of magnitude)
  • 52 bits for the mantissa (the significant digits)

The 52 mantissa bits (plus one implicit leading bit) translate to roughly 15–17 significant decimal digits. When the binary representation of a fraction is infinite, it gets truncated at the last mantissa bit. This is the exact moment the representation error is introduced.

When you add two numbers that each contain a rounding error, those errors accumulate. This is how 0.1 + 0.2 becomes 0.30000000000000004.
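The three fields can be pulled apart with the standard struct module. This sketch decodes the 64-bit pattern of 0.1 (the helper name is illustrative):

```python
import struct

def decode_double(x: float) -> tuple[int, int, int]:
    """Split a double into its IEEE 754 sign, exponent, and mantissa fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)  # 52 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = decode_double(0.1)
print(sign)                # 0 (positive)
print(exponent - 1023)     # -4, i.e. 0.1 = 1.6 × 2⁻⁴
print(f"{mantissa:052b}")  # the repeating 1001... pattern, cut off at bit 52
```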

When the error is acceptable

To be fair, for most tasks this error is completely negligible. We are talking about an error on the order of 10⁻¹⁶ — less than a quadrillionth.

double works perfectly well in the following domains:

  • Computer graphics and rendering — a difference in the sixteenth decimal place is invisible to the human eye
  • Physics simulations in games — objects behave realistically, and the error has no perceptible effect
  • Scientific computing — results are typically rounded to the required precision anyway
  • Statistics and machine learning — models operate on approximations by nature

When the error becomes critical: financial calculations

The situation changes drastically when money is involved. In financial systems, every cent must be accounted for exactly. An error in the sixteenth decimal place may seem insignificant for a single transaction, but when processing thousands and millions of operations, these microscopic rounding errors accumulate and turn into real discrepancies in balances.

Consider a concrete example. Suppose a system processes 100,000 transactions per day, and each one introduces an error of 0.000000000000004. Per day, that's 0.0000000004 — a negligible amount. But multiply that by a year, add more complex operations involving multiplication and division where errors grow significantly faster, and you end up with sums that neither an accountant nor an auditor can explain.
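The accumulation effect is easy to reproduce. This sketch adds $0.10 a million times with doubles and compares the result with integer cents (the transaction count is illustrative):

```python
# Summing 0.1 a million times with doubles drifts away from the exact answer.
total_float = 0.0
for _ in range(1_000_000):
    total_float += 0.1

total_cents = 10 * 1_000_000  # the same sum in integer cents: exact

print(total_float)               # close to 100000.0, but not equal to it
print(total_cents / 100)         # 100000.0
print(total_float == 100_000.0)  # False
```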

Reliable approaches to handling monetary amounts

Approach one: minor units as integers

The idea is straightforward: instead of storing an amount in dollars or euros as a fractional number, it is stored in the smallest currency units (cents, pennies, kopecks) as an integer.

1999 cents is always exactly $19.99. No rounding, no surprises, no 0.000000004 trailing behind. Integer arithmetic in computers is absolutely precise.

This approach is widely used in payment systems. Stripe, for instance, handles all amounts exclusively in the smallest currency units.

It is also worth mentioning an alternative for APIs: transmitting monetary amounts as strings. In this case, the interface remains human-readable ("19.99" is clearer than 1999), while the responsibility for parsing and choosing the appropriate numeric type falls on the client side.
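A minimal sketch of the minor-units approach in Python (the helper names `to_cents` and `format_cents` are illustrative, not a standard API):

```python
def to_cents(amount: str) -> int:
    """Parse a decimal string like '19.99' into integer cents, exactly."""
    dollars, _, cents = amount.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def format_cents(cents: int) -> str:
    """Render integer cents back as a human-readable amount string."""
    return f"{cents // 100}.{cents % 100:02d}"

price = to_cents("19.99")  # 1999
tax = to_cents("1.60")     # 160
total = price + tax        # integer addition: always exact

print(total)                # 2159
print(format_cents(total))  # 21.59
```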

Approach two: arbitrary-precision libraries (BigDecimal)

The second reliable option is to use specialized data types that store decimal numbers without converting them to binary:

  • Dart — the decimal package
  • Java — java.math.BigDecimal
  • Python — the decimal module from the standard library
  • JavaScript — the Decimal proposal is under consideration; in the meantime, libraries like decimal.js
  • Go — the shopspring/decimal package

These types represent numbers in decimal form, completely avoiding binary approximations. The expression 0.1 + 0.2 == 0.3 when using BigDecimal returns true — guaranteed.

The trade-off for precision is performance: operations with BigDecimal are significantly slower than with double. However, in the context of financial calculations where correctness matters more than speed, this is a perfectly acceptable compromise.
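In Python this is the standard-library decimal module. Note that amounts should be constructed from strings, not floats, or the binary error sneaks back in:

```python
from decimal import Decimal

# Constructed from strings, Decimal values are exact decimal numbers.
a = Decimal("0.1")
b = Decimal("0.2")

print(a + b)                    # 0.3
print(a + b == Decimal("0.3"))  # True

# Constructing from a float imports the double's rounding error instead:
print(Decimal(0.1) + Decimal(0.2) == Decimal("0.3"))  # False
```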

Which approach to choose

Both approaches solve the problem, but they suit different situations:

Criterion                 | Minor units (int)              | BigDecimal
--------------------------|--------------------------------|---------------------------
Performance               | Maximum                        | Lower
Ease of implementation    | High                           | Medium (depends on language)
Fractional unit support   | No (integers only)             | Yes
Multi-currency support    | Requires knowing decimal places| Works out of the box

For most e-commerce and payment systems, the minor units approach is optimal. BigDecimal is preferable in scenarios that require intermediate fractional calculations — for example, computing interest, taxes, or currency conversions.

Conclusion

If your project deals with monetary amounts, stick to one of two rules:

  • Store and transmit amounts in minor units as integers (int)
  • Use arbitrary-precision types (BigDecimal and its equivalents)

Using double for financial calculations is technical debt that may not manifest for months, but will eventually lead to discrepancies whose root cause will be extremely difficult to diagnose.

Choose tools that match the task, and pay attention to how your programming language represents numbers in memory. This is one of those things worth knowing about before it becomes a production incident.

Happy Pi Day! 🥧