Welcome to exploring one of the most surprising behaviors in programming. When we type 0.1 + 0.2 in Python, we expect to get 0.3. However, Python returns 0.30000000000000004. This isn't a bug; it's a consequence of how computers represent decimal numbers in binary.
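You can try this yourself in any Python 3 interpreter:

```python
# The famous floating-point surprise, reproduced in plain Python.
result = 0.1 + 0.2
print(result)         # 0.30000000000000004
print(result == 0.3)  # False: the sum is not exactly the stored 0.3
```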
The root cause lies in how number systems work. Computers use binary (base 2), while we use decimal (base 10). Just as 1/3 cannot be written exactly in decimal and becomes 0.333... repeating forever, 0.1 cannot be written exactly in binary. Converting 0.1 to binary gives 0.000110011001100..., where the block "0011" repeats infinitely.
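One way to see that repeating pattern for yourself is a small sketch (not part of the original explanation) using Python's fractions module, which works with exactly 1/10 so no rounding sneaks in. Each doubling shifts the binary point one place to the right, and the integer part that falls out is the next binary digit:

```python
from fractions import Fraction

# Generate the binary expansion of exactly 1/10 by repeated doubling.
x = Fraction(1, 10)  # exact 0.1, no floating-point rounding involved
bits = []
for _ in range(20):
    x *= 2
    bit = int(x)     # the digit that crossed the binary point: 0 or 1
    bits.append(str(bit))
    x -= bit

print("0." + "".join(bits))  # 0.00011001100110011001 -> "0011" repeats forever
```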
Computers use the IEEE 754 standard for floating-point numbers, which provides only finite precision. Since an infinite binary expansion must be rounded to fit in memory, each such decimal number becomes an approximation. 0.1 is stored as approximately 0.1000000000000000055511151231257827, and 0.2 as approximately 0.2000000000000000111022302462515654. When we add these two approximations, the result differs from the (also approximate) stored value of 0.3.
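Python's standard decimal module can expose these stored values exactly: constructing a Decimal directly from a float reveals the full binary-derived value, with no rounding in the display.

```python
from decimal import Decimal

# Decimal(float) shows the exact IEEE 754 double behind each literal.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
```

Comparing the last two lines shows exactly why the equality check fails: the sum of the two approximations rounds to a slightly different double than the approximation of 0.3 itself.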