Your language isn’t broken; it’s doing floating-point math. Computers can natively store only integers, so they need some way of representing decimal numbers. This representation comes with some degree of inaccuracy. That’s why, more often than not, 0.1 + 0.2 != 0.3.
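
For example, in Python (any language using IEEE 754 doubles behaves the same way), the error is easy to observe, and the `decimal` module can reveal the exact binary values being stored:

```python
from decimal import Decimal

# 0.1 and 0.2 are stored as the nearest representable binary fractions,
# and their sum is not the nearest representable value to 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal(float) shows the exact value actually stored:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875
```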

  • jdnewmil@lemmy.ca · 26 days ago

    Why downvote? This is an often-overlooked trap for programmers… especially those of the “data science” variety, but certainly not restricted to that subset.
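
    The standard way out of the trap is to compare with a tolerance instead of exact equality; a minimal Python sketch using the stdlib’s `math.isclose`:

    ```python
    import math

    x = 0.1 + 0.2

    print(x == 0.3)              # False: exact equality trips over representation error
    print(math.isclose(x, 0.3))  # True: compares within a relative tolerance (default 1e-09)
    ```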