Computers need to store real numbers, but how do they do it? There are several ways we could represent real-valued quantities, but the floating-point representation standardized in IEEE 754 is by far the most common choice. Here, we explore how that representation works, the difference between single- and double-precision values, and what the tradeoffs are.
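As a quick taste of what that representation looks like in practice, here is a minimal Python sketch (using only the standard-library `struct` module) that inspects the raw IEEE 754 bit patterns of 0.1 in single and double precision, and shows that single precision cannot round-trip the double-precision value exactly:

```python
import struct

# Reinterpret 0.1 as its IEEE 754 bit pattern: 32-bit (single) and
# 64-bit (double) encodings.
single_bits = struct.unpack(">I", struct.pack(">f", 0.1))[0]
double_bits = struct.unpack(">Q", struct.pack(">d", 0.1))[0]

print(f"single: {single_bits:032b}")  # sign | 8-bit exponent | 23-bit fraction
print(f"double: {double_bits:064b}")  # sign | 11-bit exponent | 52-bit fraction

# Round-tripping 0.1 through single precision loses information,
# because 0.1 has no exact binary representation and single precision
# keeps fewer fraction bits than double.
single_roundtrip = struct.unpack(">f", struct.pack(">f", 0.1))[0]
print(single_roundtrip)          # close to, but not exactly, 0.1
print(0.1 == single_roundtrip)   # False
```

The gap between the two round-trip results is one concrete face of the single- vs. double-precision tradeoff discussed below: fewer bits means less memory and often faster arithmetic, at the cost of precision.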