-0 is expressed as 10000000. Why is it not expressed in binary original code?

The representation of -0 as 10000000 is a direct consequence of the signed magnitude system for encoding integers in binary, a method now largely obsolete for general-purpose computing. In signed magnitude, the most significant bit (the leftmost bit in an 8-bit byte, for instance) is reserved as a sign bit: 0 denotes a positive number and 1 denotes a negative number. The remaining bits represent the absolute value, or magnitude, of the number. Under this system, positive zero is represented as 00000000 (sign bit 0, magnitude 0), while negative zero is represented as 10000000 (sign bit 1, magnitude 0). This dual representation of zero is a fundamental and problematic artifact of the signed magnitude approach. It creates an ambiguity that complicates hardware design for arithmetic operations and logical comparisons, because circuits must account for two distinct bit patterns that conceptually represent the same numerical value.
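A small sketch can make the two zeros concrete. The helper `sm_encode` below is hypothetical (invented for illustration, not part of any standard library); it builds an n-bit sign-magnitude pattern from an explicit sign bit and magnitude, since Python integers cannot themselves distinguish -0 from +0:

```python
def sm_encode(sign, magnitude, bits=8):
    """Build a sign-magnitude bit string: MSB is the sign (0 = +, 1 = -),
    the remaining bits-1 bits hold the magnitude."""
    assert magnitude < 2 ** (bits - 1), "magnitude does not fit"
    return format((sign << (bits - 1)) | magnitude, f"0{bits}b")

print(sm_encode(0, 0))  # '00000000'  -> +0
print(sm_encode(1, 0))  # '10000000'  -> -0 (same magnitude, different sign bit)
print(sm_encode(1, 5))  # '10000101'  -> -5
```

Note that +0 and -0 differ in exactly one bit, which is why an equality comparator in sign-magnitude hardware cannot simply compare bit patterns.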

The reason this representation does not appear in "binary original code" is primarily a matter of definition and historical context. "Original code" is not a standard technical term in this field; the intended concept is most likely "unsigned binary," the standard positional representation of non-negative integers. In pure unsigned binary, all bits represent magnitude and there is no dedicated sign bit. Consequently, the 8-bit pattern 10000000 in unsigned binary simply represents the decimal number 128, not zero of any kind. The question implicitly highlights the critical distinction between unsigned and signed binary encoding schemes. Signed magnitude is one such scheme, but it is neither the original nor the only method for representing signed numbers; it is a specific convention that was used in some early computer systems and was largely abandoned because of its inefficiencies.
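Reading the same pattern as plain unsigned binary is a one-line check in Python:

```python
# The same 8-bit pattern read as plain unsigned binary: there is no sign
# bit, so every bit contributes to the magnitude (the MSB is worth 2**7).
pattern = "10000000"
print(int(pattern, 2))  # prints 128, not any kind of zero
```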

The move away from signed magnitude to two's complement representation in virtually all modern computing architectures was driven by the need to eliminate the negative zero anomaly and to simplify arithmetic circuitry. In two's complement, there is a single, unambiguous representation for zero (all bits set to 0); the trade-off is a slightly asymmetric range, -128 to +127 in 8 bits, with one extra negative value. The pattern 10000000 in an 8-bit two's complement system therefore takes on the meaning of -128, not -0. This is far more efficient for computation because addition and subtraction can be performed with the same hardware circuitry as for unsigned numbers, without special checks for sign bits or two forms of zero. Signed magnitude, by contrast, requires separate handling of signs and magnitudes (for example, comparing signs and subtracting magnitudes when adding numbers of opposite sign), which introduces extra complexity and slower performance.
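A minimal sketch of that reinterpretation (the helper name `twos_complement_value` is invented for illustration): if the top bit is set, the pattern's unsigned value is offset downward by 2^width, which is exactly the two's complement rule.

```python
def twos_complement_value(bits):
    """Interpret a bit string as a two's complement signed integer."""
    n = int(bits, 2)          # unsigned reading of the pattern
    width = len(bits)
    # If the sign bit is set, subtract 2**width to recover the negative value.
    return n - (1 << width) if bits[0] == "1" else n

print(twos_complement_value("10000000"))  # -128 (not -0)
print(twos_complement_value("00000000"))  # 0 — the single, unique zero
print(twos_complement_value("11111111"))  # -1
```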

Therefore, the expression of -0 as 10000000 is a relic of a particular signed representation system that has been superseded. Its existence reflects a specific design choice in the history of computing, not contemporary practice. The core takeaway is that binary representation is not intrinsic; it is a defined mapping between bit patterns and numerical values. Different encoding schemes (unsigned, signed magnitude, one's complement, two's complement) assign different meanings to the same sequence of bits. The pattern 10000000 signifies -0 only under the now-defunct signed magnitude rules, and its disuse underscores the engineering imperative for numerical representations that are both mathematically consistent and efficient for digital electronic implementation.
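To make that mapping explicit, the sketch below (the function name `interpret` is hypothetical) reads one bit pattern under each of the four schemes mentioned. Note that Python's integers collapse -0 to 0, so the sign-magnitude reading of 10000000 prints as 0 even though the pattern encodes negative zero:

```python
def interpret(bits):
    """Read one bit string under four encoding schemes:
    unsigned, sign-magnitude, one's complement, two's complement."""
    n = int(bits, 2)
    width = len(bits)
    unsigned = n
    # Sign-magnitude: MSB is the sign, the rest is the magnitude.
    sign_magnitude = (-1 if bits[0] == "1" else 1) * int(bits[1:], 2)
    # One's complement: a negative value is the bitwise inverse of its magnitude.
    mask = (1 << width) - 1
    ones_complement = n if bits[0] == "0" else -((~n) & mask)
    # Two's complement: subtract 2**width when the sign bit is set.
    twos_complement = n - (1 << width) if bits[0] == "1" else n
    return unsigned, sign_magnitude, ones_complement, twos_complement

print(interpret("10000000"))  # (128, 0, -127, -128); the 0 is really -0
print(interpret("00000101"))  # (5, 5, 5, 5): positives agree in all schemes
```

The same eight bits mean 128, -0, -127, or -128 depending only on which convention the hardware or software agrees to apply.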