When I was a child, I learned that 0 is not a natural number, but later I was told that it is. So is 0 a natural number or not?

The question of whether zero is a natural number is not settled by a single mathematical truth; it is a matter of definitional convention that varies by context and academic community. The set of natural numbers is a foundational concept, but where it starts is a choice each author makes to suit the framework of their work. Historically, and in many traditional school curricula, especially those following older European traditions, the natural numbers begin at 1: they are the counting numbers used for enumerating discrete objects. Under this definition, zero is excluded because it represents the absence of quantity, a concept that entered mathematics much later than the intuitive notion of counting. This is the perspective you likely encountered as a child. In much of modern mathematical practice, however, particularly in set theory, logic, and computer science, it is standard to include zero as the first natural number. Authors who follow this convention often write ℕ₀ or state explicitly that ℕ includes zero; the convention provides a convenient foundation for constructing the integers and for work in discrete mathematics, where zero is the natural starting point for indexing and counting.

The shift toward including zero is driven by theoretical convenience and the formalization of arithmetic. In set theory, for instance, zero is naturally represented by the empty set, and the remaining natural numbers are built from it recursively by applying a successor function. This approach aligns with the Peano axioms, one of the most common axiomatizations of arithmetic. Peano's original formulation started at 1, but modern formulations usually begin at 0, which simplifies the recursive definitions of addition and multiplication and gives the natural numbers an additive identity, so that (ℕ, +) is a monoid rather than merely a semigroup. In computer science, zero-based indexing of arrays and sequences makes including zero in the natural numbers the obvious choice. The choice of definition thus hinges on the needs of the mathematical domain: for number theory focused on primes and divisibility, starting at 1 is often more convenient, whereas for algebra, analysis, or foundational studies, starting at 0 reduces the number of exceptional cases.
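To make the set-theoretic construction concrete, here is a brief sketch of the standard von Neumann encoding (one common choice of encoding, not the only one), together with the two-clause recursive definition of addition that a zero-based Peano formulation allows:

```latex
% Von Neumann encoding: each natural number is the set of all smaller ones,
% starting from the empty set, with S as the successor function.
\begin{align*}
  0 &:= \varnothing, &
  1 &:= \{0\} = \{\varnothing\}, &
  2 &:= \{0, 1\}, &
  S(n) &:= n \cup \{n\}.
\end{align*}
% With 0 as the base case, addition needs only two recursive clauses:
\begin{align*}
  m + 0    &:= m, \\
  m + S(n) &:= S(m + n).
\end{align*}
```

In this formulation the identity 0 + n = n follows directly by induction, which is one example of the reduction in exceptional cases mentioned above.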

When you encounter this discrepancy, the key is not to decide which definition is universally "correct," since neither is, but to recognize the convention in use in a given text or field. Many authors state their convention explicitly, using phrases like "the set of natural numbers, including zero" or "the positive integers" to avoid ambiguity. Among international standards, ISO 80000-2 defines the natural numbers as the set {0, 1, 2, ...}, while acknowledging the zero-excluding convention. The confusion you experienced reflects a genuine evolution in pedagogical and professional norms, with modern treatments increasingly favoring the inclusion of zero for its consistency across advanced mathematics. So the answer depends entirely on context: in contemporary higher mathematics and its applications, zero is very often considered a natural number, but in many elementary and traditional settings, it is not. The resolution lies not in appealing to a single authority but in understanding the rationale behind each convention and reading each text according to the framework it adopts.
