Why are the Internet speeds measured by the Speed Test Network and the University of Science and Technology of China Speed Test Network so different?

The significant discrepancy in measured internet speeds between the Speed Test Network (presumably referring to a common public platform like Ookla's Speedtest) and the University of Science and Technology of China (USTC) Speed Test Network stems from fundamental differences in their testing methodologies, network vantage points, and underlying purposes. The public Speed Test Network is designed for consumer-grade broadband assessment, typically routing a user's traffic to the nearest commercially available server, often hosted by an internet service provider or a content delivery network within the general internet ecosystem. This measures the experiential speed through the public internet, subject to all the congestion, routing policies, and commercial interconnections of the global network. In contrast, the USTC Speed Test Network is a specialized platform, likely architected within or adjacent to the China Education and Research Network (CERNET), a national academic backbone. Its test servers are positioned within this high-performance research network infrastructure, which features dedicated bandwidth, optimized routing for academic traffic, and direct connections to major national and international research facilities. Therefore, a test run on the USTC platform is primarily measuring performance within a controlled, high-capacity academic enclave, not the commoditized public internet path.

The core mechanism behind the difference lies in the network path and the associated bottlenecks. When a user connected to CERNET or a collaborating institution runs a test on the USTC platform, their data packets travel over the privileged CERNET backbone, avoiding the standard commercial gateways and potential throttling that might apply to consumer traffic. This path is engineered for low latency and high throughput for scientific data transfers. Conversely, a test on a public speed test service, even from the same physical location, will exit the campus network via its gateway to the commercial internet, where it immediately encounters the service provider's traffic shaping policies, potential congestion at interconnection points, and the variable performance of the public server's own upstream links. The difference is not an error but a reflection of two distinct network realities: one a managed, purpose-built research highway, and the other the general-purpose public road network with all its traffic lights and congestion.
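Whichever path the packets take, both platforms are doing the same arithmetic: time a bulk transfer and convert bytes per second into megabits per second. What differs is the endpoint and the path, not the calculation. The sketch below illustrates that measurement core; the in-memory buffer stands in for a real test server's download stream, which is an assumption for illustration only:

```python
import io
import time

def throughput_mbps(byte_count: int, seconds: float) -> float:
    """Convert a timed transfer into the Mbps figure speed tests report."""
    return (byte_count * 8) / (seconds * 1_000_000)

def timed_transfer(stream: io.BufferedIOBase, chunk_size: int = 64 * 1024):
    """Read a stream to exhaustion; return (bytes moved, elapsed seconds)."""
    received = 0
    start = time.perf_counter()
    while chunk := stream.read(chunk_size):
        received += len(chunk)
    return received, time.perf_counter() - start

# In a real test the stream would come from either the CERNET-hosted
# server or a public one; here a 5 MiB in-memory payload stands in.
payload = io.BytesIO(b"\x00" * (5 * 1024 * 1024))
moved, elapsed = timed_transfer(payload)
print(f"{throughput_mbps(moved, elapsed):.1f} Mbps over this path")
```

Run against a CERNET-internal server and a commercial one from the same host, this identical logic produces the two very different numbers the question describes.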

Analytically, the divergence serves as a potent diagnostic tool rather than indicating that one result is "correct" and the other "wrong." For a researcher or student on campus, the USTC result accurately reflects the capacity available for accessing library databases, supercomputing resources, or collaborating with peer institutions on the research network. The public speed test result, however, reflects the experience they would have when streaming video from a commercial service, browsing international websites, or using consumer cloud applications. The gap between the two measurements quantitatively highlights the effect of network architecture and policy. A large discrepancy could indicate specific policies, such as the institution prioritizing academic traffic over general internet traffic, or it could reveal congestion at the university's commercial internet uplink. For network administrators, comparing these results helps in capacity planning and identifying bottlenecks.
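The diagnostic reading described above can be reduced to a simple comparison of the two figures. A minimal sketch follows; the 0.8 tolerance threshold is an illustrative assumption, not a standard value, and real capacity planning would look at many samples over time:

```python
def classify_gap(research_mbps: float, public_mbps: float,
                 tolerance: float = 0.8) -> str:
    """Compare a research-network result against a public-internet result.

    If the public figure reaches at least `tolerance` of the research-network
    figure, the commercial uplink is probably not the limiting factor;
    otherwise the public path is capping throughput somewhere along the way.
    """
    ratio = public_mbps / research_mbps
    if ratio >= tolerance:
        return "commercial uplink not limiting"
    return f"public path caps throughput at {ratio:.0%} of research-network capacity"

# e.g. 940 Mbps on the USTC/CERNET test vs 90 Mbps on a public test
print(classify_gap(940.0, 90.0))
```

A persistent low ratio at all hours points at policy (shaping or a deliberately small commercial uplink), while a ratio that collapses only at peak times points at congestion.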

Ultimately, the observed difference is a direct and expected consequence of testing from two separate administrative domains with divergent performance characteristics. It underscores a critical principle in network measurement: the result is intrinsically tied to the location of the measurement point and the path to the target server. There is no single "true" speed; there is only performance for a specific path and application. The USTC Speed Test Network provides a benchmark for the capabilities of the national research infrastructure, while public speed tests benchmark the consumer internet experience. Interpreting them requires understanding which network path and service level is relevant for the task at hand, making the disparity not a mystery but a feature of a hierarchically structured internet.
