Internet Topology Caterina Scoglio, KSU
Why we need Internet topology models • To evaluate the performance of algorithms and protocols • Realistic models at different levels of detail • The distributed nature of the Internet (thousands of smaller networks) means there is no complete picture of the whole topology • Discovering the Internet topology
Which topology are they studying? • The Internet topology at the router level for a single ISP • ISPs are designed in isolation and interconnected according to engineering and business considerations
Random Models • Georgia Tech Internetwork Topology Models (GT-ITM) • This model places nodes at random in a two-dimensional space and adds links probabilistically between each pair of nodes, in a manner inversely proportional to their distance
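A minimal sketch of this kind of flat random model, assuming Python: nodes are scattered uniformly in the unit square and each pair is linked with a probability that decays with distance (a Waxman-style rule, the flavor of model GT-ITM builds on; the function name and parameter values here are illustrative, not GT-ITM's own):

```python
import math
import random

def random_flat_topology(n, alpha=0.4, beta=0.1, seed=0):
    """Place n nodes uniformly in the unit square and link each pair
    with probability alpha * exp(-d / (beta * L)), a Waxman-style rule
    where the chance of a link decays with distance d (L = max distance)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    L = math.sqrt(2)  # maximum possible distance in the unit square
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < alpha * math.exp(-d / (beta * L)):
                edges.append((i, j))
    return pos, edges

pos, edges = random_flat_topology(50)
print(f"{len(edges)} links among 50 nodes")
```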
Why random models are not good • They do not consider the nonrandom structure of the Internet, particularly hierarchy and locality, as part of the network design
Power Law Models • Heavy-tailed distributions in node degree • Heavy-tailed distributions conform to power-law distributions • Scale-free networks
Scale-Free Networks • In scale-free networks, some nodes are highly connected "hubs" (high degree), while most nodes are of low degree. • A scale-free network's structure and dynamics are independent of the number of nodes; in other words, a scale-free network has the same properties no matter how many nodes it has. • Their most distinguishing characteristic is that their degree distribution follows a power law, P(k) ~ k^(-γ), where the exponent γ varies approximately from 2 to 3 for most real networks.
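To make the power-law claim concrete, here is a small sketch, assuming Python with the networkx library available: it grows a scale-free graph by preferential attachment and tallies its degree distribution. On a log-log plot, the tail of P(k) falls roughly on a straight line of slope -γ:

```python
import collections
import networkx as nx

# Grow a scale-free graph with preferential attachment (Barabasi-Albert).
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

# Tally the empirical degree distribution P(k).
counts = collections.Counter(d for _, d in G.degree())
n = G.number_of_nodes()
for k in sorted(counts)[:8]:
    print(f"k={k:3d}  P(k)={counts[k] / n:.4f}")
# For the BA model gamma is about 3, consistent with P(k) ~ k^(-gamma).
```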
Robustness of Scale-Free Networks • Prof. Barabási and his colleagues point out that hub-based networks are vulnerable to attack. From this, they conclude that the Internet, which they consider to be a hub-based network, is also vulnerable to attacks on a relatively small collection of hub nodes.
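A sketch of the experiment behind that claim, assuming Python with networkx: compare random node failures against targeted removal of the highest-degree hubs and watch the size of the giant connected component (the function names are mine and the 5% removal fraction is illustrative):

```python
import random
import networkx as nx

def giant_fraction(G):
    """Fraction of nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_and_measure(G, fraction, targeted):
    """Remove a fraction of nodes (hubs first if targeted) and report
    the size of the surviving giant component."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        victims = sorted(H.nodes, key=lambda v: H.degree(v), reverse=True)[:k]
    else:
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return giant_fraction(H)

G = nx.barabasi_albert_graph(5000, 2, seed=1)
print("random 5% removed  :", remove_and_measure(G, 0.05, targeted=False))
print("top-5% hubs removed:", remove_and_measure(G, 0.05, targeted=True))
```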
Need for better models • Recent work has shown that the perspective offered by degree-based models is incomplete and can sometimes be misleading or even flawed. • Robustness example
Understanding Internet Topology: Principles, Models, and Validation • D. Alderson, L. Li, W. Willinger, and J. C. Doyle, IEEE/ACM Transactions on Networking, vol. 13, no. 6, pp. 1205–1218, Dec. 2005
How is the new study performed? • Using a "first principles" approach: • Networking Technology • Economics of Network Design • Metrics used for comparison: • Performance-inspired • Likelihood-related
Networking Technology • There is an inherent tradeoff between the number of physical link connections (i.e., node degree) and connection speeds (i.e., bandwidth) at each router. • Core routers tend to specialize in supporting the highest available link speeds, but can handle only relatively few such connections. • Access routers are designed to support many more connections, but at necessarily lower speeds. • This contradicts the basic characteristic of scale-free networks, namely high-degree hub nodes, at least in the core of the network.
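A toy worked example of that tradeoff (the capacity figure is made up, not a real router spec): with a fixed aggregate switching capacity, every added port lowers the speed each port can sustain.

```python
# Toy illustration of the degree/bandwidth tradeoff at a single router:
# with a fixed aggregate switching capacity, adding ports (degree)
# forces each port to run slower. The capacity figure is hypothetical.
CAPACITY_GBPS = 640

for degree in (4, 16, 64, 256):
    per_link = CAPACITY_GBPS / degree
    print(f"degree {degree:4d} -> at most {per_link:7.1f} Gb/s per link")
# A high-degree "hub" in the core would have to serve each link at a
# small fraction of top speed, which is why core routers favor few,
# fast links while access routers favor many, slow ones.
```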
Economics of Network Design • Given the high variability in population density, high variability in network connectivity is to be expected. • However, the latter is by and large due to the connectivity observed at the network's edge and cannot possibly be the result of the connectivity pattern typically encountered in the network core. • This supports the thesis that the access topology can be described as a power-law topology.
Metrics Used for Comparison • Performance-inspired: a measure of how different networks handle the same traffic demand matrix. • Likelihood-related: to differentiate between networks (raw connectivity structures) modeled by simple, connected graphs having the same vertex set and the same degree sequence. • There is an explicit relationship between graphs with high values of s(g) and graphs having a "hub-like" core (i.e., high-connectivity vertices forming a cluster in the center of the network).
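The likelihood-related metric referred to here is, in the paper's notation, s(g) = Σ over links (i, j) of d_i · d_j, the sum over all edges of the product of the endpoint degrees: it is large exactly when high-degree nodes attach to each other. A minimal sketch, assuming Python with networkx:

```python
import networkx as nx

def s_metric(G):
    """s(g) = sum over edges (i, j) of d_i * d_j. Graphs whose high-degree
    nodes connect to each other (a hub-like core) score high; graphs with
    the same degree sequence but hubs at the edge score low."""
    return sum(G.degree(i) * G.degree(j) for i, j in G.edges())

# A preferential-attachment graph tends toward a hub-like core.
G = nx.barabasi_albert_graph(1000, 2, seed=7)
print("s(g) =", s_metric(G))
```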
Heuristically Optimal Topologies (HOT) • The core is constructed as a sparsely connected mesh of high-speed, low-connectivity routers that carry heavily aggregated traffic over high-bandwidth links. • This mesh-like core is supported by a hierarchical, tree-like structure at the edges, whose purpose is to aggregate traffic through high connectivity.
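A purely illustrative toy construction of such a layout, assuming Python with networkx (the shapes and sizes are invented; the paper's HOT networks are carefully engineered designs, not generated this way):

```python
import networkx as nx

def toy_hot_topology(core_n=4, access_per_core=3, hosts_per_access=8):
    """Toy HOT-like layout: a sparse ring-plus-chord core of low-degree
    routers, with tree-like access aggregation hanging off each core node."""
    G = nx.cycle_graph(core_n)          # sparse, mesh-like core (a ring here)
    G.add_edge(0, core_n // 2)          # one chord for a little redundancy
    nxt = core_n
    for c in range(core_n):
        for _ in range(access_per_core):
            access = nxt; nxt += 1
            G.add_edge(c, access)       # access router attaches to the core
            for _ in range(hosts_per_access):
                G.add_edge(access, nxt) # many low-speed edge connections
                nxt += 1
    return G

G = toy_hot_topology()
degs = sorted((d for _, d in G.degree()), reverse=True)
print("top degrees:", degs[:5])  # high degree sits at the edge, not the core
```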
Comparison • Preferential Attachment (PA): nodes are added successively and connected to the existing graph with probability proportional to each existing node's current degree • HOT • Five networks having the same node degree distribution: identical from a degree-based perspective, but opposites in terms of engineering performance.
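For concreteness, a from-scratch sketch of the PA growth rule just described, assuming Python (the stub-list trick makes degree-proportional sampling trivial, since a node of degree d appears d times in the list):

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph one node at a time; each new node attaches m links,
    picking targets with probability proportional to current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                 # seed graph: a single link
    stubs = [0, 1]                   # every edge endpoint, repeated by degree
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(stubs))  # degree-proportional choice
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

print(len(preferential_attachment(1000)), "edges")
```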
HOT model • The HOT network is not only more robust to worst-case deletions (here, worst case means low-connectivity core nodes), but also shows high tolerance to deleting other nodes, particularly high-degree edge routers. • The scale-free network has such poor nominal performance to start with that it is worse intact than the HOT network is after the latter has sustained substantial damage.
Robustness • Robustness to random failures vs. fragility to targeted attacks • The presence of high-connectivity nodes in the core of the network (power laws) means that attacks on them can destroy network connectivity as a whole • This contradicts the Internet's legendary and most clearly understood robustness property, namely its ability, in the presence of router or link failures, to "see damage and work around it"
Robust yet Fragile • The "robust yet fragile" nature of the Internet's actual router-level topology is discussed in [J. C. Doyle, D. Alderson, L. Li, S. Low, M. Roughan, S. Shalunov, R. Tanaka, and W. Willinger, "The 'robust yet fragile' nature of the Internet," Proc. Nat. Acad. Sci., vol. 102, no. 41, pp. 14497–14502, 2005].
Robustness • Robustness to router failures, defined as the remaining performance of the network after routers are removed and traffic is rerouted • Residual performance after successive deletion of worst-case nodes (deleting the worst 20 vertices corresponds to removing 20% of the routers)
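A sketch of this kind of experiment, assuming Python with networkx. As a crude stand-in for the paper's measure (achieved throughput under a traffic demand matrix), it scores residual performance by the fraction of node pairs that can still reach each other, and deletes nodes greedily by worst impact:

```python
import networkx as nx

def reachable_pair_fraction(G):
    """Crude performance proxy: fraction of node pairs that can still
    communicate after failures and rerouting."""
    n = G.number_of_nodes()
    total = n * (n - 1) / 2
    ok = sum(len(c) * (len(c) - 1) / 2 for c in nx.connected_components(G))
    return ok / total

def worst_case_deletion(G, k):
    """Greedily delete the k nodes whose removal hurts the metric most,
    a simple stand-in for the paper's worst-case node deletions."""
    H = G.copy()
    for _ in range(k):
        worst = min(H.nodes, key=lambda v: reachable_pair_fraction(
            nx.restricted_view(H, [v], [])))  # try removing v, keep the worst
        H.remove_node(worst)
    return H

G = nx.barabasi_albert_graph(200, 2, seed=3)
H = worst_case_deletion(G, 10)
print("residual connectivity:", round(reachable_pair_fraction(H), 3))
```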
Internet Robustness • The Internet is simple and robust because of: • a layered architecture • multiple forms of feedback control that enable robust performance in the presence of frequent disruptions and enormous heterogeneity