Rice Networks Group Research Project Archive


TAPs

We have designed and implemented an FPGA-based hardware platform for clean-slate design of wireless network protocols. The platform is fully programmable at all layers, from the physical layer on up. It supports cross-layer protocol implementation and provides a high-performance physical layer for "at scale" experiments.

100x100

This project is driven by the vision of providing 100 Mb/sec to 100 million households and small businesses. By providing a more than two-orders-of-magnitude increase in end-to-end interconnection speed, such an infrastructure will provide a platform that enables radically new content, applications, and services to emerge over the coming decades. We are devising a new architecture and protocols that jointly provide (1) cost-effective and resilient access networks that utilize both fiber-to-the-home and wireless last-hop access, (2) a scalable, fault-tolerant backbone network with a simple logical structure and predictable performance, (3) economic efficiency that ensures sustained competition, and (4) security that strikes a balance between accountability and anonymity, as well as between connectivity and isolation.

Resilience to Denial of Service Attack

We are studying DoS resilience in transport protocols, multi-hop wireless networks, peer-to-peer networks, and Internet Data Centers. In each case, we analyze protocol and system vulnerabilities to develop an understanding of the extent to which attackers can disrupt a system. Such studies include analysis of "attack scalability," which characterizes the relationship between the number of attackers and their performance impact on the victims. We also study and develop families of counter-DDoS mechanisms. For example, for application-layer attacks, we showed that an effective strategy is request scheduling (determining when and whether to service a request) driven by suspicion measures that characterize deviation from a normal workload. In peer-to-peer networks, we showed that reputation systems are unfortunately ineffective unless they operate with near perfection.
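To give a flavor of suspicion-driven request scheduling, the following is a minimal sketch (not the project's actual implementation): requests from clients whose observed request rate deviates most from an assumed normal workload are served last and shed first. The normal-rate parameter, suspicion formula, and queue capacity are all illustrative assumptions.

```python
import heapq
import time
from collections import defaultdict

class SuspicionScheduler:
    """Illustrative sketch: serve requests in order of increasing suspicion,
    where suspicion grows with a client's deviation from a normal request rate."""

    def __init__(self, normal_rate=1.0, capacity=1000):
        self.normal_rate = normal_rate   # assumed "normal" requests/sec per client
        self.capacity = capacity         # queue bound; most-suspicious requests are shed
        self.counts = defaultdict(int)   # requests observed per client
        self.first_seen = {}             # time of each client's first request
        self.queue = []                  # min-heap keyed by suspicion
        self.seq = 0                     # tie-breaker for heap ordering

    def suspicion(self, client):
        elapsed = max(time.time() - self.first_seen[client], 1e-6)
        observed_rate = self.counts[client] / elapsed
        # Suspicion = how far above the normal workload this client's rate is.
        return max(observed_rate / self.normal_rate - 1.0, 0.0)

    def submit(self, client, request):
        self.first_seen.setdefault(client, time.time())
        self.counts[client] += 1
        heapq.heappush(self.queue, (self.suspicion(client), self.seq, client, request))
        self.seq += 1
        if len(self.queue) > self.capacity:
            # Shed the most suspicious pending request (the "whether to service" decision).
            self.queue.remove(max(self.queue))
            heapq.heapify(self.queue)

    def next_request(self):
        # Serve the least suspicious request first (the "when to service" decision).
        if self.queue:
            _, _, client, request = heapq.heappop(self.queue)
            return client, request
        return None
```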

Rice Packet Ring

End-to-end performance bottlenecks are increasingly encountered at the network edges. In metropolitan backbones, such networks are most often configured as rings due to their fault-tolerance properties. While rings were widely studied through the 1980s in the context of token protocols, circuit switching, and fault tolerance, today's packet rings are formed via point-to-point, packet-based duplex connections of Gigabit Ethernet or protocols such as Resilient Packet Ring (IEEE 802.17). Our goal in this project is to devise distributed ring scheduling algorithms that ensure fair and/or guaranteed-rate access throughout the ring for a traffic class, perhaps with multiple access points. We have devised a new protocol termed DVSR and developed a 1 Gb/sec network-processor prototype and testbed implementation.
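To illustrate the fair-access problem (a sketch only, not the DVSR algorithm itself), a ring node can compute a max-min fair share of its outgoing link's capacity among the ingress nodes whose traffic traverses that link; each ingress node is then limited to the smallest fair share advertised along its path. The demand values and link capacity below are hypothetical.

```python
def max_min_fair_share(capacity, demands):
    """Illustrative max-min fair allocation of one ring link's capacity
    among the ingress nodes whose traffic traverses that link.
    capacity: link capacity (e.g., Mb/s); demands: {ingress_node: offered rate}."""
    remaining = capacity
    unsatisfied = dict(demands)
    allocation = {}
    while unsatisfied:
        share = remaining / len(unsatisfied)
        # Nodes demanding less than an equal share are fully satisfied;
        # their leftover capacity is redistributed among the rest.
        bottlenecked = {n: d for n, d in unsatisfied.items() if d <= share}
        if not bottlenecked:
            for n in unsatisfied:
                allocation[n] = share
            break
        for n, d in bottlenecked.items():
            allocation[n] = d
            remaining -= d
            del unsatisfied[n]
    return allocation

# Hypothetical example: a 1000 Mb/s link shared by three ingress nodes.
print(max_min_fair_share(1000, {"node_A": 200, "node_B": 900, "node_C": 700}))
# node_A gets its full 200; node_B and node_C split the remaining 800 -> 400 each.
```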

Denali: Scalable Services for the Internet

Realizing new services on the Internet requires edge-based solutions for both deployability and scalability. In this domain, we are exploring two directions. (1) QoS management: we are studying the effectiveness of real-time flow aggregation and edge-based admission control in simultaneously achieving high utilization and scalability. We have developed a simple model of aggregation showing that it is an effective replacement for per-flow reservations, provided that the time scale of demand variation and the time scale of signaling are separated by at least two orders of magnitude. Moreover, we have shown that end-point admission control via host probing also represents a compelling alternative to IntServ (sketched below). (2) Bandwidth inference and traffic control: we are developing new techniques to infer a network's or system's available bandwidth from the endpoints, with applications to both QoS control and performance optimization.
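As a rough illustration of end-point admission control via host probing (a sketch under assumed parameters, not our actual scheme), the host probes the path at the flow's target rate for a short interval, measures the observed loss or marking fraction, and admits the flow only if that fraction is below a threshold. The probe duration, loss threshold, and the simulated probe function are all hypothetical.

```python
import random

def probe(target_rate_kbps, duration_s):
    """Placeholder for sending a probe stream at the flow's target rate and
    returning the measured loss fraction; simulated here for illustration."""
    # A real implementation would transmit probe packets and count losses or marks.
    return random.uniform(0.0, 0.05)

def admit_flow(target_rate_kbps, duration_s=2.0, loss_threshold=0.01):
    """Endpoint admission control sketch: admit the flow only if the probe's
    observed loss fraction stays below a threshold (all values assumed)."""
    loss_fraction = probe(target_rate_kbps, duration_s)
    return loss_fraction < loss_threshold

if __name__ == "__main__":
    decision = admit_flow(target_rate_kbps=256)
    print("admit" if decision else "reject")
```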

Wide Area Redirection of Dynamic Content

Traditional approaches to mirroring, caching, and content distribution have an underlying assumption that minimizing network hop count minimizes client latency. However, with uncongested backbones and potentially high-latency service times for dynamic content, such techniques are of limited effectiveness. In this work, we are developing an architecture in which dispatchers at an overloaded Internet Data Center (IDC) redirect requests for dynamic content to a geographically remote but less loaded IDC. We show with both analytical modeling as well as testbed experiments that the delay savings of redirecting requests to a lightly loaded IDC can far outweigh the overhead in inter-IDC network latency. Consequently, client end-to-end delays are significantly reduced without requiring modifications to clients, servers, or DNS.
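The core redirection trade-off can be sketched as follows (an illustrative model, not the dispatcher's actual policy): redirect a dynamic-content request to the remote IDC only when its expected queueing-plus-service delay, plus the added inter-IDC round-trip latency, is smaller than the expected local delay. The M/M/1-style delay estimate and the parameter values are assumptions for illustration.

```python
def expected_delay(arrival_rate, service_rate):
    """M/M/1-style expected response time (queueing + service), an assumed model.
    Rates are requests/sec; returns seconds (infinite if the IDC is overloaded)."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def should_redirect(local_arrival, local_service,
                    remote_arrival, remote_service, inter_idc_rtt_s):
    """Redirect only if the remote IDC's expected delay plus the extra
    inter-IDC round-trip latency beats the local expected delay."""
    local_delay = expected_delay(local_arrival, local_service)
    remote_delay = expected_delay(remote_arrival, remote_service) + inter_idc_rtt_s
    return remote_delay < local_delay

# Hypothetical numbers: an overloaded local IDC vs. a lightly loaded remote IDC
# that adds roughly 40 ms of inter-IDC round-trip latency.
print(should_redirect(local_arrival=95, local_service=100,
                      remote_arrival=20, remote_service=100,
                      inter_idc_rtt_s=0.04))
```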