
Internet in transition

Issue: Asia-Pacific I 2009
Article no.: 3
Topic: Internet in transition
Author: Vinton G. Cerf
Title: Vice President and Chief Internet Evangelist
Organisation: Google

About author

Vinton G. Cerf is Vice President and Chief Internet Evangelist for Google. He is responsible for identifying new enabling technologies and applications on the Internet and other platforms for the company. From 1994 to 2005, Dr Cerf served as Senior Vice President at MCI. Prior to that, he was Vice President of the Corporation for National Research Initiatives (CNRI), and from 1982 to 1986 he served as Vice President of MCI. At the US Department of Defense’s Advanced Research Projects Agency (DARPA) from 1976 to 1982, Dr Cerf played a key role in leading the development of the Internet and Internet-related data packet and security technologies. Widely known as one of the ‘Fathers of the Internet’, Dr Cerf is the co-designer, with Robert Kahn, of the TCP/IP protocols and the basic architecture of the Internet. President Clinton recognized their work with the US National Medal of Technology. Dr Cerf and Dr Kahn also received the highest civilian honour bestowed in the US, the Presidential Medal of Freedom. Dr Cerf served as Chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and is a Visiting Scientist at the Jet Propulsion Laboratory. He served as Founding President of the Internet Society (ISOC) and was on the ISOC board until 2000. Dr Cerf is a Fellow of the IEEE, ACM, AAAS, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum and the National Academy of Engineering. Dr Cerf has received numerous awards and commendations in connection with his work on the Internet, including the Marconi Fellowship, the Charles Stark Draper Prize of the National Academy of Engineering, the Prince of Asturias Award for science and technology, the Alexander Graham Bell Award presented by the Alexander Graham Bell Association for the Deaf, the A.M. Turing Award from the Association for Computing Machinery, the Silver Medal of the International Telecommunication Union, and the IEEE Alexander Graham Bell Medal, among many others. Vinton G. Cerf holds a PhD in Computer Science from UCLA and more than a dozen honorary degrees.

Article abstract

The Internet, the interconnection of disparate networks using a common protocol (TCP/IP), was originally conceived to let a small, closed group of professionals communicate. Today, a substantial part of the world’s population uses the Internet, and it has dramatically changed the world’s economy, how businesses operate, and even how we make friends and keep up with our families. Not surprisingly, the Internet needs updating to meet calls for more addresses, internationalised domain names and better security, and to handle the demands of new uses.

Full Article

As the first decade of the 21st century comes to a close, it is apparent that the Internet is in transition from its largely historical implementation to something different. At least, it is clear that a transition is needed, although its design, implementation and deployment remain to be determined. This essay outlines some of the most visible motivations for change. It is perhaps ironic that as a new administration whose watchword is ‘change’ takes shape in the United States, the Internet is, itself, experiencing a similar need for change.

Problems and issues – a shopping list

Internet address space

The original design of the Internet (IPv4) allowed for 32 bits of Internet Protocol address space, yielding a maximum of about 4.2 billion unique addresses or termination points on the Internet. This limit has been extended, albeit rather awkwardly, with Network Address Translation (NAT) devices that allow routable IP addresses to be shared by many devices adopting a ‘local’ IP address space that is only locally significant and that has to be mapped into publicly routable addresses for any transactions between local devices and those on the public Internet. NAT boxes can be nested, although this introduces inefficiencies and some end-to-end security fragility, and makes servers inside the local networks difficult, if not impossible, to reach from the public Internet.

IPv6 (IP version 6) has been a standard since roughly 1996 and uses 128 bits of address space (about 340 trillion trillion trillion addresses). Because IPv4 and IPv6 are not directly interoperable, either all devices need to be capable of serving both protocol formats or some form of gateway is needed to support interworking. The latter notion often leads to Application Layer Gateway designs in which IPv4 and IPv6 TCP/IP connections or UDP/IP transactions are terminated at the gateway and re-opened in the appropriate alternative protocol. This is a non-trivial design problem, especially if the gateway needs to know about the original Domain Name that led to a lookup in the first place. For the World Wide Web, this is solved using proxy servers that can be aware of the necessary information. A proxy can be designed to operate on behalf of clients, on behalf of servers, or both. For other protocols serving file transfers or other applications, specific solutions may be required for each protocol. The uniformity of the IPv4 Internet is significantly changed in a dual-protocol environment.

Assumptions about the completeness of connectivity of the global Internet under the IPv4 and IPv6 protocols also have a profound effect on usable solutions. The IPv4 Internet started out in a fully connected fashion, and the introduction of Virtual Private Networking to protect corporate network assets selectively broke that full connectivity. IPv6 is starting out in a relatively disconnected fashion, with islands of IPv6 capability not always connected through IPv6 routing. This situation has required various forms of tunnelling through the global IPv4 Internet, and that has introduced its own forms of fragility.

On top of the problems posed by interoperability concerns, the imminent exhaustion (circa 2010) of the IPv4 address space is bound to lead to economic side effects such as the attempted sale of IPv4 address space at premium prices. The potential for fragmenting the address space and increasing the size of the IP routing tables is clear and could affect the successful operation of Internet Service Providers around the world.

To add to the problems, IPv6 itself does not automatically solve the problem of increasingly large routing tables and the routing protocols serving them. This problem is compounded by the potential need for routers to hold both IPv6 and IPv4 routing and forwarding tables at the same time.

In summary, the introduction of IPv6 as a solution to the limited address space of IPv4 will be a non-trivial process, taking years to implement, and will require the introduction of intermediate mechanisms that themselves produce potential vulnerabilities and brittleness in the Internet. It is nonetheless essential to introduce the expanded address space, without which the Internet cannot continue to support expansion in the longer term.
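By way of illustration, the following minimal Python sketch shows what the dual-protocol environment looks like from an ordinary client, assuming a server that publishes both IPv6 and IPv4 addresses; the host name and port are placeholders. The resolver is asked for all addresses, IPv6 candidates are tried first, and the client falls back to IPv4 if necessary.

```python
# Illustrative sketch only: dual-stack connection with IPv6-first fallback.
# Host and port below are placeholders, not real services.
import socket

def connect_dual_stack(host, port, timeout=5.0):
    last_error = None
    # getaddrinfo() returns both AF_INET6 and AF_INET entries when available;
    # sort so IPv6 addresses are attempted before IPv4 ones.
    candidates = sorted(
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM),
        key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1,
    )
    for family, socktype, proto, _canonname, sockaddr in candidates:
        try:
            # sockaddr[:2] is (address, port) for both protocol families.
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_error = err  # remember the failure and try the next address
    raise last_error or OSError("no usable address for %s" % host)

# Example (placeholder host): conn = connect_dual_stack("www.example.com", 80)
```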
Internationalising the Domain Name System

The introduction of non-Latin characters into the Domain Name space using the Unicode character set brings its own set of benefits and risks. The benefits are clear: populations of users whose natural languages are not expressible in Latin characters will benefit from the introduction of scripts that are appropriate to their preferences. The risks derive from the introduction of potentially confusing symbols drawn from among the tens of thousands found in the Unicode system. Moreover, the need to map the Unicode strings into reversibly coded strings that contain only the lower-case Latin characters ‘a-z’, the hyphen ‘-’ and the digits ‘0-9’ before entry into the Domain Name System places additional burdens on software that needs to recognize or display Domain Names expressed in Unicode form. It is the author’s opinion that the risks are outweighed by the benefits, but the process is non-trivial and will place a greater burden on parties operating Domain Name servers at all levels in the system to exercise discretion in allowing or disallowing specific registrations that might otherwise lead to ambiguity or confusion for users.

The Internet Corporation for Assigned Names and Numbers (ICANN), which has responsibility for overseeing the general Domain Name System and the assignment of IP address space, has announced plans to introduce non-Latin Top Level Domain names (TLDs) to complement the approximately 250 existing Latin-character Top Level Domains (such as .com, .mx, etc.). The introduction of additional top-level domains may have side effects beyond simply increasing choice for users. It will certainly add to the incentives for the so-called ‘domainer’ community that registers millions of domain names either for speculative sale or for the generation of advertising revenue from ‘parking lots’. Expansion of the TLD space may also pose challenges for trademark holders, who will be concerned about poaching of their trademarks in any new TLDs.
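To make the Unicode-to-ASCII mapping concrete, here is a minimal Python sketch using the standard library’s IDNA codec; the domain name is an invented example. A Unicode label is converted into the reversibly coded ‘xn--’ form that is actually stored in the Domain Name System, and then back again.

```python
# Illustrative sketch of the Unicode <-> ASCII-compatible encoding used for
# internationalised domain names. Python's built-in 'idna' codec applies the
# IDNA 2003 mapping; the domain below is a made-up example.
unicode_name = "bücher.example"

ascii_form = unicode_name.encode("idna")   # b'xn--bcher-kva.example'
round_trip = ascii_form.decode("idna")     # 'bücher.example'

print(ascii_form)
print(round_trip)
```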
Internet security

Domain Name System

The Domain Name System was introduced into the Internet around 1984, shortly after the Internet itself went live within the user community sponsored by the US Defense Advanced Research Projects Agency (DARPA) in January 1983. The basic design of the Domain Name System had very little security built into it. As the Internet has continued to expand, has become accessible to the general public, and has become a major economic phenomenon, exploitation of vulnerabilities in the system has become more frequent and of increasing concern. To deal with some of these vulnerabilities, a system called DNSSEC (Domain Name System Security Extensions) was developed.

The basic idea is to use public key cryptography to digitally sign entries in the Domain Name System so that Domain Name lookups can be validated, at least insofar as the integrity of the response received is concerned. That is, the IP address associated with the Domain Name can be shown to be identical to the one originally placed in the system by the holder of that Domain Name. DNSSEC is being deployed rather spottily around the Internet. Like the introduction of IPv6, this spotty implementation brings with it some awkwardness. The so-called root zone of the Domain Name System has not yet been ‘signed’; signing it would provide a kind of ‘anchor’ for the rest of the hierarchical system. There continues to be substantial debate as to the proper organizational implementation of DNSSEC. It is important to resolve this debate to provide a base from which to propagate DNSSEC to the rest of the Internet.

Some very serious vulnerabilities in popular DNS implementations have also been exposed recently, apart from those dealt with by DNSSEC. For example, it was recently shown that an algorithmic way exists to produce two distinct Domain Name certificates that have the same cryptographic hash under MD5 (Message Digest version 5). This would allow a hacker to register a legitimate certificate signed by a recognized certificate authority and then substitute a false certificate for literally any chosen Domain Name. This would be the ultimate in so-called ‘phishing’ and ‘pharming’ attacks against the Domain Name System. Remedies for these and other design problems have been found, but they must also be widely propagated at all levels in the Domain Name System.
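The signing-and-validation idea can be sketched in a few lines of Python using the third-party ‘cryptography’ package. This is a conceptual illustration only, not the real DNSSEC record formats or algorithms, and the resource record shown is invented.

```python
# Conceptual sketch of the DNSSEC idea: the zone holder signs a record with a
# private key, and a resolver holding the matching public key can check that
# the answer it received is the one the holder published. This is NOT the real
# DNSSEC wire format; the record below is an invented example.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

zone_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

record = b"www.example.com. 3600 IN A 192.0.2.1"
signature = zone_key.sign(record, padding.PKCS1v15(), hashes.SHA256())

# A resolver with the zone's public key verifies the answer it received.
received = b"www.example.com. 3600 IN A 192.0.2.1"   # try altering this
try:
    zone_key.public_key().verify(signature, received,
                                 padding.PKCS1v15(), hashes.SHA256())
    print("record verified: it matches what the zone holder signed")
except InvalidSignature:
    print("record rejected: it was altered in transit")
```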
Operating system vulnerabilities

No operating system is entirely invulnerable to attack, whether by way of the Internet or through more direct means such as infected media (e.g. floppy disks, optical CDs and DVDs, memory sticks and thumb drives). The history of computing is rife with examples of the exploitation of bugs found in operating system software (e.g. buffer overruns, protocol vulnerabilities, and execution of downloaded ‘malware’). It is becoming increasingly desirable to emphasize research and development of operating systems with substantially better security features: the ability to confine user software so as to minimise its access to system resources, stronger authentication of users and downloaded software, stronger access control over system assets and information, and so on. The rapid proliferation of programmable devices (e.g. personal digital assistants, mobile phones, appliances of all kinds, networked automobiles, ‘smart’ homes and office buildings) dictates a strong need for better and safer operating system platforms. There is no dearth of motivation for research and development, but new ideas are needed, and perhaps the re-application of some older ones. For example, the reinforcement of security policy through a combination of hardware and software that was a hallmark of MIT’s Project MAC in the 1960s might well be timely to revisit.

Browser vulnerabilities

It has become very clear that the vulnerability of browsers to downloaded ‘malware’ is a major source of risk in today’s Internet. The so-called ‘botnet armies’ made up of millions of compromised computers on the Internet are often a consequence of downloading executable software from an infected web site. Sometimes these sites are deliberately outfitted with invasive software, but often the server site itself may have been vulnerable to hacking, becoming an inadvertent participant in the infection of computers on the Internet. Google recently released a new browser, Chrome, whose software is available in source form to anyone interested in using it. Part of the motivation for this open-source development was to provide a higher-performance browser, but another aspect was to build stronger protections against downloaded malware and against cross-interference between distinctly executing threads of code in the browser. There is little doubt in the author’s mind that further research and development on browser and operating system security is essential to the future safe use of the Internet.

Routing vulnerabilities

The routers of the global Internet exchange information with one another using the Border Gateway Protocol (BGP). In essence, most routers believe the information they receive from others on the Internet. However, it is becoming increasingly clear, especially as we enter the exhaustion period of the IPv4 address space, that strongly authenticating the right of any router to ‘announce’ that it is connected to specific parts of the Internet is a desirable feature for the Internet of the future. The so-called Regional Internet Registries (RIRs) are working along with the Internet Engineering Task Force (IETF) to develop standard means by which to verify that any particular party has the right to announce specific parts of the Internet address space. The resulting filtering effect on incoming routing update messages should contribute materially to protecting against the ‘hijacking’ of Internet address space. Validation of routing update information would also protect against inappropriate ‘black holing’ of addresses, whether accidental or deliberate, by parties incorrectly announcing that they provide access to specific ranges of Internet address space.
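A rough picture of the kind of filtering this work aims at: check each received announcement against a table of prefixes and the networks authorised to originate them, and discard anything not covered. The Python sketch below is purely illustrative; the prefixes and Autonomous System numbers are invented, and real validation would rely on signed registry data rather than a hard-coded table.

```python
# Illustrative route-origin filter: accept an announcement only if the
# announced prefix falls within address space that the announcing network
# (identified by its Autonomous System number) is authorised to originate.
# The table and announcements below are invented examples.
import ipaddress

AUTHORISED = {
    65001: [ipaddress.ip_network("192.0.2.0/24")],
    65002: [ipaddress.ip_network("198.51.100.0/24"),
            ipaddress.ip_network("2001:db8::/32")],
}

def announcement_is_valid(origin_as, prefix):
    prefix = ipaddress.ip_network(prefix)
    for allowed in AUTHORISED.get(origin_as, []):
        # Compare only within the same address family, then check coverage.
        if prefix.version == allowed.version and prefix.subnet_of(allowed):
            return True
    return False

print(announcement_is_valid(65002, "198.51.100.128/25"))  # True: covered
print(announcement_is_valid(65001, "198.51.100.0/24"))    # False: not authorised
```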
As should be clear from this very sketchy compendium of security issues, the Internet is in need of some serious development to make it a safer and more secure environment in which to conduct business, carry out personal communication, and support the kind of privacy needed for financial and medical information transactions.

New directions

Cloud computing

The term ‘cloud computing’, recently introduced into the Internet vocabulary, is in some ways a re-invention of an older term, the ‘computing utility’, which was popular in the 1960s when computers were scarce, large and very expensive to own and operate. From the perspective of the early 21st century, computers are cheap, plentiful and becoming an increasingly common part of almost everything we do. The giant, central, time-shared computer of the past has been replaced by giant, networked data centres, each containing tens of thousands of computers. The Internet, or ‘cloud’, has become home to literally hundreds of millions of computers – perhaps more than 1.5 billion if one includes laptops, desktops, personal digital assistants, mobile phones and servers. Cloud computing is simply the use of one or more data centres to carry out computational tasks or to process and store information beyond the capacity of a single personal or departmental computer or server. For some applications, the cloud is even more powerful than the largest existing supercomputers.

What makes cloud computing so interesting is the flexibility it brings to serving very large numbers of users whose requirements vary dramatically from moment to moment, both individually and in aggregate. Users typically run more than one application at a time, and each application may have widely varying transmission, storage and computational requirements from moment to moment. For example, while ‘surfing’ the World Wide Web, a user might be sending short messages indicating mouse clicks and then suddenly activate a large file transfer or a streaming video. One might be reading email messages and then click on one that has a large attachment. With tens of thousands of processors available, a computing cloud can respond dynamically to individual and aggregate user demands even more flexibly than a traditional supercomputer can. Moreover, owing to its shared architecture, users may be able to collaborate more effectively in a cloud computing environment, since all of them might be sharing access to the same document or database, allowing real-time discussions and content updates that can be seen by all collaborating parties at the same time.

Many organizations are building cloud computing systems and are offering their services to clients in the business sector, in government and among the general public. What most cloud systems may not have is a vocabulary for referring to other clouds, for the exchange of information or for the transfer of functional tasks from one cloud to another. This is an area for serious research and development: vocabulary, nomenclature and interface standards to allow transactions among clouds that are operated by distinct entities. One looks for ways to establish portable access controls so that data originating in one cloud can be moved to another while preserving the security of the data. What can one cloud ask another to do? How can data be exchanged without loss of integrity, chain of custody, and auditable transfers of authority and responsibility? These are just some of the many questions awaiting serious analysis by cloud computing researchers.
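As a toy illustration of just one of these questions, preserving integrity when data moves between clouds, the following Python sketch wraps a payload in an envelope carrying a SHA-256 digest that the receiving side can recompute. The scheme and field names are invented for the example and do not correspond to any existing inter-cloud standard; a real system would also need signatures, access controls and audit records.

```python
# Invented, minimal integrity envelope for data handed from one cloud to
# another: the sender records a SHA-256 digest, the receiver recomputes it.
import hashlib
import json

def export_object(payload: bytes, origin: str) -> dict:
    return {
        "origin": origin,                                  # who produced it
        "sha256": hashlib.sha256(payload).hexdigest(),     # integrity digest
        "payload": payload.decode("utf-8"),
    }

def import_object(envelope: dict) -> bytes:
    payload = envelope["payload"].encode("utf-8")
    if hashlib.sha256(payload).hexdigest() != envelope["sha256"]:
        raise ValueError("integrity check failed: payload was altered")
    return payload

envelope = export_object(b'{"customer": 42, "balance": 10.0}', "cloud-a.example")
print(json.dumps(envelope, indent=2))
print(import_object(envelope))
```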
Embedded systems

Increasingly, devices and systems are becoming networked and linked to the Internet. These systems may comprise collections of sensors or controllers. They may be used to monitor environmental conditions, to act on measurements to control electrical demand, or to support mobile, networked systems such as navigational systems and vehicle instrumentation. The rising use of embedded systems is increasing the demand for Internet address space and for increased capacity and geographic coverage of wireless access to the Internet. It is likely that such applications will multiply during the second decade of the 21st century.

Broadband Internet access

Increased demand for higher-capacity access to the Internet is driving the need for new broadband technologies and stressing the economics of Internet access provision. Especially in lightly populated rural areas, it is not clear how many alternative access providers are economically sustainable. In the absence of traditional competition, it may be necessary to accept monopoly provision of service to accommodate economic conditions, but there is then a concomitant need to introduce suitable regulatory practices to protect users of broadband Internet services from abusive or anti-competitive behaviour.

It seems clear that no one technology is suited to every broadband requirement, leading to the conclusion that the primary issues will be economic and regulatory rather than technical. Innovative business models will be needed to account for the apparent fact that new Internet-based revenues are generally associated with applications well above the basic Internet Protocol layer. A good case in point is the use of advertising to support a wide range of Internet services. Last-mile access providers may need multiple business models to account for the cost of providing non-discriminatory transport of Internet Protocol packets and, separately, for applications typically paid for directly by users or indirectly by way of advertising.

Convergence of all digital content

The Internet’s ability to transport all forms of digital information, including voice, video, text, imagery and endless forms of data, suggests that traditionally distinct services on distinct networks will in the future be carried on a common packet-switched Internet. While this does not rule out the use of dedicated networks, as has historically been the case, the business and cost models for combined services may shift the balance of implementations from predominantly dedicated networks to predominantly shared packet-switched networks. The presence of all forms of media in the same network will stimulate a cornucopia of applications that would have been impossible to combine using distinct transport networks. Searching video for segments in which particular captioned speech text appears will be quite normal. Searching imagery for particular kinds of images will also be quite common. Automatic translation of email, voice mail and conference calls, for both text and audio, will be at least a target, if not a consequence, of research in this area. Voice interaction with the contents of the Internet will also become commonplace, as will voiced interactions with an endless array of devices on the Internet.

Search

Searching the Internet has been transformed by developments starting with Gopher, Archie, the Wide Area Information Server (WAIS), AltaVista, Yahoo!, Lycos, Google and others. As we become more sophisticated in our ability to discover and index increasingly complex data objects, we will find that search becomes an even more useful tool than it has been to date. Automatic background searches such as those supported by Google Alerts or Google Trends will become much more common. There is still much to be done to make imagery, audio material, complex data structures and the like more searchable. Semantic searching will become increasingly necessary to make searches and results more reflective of user needs. Therein lie many potential research topics for PhD students eager to isolate a dissertation topic from the rest of the universe!

Internet governance

The Internet Governance Forum (IGF), spun off from the World Summit on the Information Society (WSIS), will continue to meet annually for the next several years. The discussions in the IGF mirror similar discussions in policymaking bodies around the world. These discussions focus on Internet business practices, privacy protection, law enforcement, technological support for online commerce, management of shared Internet resources (such as Domain Names and Internet address space), policies for the interconnection of Internet Service Providers, taxation policies, intellectual property protection and many, many other matters of mutual interest. There are many opportunities for improving domestic and international policy frameworks affecting the Internet’s ability to support a growing international user community and innovative new applications and services.

Interplanetary Internet

The successful demonstration of new Delay and Disruption Tolerant Networking (DTN) protocols at interplanetary distances for use in space exploration has initiated a process that may result in standards for deep space communication over the next decades. In tests using the EPOXI spacecraft during late 2008, NASA successfully exchanged data between a ground station on Earth and a spacecraft about 75-80 light seconds away. The DTN protocols contemplate an automatically routed, interplanetary-scale network spanning the solar system. Successful conclusion of testing of the DTN protocols on board the International Space Station during 2009 will set the stage for potential adoption of the new DTN protocols by the Consultative Committee for Space Data Systems, which comprises all the space-faring countries of the world. The era of interplanetary internetworking has begun.
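The scale of the delays involved can be made concrete with simple arithmetic. The Python sketch below computes round-trip light times for the roughly 80-light-second EPOXI test mentioned above and for Earth-Mars distances (very roughly 55 to 400 million kilometres); delays of minutes to tens of minutes are why conversational protocols that expect prompt acknowledgements give way to store-and-forward designs such as DTN.

```python
# Simple arithmetic illustrating why interplanetary links need
# delay-tolerant protocols: even at the speed of light, one
# request/response exchange takes minutes at planetary distances.
SPEED_OF_LIGHT_KM_S = 299_792.458

def round_trip_minutes(distance_km: float) -> float:
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

# EPOXI test: roughly 80 light seconds away (about 24 million km).
epoxi_km = 80 * SPEED_OF_LIGHT_KM_S
# Earth-Mars distance varies between very roughly 55 and 400 million km.
for label, km in [("EPOXI test", epoxi_km),
                  ("Mars (closest)", 55e6),
                  ("Mars (farthest)", 400e6)]:
    print(f"{label}: round trip about {round_trip_minutes(km):.1f} minutes")
```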
By any reasonable measure, the Internet is about to undergo the most dramatic set of architectural and functional changes in its history since it was first deployed in 1983. Successful solutions to the many issues outlined in this essay would seem to place the Internet on a path towards continued expansion and the provision of significant new services to all sectors of society. By the same token, failure to resolve these many issues may stymie the Internet’s expansion and use and force a re-thinking of the network design to match foreseen and unforeseen needs. In fact, such a re-design effort is already under way in several quarters. In the United States, an effort called Future Internet Design (FIND), supported by the US National Science Foundation, is considering what the Internet might look like if it were designed ab initio today. A similar effort is under way in Europe. From these speculative studies may emerge either an entirely new Internet design or, possibly, evolutionary changes to the existing design that would lead to the more secure and flexible environment that the issues outlined in this essay, among others, may call for.
