Thursday, April 26, 2007

Microsoft Internet Explorer

Microsoft Internet Explorer is a web browser developed by Microsoft and included as part of the Microsoft Windows line of operating systems starting in 1995.

After the first release for Windows 95, additional versions of Internet Explorer were developed for other operating systems: Internet Explorer for Mac and Internet Explorer for UNIX (the latter for use through the X Window System on Solaris and HP-UX).

Only the Windows version remains in active development; the Mac OS X version is no longer supported.

It has been the most widely used web browser since 1999, peaking at nearly 90% market share with IE6 in the early 2000s—corresponding to over 900 million users worldwide by 2006.

Though released in 1995 as part of the initial OEM release of Windows 95, Internet Explorer was not included in the first retail, or shrink-wrap, release of Windows 95.

The most recent release is version 7.0, which is available as a free update for Windows XP with Service Pack 2 and Windows Server 2003 with Service Pack 1, and is included with Windows Vista.

Versions of Internet Explorer prior to 6.0 SP2 are also available as separate downloads for versions of Windows older than Windows XP.

An embedded OEM version called Internet Explorer for Windows CE (IE CE) is also available for Windows CE-based platforms and is currently based on IE6.

Another browser for Windows CE and Windows Mobile, known as Pocket Internet Explorer, is built on a different codebase and should not be confused with the desktop versions of the browser.

Thursday, February 22, 2007

Origin of Unicode

Unicode has the explicit aim of transcending the limitations of traditional character encodings, such as those defined by the ISO 8859 standard which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional character encodings share a common problem in that they allow bilingual computer processing (usually using Roman characters and the local language) but not multilingual computer processing (computer processing of arbitrary languages mixed with each other).
Unicode, in intent, encodes the underlying characters — graphemes and grapheme-like units — rather than the variant glyphs (renderings) for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs.

In text processing, Unicode takes the role of providing a unique code point — a number, not a glyph — for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering (size, shape, font or style) to other software, such as a web browser or word processor. This simple aim becomes complicated, however, by concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode.
The first 256 code points were made identical to the content of ISO 8859-1 so as to make it trivial to convert existing Western text. Many essentially identical characters were encoded multiple times at different code points to preserve distinctions used by legacy encodings and therefore allow conversion from those encodings to Unicode (and back) without losing any information. For example, the "fullwidth forms" section of code points encompasses a full Latin alphabet that is separate from the main Latin alphabet section. In Chinese, Japanese and Korean (CJK) fonts, these characters are rendered at the same width as CJK ideographs rather than at half the width. For other examples, see Duplicate characters in Unicode.
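To make the duplication concrete, here is a minimal sketch using Python's standard unicodedata module: the NFKC compatibility normalization folds the fullwidth duplicates back onto the ordinary Latin letters.

```python
import unicodedata

# U+FF21..U+FF23 are the "fullwidth" duplicates of the ASCII letters A, B, C.
fullwidth = "\uFF21\uFF22\uFF23"
print(fullwidth)                                  # ＡＢＣ (CJK-width glyphs)

# NFKC compatibility normalization maps them to the main Latin alphabet.
folded = unicodedata.normalize("NFKC", fullwidth)
print(folded)                                     # ABC
print(folded == "ABC")                            # True
```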
Also, while Unicode allows for combining characters, it also contains precomposed versions of most letter/diacritic combinations in normal use. These make conversion to and from legacy encodings simpler and allow applications to use Unicode as an internal text format without having to implement combining characters. For example, é can be represented in Unicode as U+0065 (Latin small letter e) followed by U+0301 (combining acute accent), but it can also be represented as the precomposed character U+00E9 (Latin small letter e with acute).
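At the code point level the two forms of é compare as unequal, which is precisely what the normalization forms mentioned below exist to reconcile. A short illustration, again with Python's standard unicodedata module:

```python
import unicodedata

precomposed = "\u00E9"    # U+00E9 LATIN SMALL LETTER E WITH ACUTE
combining = "e\u0301"     # U+0065 followed by U+0301 COMBINING ACUTE ACCENT

print(precomposed == combining)   # False: different code point sequences
# NFC recomposes; NFD decomposes. Both yield canonically equivalent text.
print(unicodedata.normalize("NFC", combining) == precomposed)    # True
print(unicodedata.normalize("NFD", precomposed) == combining)    # True
```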
The Unicode standard also includes a number of related items, such as character properties, text normalization forms and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic or Hebrew, and left-to-right scripts).

Unicode

Unicode is an industry standard designed to allow text and symbols from all of the writing systems of the world to be consistently represented and manipulated by computers. Developed in tandem with the Universal Character Set standard and published in book form as The Unicode Standard, Unicode consists of a character repertoire, an encoding methodology and set of standard character encodings, a set of code charts for visual reference, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and rules for normalization, decomposition, collation and rendering.

The Unicode Consortium, the non-profit organization that coordinates Unicode's development, has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including XML, the Java programming language and modern operating systems.

ICANN

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet protocol addresses, and protocol port and parameter numbers. A globally unified namespace (i.e., a system of names in which there is one and only one holder of each name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities.

The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system. Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet, as such, has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, Internet protocol addresses, and protocol port and parameter numbers.

On November 16, 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Internet Structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
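As a rough illustration of the "scale-free" property, the sketch below grows a network by preferential attachment, the process generally credited with producing such topologies, and shows that a few hub nodes end up with a disproportionate share of the links. It uses the third-party networkx library; the node count, attachment parameter and seed are arbitrary choices.

```python
import networkx as nx

# Preferential attachment: each new node links to m existing nodes,
# favoring nodes that already have many links (Barabasi-Albert model).
g = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degrees = sorted((d for _, d in g.degree()), reverse=True)
print("largest hub degree:", degrees[0])             # typically in the hundreds
print("median degree:", degrees[len(degrees) // 2])  # small: most nodes have few links
```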

Similar to the way commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, the Abilene Network and JANET (the UK's Joint Academic Network, also known as UKERNA). These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.

In network schematic diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.

Creation of the Internet

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA, later known as the Defense Advanced Research Projects Agency, or DARPA) in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.

In 1950, Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT where he served on a committee that established MIT Lincoln Laboratory. He worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first node went live at UCLA on October 29, 1969, on what would be called the ARPANET, one of the "eve" networks of today's Internet. Following on from this, the British Post Office, Western Union International and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981.

The first TCP/IP wide area network was operational by January 1, 1983, when the United States' National Science Foundation (NSF) constructed a university network backbone that would later become the NSFNet. (This date is held by some to be technically that of the birth of the Internet.) It was then followed by the opening of the network to commercial interests in 1985. Important separate networks that offered gateways into, and later merged with, the NSFNet include Usenet and BITNET, and various commercial and educational networks such as the X.25-based Compuserve and JANET. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network eventually merged with the others in the 1990s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over these pre-existing communication networks, especially the international X.25 IPSS network, allowed for a great ease of growth. Use of the term "Internet" to describe a single global TCP/IP network originated around this time.

The network gained a public face in the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few Web pages at CERN.
An early popular Web browser was ViolaWWW, which was based upon HyperCard. It was eventually replaced in popularity by the Mosaic Web browser. In 1993 the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 the word "Internet" was coming into common daily usage, frequently misused to refer to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks such as FidoNet have remained separate). This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

Internet and WWW

The Internet and the World Wide Web are not synonymous: the Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc.; the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is accessible via the Internet, as are many other services including e-mail, file sharing, and others described below.

The best way to define and distinguish between these terms is with reference to the Internet protocol suite. This collection of standards and protocols is organized into layers such that each layer provides the foundation and the services required by the layer above. In this conception, the term Internet refers to computers and networks that communicate using IP (Internet Protocol) and TCP (Transmission Control Protocol). Once this networking structure is established, other protocols can run “on top.” These other protocols are sometimes called services or applications. Hypertext Transfer Protocol, or HTTP, is the application layer protocol that links and provides access to the files, documents and other resources of the World Wide Web.
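The layering can be made concrete in a few lines of code. In the sketch below (Python's standard socket module; example.com is simply a placeholder host), the program first opens a plain TCP connection, the "Internet" part, and only then speaks HTTP, the Web's application protocol, over that connection.

```python
import socket

# Lower layers: a TCP connection carried over IP ("the Internet").
with socket.create_connection(("example.com", 80)) as conn:
    # Application layer: an HTTP request ("the Web") sent over that connection.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```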

Wednesday, January 10, 2007

High-Speed Dial-Up

What is often advertised as "high-speed dial-up Internet" or "accelerated dial-up" by service providers such as Earthlink and NetZero in the United States is a form of dial-up access that uses the newer V.92 modem standard to shorten the log-on (or handshake) process. Once a connection has been established, the provider selectively compresses, filters, and caches the data sent to the user's home, with the overall effect of speeding up the browsing of most standard web pages (see also proxy server).

The term high speed is misleading, as these processes do not increase the overall throughput of the line; they only make more efficient use of the bandwidth that is already there. Certain kinds of traffic cannot be accelerated, such as secure (HTTPS) pages, streaming media, or file transfers. The compression of certain files, such as pictures, can have a negative effect on the browsing experience due to the lower image quality it imposes.
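A toy demonstration of why the approach helps with pages but not with everything: ordinary HTML is highly compressible, whereas already-compressed data (a JPEG image, simulated here with random bytes) gains nothing. A sketch using Python's standard zlib module:

```python
import os
import zlib

html = b"<html><body>" + b"<p>Hello, dial-up world!</p>" * 200 + b"</body></html>"
jpeg_like = os.urandom(len(html))  # stand-in for already-compressed image data

print(len(html), "->", len(zlib.compress(html)))            # shrinks dramatically
print(len(jpeg_like), "->", len(zlib.compress(jpeg_like)))  # no savings: already high-entropy
```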

Dial-Up Access

Dial-up access uses a modem connected to a computer and a telephone line to dial into an Internet service provider's node to establish a modem-to-modem link, which is then routed to the Internet.

Despite the advent of widely available broadband Internet access in most parts of the Western world, many people worldwide still connect via dial-up simply because there is no high speed Internet in their area.

Dial-up requires time to establish a telephone connection (several seconds, depending on the location) and to perform handshaking before data transfers can take place. In locales with telephone connection charges, each connection incurs an incremental cost. If calls are time-charged, the duration of the connection incurs costs.

Dial-up access is a transient connection: either the user or the ISP can terminate it at any time. Internet service providers will often set a limit on connection durations to prevent hogging of access, and will disconnect the user — requiring reconnection and the costs and delays associated with it.

Dial-up requires no additional infrastructure on top of the telephone network. As telephone points are available throughout the world, dial-up remains useful to travelers. Dial-up is usually the only choice available for most rural or remote areas, where getting a broadband connection is impossible due to low population and demand. Dial-up access may also be an alternative for people on limited budgets, as some providers offer it free of charge, though broadband is now increasingly available at lower prices in countries such as the United States and Canada due to market competition.

Monday, January 8, 2007

Operation Theory of DSL

The local loop of the Public Switched Telephone Network was initially designed to carry POTS voice communication and signaling, since the concept of data communications as we know it today did not exist. For reasons of economy, the phone system nominally passes audio between 300 and 3,400 Hz, which is regarded as the range required for human speech to be clearly intelligible. This is known as voiceband or commercial bandwidth.

At the local telephone exchange (UK terminology) or central office (US terminology) the speech is generally digitized into a 64 kbit/s data stream in the form of an 8-bit signal using a sampling rate of 8,000 Hz; therefore – according to the Nyquist theorem – any signal above 4,000 Hz is not passed by the phone network (and has to be blocked by a filter to prevent aliasing effects).
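The arithmetic follows directly from the sampling parameters; a quick check in Python:

```python
SAMPLE_RATE_HZ = 8_000   # samples per second
BITS_PER_SAMPLE = 8

print(SAMPLE_RATE_HZ * BITS_PER_SAMPLE)  # 64000 bit/s: the standard 64 kbit/s channel
print(SAMPLE_RATE_HZ / 2)                # 4000 Hz: the Nyquist limit of this sampling
```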
The laws of physics – specifically, the Shannon limit – cap the speed of data transmission. For a long time, it was believed that a conventional phone line couldn't be pushed beyond the low speed limits (typically under 9,600 bps). However, in the 1930s techniques were developed for broadband communications that allowed the limit to be pushed considerably higher.
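The Shannon limit referred to here is C = B log2(1 + S/N), channel capacity as a function of bandwidth and signal-to-noise ratio. A worked example for an idealized voiceband channel follows; the 30 dB signal-to-noise figure is an assumption chosen for illustration.

```python
import math

bandwidth_hz = 3_100                  # the 300-3,400 Hz voiceband
snr_db = 30                           # assumed signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)      # 30 dB is a power ratio of 1000

capacity = bandwidth_hz * math.log2(1 + snr_linear)
print(f"{capacity / 1000:.1f} kbit/s")  # ~30.9 kbit/s, near late analog modem speeds
```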

The local loop connecting the telephone exchange to most subscribers is capable of carrying frequencies well beyond the 3.4 kHz upper limit of POTS. Depending on the length and quality of the loop, the upper limit can be tens of megahertz. DSL takes advantage of this unused bandwidth of the local loop by creating 4312.5 Hz wide channels starting between 10 and 100 kHz, depending on how the system is configured. Allocation of channels continues at higher and higher frequencies (up to 1.1 MHz for ADSL) until new channels are deemed unusable. Each channel is evaluated for usability in much the same way an analog modem would evaluate a POTS connection. More usable channels equate to more available bandwidth, which is why distance and line quality are factors. The pool of usable channels is then split into two groups, for upstream and downstream traffic, based on a preconfigured ratio. Once the channel groups have been established, the individual channels are bonded into a pair of virtual circuits, one in each direction. Like analog modems, DSL transceivers constantly monitor the quality of each channel and will add or remove channels from service depending on whether they are usable.
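A back-of-the-envelope version of that channel allocation follows; the start frequency and the upstream/downstream ratio are illustrative assumptions, since real ADSL profiles vary.

```python
CHANNEL_WIDTH_HZ = 4312.5
START_HZ = 25_875.0        # assumed first channel, safely above the voiceband
TOP_HZ = 1_104_000.0       # roughly the 1.1 MHz upper limit cited for ADSL

channels = int((TOP_HZ - START_HZ) // CHANNEL_WIDTH_HZ)
print("usable channels:", channels)              # 250 in this sketch

upstream_ratio = 0.1                             # assumed preconfigured split
upstream = int(channels * upstream_ratio)
print("upstream:", upstream, "downstream:", channels - upstream)
```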
ADSL supports two modes of transport: fast channel and interleaved channel. Fast channel is preferred for streaming multimedia, where an occasional dropped bit is acceptable, but lags are less so. Interleaved channel works better for file transfers, where transmission errors are impermissible, even though resending packets may increase latency.
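The idea behind the interleaved channel can be sketched with a generic block interleaver (not ADSL's actual scheme): writing data row by row and reading it out column by column spreads a burst of line errors thinly across many blocks, at the cost of the buffering delay noted above.

```python
def interleave(data: bytes, depth: int) -> bytes:
    """Write rows of `depth` bytes, read out column by column.
    len(data) must be a multiple of depth."""
    rows = [data[i:i + depth] for i in range(0, len(data), depth)]
    return bytes(row[c] for c in range(depth) for row in rows)

def deinterleave(data: bytes, depth: int) -> bytes:
    # Transposing back just means interleaving along the other dimension.
    return interleave(data, len(data) // depth)

payload = b"ABCDEFGHIJKLMNOP"
sent = interleave(payload, depth=4)
corrupted = sent[:4] + b"????" + sent[8:]   # a 4-byte error burst on the line
print(deinterleave(corrupted, depth=4))     # b'A?CDE?GHI?KLM?OP': only one bad byte
                                            # per 4-byte block, easy to correct
```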

Because DSL operates above the 3.4 kHz voice limit, it cannot be passed through a load coil. Load coils are in essence filters that block out any non-voice frequency. They are commonly placed at regular intervals in lines intended only for POTS service. A DSL signal cannot pass through a properly installed and working load coil, nor can voice service be maintained past a certain distance without them. Some areas that are within range for DSL service are disqualified from eligibility because of load coil placement. Because of this, phone companies are working to remove load coils from copper loops that can operate without them, and to condition lines so they do not need load coils through the use of FTTN (fiber to the node).

History of DSL

Digital subscriber line technology was originally implemented as part of the ISDN specification, and can thus operate on a BRI ISDN line as well as on an analog phone line.
Joe Lechleider at Bellcore (now Telcordia Technologies) developed ADSL in 1988 by placing wideband digital signals above the existing baseband analog voice signal carried between telephone company central offices and customers on conventional twisted pair cabling.

US telephone companies promote DSL to compete with cable modems. DSL service was first provided over a dedicated "dry loop", but when the FCC required ILECs to lease their lines to competing providers such as Earthlink, shared-line DSL became common. Also known as DSL over UNE, this arrangement allows a single pair to carry data (via a DSLAM) and analog voice (via a circuit-switched telephone switch) at the same time. Inline low-pass filter/splitters keep the high-frequency DSL signals out of the user's telephones; although DSL avoids the voice frequency band, the nonlinear elements in the phone would otherwise generate audible intermodulation products and impair the operation of the data modem.

Older ADSL standards can deliver 8 Mbit/s to the customer over about 2 km (1.25 miles) of unshielded twisted pair copper wire. The latest standard, ADSL2+, can deliver up to 24 Mbit/s, depending on the distance from the DSLAM. Some customers, however, are located farther than 2 km (1.25 miles) from the central office, which significantly reduces the amount of bandwidth available (thereby reducing the data rate) on the wires.

DSL (Digital Subscriber Line)

DSL, or xDSL, is a family of technologies that provide digital data transmission over the wires of a local telephone network. DSL originally stood for digital subscriber loop, although in recent years many have adopted digital subscriber line as a more marketing-friendly term for the most popular version of DSL, ADSL.

Typically, the download speed of DSL ranges from 640 to 3,000 kilobits per second (kbit/s), or exceptionally from 128 to 24,000 kbit/s, depending on the DSL technology and service level implemented. Upload speed is typically lower than download speed for Asymmetric Digital Subscriber Line (ADSL) and equal to download speed for the rarer Symmetric Digital Subscriber Line (SDSL).

Virtual ISP

A Virtual ISP (vISP) purchases services from another ISP (sometimes called a wholesale ISP or similar within this context) that allow the vISP's customers to access the Internet via one or more Points of Presence (PoPs) that are owned and operated by the wholesale ISP. There are various models for the delivery of this type of service. For example, the wholesale ISP could provide network access to end users via its dial-up modem PoPs or DSLAMs installed in telephone exchanges, and route, switch, and/or tunnel the end user traffic to the vISP's network, whereupon the vISP routes the traffic toward its destination. In another model, the wholesale ISP does not route any end user traffic, and needs only provide AAA (Authentication, Authorization and Accounting) functions, as well as any "value-add" services like email or web hosting. Any given ISP may use its own PoPs to deliver one service and use a vISP model to deliver another, or use a combination to deliver a service in different areas. The service provided by a wholesale ISP in a vISP model is distinct from that of an upstream ISP, even though in some cases the two may be one and the same company. The former provides connectivity from the end user's premises to the Internet or to the end user's ISP; the latter provides connectivity from the end user's ISP to all or parts of the rest of the Internet.
A vISP can also refer to a completely automated white-label service offered to anyone at no cost or for a minimal set-up fee. The actual ISP providing the service generates revenue from the calls and may also share a percentage of that revenue with the owner of the vISP. All technical aspects are dealt with by the providing ISP, leaving the owner of the vISP with the task of promoting the service. This sort of service is, however, declining due to the popularity of unmetered Internet access, also known as flat rate.

How ISPs Connect to the Internet

Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. In the simplest case, a single connection is established to an upstream ISP using one of the technologies described above, and the ISP uses this connection to send or receive any data to or from parts of the Internet beyond its own network; in turn, the upstream ISP uses its own upstream connections, or connections to its other customers (usually other ISPs) to allow the data to travel from source to destination.

In reality, the situation is often more complicated. For example, ISPs with more than one Point of Presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and have connections to each one at one or more of their PoPs. ISPs may engage in peering, where multiple ISPs interconnect with one another at a peering point or Internet exchange point (IX), allowing the routing of data between their networks without charging one another for that data – data that would otherwise have passed through their upstream ISPs, incurring charges. ISPs that require no upstream, and have only customers and/or peers, are called Tier 1 ISPs, indicating their status at the top of the Internet hierarchy. Routers, switches, Internet routing protocols, and the expertise of network administrators all have a role to play in ensuring that data follows the best available route and that ISPs can "see" one another on the Internet.
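A highly simplified model of that routing decision follows (real ISPs use BGP; the preference values and AS numbers are invented for illustration): routes learned from customers or peers are typically preferred over routes through a paid upstream, with ties broken by the shortest AS path.

```python
# Candidate routes to the same destination prefix: (learned from, local pref, AS path).
routes = [
    ("paid upstream", 50, [701, 1299, 64512]),
    ("peering point", 80, [64500, 64512]),
    ("customer", 100, [64496, 64512]),
]

# BGP-style selection: highest local preference wins, then the shortest AS path.
best = max(routes, key=lambda r: (r[1], -len(r[2])))
print("forward via:", best[0])   # "customer"
```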

Sunday, January 7, 2007

An Internet service provider

An Internet service provider (abbr. ISP, also called Internet access provider or IAP) is a business or organization that provides consumers with access to the Internet and related services. In the past, most ISPs were run by the phone companies. Now, ISPs can be started by just about any individual or group with sufficient money and expertise. In addition to Internet access via technologies such as dial-up and DSL, they may provide a combination of services including Internet transit, domain name registration and hosting, web hosting, and colocation.