Subject: Internet technologies.



Content: Basic concepts of the Internet. The Uniform Resource Identifier (URI), its purpose and parts. The DNS service. Web technologies: HTTP, DHTML, CSS, and JavaScript. E-mail. Message format. The SMTP, POP3, and IMAP protocols.

 

The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link billions of devices worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, voice over IP telephony, and peer-to-peer networks for file sharing.

The WWW is a collection of Internet sites that can be accessed by using a hypertext interface. Hypertext documents on the web contain links to other documents located anywhere on the web. By clicking on a link, you are immediately taken to another file or site to access relevant materials. The interesting thing about Hypertext links is that the links might take you to related material on another computer located anywhere in the world, rather than just to a file on your local hard drive.

Basic WWW Concepts:

1) BROWSER - A WWW browser is software on your computer that allows you to access the World Wide Web. Examples include Netscape Navigator and Microsoft Internet Explorer. Please know that a browser can’t work its magic unless you are somehow connected to the Internet. At home, that is normally accomplished by using a modem that is attached to your computer and your phone line and allows you to connect to, or dial-up, an Internet Service Provider (ISP). At work, it may be accomplished by connecting your workplace’s local area network to the Internet by using a router and a high speed data line.

2) HYPERTEXT AND HYPERMEDIA - Hypertext is text that contains electronic links to other text. In other words, if you click on hypertext it will take you to other related material. In addition, most WWW documents contain more than just text. They may include pictures, sounds, animations, and movies. Documents with links that contain more than just text are called hypermedia.

3) HTML (HYPERTEXT MARKUP LANGUAGE) - HTML is a set of commands used to create world wide web documents. The commands allow the document creator to define the parts of the document. For example, you may have text marked as headings, paragraphs, bulleted text, footers, etc. There are also commands that let you import images, sounds, animations, and movies as well as commands that let you specify links to other documents.

4) URL (UNIFORM RESOURCE LOCATOR) - Links between documents are achieved by using an addressing scheme. That is, in order to link to another document or item (sound, picture, movie), it must have an address. That address is called its URL. The URL identifies the host computer name, directory path, and file name of the item. It also identifies the protocol used to locate the item, such as http, gopher, ftp, telnet, or news.
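The parts of a URL described above can be pulled apart programmatically; a small sketch using Python's standard library (the URL itself is a hypothetical example):

```python
from urllib.parse import urlsplit

# A hypothetical example URL, split into its parts.
url = "http://www.example.com/docs/intro.html"
parts = urlsplit(url)

print(parts.scheme)  # protocol used to locate the item: 'http'
print(parts.netloc)  # host computer name: 'www.example.com'
print(parts.path)    # directory path and file name: '/docs/intro.html'
```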

5) HTTP (HYPERTEXT TRANSFER PROTOCOL) - HTTP is the protocol used to transfer hypertext or hypermedia documents.

6) HOME PAGE - A home page is usually the starting point for locating information at a WWW site.

7) CLIENTS AND SERVERS - If a computer has a web browser installed, it is known as a client. A host computer that is capable of providing information to others is called a server. A server requires special software in order to provide web documents to others.

IP Addresses.

In order to identify all the computers and other devices (printers and other networked peripherals) on the Internet, each connected machine has a unique number, called an "IP address". IP stands for "Internet Protocol," the common language used by machines on the Internet to share information.

An IP address is written as a set of 4 numbers, separated by periods, as in: 203.183.184.10

This representation is referred to as dotted-decimal (or dotted-quad) notation; each of the four numbers represents one octet of the 32-bit address.

One single network, for example, might typically have all the IP addresses starting with 203.183.184 (203.183.184.0 through 203.183.184.255). This has been a customary way of distributing IP numbers - in chunks of 256 addresses. This is referred to as a "Class C" network. If numbers are distributed as Class C networks, there can be at most 256 to the 3rd power different networks - about 16.8 million networks - in the whole world.
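The arithmetic above can be checked with Python's standard `ipaddress` module, using the example block from the text:

```python
import ipaddress

# The example block from the text: 203.183.184.0 through 203.183.184.255.
net = ipaddress.ip_network("203.183.184.0/24")

print(net.num_addresses)   # 256 addresses in one "Class C"-sized chunk
print(net[0], net[-1])     # first and last address of the block
print(256 ** 3)            # 16777216 possible such networks
```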

Basically you should understand that you need a fixed, predefined IP address assigned to each machine that acts as a server to the outside world. You can get an IP address assignment from your network administrator, who receives them in turn from your network's Internet provider.

Domains & Sub-domains.

Each machine having its own unique IP address is great for machines communicating with each other, but quite difficult for humans to remember.

For example, the mail server at Web Crossing Harbor has an IP address 210.226.166.200. You could send email to doug@210.226.166.200, but it isn't very convenient for a lot of reasons. For one thing, it is difficult to remember. For another, we might need to move the mail server to a different machine (with a different IP address) someday. Then we would have to tell everybody our new IP address in order to receive mail.

To solve this problem a system of giving easy-to-remember names to IP addresses was created. This system is called the domain system. There are several top-level domains, and all other names fall under that in a hierarchy of sub-domains.

There are two basic kinds of top-level domains - those based on type of activity and those based on geographical location:

 

 

Some Activity-Based Domains

.com Perhaps the most well-known top-level domain. Originally it was designated for use by companies and commercial activities. Now it can be used by anybody for any purpose.
.org Originally designated for use by nonprofit organizations and individuals, now it can be used for any purpose.
.net Originally designated for use by network organizations (such as Internet providers). Now it can be used for any purpose.
.gov For governmental organizations in the United States.
.mil For military organizations in the United States.
.edu For four-year degree-granting colleges and universities only.
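The hierarchy of sub-domains under a top-level domain can be read directly off a host name by splitting it on dots; a small sketch (the host name is a hypothetical example):

```python
# Split a hostname into its hierarchy of labels, most general first.
def domain_hierarchy(hostname):
    labels = hostname.split(".")
    return list(reversed(labels))

# Top-level domain first, then each sub-domain in turn.
print(domain_hierarchy("mail.example.com"))  # ['com', 'example', 'mail']
```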

 

Servers

A server is just a host that serves something. Some examples are:

- web servers - computers that serve web pages. People connect to web servers using browsers, such as Netscape Navigator or Internet Explorer.

- FTP servers - People connect to them for file transfer, using a browser or a specialized FTP program, such as Fetch (on a Mac) or FTP Explorer (on Windows).

- mail servers - People connect to them to send and receive mail, using such programs as Eudora, Netscape Mail, Claris Mail and Microsoft Outlook Express.

- Web Crossing - a server that lets users create and use online communities, including forums and chat and other services.

DNS.

The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for the purpose of locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System is an essential component of the functionality of the Internet.

The Domain Name System delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over sub-domains of their allocated name space to other name servers. This mechanism provides distributed and fault tolerant service and was designed to avoid a single large central database.

The Domain Name System also specifies the technical functionality of the database service which is at its core. It defines the DNS protocol, a detailed specification of the data structures and data communication exchanges used in the DNS, as part of the Internet Protocol Suite. Historically, other directory services preceding DNS were not scalable to large or global directories as they were originally based on text files, prominently the HOSTS.TXT resolver. The Domain Name System has been in use since the 1980s.

The Internet maintains two principal namespaces, the domain name hierarchy and the Internet Protocol (IP) address spaces. The Domain Name System maintains the domain name hierarchy and provides translation services between it and the address spaces. Internet name servers and a communication protocol implement the Domain Name System. A DNS name server is a server that stores the DNS records for a domain; a DNS name server responds with answers to queries against its database.

The most common types of records stored in the DNS database are for Start of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not intended to be a general purpose database, DNS can store records for other types of data for either automatic lookups, such as DNSSEC records, or for human queries such as responsible person (RP) records. As a general purpose database, the DNS has also been used in combating unsolicited email (spam) by storing a real-time blackhole list. The DNS database is traditionally stored in a structured zone file.
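As a sketch of how such records might look, here is a tiny, heavily simplified zone fragment parsed with plain Python. All names and addresses are hypothetical, and real zone files carry more syntax (TTLs, record classes, MX preference values, $ORIGIN directives) that is omitted here:

```python
# A simplified zone fragment: name, record type, value (one record per line).
ZONE = """\
example.com.      A      203.0.113.10
example.com.      MX     mail.example.com.
mail.example.com. A      203.0.113.25
www.example.com.  CNAME  example.com.
"""

def parse_zone(text):
    records = []
    for line in text.splitlines():
        name, rtype, value = line.split()
        records.append({"name": name, "type": rtype, "value": value})
    return records

records = parse_zone(ZONE)
a_records = [r for r in records if r["type"] == "A"]
print(len(a_records))  # two A records in this fragment
```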

HTTP.

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web.

Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.
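A minimal HTTP/1.1 exchange can be sketched as plain text: the client sends a request line plus headers, and the server answers with a status line. The host name below is a hypothetical example:

```python
# A minimal HTTP/1.1 GET request, as the bytes a client would send.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"  # blank line ends the header section
)

# The server replies with a status line such as this one.
def parse_status(line):
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

print(parse_status("HTTP/1.1 200 OK"))  # ('HTTP/1.1', 200, 'OK')
```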

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC 2616 in 1999.

A later version, the successor HTTP/2, was standardized in 2015, and is now supported by major web servers.

JavaScript

JavaScript is a high-level, dynamic, untyped, and interpreted programming language. It has been standardized in the ECMAScript language specification. Alongside HTML and CSS, it is one of the three core technologies of World Wide Web content production; the majority of websites employ it and it is supported by all modern Web browsers without plug-ins. JavaScript is prototype-based with first-class functions, making it a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles. It has an API for working with text, arrays, dates and regular expressions, but does not include any I/O, such as networking, storage, or graphics facilities, relying for these upon the host environment in which it is embedded.

Although there are strong outward similarities between JavaScript and Java, including language name, syntax, and respective standard libraries, the two are distinct languages and differ greatly in their design.

JavaScript is also used in environments that are not Web-based, such as PDF documents, site-specific browsers, and desktop widgets. Newer and faster JavaScript virtual machines (VMs) and platforms built upon them have also increased the popularity of JavaScript for server-side Web applications. On the client side, JavaScript has been traditionally implemented as an interpreted language, but more recent browsers perform just-in-time compilation. It is also used in game development, the creation of desktop and mobile applications, and server-side network programming.

E-mail.

Electronic mail is a method of exchanging digital messages between computer users; Email first entered substantial use in the 1960s and by the 1970s had taken the form now recognised as email. Email operates across computer networks, which in the 2010s is primarily the Internet. Some early email systems required the author and the recipient to both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect only briefly, typically to a mail server, for as long as it takes to send or receive messages.
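The messages carried by this store-and-forward model have a standard format of headers plus a body; a minimal message can be sketched with Python's standard email library (the addresses are hypothetical):

```python
from email.message import EmailMessage

# Build a simple header-plus-body message.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("A short store-and-forward test message.")

print(msg["Subject"])      # header field
print(msg.get_content())   # message body
```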

POP3.

In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP has been developed through several versions, with version 3 (POP3) being the last standard in common use before largely being made obsolete by the more advanced IMAP. In POP3, e-mails are downloaded from the server's inbox to your computer. E-mails are available when you are not connected.

SMTP

Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (email) transmission. First defined by RFC 821 in 1982, it was last updated in 2008 with the Extended SMTP additions by RFC 5321—which is the protocol in widespread use today.

SMTP by default uses TCP port 25. The protocol for mail submission is the same, but uses port 587. SMTP connections secured by SSL, known as SMTPS, default to port 465 (nonstandard, but sometimes used for legacy reasons).

Although electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For retrieving messages, client applications usually use either POP3 or IMAP.

Although proprietary systems (such as Microsoft Exchange and IBM Notes) and webmail systems (such as Outlook.com, Gmail and Yahoo! Mail) use their own non-standard protocols to access mail box accounts on their own mail servers, all use SMTP when sending or receiving email from outside their own systems.

IMAP.

In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by e-mail clients to retrieve e-mail messages from a mail server over a TCP/IP connection. IMAP is defined by RFC 3501.

IMAP was designed with the goal of permitting complete management of an email box by multiple email clients, therefore clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL (IMAPS) is assigned the port number 993.
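For reference, the default TCP ports of the mail protocols discussed in these sections can be collected in one place. This is a simple sketch; ports 110 and 995 are the well-known POP3 defaults, added here for completeness:

```python
# Default TCP ports for the mail protocols discussed above.
MAIL_PORTS = {
    "smtp": 25,         # server-to-server mail transfer
    "submission": 587,  # client mail submission
    "smtps": 465,       # SMTP over SSL (legacy convention)
    "pop3": 110,
    "pop3s": 995,       # POP3 over SSL
    "imap": 143,
    "imaps": 993,       # IMAP over SSL
}
print(MAIL_PORTS["imap"], MAIL_PORTS["imaps"])
```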

Virtually all modern e-mail clients and servers support IMAP. IMAP and the earlier POP3 (Post Office Protocol) are the two most prevalent standard protocols for email retrieval, with many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also providing support for either IMAP or POP3.

 

Lecture №10

Subject:Cloud and Mobile technologies.

Content: Data Centers. Trends in the development of modern infrastructure solutions. The principles of cloud computing. Virtualization technologies. The Web-service cloud. Basic terms and concepts of mobile technology. Mobile services. Mobile technology standards.

 

Cloud technologies. Today, "cloud technologies" are starting to take the lead in the area of information technologies. They are not only transforming the face of information and communication technologies but also radically altering the activity of society.

This idea was formulated in the 1960s by J. McCarthy, but only some forty years later was utility computing taken up by companies such as Amazon, Apple, Google, HP, IBM, Microsoft, and Oracle. The cloud is essentially a model that enables users to gain access to computing resources at any time convenient to them. For example, since 2011 Apple has offered its users the opportunity to store music, video, movies, and personal information in a network cloud.

The specific feature of these technologies is that data processing takes place not on a personal computer but on special servers on the Internet. Developers in the field of computer simulation can publish software packages on network resources. Where the user once had to buy a product, he now becomes a tenant of various services.

As with any new technology, this new way of working brings new risks and challenges, especially when considering the security and privacy of information to be stored and processed within the cloud.

One of the risks of cloud technologies is that users, who are the owners of the information, lose control over their data when they hand the data to a cloud for processing. With this, the risk of data disclosure, as well as the problem of trust, increases significantly. In fact, nobody today can guarantee to users that their data will not be viewed and analyzed by the company that renders the cloud services. Cloud services are a way to get access to information resources of any level and any power using only Internet access and a Web browser. There are three models of cloud: SaaS (software as a service), IaaS (infrastructure as a service), and DaaS (data as a service), used in business, education, medicine, science, leisure, etc.

Cloud computing. Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers that may be located far from the user - ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar to a utility (like the electricity grid).

Advocates claim that cloud computing allows companies to avoid upfront infrastructure costs (e.g., purchasing servers). As well, it enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables Information Technology (IT) teams to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.

In 2009, the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing led to a growth in cloud computing. Companies can scale up as computing needs increase and then scale down again as demands decrease. In 2013, it was reported that cloud computing had become a highly demanded service or utility due to the advantages of high computing power, cheap cost of services, high performance, scalability, accessibility as well as availability. Some cloud vendors are experiencing growth rates of 50% per year, but cloud computing is still in its infancy and has pitfalls that need to be addressed to make its services more reliable and user friendly.

Mobile technology. Mobile technology is a collective term used to describe the various types of cellular communication technology. Mobile CDMA technology has evolved quite rapidly over the past few years. Since the beginning of this millennium, a standard mobile device has gone from being no more than a simple two-way pager to being a cellular phone, a GPS navigation system, an embedded web browser, an instant-messaging client, and a hand-held video gaming system. Many experts argue that the future of computer technology rests in mobile/wireless computing.

Standards of mobile technology:

1) GSM (Global System for Mobile Communications, originally Groupe Spécial Mobile) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile phones, first deployed in Finland in July 1991. As of 2014 it had become the default global standard for mobile communications - with over 90% market share, operating in over 219 countries and territories.

2G networks developed as a replacement for first generation (1G) analog cellular networks, and the GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via GPRS (General Packet Radio Services) and EDGE (Enhanced Data rates for GSM Evolution or EGPRS).

Subsequently, the 3GPP developed third-generation (3G) UMTS standards followed by fourth-generation (4G) LTE Advanced standards, which do not form part of the ETSI GSM standard.

2) General Packet Radio Service (GPRS) is a packet oriented mobile data service on the 2G and 3G cellular communication system's global system for mobile communications (GSM). GPRS was originally standardized by European Telecommunications Standards Institute (ETSI) in response to the earlier CDPD and i-mode packet-switched cellular technologies. It is now maintained by the 3rd Generation Partnership Project (3GPP).

GPRS usage is typically charged based on volume of data transferred, contrasting with circuit switched data, which is usually billed per minute of connection time. Usage above the bundle cap is charged per megabyte, speed limited, or disallowed.

GPRS is a best-effort service, implying variable throughput and latency that depend on the number of other users sharing the service concurrently, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection. In 2G systems, GPRS provides data rates of 56-114 kbit/s. 2G cellular technology combined with GPRS is sometimes described as 2.5G, that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate-speed data transfer by using unused time division multiple access (TDMA) channels in, for example, the GSM system. GPRS is integrated into GSM Release 97 and newer releases.

3) Enhanced Data rates for GSM Evolution (EDGE) (also known as Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC), or Enhanced Data rates for Global Evolution) is a digital mobile phone technology that allows improved data transmission rates as a backward-compatible extension of GSM. EDGE is considered a pre-3G radio technology and is part of ITU's 3G definition. EDGE was deployed on GSM networks beginning in 2003 – initially by Cingular (now AT&T) in the United States.

EDGE is also standardized by 3GPP as part of the GSM family. A variant, so-called Compact-EDGE, was developed for use in a portion of the Digital AMPS network spectrum.

Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection.

EDGE can be used for any packet switched application, such as an Internet connection.

Evolved EDGE continues in Release 7 of the 3GPP standard providing reduced latency and more than doubled performance e.g. to complement High-Speed Packet Access (HSPA). Peak bit-rates of up to 1 Mbit/s and typical bit-rates of 400 kbit/s can be expected.

 

Lecture №11.

