Who proposed the World Wide Web, and why is it called that?


"World Wide Web" (World Wide Web, WWW)

The World Wide Web (WWW) is the most popular and engaging Internet service and a convenient means of working with information. The most common name for a computer (host) on the Internet today is www, and more than half of all Internet traffic is WWW traffic. The number of WWW servers cannot be estimated precisely, but by some estimates it exceeds 30 million. The WWW is growing even faster than the Internet itself.

The WWW is a worldwide information repository in which information objects are linked by a hypertext structure. Hypertext is, first of all, a system of cross-referenced documents: a way of presenting information using links between documents. Since the WWW allows these documents to include not only text but also graphics, sound, and video, the hypertext document has turned into a hypermedia document.

A little WWW history. The World Wide Web (WWW) is one of the most important components of the global Internet, and it has a history of its own.

This is interesting. The European Particle Physics Laboratory (CERN) is located in Switzerland. In 1980, Tim Berners-Lee, then working at CERN, began developing a project for a global computer network that would give physicists around the world access to various kinds of information. The work took nine years: in 1989, after many technical experiments, he proposed a concrete design, which became the beginning of the World Wide Web, or WWW for short.

Over time, many realized that such a service could be useful not just to physicists. The WWW began to grow rapidly, and many people contributed: some developed hardware, others created software for the WWW, still others improved communication lines. All this allowed it to become what it is now: the "World Wide Web".

Principles of client and server operation. The WWW works on the client-server principle, or more precisely client-servers: there are many servers that, at a client's request, return a hypermedia document - a document composed of parts with diverse representations of information (text, sound, graphics, three-dimensional objects, and so on), in which each element can be a link to another document or to part of one. Links in WWW documents are organized so that every information resource on the Internet is uniquely addressed, and the document you are reading at any given moment can link both to other documents on the same server and to documents (and Internet resources in general) on other computers. The user does not notice this and works with the whole information space of the Internet as a single whole.

WWW links point not only to documents specific to the WWW itself but also to other Internet services and information resources. Moreover, most WWW client programs (browsers, navigators) not only understand such links but also act as client programs for the corresponding services: FTP, Gopher, Usenet network news, e-mail, and so on. WWW software is thus universal across the various Internet services, and the WWW information system plays an integrating role.

Let's list some terms used on the WWW.

The first term is HTML: a set of control sequences (commands) contained in an HTML document that define the actions the viewer program (browser) should perform when loading the document. Each page is thus an ordinary text file containing text that everyone sees, plus instructions for the program, invisible to the reader, in the form of links to other pages, images, and servers. This is also how questionnaires and registration forms are filled out and surveys are conducted.
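To make this concrete, here is a small sketch (using Python's standard html.parser module; the page content is invented) that separates the visible text of an HTML document from the "invisible" link instructions embedded in it:

```python
from html.parser import HTMLParser

# A tiny invented HTML document: visible text plus "invisible"
# markup instructions, including one hyperlink.
PAGE = '<html><body><p>Read the <a href="history.html">history</a> page.</p></body></html>'

class LinkExtractor(HTMLParser):
    """Collects the targets of <a href="..."> links and the visible text."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

parser = LinkExtractor()
parser.feed(PAGE)
print("Visible text:", "".join(parser.text))   # Read the history page.
print("Hidden link targets:", parser.links)    # ['history.html']
```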

The second term is URL (Uniform Resource Locator). This is the name given to links to information resources on the Internet.

Another term is HTTP (HyperText Transfer Protocol): the protocol by which the client and the WWW server communicate.
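A minimal sketch of that interaction, using Python's standard http.client module (the host and path here are placeholders): the client sends an HTTP request for a document, and the server answers with a status line and the document itself.

```python
import http.client

# Open a connection to a web server and request a document by path,
# much as a browser does when a link is activated.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/index.html")
response = conn.getresponse()

print(response.status, response.reason)   # e.g. 200 OK
body = response.read()                    # the hypertext document itself
print(body[:200])                         # first bytes of the HTML
conn.close()
```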

The WWW is a direct-access service: it requires a full connection to the Internet, and often fast communication lines as well, in case the documents you read contain a lot of graphics or other non-textual information.

The rapid development of the Internet that began in the early 1990s is largely due to the emergence of the new WWW technology. It is based on hypertext technology extended to all computers connected to the Internet.

With hypertext technology, the text is structured and link words are highlighted within it. When a link is activated (for example, with the mouse), a jump occurs to the text fragment or other document specified in the link. We could, for instance, convert this very text into hypertext by highlighting the words "hypertext technology" in the first paragraph and recording that activating this link jumps to the beginning of the second paragraph.

WWW technology allows jumps not only within the source document but also to any document on the same computer and, most importantly, to any document on any computer currently connected to the Internet. Documents implemented with WWW technology are called Web pages.

Documents are structured and Web pages are created using HTML (HyperText Markup Language). The Word text editor can save documents as Web pages. Web pages are viewed with special viewer programs called browsers. Currently the most common browsers are Internet Explorer, Netscape Navigator, and Opera.

If your computer is connected to the Internet, you can launch a browser and set off on a journey across the World Wide Web. First, load a Web page from one of the Internet servers, then find a link on it and activate it. As a result, a Web page will be loaded from another server, possibly located in another part of the world. You can then activate a link on that page, load the next page, and so on.

The Internet is growing at a very fast pace, and finding the necessary information among tens of millions of documents is becoming ever harder. Special search servers are used for this; they hold accurate, constantly updated information about the contents of tens of millions of Web pages.

Hello, dear readers of this blog. We all live in the era of the global Internet and use the terms site, web, and WWW (World Wide Web - the global network) quite often, without going deeply into what they actually mean.

I see the same thing from other authors, and from ordinary interlocutors too. "Site", "Internet", "network", and the abbreviation "WWW" have become such everyday concepts that it doesn't even occur to us to think about their essence. Yet the first website was born only some twenty years ago. So what is the Internet?

It has a rather long history, but before the advent of the global web (WWW), 99.9% of the planet's inhabitants did not even suspect its existence, because it was the preserve of specialists and enthusiasts. Now even the Eskimos know about the World Wide Web - in their language, the word is said to be identified with the shamans' ability to find answers in the layers of the universe. So let's find out for ourselves what the Internet, the website, the World Wide Web, and everything around them actually are.

What the Internet is and how it differs from the World Wide Web

The most remarkable fact worth stating right away is that the Internet has no owner. In essence, it is an association of individual local networks (thanks to common standards adopted long ago, namely the TCP/IP protocol), kept in working order by network providers.

It is sometimes argued that, because of ever-growing media traffic (video and other heavy content moving around the network in tons), the Internet will soon collapse under its currently limited bandwidth. The main problem here is upgrading the network equipment that makes up the global web to higher speeds, which is constrained above all by the extra costs involved. I think, though, that the problem will be solved as the collapse approaches, and some segments of the network already operate at high speeds.

In general, since the Internet essentially belongs to no one, it should be mentioned that many states, trying to introduce censorship on the global network, want to equate it (namely its currently most popular component, the WWW) with the mass media.

But there is actually no basis for this desire, because the Internet is just a means of communication - in other words, a carrier medium comparable to the telephone or even to plain paper. Try applying sanctions to paper or to its distribution around the planet. In fact, individual states can only apply sanctions to particular sites (islands of information on the network) that become available to users via the World Wide Web.

When do you think the first steps toward the creation of the global web and the Internet were taken? Surprisingly, as far back as 1957. Naturally, it was the military (and, naturally, the US military - where would we be without them) who needed such a network for communication in the event of military operations involving nuclear weapons. Creating the network took quite a long time (about 12 years), which is explained by the fact that computers were then in their infancy.

Nevertheless, their power proved quite sufficient to link the military departments and leading US universities by 1971. The e-mail transfer protocol thus became the first way the Internet was used for users' needs. After a couple more years, people overseas already knew what the Internet was. By the beginning of the 1980s the main data transfer protocols had been standardized (mail among them), and the protocol of the so-called Usenet news conferences appeared: similar to mail, but making it possible to organize something like forums.

A few years later, the idea of a domain name system appeared (DNS, which would play a crucial role in the formation of the WWW), along with the world's first protocol for real-time communication over the Internet: IRC (in colloquial Russian, "irka"). It allowed you to chat online. Science fiction, accessible and interesting to only a very small number of inhabitants of planet Earth - but only for the time being.

At the turn of the 1980s and 1990s, events took place in the history of the network's development so significant that they effectively predetermined its fate. The spread of the global network in the minds of the planet's modern inhabitants is owed almost entirely to one man - Tim Berners-Lee.

Berners-Lee is an Englishman, born into a family of two mathematicians who dedicated their lives to creating one of the world's first computers. It was thanks to him that the world learned what the Internet, the website, e-mail, and so on are. He originally created the World Wide Web (WWW) for the needs of nuclear research at CERN (the same organization that runs the collider). The task was to conveniently place all the scientific information available to the organization on its own network.

To solve this problem, he came up with everything that is now fundamental to the WWW (what most of us consider "the Internet" without quite grasping its essence). As a basis he took the principle of organizing information called hypertext. What is that? The principle was invented long before him and consists of organizing text so that the linearity of the narrative is replaced by the ability to navigate through different links (connections).

The Internet is hypertext, hyperlinks, URLs and hardware

Thanks to this, hypertext can be read in different sequences, yielding different variants of linear text (well, this should be clear and obvious to you now, as experienced Internet users, but back then it was a revolution). The role of hypertext nodes was to be played by what we now simply call hyperlinks.

As a result, all the information that now exists on computers can be represented as one large hypertext comprising countless nodes (hyperlinks). Everything Tim Berners-Lee developed was transferred from CERN's local network to what we now call the Internet, after which the Web began to gain popularity at breakneck pace (its first fifty million users registered within the first five years of its existence).

But to implement the principle of hypertext and hyperlinks, several things had to be created and developed from scratch. First, a new data transfer protocol was needed - the HTTP protocol now known to all of you (you will find it, or its secure HTTPS version, at the beginning of every website address).

Second, the HTML markup language was developed from scratch; its abbreviation is now known to every webmaster in the world. So, we now have tools for transferring data and for creating sites (sets of web pages, or web documents). But how does one refer to those documents?

For this, resource identifiers were devised: the URI and the URL. The first allowed a document to be identified on a particular server (site), and the second added to that identifier the domain name (clearly indicating that the document belongs to a site hosted on a specific server) or the IP address (the unique numeric identifier of every device on a global or local network).
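A short sketch with Python's standard urllib.parse module showing how such an address splits into the protocol, the server's domain name, and the document's identifier on that server (the URL itself is an invented example):

```python
from urllib.parse import urlsplit

# An invented URL: protocol + domain name + path of the document on the server.
url = "http://www.example.com/articles/www-history.html"
parts = urlsplit(url)

print(parts.scheme)   # 'http' - the transfer protocol
print(parts.netloc)   # 'www.example.com' - the server's domain name
print(parts.path)     # '/articles/www-history.html' - the document on that server
```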

Only one step remained for the World Wide Web to finally work and become sought after by users. Do you know which one?

Well, of course: we needed a program that could display on the user's computer the contents of any web page requested on the Internet (by its URL). That program was the browser. If we talk about today, there are not many major players left in this market, and I have managed to write about all of them in a short review:

  1. Internet Explorer (IE, MSIE) - the old guard, still in service
  2. Mozilla Firefox - another veteran, not ready to give up its position without a fight
  3. Google Chrome - an ambitious newcomer that took the lead in record time
  4. Opera - a browser long beloved in RuNet, but gradually losing popularity
  5. Safari - the browser from the Apple stable

Timothy John Berners-Lee wrote the program for the world's first Internet browser himself and called it, without further ado, WorldWideWeb. Although it was far from perfect, it was with this browser that the triumphant march of the WWW across the planet began.

In general, it is striking that all the essential tools of the modern Internet (meaning its most popular component) were created by just one person in such a short time. Bravo.

A little later, the first graphical browser, Mosaic, appeared, from which many modern browsers (Mozilla and Internet Explorer among them) descend. It was Mosaic that became the missing drop needed to awaken interest in the Internet (namely in the World Wide Web) among the ordinary inhabitants of planet Earth. A graphical browser is quite a different thing from a text one: everyone loves looking at pictures, and only a few love to read.

What is noteworthy is that Berners-Lee never received any terribly large sums of money for all this, unlike some who later made fortunes on the web, although he probably did more for the global network than anyone.

Yes, over time, in addition to the HTML language developed by Berners-Lee, CSS appeared. Thanks to it, some HTML operators became unnecessary, replaced by far more flexible cascading style sheet tools, which made it possible to greatly increase the attractiveness and design flexibility of the sites created today. CSS rules are, of course, harder to learn than the markup language itself, but beauty demands sacrifice.

How the Internet and the global network work from the inside

But let's see what the Web (WWW) is and how information is posted on the Internet. Here we come face to face with the phenomenon called the website (web means network; site means place). So what is a "place on the network" (an analogue of a place in the sun in real life), and how does one actually get it?

So, what is the Internet? It consists of channel-forming devices (routers, switches) that are invisible and of little interest to users. The WWW (what we call the Web, or World Wide Web) consists of millions of web servers: programs running on slightly modified computers that must be connected to the global network around the clock and use the HTTP protocol for data exchange.

The web server (the program) receives a request (most often from a user's browser, after a link is opened or a URL is typed into the address bar) to serve a document hosted on that very server. In the simplest case, the document is a physical file (with the .html extension, for example) lying on the server's hard drive.

In the more complex case (when server-side scripting is used), the requested document is generated programmatically on the fly.
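A toy sketch of both cases, built on Python's standard http.server module (the paths and page content are invented): one path returns a fixed document, any other path returns a page generated on the fly.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class ToyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/static.html":
            # Simplest case: the "document" is a fixed piece of hypertext.
            body = b"<html><body><h1>A static page</h1></body></html>"
        else:
            # More complex case: the document is generated programmatically on the fly.
            body = f"<html><body><p>Generated at {datetime.now()}</p></body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Serve on localhost:8000 until interrupted.
HTTPServer(("localhost", 8000), ToyHandler).serve_forever()
```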

To view the requested page, special software on the client (user) side is used - the browser - which renders the downloaded fragment of hypertext in digestible form on whatever device it is installed on (PC, phone, tablet, etc.). In general, everything is simple, if you don't go into details.

Previously, each individual website was physically hosted on a separate computer, mainly because of the weak computing power of the PCs available at the time. In any case, a computer running the web server program, with the site hosted on it, must be connected to the Internet around the clock. Doing this at home is difficult and expensive, so websites are usually stored with hosting companies that specialize in this.

Thanks to the popularity of the WWW, hosting services are now in great demand. As the power of modern PCs has grown, hosters have gained the ability to host many websites on one physical computer (virtual hosting), while hosting a single website on a whole physical machine came to be called a dedicated server service.

With virtual hosting, all the websites located on one computer (the one called a server) may share a single IP address, or each may have its own. This does not change the essentials and can only indirectly affect the websites hosted there (a bad neighborhood on a shared IP can hurt: search engines sometimes tar everyone with the same brush).

Now a little about website domain names and their significance on the World Wide Web. Every resource on the Internet has its own domain name. Situations can arise where one site has several domain names (the result is mirrors or aliases), and conversely, the same domain name may be used for many resources.

Some serious resources also have mirrors in the full sense: the site's files are located on different physical computers, and the resources themselves have different domain names. But these are nuances that only confuse novice users.

As the Internet developed, more and more information circulated through it, and navigating it became ever harder. The task then arose of creating a simple and clear way to organize the information posted on Internet sites. The new WWW service (World Wide Web) coped with this task fully.

The World Wide Web is a system of documents with text and graphics, posted on Internet sites and interconnected by hyperlinks. This service is perhaps the most popular, and for many users it is synonymous with the word Internet itself. Novice users often confuse the two concepts: the Internet and the WWW (or Web). It should be remembered that the WWW is just one of the many services provided to Internet users.

The main idea used in developing the WWW system is the idea of accessing information via hypertext links: including in the text of a document links to other documents, which may be located on the same server or on remote information servers.

The history of the www begins in 1989, when Tim Berners-Lee, an employee of the famous scientific organization CERN, proposed to his management the creation of a database in the form of an information network consisting of documents that included both the information itself and links to other documents. Such documents are nothing other than hypertext.

Another feature distinguishing the www from other services is that through this system you can access almost all other types of Internet services, such as FTP, Gopher, and Telnet.

The WWW is a multimedia system. This means that using the www you can, for example, watch a video about historical monuments or look up information about the World Cup. You can access library holdings, or recent photographs of the globe taken five minutes ago by weather satellites.

The idea of organizing information as hypertext is not new; hypertext existed long before computers. The simplest example of non-computer hypertext is the encyclopedia: some words in its articles are set in italics, meaning you can turn to the corresponding article for more detail. But where non-computer hypertext requires turning pages, following a hypertext link on a monitor screen is instantaneous: you need only click on the link word.

The main merit of the above-mentioned Tim Berners-Lee is that he not only put forward the idea of an information system based on hypertext, but also proposed a number of methods that formed the basis of the future www service.

In 1991, the ideas that originated at CERN began to be actively developed at the National Center for Supercomputing Applications (NCSA). It was NCSA that created the Mosaic program for viewing HTML hypertext documents. Mosaic, developed by Marc Andreessen, became the first browser and opened a new class of software products.

In 1994, the number of www servers began to grow rapidly and the new Internet service not only received worldwide recognition, but also attracted a huge number of new users to the Internet.

Now let's give the basic definitions.

www is a set of web pages located on Internet sites and interconnected by hyperlinks (or simply links).

A web page is a structural unit of the www that includes the actual content (text and graphics) and links to other pages.

A website is a set of web pages physically located on one Internet node.

The www hyperlink system is based on the fact that some selected sections of one document (which can be parts of text or illustrations) act as links to other documents that are logically related to them.

The documents being linked to can be located either locally or on a remote computer. In addition, traditional hypertext links are also possible: links within the same document.

Linked documents may, in turn, contain cross-references to each other and to other information resources. Thus, it is possible to collect documents on similar topics into a single information space. (For example, documents containing medical information.)

www architecture

The architecture of the www, like that of many other Internet services, is built on the client-server principle.

The main task of the server program is to provide access to information stored on the computer where it runs. Once started, the server waits for requests from client programs. Web browsers, used by ordinary www users, typically act as the clients. When such a program needs some information from the server (usually documents stored there), it sends the server a request. Given sufficient access rights, a connection is established between the programs, and the server sends the client a response to its request. The connection established between them is then closed.

To transfer information between programs, the HTTP protocol (Hypertext Transfer Protocol) is used.
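To show what that exchange actually looks like on the wire, here is a sketch that speaks HTTP "by hand" over a TCP connection, using Python's standard socket module (the host is a placeholder): the client sends a textual request, and the server replies with a status line, headers, and the document.

```python
import socket

# Connect to the web server's TCP port 80 and send a textual HTTP request.
with socket.create_connection(("example.com", 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read the textual HTTP response: status line, headers, then the document.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace")[:300])
```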

www server functions

A www server is a program running on a host computer that processes requests from www clients. On receiving a request from a www client, it establishes a connection over the TCP/IP transport protocol and exchanges information using the HTTP protocol. The server also determines access rights to the documents located on it.

To access information the server cannot process directly, a gateway system is used. Using the special CGI (Common Gateway Interface) to exchange information with gateways, the www server can obtain information from sources that would be inaccessible to other types of Internet service. For the end user, the operation of the gateways is "transparent": browsing web resources in a favorite browser, an inexperienced user will not even notice that some information was delivered through the gateway system.
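For illustration, here is a minimal sketch of a CGI gateway program in Python (the "name" query parameter is invented): the web server passes the request details to the program through environment variables such as QUERY_STRING, and whatever the program prints - headers, a blank line, then a body - is relayed to the client as the response.

```python
#!/usr/bin/env python3
# A toy CGI script: the server sets environment variables such as
# QUERY_STRING, runs this program, and relays its output to the client.
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["world"])[0]   # 'name' is an invented parameter

# A CGI response: headers, a blank line, then the document body.
print("Content-Type: text/html")
print()
print(f"<html><body><p>Hello, {name}!</p></body></html>")
```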

www client functions

There are two main types of www clients: web browsers and service applications.

Web browsers are used to work directly with the www and obtain information from it.

Service web applications communicate with the server either to gather statistics or to index the information stored there (this is how information gets into the databases of search engines). There are also service web clients whose work concerns the technical side of storing information on a given server.

World Wide Web (WWW)

The World Wide Web (WWW) is a distributed system that provides access to interconnected documents located on various computers connected to the Internet. The word web and the abbreviation WWW are also used to refer to the World Wide Web. It is the largest worldwide multilingual repository of information in electronic form: tens of millions of interconnected documents on computers located around the globe. It is considered the most popular and interesting service on the Internet, allowing access to information regardless of its location. To find out the news, learn something, or just have fun, people watch TV, listen to the radio, and read newspapers, magazines, and books. The World Wide Web offers its users radio broadcasts, video, press, and books as well, with the difference that all of it can be obtained without leaving home. It does not matter in what form the information you want is presented (a text document, photograph, video, or sound fragment) or where it is located geographically (in Russia, Australia, or Ivory Coast): you will receive it on your computer within minutes.

The World Wide Web is made up of hundreds of millions of web servers. Most resources on the World Wide Web are hypertext. Hypertext documents posted on the World Wide Web are called web pages. Several web pages united by a common theme and design, interconnected by links, and usually located on the same web server are called a website. Special programs - browsers - are used to download and view web pages. The World Wide Web caused a real revolution in information technology and the boom in Internet development. When talking about the Internet, people often mean the World Wide Web, but it is important to understand that they are not the same thing.

History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee originated the HTTP, URI/URL, and HTML technologies. In 1980, he worked for the European Council for Nuclear Research (Conseil Européen pour la Recherche Nucléaire, CERN) as a software consultant. It was there, in Geneva (Switzerland), that he wrote the Enquire program for his own needs; it used random associations to store data and laid the conceptual groundwork for the World Wide Web.

In 1989, while working at CERN on the organization's intranet, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project called for publishing hypertext documents linked by hyperlinks, which would ease the search and consolidation of information for CERN scientists. To implement it, Tim Berners-Lee (together with his assistants) invented URIs, the HTTP protocol, and the HTML language - technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993 Berners-Lee refined the technical specifications of these standards and published them. Nevertheless, the official birth year of the World Wide Web should be considered 1989.

As part of the project, Berners-Lee wrote the world's first web server, httpd, and the world's first hypertext web browser, called WorldWideWeb. This browser was also a WYSIWYG editor (short for What You See Is What You Get). Its development began in October 1990 and was completed in December of the same year. The program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991.

The world's first website was hosted by Berners-Lee on August 6, 1991, on the first web server, accessible at http://info.cern.ch/. The resource defined the concept of the World Wide Web, contained instructions for installing a web server, using a browser, etc. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

Since 1994, the main work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded and still led by Tim Berners-Lee. This consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. W3C Mission: “Unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web.” Two other major goals of the consortium are to ensure full “internationalization of the Web” and to make the Web accessible to people with disabilities.

The W3C develops common principles and standards for the Internet (called “recommendations”, English W3C Recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All recommendations of the World Wide Web consortium are open, that is, they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Structure and principles of the World Wide Web

The World Wide Web is made up of millions of Internet web servers located around the world. A web server is a program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More sophisticated web servers are capable of dynamically generating documents in response to an HTTP request using templates and scripts.

To view information received from the web server, a special program is used on the client computer: the web browser. Its main function is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks; most of the information on the Web is hypertext.

To facilitate the creation, storage, and display of hypertext on the World Wide Web, HTML (HyperText Markup Language) is traditionally used. The work of creating (marking up) hypertext documents is called layout; it is done by a webmaster or by a separate markup specialist, a layout designer. After HTML markup, the resulting document is saved to a file, and such HTML files are the main type of resource on the World Wide Web. Once an HTML file is made available to a web server, it is called a "web page". A collection of web pages makes up a website.

The hypertext of web pages contains hyperlinks. Hyperlinks help World Wide Web users navigate easily between resources (files), regardless of whether a resource is located on the local computer or on a remote server. Uniform Resource Locators (URLs) are used to determine the location of resources on the World Wide Web. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Main_page. Such URL locators combine the URI (Uniform Resource Identifier) identification technology with the DNS (Domain Name System). The domain name (in this case ru.wikipedia.org) as part of the URL designates the computer (more precisely, one of its network interfaces) that runs the code of the desired web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers prefer to show only the domain name of the current site by default.
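The DNS step can be sketched in a few lines with Python's standard socket module: the domain name taken from the URL is translated into the numeric IP address of the machine running the desired web server (the printed address will vary).

```python
import socket
from urllib.parse import urlsplit

url = "http://ru.wikipedia.org/wiki/Main_page"
domain = urlsplit(url).netloc        # 'ru.wikipedia.org'

# DNS turns the human-readable domain name into a machine address.
ip_address = socket.gethostbyname(domain)
print(domain, "->", ip_address)      # the numeric address of that server (varies)
```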

World Wide Web Technologies

To improve the visual presentation of the web, CSS technology has come into wide use; it allows uniform design styles to be specified for many web pages. Another innovation worth noting is the URN (Uniform Resource Name) naming system.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web, designed to make information posted on the network more understandable to computers. It is a concept of a network in which every resource in human language would be provided with a description a computer can understand. The Semantic Web opens access to clearly structured information for any application, regardless of platform and programming language. Programs will be able to find the resources they need, process information, classify data, identify logical connections, draw conclusions, and even make decisions based on those conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a machine-readable description of a resource on the Semantic Web, the RDF (Resource Description Framework) format is used, which is based on XML syntax and uses URIs to identify resources. New in this area are RDFS (RDF Schema) and SPARQL (Protocol And RDF Query Language), a new query language for quick access to RDF data.
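As a rough sketch of the idea (assuming the third-party rdflib library; the resource and its properties are invented, and the description is written in Turtle, one of RDF's serializations), a machine-readable RDF description can be loaded and enumerated by a program:

```python
# pip install rdflib  (third-party library, assumed here)
from rdflib import Graph

# An invented RDF description: the resource and its properties are
# identified by URIs, the values are literals.
DATA = """
@prefix ex: <http://example.org/terms/> .
<http://example.org/page1> ex:topic "World Wide Web" ;
                           ex:author "A. Author" .
"""

g = Graph()
g.parse(data=DATA, format="turtle")

# The program can now enumerate every (resource, property, value)
# statement in the description, i.e. "understand" its structure.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```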

Basic terms used on the World Wide Web

Working with the browser

Today, ten years after the invention of the HTTP protocol, which formed the basis of the World Wide Web, the browser is a highly complex piece of software that combines ease of use and a wealth of capabilities.
The browser not only opens to the user the world of the World Wide Web's hypertext resources. It can also work with other web services such as FTP, Gopher, and WAIS. Along with the browser, programs for e-mail and news services are usually installed on the computer. Essentially, the browser is the main program for accessing Internet services. Through it you can access almost any Internet service, even one the browser does not support directly; for this purpose, specially programmed web servers are used that connect the World Wide Web with the given network service. An example of this kind of web server is the numerous free mail servers with a web interface (see http://www.mail.ru).
Today there are many browser programs created by various companies. The most widely used and recognized are Netscape Navigator and Internet Explorer. These browsers are each other's main competition, although it is worth noting that the programs are similar in many ways. That is understandable: they work to the same standards, the standards of the Internet.
Working with the browser begins with the user typing the URL of the desired resource into the address bar and pressing the Enter key.

The browser sends a request to the specified Internet server. As the elements of the requested web page arrive from the server, the page gradually appears in the browser's working window. The progress of receiving page elements is displayed in the browser's "status" line at the bottom.

Text hyperlinks in the received web page are usually highlighted in a color different from the rest of the document's text, and underlined. Links to resources the user has not yet viewed and links to resources already visited usually have different colors. Images can also serve as hyperlinks. Whether a link is textual or graphic, the mouse pointer changes shape when you hover over it, and the address the link points to appears in the browser's status bar.

When you click on a hyperlink, the browser opens the resource it points to in the working window, unloading the previous one. The browser keeps a list of viewed pages, and the user can, if necessary, go back along the chain of viewed pages. To do this, click the "Back" button in the browser menu: the browser returns to the page you were viewing before opening the current document.
Each click of this button takes the browser one document back in the list of visited documents. If you go back too far, use the "Forward" button to move forward through the list.
The "Stop" button stops loading the document. The "Reload" button reloads the current document from the server.
The browser can show only one document in its window: displaying another unloads the previous one. It is much more convenient to work in several browser windows at once. A new window is opened via the menu: File - New - Window (or the key combination Ctrl+N).

Working with a document

The browser provides a set of standard operations on a document. The loaded web page can be printed (in Internet Explorer, with the "Print" button or via the menu: File - Print...) or saved to disk (menu: File - Save As...). You can search the loaded page for a piece of text: menu: Edit - Find on this page.... And if you are curious how the document looks in the original hypertext the browser processed, choose from the menu: View - As HTML.
When a user finds a page of particular interest while browsing the Internet, he can use the bookmark facility provided by browsers (similar to the bookmarks that mark interesting places in books).
This is done through the menu: Favorites - Add to Favorites. The new bookmark then appears in the list of bookmarks, which can be viewed by clicking the "Favorites" button on the browser panel or through the Favorites menu.
Existing bookmarks can be deleted, edited, or organized into folders through the menu: Favorites - Organize favorites.

Working through a proxy server

Netscape Navigator and Microsoft Internet Explorer also provide a mechanism for embedding additional features from independent manufacturers. Modules that extend the browser's capabilities are called plug-ins.
Browsers run on computers with a wide variety of operating systems, which gives grounds to speak of the World Wide Web's independence from the type of computer and operating system the user works on.

Searching for information on the Internet

Lately the World Wide Web has come to be seen as a new, powerful mass medium whose audience is the most active and educated part of the planet's population. This vision corresponds to the real state of affairs. On days of significant events and upheavals, the load on network news nodes rises sharply; in response to reader demand, resources dedicated to an incident appear instantly. Thus, during the August 1998 crisis, news appeared on the web page of the CNN television and radio company (http://www.cnn.com) much earlier than the Russian media reported it. At the same time, the RosBusinessConsulting server (http://www.rbc.ru), providing fresh information from the financial markets and the latest news, became widely known. Many Americans followed the vote on the impeachment of US President Bill Clinton online rather than on their television screens. The course of the war in Yugoslavia was likewise immediately reflected in a variety of publications representing a variety of points of view on the conflict.
Many people who know the Internet mostly from hearsay believe that any information can be found there. This is true in the sense that you can come across the most unexpected resources in both form and content. Indeed, the modern Web can offer its user a great deal of information of the most varied profiles: news, entertainment, and access to a wide range of reference, encyclopedic, and educational material. It must be emphasized, though, that while the overall information value of the Internet is very high, the information space itself is uneven in quality, since resources are often created in haste. When a paper publication is prepared, its text is usually read by several reviewers and corrected; on the Internet this stage of the publishing process is usually absent. So, in general, information gleaned from the Internet should be treated with somewhat more caution than information found in a printed publication.
However, the abundance of information has a downside: as the volume of information grows, finding what is needed at the moment becomes ever harder. Therefore, the most important problem in working with the Web is quickly finding the necessary information and making sense of it: assessing the informational value of a given resource for your purposes.

A separate type of network service exists to solve the problem of finding information on the Internet: search servers, or search engines.
Search servers are numerous and varied. It is customary to distinguish between search indexes and directories.
Index servers work as follows: they regularly read the content of most web pages on the Internet ("indexing" them) and place it, in whole or in part, into a common database. Users of the search engine can query this database with keywords related to the topic of interest. The search results usually consist of excerpts from the pages recommended to the user and their addresses (URLs), formatted as hyperlinks. Search servers of this type are convenient when you have a clear idea of what you are looking for.
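A toy sketch of the core of such an index server (the documents and words are invented): every word is mapped to the set of pages containing it, and a multi-word query is answered by intersecting those sets ("and" semantics) or uniting them ("or" semantics).

```python
# Toy document collection: URL -> page text (all invented).
PAGES = {
    "http://example.org/a": "history of the world wide web",
    "http://example.org/b": "web server architecture and protocols",
    "http://example.org/c": "history of search engines on the web",
}

# Build the inverted index: word -> set of URLs containing it.
index = {}
for url, text in PAGES.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(query, mode="and"):
    """Return URLs matching all query words ('and') or any of them ('or')."""
    sets = [index.get(word, set()) for word in query.split()]
    if not sets:
        return set()
    if mode == "and":
        return set.intersection(*sets)
    return set.union(*sets)

print(search("web history"))            # pages containing both words
print(search("server engines", "or"))   # pages containing either word
```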
Directory servers are essentially a multi-level classification of links built on the principle "from the general to the specific". Links are sometimes accompanied by a brief description of the resource. As a rule, you can search within the names of headings (categories) and the resource descriptions by keyword. Directories are used when people don't know exactly what they are looking for: moving from the most general categories to more specific ones, you can determine which Internet resource to examine. Search directories are aptly compared to thematic library catalogues or classifiers. The maintenance of search directories is partially automated, but resource classification is still done mainly by hand.
Search directories come in general-purpose and specialized varieties. General-purpose directories include resources of the most varied profiles. Specialized directories combine only resources devoted to a particular topic; they often achieve better coverage of their field and build more apt categories.
Recently, general-purpose search directories and indexing search servers have been integrating intensively, successfully combining their advantages. Search technologies do not stand still either. Traditional indexing servers search a database for documents containing the keywords of the query; with this approach it is very hard to assess the value and quality of the resource offered to the user. An alternative approach is to look for web pages that other resources on the topic link to: the more links to a page there are on the Web, the more likely it is to be found. This kind of meta-search is performed by the Google search server (http://www.google.com/), which appeared quite recently but has already proven itself excellent.

Working with search servers

Working with search servers is not difficult. Type the server's address into the browser's address bar, then type into the query line, in the appropriate language, the keywords or phrase matching the resource or resources you want to find. Click the "Search" button, and the browser's working window loads the first page of search results.

Typically, a search server returns results in small portions, for example 10 per results page, so they often occupy more than one page. Below the list of recommended links there will then be a link offering to move to the next "portion" of search results.

Ideally, the search server will place the resource you need on the first page of results, and you will immediately recognize the right link from its brief description. Often, however, you have to look through several resources before finding a suitable one. The user typically opens them in new browser windows without closing the window with the search results; sometimes found resources are browsed in the same window.
The success of searching for information directly depends on how competently you composed your search query.
Let's look at a simple example. Suppose you want to buy a computer but don't know what configurations exist today and what their characteristics are. You can query a search engine for the required information. If we enter the word "computer" in the search bar, the result is more than 6 million (!) links. Naturally, some of those pages meet our requirements, but finding them among such a number is not feasible.
If you write "what modifications of computers exist today", the search server will offer about two hundred pages, but none will strictly match the query. In other words, they contain individual words from your request, but may not be about computers at all; they might concern, say, existing modifications of washing machines, or the number of computers in a company's warehouse on that day.
In general, you cannot always pose a question to a search server successfully on the first try. If the query is short and uses only common words, a huge number of documents may be found: hundreds of thousands or millions. If, on the contrary, the query is too detailed or uses very rare words, you will see a message saying that no resources matching it were found in the server's database.
Gradually narrowing or widening the focus of the search - enlarging or shrinking the list of keywords, replacing unsuccessful terms with better ones - will help you improve the results.
Besides the number of words, their content matters in a query. The keywords of a search query are usually simply separated by spaces, but it is important to remember that different search engines interpret this differently. Some select only documents containing all the keywords, that is, they treat the space as the logical connective "and". Some treat the space as a logical "or" and search for documents containing at least one of the keywords.
When forming a search query, most servers let you explicitly specify the logical connectives joining the keywords and set some other search parameters. The connectives are usually denoted by the English words "AND", "OR", "NOT". Different search servers use different syntax for extended queries: the so-called query language. With a query language you can specify which words must appear in the document, which must not be present, and which are desirable (that is, may or may not be present).
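As a sketch of such a query language (the syntax here is invented; real servers each define their own), the connectives reduce to set operations over the per-word page sets:

```python
# Page sets per word, as an index server might store them (invented data).
WORD_PAGES = {
    "web":     {"page1", "page2", "page3"},
    "history": {"page1", "page3"},
    "server":  {"page2"},
}

def evaluate(query):
    """Evaluate 'word [AND|OR|NOT word]...' left to right (toy semantics)."""
    tokens = query.split()
    result = WORD_PAGES.get(tokens[0], set())
    for op, word in zip(tokens[1::2], tokens[2::2]):
        pages = WORD_PAGES.get(word, set())
        if op == "AND":
            result = result & pages     # both words must appear
        elif op == "OR":
            result = result | pages     # either word may appear
        elif op == "NOT":
            result = result - pages     # the word must be absent
    return result

print(evaluate("web AND history"))   # {'page1', 'page3'}
print(evaluate("web NOT history"))   # {'page2'}
```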
As a rule, modern search engines use all possible word forms of the query words. That is, whatever form you use in the query, the search takes all its forms into account according to the rules of the Russian language: for example, for the query "go", the results will include documents containing the words "go", "goes", "walked", "went", and so on.
Typically, the title page of a search server carries a "Help" link, where the user can learn the search rules and the query language used on that server.
Another very important point is choosing a search server suited to your task. If you are looking for a specific file, it is better to use a specialized search server that indexes not web pages but file archives on the Internet. An example of such a search server is FTP Search (http://ftpsearch.lycos.com); to search for files in Russian archives it is better to use the Russian analogue, http://www.filesearch.ru.
To search for software, use software archives such as http://www.tucows.com/, http://www.windows95.com, http://www.freeware.ru.
If the web page you are looking for is located in the Russian part of the Internet, it may be worth using Russian search engines: they handle Russian-language search queries better and offer an interface in Russian.
Table 1 provides a list of some of the most well-known general purpose search engines. All of these servers currently offer both full-text and category search, thus combining the advantages of an indexing server and a directory server.

A new version of HTTP is being developed that will support long-lived connections, data transmission in multiple streams, and the distribution and management of data transmission channels. If it is implemented and supported by standard WWW software, this will remove the disadvantages mentioned above. Another way is to use navigators that can locally execute programs in interpreted languages, such as Sun Microsystems' Java project. Another solution is AJAX technology, based on XML and JavaScript, which allows additional data to be received from the server after the WWW page has already been loaded.

Currently, there are two trends in the development of the World Wide Web: the Semantic Web and the social web.

There is also the popular concept of Web 2.0, which summarizes several directions of development of the World Wide Web.

Web 2.0

Recently, the development of the WWW has been driven largely by the active introduction of new principles and technologies known collectively as Web 2.0. The term itself first appeared in 2004 and is meant to capture the qualitative changes in the WWW in the second decade of its existence. Web 2.0 is a logical improvement of the Web. Its main feature is improved and accelerated interaction between websites and users, which has led to a rapid rise in user activity. This showed up in:

  • participation in Internet communities (in particular, in forums);
  • posting comments on websites;
  • maintaining personal journals (blogs);
  • placing links on the WWW.

Web 2.0 introduced active data exchange, in particular:

  • export of news between sites;
  • active aggregation of information from websites;
  • use of APIs to separate a site's data from the site itself.

From the standpoint of website implementation, Web 2.0 raises the requirements for the simplicity and convenience of websites for ordinary users and anticipates a rapid decline in the qualification required of users in the near future. Compliance with standards and consensuses (W3C) comes to the fore. In particular this means:

  • standards for the visual design and functionality of websites;
  • standard requirements (SEO) of search engines;
  • XML and open information exchange standards.

On the other hand, Web 2.0 has lowered:

  • requirements for “brightness” and “creativity” of design and content;
  • the need for comprehensive websites (Internet portals);
  • the importance of offline advertising;
  • business interest in large projects.

Thus, Web 2.0 marked the WWW's transition from single, expensive, complex solutions to highly typified, cheap, easy-to-use sites with the ability to exchange information effectively. The main reasons for this transition were:

  • critical lack of quality information content;
  • the need for active self-expression of the user on the WWW;
  • development of technologies for searching and aggregating information on the WWW.

The transition to the set of Web 2.0 technologies has consequences for the global WWW information space such as:

  • a project's success is determined by the level of active communication among its users and by the quality of its information content;
  • websites can achieve high performance and profitability without large investments thanks to successful positioning on the WWW;
  • individual WWW users can achieve significant success in realizing their business and creative plans on the WWW without owning websites;
  • the concept of the personal website gives way to the concepts of the "blog" and the "author's column";
  • fundamentally new roles appear for active WWW users (forum moderator, authoritative forum participant, blogger).

Web 2.0 Examples
Here are a few examples of sites that illustrate Web 2.0 technologies and that have actually changed the WWW environment.

Besides these, there are other projects that shape the modern global environment and rest on the activity of their users. Sites whose content and popularity are created, first of all, not by their owners' efforts and resources but by a community of users interested in the site's development constitute a new class of services that set the rules of the global WWW environment.

History of the creation and development of the Internet.

The Internet owes its origin to the US Department of Defense and its secret research, conducted from 1969, into methods that would allow computer networks to survive military operations by means of dynamic message rerouting. The first such network was ARPAnet, which united three networks in California with a network in Utah under a set of rules called the Internet Protocol (IP for short).

In 1972, access was opened to universities and research organizations, and the network grew to connect 50 universities and research organizations holding contracts with the US Department of Defense.

In 1973, the network became international, connecting networks in England and Norway. A decade later, IP was expanded into a set of communication protocols supporting both local and global networks; this is how TCP/IP was born. Shortly thereafter, the National Science Foundation (NSF) launched NSFnet with the goal of linking five supercomputing centers, and with the introduction of the TCP/IP protocol the new network soon replaced ARPAnet as the backbone of the Internet.

How, then, did the Internet become so popular and developed? The impetus for that, and for its transformation into an environment for doing business, came with the emergence of the World Wide Web (WWW, 3W): a hypertext system that made surfing the Internet fast and intuitive.

The idea of linking documents through hypertext was first proposed and promoted by Ted Nelson in the 1960s, but the level of computer technology at the time did not allow it to be realized - though who knows how things would have turned out if it had found application then?

The foundations of what we understand today as the WWW were laid in the 1980s by Tim Berners-Lee while working on a hypertext system at the European Laboratory for Particle Physics (CERN).

As a result of this work, in 1990 the scientific community was presented with the first text-mode browser, which allowed hypertext files to be viewed on-line. The browser was made available to the general public in 1991, but its adoption outside academia was slow.

A new historical stage in the development of the Internet began with the release in 1993 of the first Unix version of the graphical browser Mosaic, developed in 1992 by Marc Andreessen, a student intern at the National Center for Supercomputing Applications (NCSA) in the USA.

From 1994, after the release of Mosaic versions for Windows and Macintosh, and soon afterwards of the Netscape Navigator and Microsoft Internet Explorer browsers, the explosive spread of the WWW's popularity, and with it the Internet's, began among the general public, first in the United States and then throughout the world.

In 1995, NSF transferred responsibility for the Internet to the private sector, and since that time the Internet has existed as we know it today.


Internet services.

Services are the kinds of service provided by Internet servers.
In the history of the Internet, there have been different types of services, some of which are no longer in use, others are gradually losing their popularity, while others are experiencing their heyday.
We list those services that have not lost their relevance at the moment:
-World Wide Web – a service for searching and viewing hypertext documents, including graphics, sound and video.
-E-mail – electronic mail – a service for transmitting electronic messages.
-Usenet, News – teleconferences, news groups – a type of online newspaper or bulletin board.
-FTP – file transfer service.
-ICQ – a service for real-time communication using a keyboard.
-Telnet is a service for remote access to computers.
-Gopher – service for accessing information using hierarchical directories.

Among these services we can single out those designed for communication, that is, for the transfer of information (E-mail, ICQ), as well as services whose purpose is to store information and provide users with access to it.

Among the latter, the leading position in terms of the volume of stored information belongs to the WWW service, since it is the most convenient for users and the most technically advanced. In second place is the FTP service: whatever interfaces and conveniences are developed for the user, information is still stored in files, and it is this service that provides access to them. The Gopher and Telnet services can currently be considered “dying”: almost no new information arrives on their servers, and the number of such servers, like their audience, is barely growing.

World Wide Web

The World Wide Web (WWW) is a hypertext, or more precisely hypermedia, information system for searching for Internet resources and accessing them.

Hypertext is an information structure that allows semantic connections to be established between text elements on a computer screen in such a way that one can easily move from one element to another.
In practice, some words in hypertext are highlighted by underlining or coloring them differently. Highlighting a word indicates that there is a connection between this word and some document in which the topic associated with the highlighted word is discussed in more detail.

Hypermedia is what results if the word “text” in the definition of hypertext is replaced by “any type of information”: sound, graphics, video.
Such hypermedia links are possible because, alongside textual information, any other binary information can be linked as well, for example encoded sound or graphics. So, if a program displays a world map and the user selects a continent on that map with the mouse, the program can immediately present graphic, sound and text information about it.

The WWW system is built on a special data transfer protocol called the HyperText Transfer Protocol (HTTP).
All content of the WWW system consists of WWW pages.
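In practice, a browser sends the server a short request over HTTP naming the document it wants, and the server returns the status of the request and the document itself. Here is a minimal sketch of such an exchange using Python's standard http.client module; the host name is a placeholder:

```python
# A minimal sketch of an HTTP exchange using Python's standard
# http.client module; "example.com" is a placeholder host.
import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/index.html")       # the client asks for a document
response = conn.getresponse()            # the server replies: status + headers + body
print(response.status, response.reason)  # e.g. 200 OK
page = response.read()                   # the hypertext document itself
conn.close()
```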

WWW pages are the hypermedia documents of the World Wide Web. They are created using the hypertext markup language HTML (HyperText Markup Language). What is called a WWW page is usually actually a set of hypermedia documents located on one server, interwoven with mutual links and related in meaning (for example, containing information about one educational institution or one museum). Each document of the page can, in turn, contain several screen pages of text and illustrations. Each WWW page has its own “title page” (“homepage”) - a hypermedia document containing links to the page's main components. “Title page” addresses are distributed on the Internet as the addresses of the pages themselves.
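To give a feel for how such links are embedded in a page, here is a minimal sketch in Python that picks hyperlinks out of an HTML fragment using the standard html.parser module; the fragment itself is invented for illustration:

```python
# A minimal sketch: collecting hyperlinks from an HTML fragment with
# Python's standard html.parser; the fragment below is made up.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                       # <a href="..."> marks a hyperlink
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

fragment = 'Visit the <a href="/museum.html">museum page</a> for details.'
collector = LinkCollector()
collector.feed(fragment)
print(collector.links)                       # ['/museum.html']
```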

A set of Web pages interconnected by links and designed to achieve a common goal is called a Web site.

E-mail.

Email appeared about 30 years ago. Today it is the most widespread means of exchanging information on the Internet. The ability to receive and send email can be useful not only for communicating with friends from other cities and countries, but also in a business career. For example, when applying for a job, you can quickly send out your resume using e-mail to various companies. In addition, on many sites where you need to register (on-line games, online stores, etc.) you often need to provide your e-mail. In a word, e-mail is a very useful and convenient thing.

Electronic mail (e-mail, from the English “mail”) is used for transmitting text messages within the Internet, as well as between other e-mail networks (Figure 1).

Using e-mail, you can send messages, receive them in your electronic mailbox, reply to correspondents' letters, send copies of a letter to several recipients at once, forward a received letter to another address, use logical names instead of addresses, create several subsections of a mailbox for different kinds of correspondence, and attach sound and graphic files to letters, as well as binary files (programs).
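As a rough illustration of sending a letter programmatically, here is a minimal sketch using Python's standard smtplib module; the server and addresses are placeholders, and a real mail server will usually also require authentication:

```python
# A minimal sketch of sending a letter with Python's standard smtplib;
# the server and addresses are placeholders, and a real server will
# usually also require authentication (smtp.login).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello"
msg.set_content("A short test message sent by e-mail.")

with smtplib.SMTP("mail.example.com") as smtp:  # connect to the mail server
    smtp.send_message(msg)                      # hand the letter over for delivery
```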

To use E-mail, the computer must be connected to the telephone network via a modem.
A computer connected to a network is considered a potential sender and receiver of packets. When sending a message to another node, an Internet node breaks it into fixed-length packets, usually 1500 bytes in size. Each packet is supplied with the recipient's address and the sender's address. The packets prepared in this way are sent over communication channels to other nodes. When a node receives a packet, it analyzes the recipient's address: if it matches the node's own address, the packet is accepted; otherwise it is passed on. Packets belonging to the same message are accumulated, and once all the packets of a message have arrived, they are joined together and delivered to the recipient. Copies of the packets are kept on the sending nodes until the recipient node confirms successful delivery of the message; this is what ensures reliability. To deliver a letter to the addressee, you only need to know the address and the coordinates of the nearest mailbox; on its way, the letter passes through several post offices (nodes).
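The mechanics of this splitting and reassembly can be shown with a small sketch in Python; the packet format here is simplified and invented purely for illustration:

```python
# A simplified sketch of the packetization described above: a message is
# cut into fixed-size pieces, each labelled with addresses and a sequence
# number, and the receiver reassembles them; the format is invented.
PACKET_SIZE = 1500  # typical fixed packet size, in bytes

def split_into_packets(message: bytes, sender: str, recipient: str) -> list:
    return [
        {"from": sender, "to": recipient,
         "seq": i // PACKET_SIZE,
         "data": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets: list) -> bytes:
    # Packets may arrive out of order, so sort by sequence number first.
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)

packets = split_into_packets(b"A" * 4000, "node-a", "node-b")
print(len(packets))                            # 3 packets of up to 1500 bytes
assert reassemble(packets) == b"A" * 4000      # the message survives the trip
```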

FTP service

The name of the Internet service FTP comes from “file transfer protocol”, but when FTP is considered as an Internet service, what is meant is not just the protocol but a service: access to files in file archives.

In UNIX systems, FTP is a standard program that operates over the TCP protocol and is always supplied with the operating system. Its original purpose is to transfer files between different computers on TCP/IP networks: a server program runs on one of the computers, while on the second the user runs a client program that connects to the server and sends or receives files over FTP (Figure 2).

Figure 2. FTP protocol diagram

The FTP protocol is optimized for file transfer, which is why FTP programs became part of a separate Internet service. An FTP server can be configured so that one can connect to it not only under a specific user name but also under the conventional name anonymous. In that case, the client gains access not to the computer's entire file system but to a certain set of files on the server, which makes up the contents of the anonymous ftp server - a public file archive.
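Connecting to such an anonymous archive is straightforward; here is a minimal sketch using Python's standard ftplib module (the host, directory and file names are placeholders):

```python
# A minimal sketch of anonymous FTP access with Python's standard ftplib;
# the host, directory and file names are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login()                  # no arguments: logs in as "anonymous"
    ftp.cwd("/pub")              # public archives often live under /pub
    ftp.retrlines("LIST")        # print the directory listing
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)  # download one file
```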

Today, public file archives are organized primarily as anonymous ftp servers, and a huge amount of information and software is available on them. Almost everything that can be offered to the public in the form of files is accessible from anonymous ftp servers: freeware and demo versions of programs, multimedia, and, finally, simply texts - laws, books, articles, reports.

Despite its popularity, FTP has many disadvantages. FTP client programs are not always convenient or easy to use. It is not always possible to tell what kind of file is in front of you - whether or not it is the one you are looking for. There is no simple and universal search tool for anonymous ftp servers; special programs and services do exist for this, but they do not always give the desired results.

FTP servers can also organize access to files under a password - for example, for their own clients.

TELNET service

The purpose of the TELNET protocol is to provide a fairly general, bidirectional, eight-bit byte-oriented means of communication. Its main use is to allow terminal devices and terminal processes to interact with each other. It is intended that the protocol can also be used for terminal-to-terminal communication (“linking”) or for process-to-process communication (“distributed computing”).

Figure 3. Telnet terminal window

Although a Telnet session has a client side and a server side, the protocol is in fact completely symmetrical. After a transport connection (usually TCP) is established, both of its ends play the role of “network virtual terminals” (Network Virtual Terminal, NVT), exchanging two types of data:

Application data (that is, data that goes from the user to the text application on the server side and back);

Telnet protocol commands, a special case of which are options, which serve to establish the capabilities and preferences of the parties (Figure 3).

Although a Telnet session running over TCP is full-duplex, the NVT should be regarded as a half-duplex device that operates in line-buffered mode by default.

Application data passes through the protocol unchanged, that is, at the output of the second virtual terminal we see exactly what was entered at the input of the first. From the protocol's point of view, the data is simply a sequence of bytes (octets) that by default belong to the ASCII set, but can be arbitrary when the Binary option is enabled. Although extensions for identifying a character set have been proposed, they are not used in practice.

All application-data octet values except \377 (decimal 255) are transmitted over the transport as-is. The octet \377 is transmitted as the two-octet sequence \377\377, because the octet \377 is used by the Telnet protocol itself to encode commands and options.
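This escaping rule is easy to express in code; here is a minimal sketch in Python:

```python
# A minimal sketch of the \377 escaping rule: in application data the
# IAC octet (255) is doubled so it cannot be taken for a command.
IAC = b"\xff"  # octet 255, "Interpret As Command"

def escape(data: bytes) -> bytes:
    return data.replace(IAC, IAC + IAC)    # \377 becomes \377\377

def unescape(data: bytes) -> bytes:
    return data.replace(IAC + IAC, IAC)    # \377\377 becomes \377 again

sample = b"abc\xffdef"
assert unescape(escape(sample)) == sample  # data survives the round trip
```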

By default, the protocol provides minimal functionality, together with a set of options that extend it. The principle of negotiated options requires negotiation whenever an option is enabled: one party initiates a request, and the other can either accept or reject it. If the request is accepted, the option takes effect immediately. Options are described separately from the protocol itself, and their support in software is optional. The protocol's client (the network terminal) is instructed to reject requests to enable unsupported and unknown options.
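For illustration, here is a minimal Python sketch of how such a refusal looks at the byte level; the command codes are the standard Telnet ones, and everything else a real client would do is omitted:

```python
# A minimal sketch of refusing Telnet options: DO is answered with WONT,
# WILL with DONT. The command codes are the standard ones; the rest of a
# real client is omitted.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def refuse(command: int, option: int) -> bytes:
    if command == DO:                  # "please enable option X on your side"
        return bytes([IAC, WONT, option])
    if command == WILL:                # "I would like to enable option X"
        return bytes([IAC, DONT, option])
    return b""

# Refusing a request to enable option 24 (TERMINAL-TYPE):
print(refuse(DO, 24).hex())            # fffc18 = IAC WONT 24
```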

Historically, Telnet served for remote access to the command-line interface of operating systems. Later it came to be used for other text interfaces as well, including MUD games. In principle, both sides of the protocol can even be programs rather than people.

Sometimes Telnet clients are used to access other protocols running over TCP transport (see “Telnet and other protocols”).

The Telnet protocol is used in the FTP control connection, so logging into a server with the command telnet ftp.example.net ftp in order to debug and experiment is not only possible but correct (unlike using Telnet clients to access HTTP, IRC and most other protocols).

The protocol provides neither encryption nor data authentication, so it is vulnerable to any kind of attack to which its transport, the TCP protocol, is vulnerable. For remote access to systems, the SSH network protocol (especially version 2) is now used instead; it was created with an emphasis specifically on security issues. So keep in mind that a Telnet session is very insecure unless it takes place on a fully controlled network or with network-level protection (various VPN implementations). Because of this unreliability, Telnet was long ago abandoned as a means of managing operating systems.
