What is client-server technology? Fundamentals of client-server technology

Client-server technology is a method of connecting a client (the user's computer) and a server (a powerful computer or other equipment that provides data) in which they interact with each other directly.

What is a “client-server”?

The general principles of data transfer between the components of a computer network are established by the network architecture. Client-server technology is a system in which information is stored and processed on the server side, while forming requests and receiving the data is the job of the client side. Unlike file-server technology, where data is extracted from files, in client-server networks the data is kept on the machine where the network database server application is installed.

Client-server technology requires special software: a client and a server. These programs interact using network data transfer protocols. As a rule, the client and the server are installed on different computers, but sometimes they can run on the same machine.

The server software is configured to receive and process requests from users, returning the result in the form of data or services (e-mail, chat, or web browsing). The computer on which this program is installed must have high performance and appropriate technical specifications.

How the client-server architecture works

The software on the client machine sends a request to the server, where it is processed, and the finished result is sent back to the client. This technology works on the same principle as a database query: request, processing, return of the result.
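As a rough illustration, this request, processing, result cycle can be sketched with Python's standard socket module. The port number and the "uppercase" processing step are arbitrary examples, not part of any real system:

```python
# Minimal client-server sketch: the server waits for a request, "processes"
# it (here: uppercases the text), and returns the result to the client.
import socket
import threading
import time

def run_server(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()         # wait for a client request
    request = conn.recv(1024)      # receive the request
    conn.sendall(request.upper())  # process it and send the result back
    conn.close()
    srv.close()

def client_request(port, data):
    for _ in range(50):            # retry until the server is listening
        try:
            cli = socket.create_connection(("127.0.0.1", port))
            break
        except OSError:
            time.sleep(0.05)
    cli.sendall(data)              # form and send the request
    reply = cli.recv(1024)         # receive the result
    cli.close()
    return reply

server = threading.Thread(target=run_server, args=(50007,))
server.start()
print(client_request(50007, b"hello"))  # b'HELLO'
server.join()
```

The client and server here run on one machine in separate threads, but the same code would work across two computers by replacing 127.0.0.1 with the server's address.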

The server performs the following functions:

  • data storage;
  • processing a request from a client using procedures and triggers;
  • sending the result to the client.

Functions implemented on the client side:

  • formation and sending of a request to the server;
  • receiving results and sending additional commands (requests to add, delete or update information).

Advantages and disadvantages

The client-server architecture has the following advantages:

  • high data processing speed;
  • the ability to quickly work with a large number of clients;
  • separation of the program code of server and client applications.

Multiple users can work at the same time with data through transactions (a sequence of operations presented as a single block) and locks (isolation of data from editing by other users).
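The all-or-nothing behavior of a transaction can be tried with an in-memory SQLite database, used here purely as a stand-in for a real DBMS; the table and account names are invented for the example:

```python
# A transaction is a sequence of operations applied as a single block:
# if any step fails, every step is rolled back.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
con.commit()

try:
    with con:  # opens a transaction; commits on success, rolls back on error
        con.execute("UPDATE accounts SET balance = balance - 30 "
                    "WHERE name = 'alice'")
        con.execute("UPDATE accounts SET balance = balance + 30 "
                    "WHERE name = 'bob'")
        raise RuntimeError("simulated failure")  # force a rollback
except RuntimeError:
    pass

# Neither update survived: the block was applied as a whole or not at all.
print(dict(con.execute("SELECT name, balance FROM accounts")))
# {'alice': 100, 'bob': 50}
```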

Disadvantages of client-server technology:

  • high requirements for hardware and software characteristics of server hardware due to the fact that data processing occurs on the server side;
  • the need for a system administrator who controls the uninterrupted operation of server equipment.

Layered client-server architecture

Multi-tier client-server technology allocates separate server equipment for data processing: storage, processing, and output of data are performed on different servers. This division of responsibilities increases the efficiency of the network.

An example of a layered architecture is three-tier technology. In such a network, in addition to the client and the application server, there is a separate database server.

The three tiers are as follows:

  1. Lower. This tier contains the client software with its user interface and the means of interacting with the next tier, where data processing takes place.
  2. Middle. Requests from client programs are handled by the application server, which processes and prepares information for transfer between the top-tier server and the client. It offloads the data warehouse from unnecessary load and distributes requests from different users.
  3. Upper. An independent database server where all information is stored. It receives a prepared request from the application server and returns the necessary information without interacting directly with client applications.
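Schematically, the three tiers can be imitated in a few lines of Python. Here an in-memory SQLite database plays the role of the top-tier database server, and the function and table names are invented for the sketch:

```python
import sqlite3

# Upper tier: the database server (simulated), holding all the data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Middle tier: the application server validates and prepares the request,
# shielding the data store from malformed queries and direct client access.
def app_server_get_user(user_id):
    if not isinstance(user_id, int):
        raise ValueError("invalid request")
    row = db.execute("SELECT name FROM users WHERE id = ?",
                     (user_id,)).fetchone()
    return {"id": user_id, "name": row[0]} if row else None

# Lower tier: the client only forms the request and displays the result.
print(app_server_get_user(2))  # {'id': 2, 'name': 'bob'}
```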

Dedicated server network

A dedicated server architecture is a local area network in which all interacting devices are controlled by one or more servers. Clients (workstations) send their requests for resources through the server software. A dedicated server has no client side and functions only as a server, processing requests from clients and protecting data. When there are multiple servers, functions can be distributed among them, with specific duties assigned to each.

The client-server architecture is used in a large number of network technologies used to access various network services. Let's take a quick look at some types of such services (and servers).

Web servers

Initially, they provided access to hypertext documents via HTTP (HyperText Transfer Protocol). Now they support advanced features, in particular working with binary files (images, multimedia, etc.).
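A sketch of such an exchange, using only the standard library: http.server plays the web server and http.client plays the client, with the page content chosen arbitrarily:

```python
# A web server answers an HTTP GET request with a hypertext document.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>hello</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=srv.handle_request).start()  # serve one request

conn = http.client.HTTPConnection("127.0.0.1", srv.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
status, body = resp.status, resp.read()
print(status, body)  # 200 b'<h1>hello</h1>'
srv.server_close()
```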

Application servers

Application servers are designed for the centralized solution of applied problems in a certain subject area. Users are entitled to run server programs for execution. Using application servers reduces client configuration requirements and simplifies overall network management.

Database servers

Database servers are used to process user requests in the SQL language. In this case, the DBMS is located on the server, to which client applications connect.

File servers

A file server stores information in the form of files and provides users with access to it. As a rule, a file server also provides a certain level of protection against unauthorized access.

Proxy server

First, it acts as an intermediary, helping users obtain information from the Internet while securing the network.

Secondly, it stores frequently requested information in a cache on a local disk, quickly delivering it to users without re-accessing the Internet.

Firewalls

Firewalls analyze and filter passing network traffic in order to ensure network security.

Mail servers

They provide services for sending and receiving electronic mail messages.

Remote Access Servers (RAS)

These systems provide communication with the network via dial-up lines. A remote employee can use corporate LAN resources by connecting to it using a regular modem.

These are just a few types of the entire variety of client-server technologies used in both local and global networks.

To access particular network services, clients are used whose capabilities are characterized by the concept of "thickness": the hardware configuration and software that the client possesses. Consider the two boundary cases:

"Thin" client

This term refers to a client whose computing resources are only sufficient to run the required network application through a web interface. The user interface of such an application is formed by static HTML (no JavaScript execution); all application logic runs on the server.
For a thin client to work, it is enough to provide the ability to launch a web browser, in whose window all actions are carried out. For this reason, the web browser is often called the "universal client".

"Fat" client

This is a workstation or personal computer running its own disk operating system and equipped with the necessary set of software. "Fat" clients turn to network servers mainly for additional services (for example, access to a web server or a corporate database).
A "fat" client also means a client network application running under the local OS. Such an application combines a data presentation component (the OS's graphical user interface) with an application component (the computing power of the client computer).

Recently, another term has come into increasing use: the "rich" client, a kind of compromise between the "fat" and "thin" clients. Like the "thin" client, the "rich" client presents a graphical interface, but one described by means of XML and including some functionality of fat clients (for example, a drag-and-drop interface, tabs, multiple windows, drop-down menus, etc.).

The application logic of the "rich" client is also implemented on the server. Data is sent in a standard exchange format based on the same XML (the SOAP and XML-RPC protocols) and interpreted by the client.
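Of those protocols, XML-RPC is available directly in Python's standard library, which makes the exchange easy to demonstrate; the add function exposed here is an arbitrary example:

```python
# An XML-RPC call: the arguments and the result travel between client and
# server as XML over HTTP.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
srv.register_function(lambda a, b: a + b, "add")
threading.Thread(target=srv.handle_request).start()  # serve one call

proxy = xmlrpc.client.ServerProxy(
    f"http://127.0.0.1:{srv.server_address[1]}/")
result = proxy.add(2, 3)  # marshalled to XML, executed on the server
print(result)  # 5
srv.server_close()
```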

Some of the major XML-based rich client protocols are listed below:

  • XAML (eXtensible Application Markup Language) - developed by Microsoft, used in applications on the .NET platform;
  • XUL (XML User Interface Language) is a standard developed by the Mozilla project and used, for example, in the Mozilla Thunderbird mail client and the Mozilla Firefox browser;
  • Flex is an XML-based multimedia technology developed by Macromedia/Adobe.

Conclusion

So, the main idea of the client-server architecture is to divide the network application into several components, each of which implements a specific set of services. The components of such an application can run on different computers, performing server and/or client functions. This improves the reliability, security, and performance of network applications and the network as a whole.

Test questions

1. What is the main idea of client-server interaction?

2. What is the difference between the concepts of "client-server architecture" and "client-server technology"?

3. List the components of client-server interaction.

4. What tasks does the presentation component perform in the client-server architecture?

5. What is the purpose of database access tools as a separate component in the client-server architecture?

6. Why is business logic singled out as a separate component in the client-server architecture?

7. List the models of client-server interaction.

8. Describe the file server model.

9. Describe the database server model.

10. Describe the "application server" model.

11. Describe the "terminal server" model.

12. List the main types of servers.

The nature of the interaction of computers in a local network is usually tied to their functional purpose. As with a direct connection, LANs use the concepts of client and server. Client-server technology is a way of interaction between computers on a local network in which one computer (the server) provides its resources to another (the client). Accordingly, a distinction is made between peer-to-peer networks and server-based networks.

With a peer-to-peer architecture, there are no dedicated servers in the network; each workstation can perform the functions of a client and a server. In this case, the workstation allocates a part of its resources for common use to all network workstations. As a rule, peer-to-peer networks are created on the basis of computers of the same power. Peer-to-peer networks are quite simple to set up and operate. In the case when the network consists of a small number of computers and its main function is the exchange of information between workstations, a peer-to-peer architecture is the most appropriate solution. Such a network can be quickly and easily implemented using such a popular operating system as Windows 95.

The presence of distributed data, and the fact that each workstation can change which of its resources are shared, complicates the protection of information from unauthorized access, which is one of the disadvantages of peer-to-peer networks. Aware of this, developers are paying special attention to information security in peer-to-peer networks.

Another disadvantage of peer-to-peer networks is their lower performance. This is due to the fact that network resources are concentrated on workstations, which have to simultaneously perform the functions of clients and servers.

In server networks, there is a clear division of functions between computers: some of them are constantly clients, while others are servers. Given the variety of services provided by computer networks, there are several types of servers, namely: network server, file server, print server, mail server, etc.

A network server is a specialized computer focused on performing the bulk of computational work and computer network management functions. This server contains the core of the network operating system, which controls the operation of the entire local network. The network server has a fairly high speed and a large amount of memory. With such a network organization, the functions of workstations are reduced to the input-output of information and its exchange with a network server.

The term file server refers to a computer whose primary function is to store, manage, and transfer data files. It does not process or modify the files it saves or transmits. The server may not "know" whether the file is a text document, an image, or a spreadsheet. In general, the file server may even lack a keyboard and monitor. All changes to data files are made from client workstations. To do this, clients read the data files from the file server, make the necessary changes to the data, and return them back to the file server. Such an organization is most effective when a large number of users work with a common database. Within large networks, several file servers can be used simultaneously.

A print server is a printing device connected to the transmission medium through a network adapter. Such a network printing device is self-contained and operates independently of other network devices. The print server handles print requests from all servers and workstations. Special high-performance printers are used as print servers.

When data exchange with global networks is intensive, dedicated mail servers are allocated within local networks to process e-mail messages. Web servers can be used for effective interaction with the Internet.

Network technologies

Ethernet is the most popular technology for building local area networks. Based on the IEEE 802.3 standard, Ethernet transmits data at 10 Mbps. In an Ethernet network, devices check for the presence of a signal on the network channel ("listen" to it). If no other device is using the channel, then the Ethernet device transmits data. Each workstation on this LAN segment analyzes the data and determines if it is intended for it. Such a scheme is most effective with a small number of users or a small number of messages transmitted in a segment. With an increase in the number of users, the network will not work as efficiently. In this case, the optimal solution is to increase the number of segments to serve groups with fewer users. In the meantime, there has been a recent trend to provide dedicated 10 Mbps lines to every desktop system. This trend is driven by the availability of inexpensive Ethernet switches. Packets transmitted over an Ethernet network can be of variable length.

Fast Ethernet uses the same basic technology as Ethernet - Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Both technologies are based on the IEEE 802.3 standard. As a result, both types of networks can use (in most cases) the same type of cable, the same network devices and applications. Fast Ethernet networks allow you to transfer data at a speed of 100 Mbps, that is, ten times faster than Ethernet. As applications become more complex and the number of users accessing the network increases, this increased throughput can help eliminate bottlenecks that cause network response times to increase.

Benefits of 10/100 Mbps Networking Solutions

Recently, a new solution has emerged that provides broad compatibility between 10 Mbps Ethernet and 100 Mbps Fast Ethernet. "Dual-speed" 10/100 Mbps Ethernet/Fast Ethernet technology allows devices such as NICs, hubs, and switches to operate at either speed, depending on the device they are connected to. If you connect a PC with a 10/100 Mbps Ethernet/Fast Ethernet NIC to a 10 Mbps hub port, it will operate at 10 Mbps. If you connect it to a 10/100 Mbps port on a hub (such as the 3Com SuperStack II Dual Speed Hub 500), it will automatically recognize the new speed and operate at 100 Mbps. This makes it possible to move gradually, at the right pace, to higher performance. It also allows network client and server hardware to be prepared for the next generation of bandwidth-intensive and network-service-intensive applications.

Gigabit Ethernet

Gigabit Ethernet networks are compatible with Ethernet and Fast Ethernet network infrastructure, but operate at 1000 Mbps - 10 times faster than Fast Ethernet. Gigabit Ethernet is a powerful solution that eliminates bottlenecks in the main network (where network segments connect and where servers are located). Bottlenecks arise from the emergence of bandwidth-demanding applications, the increasing growth of unpredictable intranet traffic flows, and multimedia applications. Gigabit Ethernet provides a way to seamlessly migrate Ethernet and Fast Ethernet workgroups to new technology. Such a transition has a minimal impact on their operations and allows them to achieve higher productivity.

ATM (Asynchronous Transfer Mode) or asynchronous transfer mode is a switching technology that uses fixed-length cells to transfer data. Operating at high speeds, ATM networks support the integrated transmission of voice, video, and data on a single channel, acting as both local and wide area networks. Since their operation is different from the Internet and requires a special infrastructure, such networks are mainly used as backbone networks that connect and unite network segments.

Technologies with ring architecture

Token Ring and FDDI technologies are used to create token-passing ring networks. They form a continuous ring in which a special sequence of bits, called a token, circulates in one direction. The token is passed around the ring, visiting each workstation on the network. A workstation that has information to send can attach a data frame to the token; otherwise (if it has no data) it simply passes the token on to the next station. Token Ring networks operate at 4 or 16 Mbps and are used primarily in IBM environments.

FDDI (Fiber Distributed Data Interface) is also a ring technology, but it is designed for fiber optic cable and is used in backbone networks. This protocol is similar to Token Ring and provides for the transfer of a token around the ring from one workstation to another. Unlike Token Ring, FDDI networks usually consist of two rings whose tokens circulate in opposite directions. This is done to ensure the uninterrupted operation of the network (usually on a fiber optic cable) - to protect it from failures in one of the rings. FDDI networks support 100 Mbps and long distance data transmission. The maximum circumference of the FDDI network is 100 km, and the distance between workstations is 2 km.

Both ring technologies are being used in the latest network installations as an alternative to ATM and various flavors of Ethernet.

We will create further distributed computing systems using client-server technology. This technology provides a unified approach to the exchange of information between devices, whether they are computers located on different continents and connected via the Internet or Arduino boards lying on the same table and connected by a twisted pair.

In future lessons, I plan to talk about creating information networks using:

  • Ethernet LAN controllers;
  • WiFi modems;
  • GSM modems;
  • Bluetooth modems.

All these devices communicate using a client-server model. The same principle applies to the transmission of information on the Internet.

I do not pretend to complete coverage of this voluminous topic. I want to give the minimum information necessary to understand the following lessons.

Client-server technology.

The client and the server are programs located on different computers, in different controllers and other similar devices. They interact with each other through a computer network using network protocols.

Server programs are service providers. They constantly wait for requests from client programs and provide them with their services (transfer data, solve computational problems, control something, etc.). The server must be constantly on and “listen” to the network. Each server program, as a rule, can fulfill requests from several client programs.

The client program is the initiator of the request, which can be made at any time. Unlike the server, the client does not need to be always on. It is enough to connect at the time of the request.

So, in general terms, the client-server system looks like this:

  • There are computers, Arduino controllers, tablets, cell phones and other smart devices.
  • All of them are included in a common computer network. Wired or wireless, it doesn't matter. They can even be connected to different networks interconnected via a global network, such as the Internet.
  • Some devices have server programs installed. These devices are called servers, must be constantly turned on, and their task is to process requests from clients.
  • Client programs work on other devices. Such devices are called clients, they initiate requests to servers. They are included only at times when it is necessary to contact the servers.

For example, if you want to turn on an iron from a cell phone via WiFi, then the iron will be the server, and the phone will be the client. The iron must be constantly plugged into the outlet, and you will run the control program on the phone as needed. If you connect a computer to the WiFi network of the iron, you can also control the iron using the computer. It will be another client. The WiFi microwave added to the system will be the server. And so the system can be expanded indefinitely.

Sending data in packets.

Client-server technology is generally intended for use in large information networks. Data can travel from one subscriber to another along a complex path through various physical channels and networks. The delivery path may vary depending on the state of individual network elements: some components may be down at a given moment, in which case the data takes another route. Delivery times may vary, and data may even be lost and never reach the addressee.

Therefore, simply streaming data in a loop, as we did when transferring data to a computer in some earlier lessons, is impossible in complex networks. Information is transmitted in limited portions called packets. On the transmitting side the information is divided into packets, and on the receiving side it is "glued" back together from packets into whole data. A packet is usually no more than a few kilobytes in size.

A packet is analogous to an ordinary mail letter: in addition to the information itself, it must contain the recipient's address and the sender's address.

A packet consists of a header and an information part. The header contains the addresses of the recipient and the sender, as well as service information needed to "glue" the packets back together on the receiving side. The network equipment uses the header to determine where to send the packet.
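The header-plus-information layout can be sketched with the struct module. The field set used here (source and destination addresses, a sequence number, a payload length) is simplified for illustration and does not correspond to any real protocol:

```python
# Schematic packet: a fixed-size header followed by the information part.
import struct

HEADER = "!4s4sHH"  # src address, dst address, sequence number, length

def make_packet(src, dst, seq, payload):
    header = struct.pack(HEADER, src, dst, seq, len(payload))
    return header + payload

def parse_packet(packet):
    size = struct.calcsize(HEADER)
    src, dst, seq, length = struct.unpack(HEADER, packet[:size])
    return src, dst, seq, packet[size:size + length]

pkt = make_packet(bytes([192, 168, 0, 2]), bytes([192, 168, 0, 1]),
                  7, b"hello")
src, dst, seq, payload = parse_packet(pkt)
print(seq, payload)  # 7 b'hello'
```

The sequence number is the kind of service information the receiving side uses to reassemble packets in the right order.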

Packet addressing.

There is a lot of detailed information on this topic on the Internet; here I want to stay as close to practice as possible.

Already in the next lesson, to transfer data using client-server technology, we will have to set the information used for addressing packets, i.e. where to deliver them. In general, we will have to set the following parameters:

  • device IP address;
  • subnet mask;
  • domain name;
  • IP address of the network gateway;
  • MAC address;
  • port.

Let's figure out what it is.

IP addresses.

Client-server technology assumes that all subscribers of all the world's networks are connected to a single global network, and in many cases this is actually so: most computers and mobile devices, for example, are connected to the Internet. Therefore, an addressing format designed for such a huge number of subscribers is used. Even when client-server technology is used in a local network, the accepted address format is preserved, with obvious redundancy.

Each connection point of the device to the network is assigned a unique number - an IP address (Internet Protocol Address). The IP address is assigned not to the device (computer), but to the connection interface. In principle, devices can have several connection points, which means several different IP addresses.

An IP address is a 32-bit number or 4 bytes. For clarity, it is customary to write it as 4 decimal numbers from 0 to 255, separated by dots. For example, my server IP address is 31.31.196.216.
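That an IP address is simply a 32-bit number is easy to verify by converting between the two notations by hand:

```python
# Dotted decimal notation is for humans; the equipment sees one 32-bit number.
def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ip(n):
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = ip_to_int("31.31.196.216")
print(n, int_to_ip(n))  # 522175704 31.31.196.216
```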

To make it easier for network equipment to build a packet delivery route, logical addressing was introduced into the IP address format. The IP address is divided into two logical fields: the network number and the host number. The sizes of these fields depend on the value of the first (highest) octet of the IP address, dividing addresses into 5 groups, or classes. This is the so-called classful routing method.

Class  High bits  Format (N = network, H = host)  Starting address  End address      Number of networks  Number of nodes
A      0          N.H.H.H                         0.0.0.0           127.255.255.255  128                 16777216
B      10         N.N.H.H                         128.0.0.0         191.255.255.255  16384               65534
C      110        N.N.N.H                         192.0.0.0         223.255.255.255  2097152             254
D      1110       group address                   224.0.0.0         239.255.255.255  -                   2^28
E      1111       reserved                        240.0.0.0         255.255.255.255  -                   2^27

Class A is intended for use in large networks. Class B is used in medium sized networks. Class C is intended for networks with a small number of nodes. Class D is used to refer to groups of hosts, while class E addresses are reserved.

There are restrictions on the choice of IP addresses. The main ones for our purposes are the following:

  • The address 127.0.0.1 is called the loopback address and is used to test programs within a single device. Data sent to this address is not transmitted over the network but is returned to the upper-level program as if it had been received.
  • "Gray" addresses are IP addresses allowed only for devices operating in local networks without direct access to the Internet. These addresses are never routed on the Internet and are used only in local networks:
    • Class A: 10.0.0.0 - 10.255.255.255
    • Class B: 172.16.0.0 - 172.31.255.255
    • Class C: 192.168.0.0 - 192.168.255.255
  • If the network number field contains all 0's, then it means that the host belongs to the same network as the host that sent the packet.
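The "gray" ranges listed above are known to Python's standard ipaddress module, so checking whether an address is private takes one call:

```python
# is_private covers the 10/8, 172.16/12 and 192.168/16 ranges (among others).
import ipaddress

for addr in ("10.1.2.3", "172.20.0.5", "192.168.1.10", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True
# 172.20.0.5 True
# 192.168.1.10 True
# 8.8.8.8 False
```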

Subnet masks.

In classful routing, the number of network and host address bits in an IP address is fixed by the class. There are only 5 classes, of which 3 are actually used. Therefore, the classful method in most cases does not allow the size of a network to be chosen optimally, which leads to wasteful use of the IP address space.

In 1993, classless routing was introduced, and it is currently the main method. It allows the required number of network nodes to be chosen flexibly and therefore rationally. This addressing method uses variable-length subnet masks.

A network node is assigned not only an IP address, but also a subnet mask. It has the same size as the IP address, 32 bits. The subnet mask determines which part of the IP address is for the network and which is for the host.

Each bit of the subnet mask corresponds to an IP address bit in the same bit. A 1 in the mask bit indicates that the corresponding bit in the IP address belongs to the network address, and a mask bit with a value of 0 indicates that the bit in the IP address belongs to the host.

When transmitting a packet, the node uses the mask to extract the network part of its own IP address and compares it with the destination address. If they match, the transmitting and receiving nodes are on the same network and the packet is delivered locally; otherwise the packet is sent through the network interface to another network. Note that the subnet mask is not part of the packet; it only affects the node's routing logic.
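This check is a bitwise AND: mask out the host bits of both addresses and compare what is left. A sketch of the node's decision, with the addresses chosen arbitrarily:

```python
import ipaddress

def same_network(ip_a, ip_b, mask):
    a = int(ipaddress.ip_address(ip_a))
    b = int(ipaddress.ip_address(ip_b))
    m = int(ipaddress.ip_address(mask))
    return (a & m) == (b & m)  # equal network parts -> deliver locally

print(same_network("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
print(same_network("192.168.1.10", "192.168.2.20", "255.255.255.0"))   # False
```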

In effect, the mask allows one large network to be divided into several subnets. The size of any subnet (the number of IP addresses) must be a power of 2: 4, 8, 16, and so on. This follows from the fact that the bits of the network and host fields must be contiguous: you cannot, for example, allocate 5 bits to the network address, then 8 bits to the host address, and then more network address bits.

An example of a network notation with four nodes looks like this:

Network 31.34.196.32, mask 255.255.255.252

The subnet mask always consists of consecutive ones (signs of the network address) and consecutive zeros (signs of the host address). Based on this principle, there is another way to record the same address information.

Network 31.34.196.32/30

/30 is the number of ones in the subnet mask. In this example, two zeros remain, which corresponds to 2 bits of the host address, or four hosts.

Network size (number of nodes)  Long mask        Short mask
4                               255.255.255.252  /30
8                               255.255.255.248  /29
16                              255.255.255.240  /28
32                              255.255.255.224  /27
64                              255.255.255.192  /26
128                             255.255.255.128  /25
256                             255.255.255.0    /24
  • The last number of the first subnet address must be divisible without remainder by the size of the subnet.
  • The first and last subnet addresses are service addresses and cannot be used.
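The /30 example above can be verified with the ipaddress module: of its four addresses, the first (the network address) and the last (broadcast) are the service addresses, leaving two usable hosts.

```python
import ipaddress

net = ipaddress.ip_network("31.34.196.32/30")
print(net.netmask)                     # 255.255.255.252
print([str(h) for h in net.hosts()])   # ['31.34.196.33', '31.34.196.34']
```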

Domain name.

It is inconvenient for a person to work with IP addresses: they are strings of numbers, while a person is used to reading letters, and coherent words are even better. To make networks more convenient for people, a different system of identifying network devices is used.

Any IP address can be assigned a literal identifier that is more human-readable. The identifier is called the domain name or domain.

A domain name is a sequence of two or more words separated by dots. The last word is the first level domain, the penultimate word is the second level domain, and so on. I think everyone knows about it.

Communication between IP addresses and domain names occurs through a distributed database using DNS servers. Every owner of a second-level domain must have a DNS server. DNS servers are combined into a complex hierarchical structure and are able to exchange data on the correspondence between IP addresses and domain names.

But all this is not so important. The main thing for us is that any client or server can send a DNS request to a DNS server, i.e. a request to match an IP address to a domain name or, vice versa, a domain name to an IP address. If the DNS server has information about the correspondence between the IP address and the domain, it responds. If not, it looks for the information on other DNS servers and then informs the client.
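From a program's point of view, such a request is a single call to the resolver. The name "localhost" is resolved locally, so this example works even without network access:

```python
# Ask the resolver (and, behind it, DNS) for the IPv4 address of a name.
import socket

print(socket.gethostbyname("localhost"))  # 127.0.0.1
```

Replacing "localhost" with any real domain name would trigger an actual DNS lookup over the network.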

Network gateways.

A network gateway is a hardware router or software for interfacing networks with different protocols. In the general case, its task is to convert the protocols of one type of network into the protocols of another network. As a rule, networks have different physical transmission media.

An example is a local network of computers connected to the Internet. Within their own local area network (subnet), computers communicate without the need for any intermediate device. But as soon as the computer needs to communicate with another network, such as accessing the Internet, it uses a router that acts as a network gateway.

The routers that everyone connected to wired Internet has are one example of a network gateway. A network gateway is the point through which access to the Internet is provided.

In general, using a network gateway looks like this:

  • Let's say we have a system of several Arduino boards connected via an Ethernet local network to a router, which in turn is connected to the Internet.
  • In the local network, we use “gray” IP addresses (described above), which do not allow access to the Internet. The router has two interfaces: our local network with a “gray” IP address and an interface for connecting to the Internet with a “white” address.
  • In the node configuration, we specify the gateway address, i.e. the “white” IP address of the router interface connected to the Internet.
  • Now, if the router receives a packet from a device with a “gray” address with a request to receive information from the Internet, it replaces the “gray” address in the packet header with its “white” address and sends it to the global network. Having received a response from the Internet, it replaces the “white” address with the “gray” address that was remembered during the request and transfers the packet to the local device.
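The address substitution described in the steps above can be illustrated with a toy model. This is only a sketch of the idea: a real NAT gateway also rewrites port numbers and checksums, and the addresses used here (192.168.0.10 as a “gray” address, 203.0.113.5 as the router's “white” one) are purely illustrative.

```python
# Toy model of the address substitution performed by a NAT gateway.
PUBLIC_IP = "203.0.113.5"   # the router's "white" address (illustrative)

nat_table = {}  # source port -> remembered "gray" (private) address

def outgoing(packet, src_port):
    """Replace the private source address with the router's public one."""
    nat_table[src_port] = packet["src"]
    return {**packet, "src": PUBLIC_IP}

def incoming(packet, dst_port):
    """Restore the private address remembered during the request."""
    return {**packet, "dst": nat_table[dst_port]}

request = {"src": "192.168.0.10", "dst": "8.8.8.8", "data": "GET"}
sent = outgoing(request, 50000)
reply = incoming({"src": "8.8.8.8", "dst": PUBLIC_IP, "data": "OK"}, 50000)
print(sent["src"], reply["dst"])  # 203.0.113.5 192.168.0.10
```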

MAC address.

A MAC address is a unique identifier for devices on a local network. As a rule, it is written by the equipment manufacturer into the device's permanent memory.

The address consists of 6 bytes. It is customary to write it in hexadecimal in one of the following formats: c4-0b-cb-8b-c3-3a or c4:0b:cb:8b:c3:3a. The first three bytes are the unique identifier of the manufacturer. The remaining bytes are called the “interface number” and their value is unique for each particular device.

The IP address is logical and is set by the administrator. The MAC address is a physical, permanent address, and it is the one used to address frames, for example, in Ethernet local area networks. When a packet is sent to a specific IP address, the computer determines the corresponding MAC address using a special ARP table. If there is no entry for the MAC address in the table, the computer requests it using a special protocol (ARP). If the MAC address cannot be determined, no packets will be sent to that device.
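One of the machine's MAC addresses can be read and formatted in the notation above directly from Python. The sketch uses the standard-library helper uuid.getnode(), which returns a 48-bit integer (note that it may return a random fallback value if no hardware address can be found).

```python
import uuid

# uuid.getnode() returns the MAC address of one of the machine's
# network interfaces as a 48-bit integer.
mac = uuid.getnode()

# Format it in the usual 6-byte hexadecimal notation, e.g. c4:0b:cb:8b:c3:3a.
# range(40, -8, -8) walks the byte positions from most to least significant.
mac_str = ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
print(mac_str)
```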

Ports.

The IP address is used by the network equipment to identify the recipient of the data. But a device, such as a server, can run multiple applications. In order to determine which application the data is intended for, another number is added to the header - the port number.

The port identifies the process that should receive the packet among all processes at the same IP address.

16 bits are allocated for the port number, which corresponds to numbers from 0 to 65535. The first 1024 ports (0-1023) are reserved for standard services such as mail, websites, and so on. It is better not to use them in your own applications.
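The reservation of the first 1024 ports is easy to observe in code: asking the OS for a free port (by binding to port 0) always yields a number above the reserved range. A minimal Python sketch:

```python
import socket

# Bind a TCP socket to port 0: the OS picks a free "ephemeral" port
# for us, which always lies above the reserved range 0-1023.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
print(port)  # some number in 1024..65535
sock.close()
```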

Static and dynamic IP addresses. DHCP protocol.

IP addresses can be assigned manually, but this is quite a tedious operation for an administrator, and for a user without the necessary knowledge the task becomes intractable. In addition, not all users are constantly connected to the network, yet other subscribers cannot use the static addresses allocated to them.

The problem is solved by using dynamic IP addresses. Dynamic addresses are issued to clients for a limited time (a lease) while they are online. Dynamic address allocation is managed by the DHCP protocol.

DHCP is a network protocol that allows devices to automatically obtain IP addresses and other settings needed to operate on a network.

At the configuration stage, the client device contacts the DHCP server and receives the necessary parameters from it. The administrator can specify the range of addresses to be distributed among network devices.

Viewing network device settings using the command line.

There are many ways to find out the IP address or MAC address of your network card. The simplest is to use the command-line tools of the operating system. I'll show how to do it using Windows 7 as an example.

The Windows\System32 folder contains the cmd.exe file. This is a command line interpreter. With it, you can get system information and configure the system.

Open the Run dialog. To do this, select Start -> Run or press the key combination Win+R.

Type cmd and press OK or Enter. The command interpreter window appears.

Now you can set any of the many commands. For now, we are interested in commands for viewing the configuration of network devices.

First of all, this is the ipconfig command, which displays the network adapter settings.

The detailed version is ipconfig /all.

The getmac command shows only MAC addresses.

The table of correspondence between IP and MAC addresses (the ARP table) is shown by the arp -a command.

You can check the connection with the network device with the command ping.

  • ping domain name
  • ping IP address

My site server is responding.
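The ping command uses ICMP, which normally requires administrator rights to send from a program, so a common application-level substitute is a TCP connection attempt. The sketch below checks a listener that it opens itself, so it runs without network access; in practice you would pass a real host name and a real port (e.g. 80).

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we open ourselves, so the example works offline.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]

ok = reachable("127.0.0.1", demo_port)
print(ok)  # True
listener.close()
```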

Basic network protocols.

I will briefly talk about the protocols we need in future lessons.

A network protocol is a set of conventions and rules that govern the exchange of data on a network. We are not going to implement these protocols at a low level; we intend to use off-the-shelf hardware and software modules that already implement them. Therefore, there is no need to go into detail about header formats, data formats, and so on. But you do need to know what each protocol is for, how it differs from the others, and when it is used.

IP protocol.

The Internet Protocol delivers data packets from one network device to another. The IP protocol unites local networks into a single global network, ensuring the transfer of information packets between any network devices. Of the protocols presented in this lesson, IP is at the lowest level. All other protocols use it.

The IP protocol works without establishing connections. It simply tries to deliver the packet to the specified IP address.

IP treats each data packet as a separate, independent entity, unrelated to other packets. It is not possible, using the IP protocol alone, to transfer a significant amount of related data: in Ethernet networks, for example, the maximum amount of data in one IP packet is only 1500 bytes.

There are no mechanisms in the IP protocol to control the validity of the final data. Control codes are only used to protect the integrity of the header. That is, IP does not guarantee that the data in a received packet will be correct.

If an error occurs during delivery and a packet is lost, IP does not attempt to resend it. That is, IP does not guarantee that a packet will be delivered.

Briefly about the IP protocol, we can say that:

  • it delivers small (no more than 1500 bytes) individual data packets between IP addresses;
  • it does not guarantee that the delivered data will be correct;
  • it does not guarantee that a packet will be delivered at all.

TCP protocol.

Transmission Control Protocol (TCP) is the main data transmission protocol of the Internet. It relies on the IP protocol's ability to deliver information from one node to another. But unlike IP, it:

  • Allows large amounts of information to be transferred. TCP itself splits the data into packets and “glues” it back together on the receiving side.
  • Transfers data over a pre-established connection.
  • Performs data integrity checks.
  • In case of data loss, initiates repeated requests for the lost packets and eliminates duplicates when several copies of one packet arrive.

In effect, the TCP protocol removes all the problems of data delivery: if delivery is possible at all, it will deliver the data. It is no coincidence that this is the main data transfer protocol in networks; the term “TCP/IP networks” is used for a reason.
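The connection-then-exchange pattern can be shown with the standard socket module. Purely for illustration, this sketch runs the “server” (a trivial echo service) in a background thread of the same program; in a real system the client and server would run on different machines.

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection and echo the received data back."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Set up the listening side on an OS-chosen port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=echo_server, args=(server_sock,), daemon=True).start()

# The client establishes a connection first (unlike UDP), then exchanges
# data over a reliable, ordered byte stream.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, server")
    reply = client.recv(1024)

print(reply)  # b'hello, server'
server_sock.close()
```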

UDP protocol.

The User Datagram Protocol is a simple protocol for transferring data without establishing a connection. The data is sent in one direction without checking the readiness of the receiver and without confirmation of delivery. The data size of a packet can be up to 64 KB, but in practice many networks only support a data size of about 1500 bytes.

The main advantages of this protocol are simplicity and high transmission speed. It is often used in speed-critical applications such as video streaming: in such tasks it is preferable to lose a few packets than to wait for stragglers.

The UDP protocol is characterized by the following:

  • it is a connectionless protocol;
  • it delivers small individual packets of data between IP addresses;
  • it does not guarantee that the data will be delivered at all;
  • it will not tell the sender if the data was delivered and will not retransmit the packet;
  • there is no ordering of packets, the order of message delivery is not defined.
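The connectionless nature listed above is visible directly in code: there is no connect or accept step, just independent datagrams. A minimal sketch over the loopback interface (where delivery is, in practice, reliable enough for a demo):

```python
import socket

# UDP needs no connection: each sendto() is an independent datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5.0)  # don't block forever if the datagram is lost
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1500)  # 1500 bytes: typical Ethernet limit
print(data)  # b'datagram 1'

sender.close()
receiver.close()
```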

HTTP protocol.

Most likely, I will write more about this protocol in the next lessons. For now I will briefly say that this is the HyperText Transfer Protocol. It is used to get information from websites. In this case, the web browser acts as the client, and a network device acts as the web server.
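A complete HTTP exchange can be reproduced locally with the standard library: http.server plays the web-server role and urllib plays the browser role. The served text is, of course, purely illustrative.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """A tiny web server: answers every GET with a fixed text."""
    def do_GET(self):
        body = b"Hello from the web server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: fetch the page over HTTP.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    page = resp.read()

print(page)  # b'Hello from the web server'
server.shutdown()
```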

In the next lesson, we will apply client-server technology in practice using an Ethernet network.

Using client-server technology

Over time, the not very functional file server (FS) model for local area networks was replaced by the client-server structural models (RDA, DBS and AS) that appeared one after another.

The "Client-Server" technology, which occupied the very bottom of the database, has become the main technology of the global Internet. Further, as a result of the transfer of the ideas of the Internet to the sphere of corporate systems, the Intranet technology arose. In contrast to the "Client-server" technology, this technology is focused on information in its final form for consumption, and not on data. Computing systems, which are built on the basis of Intranet, include central information servers and certain components for presenting information to the last user (browsers or navigators). The action between the server and the client on the Intranet is performed using web technologies.

Today, client-server technology is very widespread, but the technology itself offers no universal recipes. It only gives a general idea of how a distributed information system should be organized, and implementations of this technology in specific software products, and even in types of software, differ quite significantly.

Classical two-level architecture "Client - server"

As a rule, network components do not have equal rights: some have access to resources (for example, a database management system, processor, printer, or file system), while others can only request access to these resources.

The "Client-Server" technology is the architecture of the software package distributing the application program into two logically different parts (server and client), which interact according to the "request-response" scheme and solve their own specific tasks.

A program (or computer) that manages and/or owns a resource is called a resource server.

A program (or computer) that requests and uses a resource is called a client of this resource.

Situations are also possible in which one software block simultaneously acts as a server in relation to one block and as a client in relation to another.

The main principle of the Client-Server technology is to divide the application functions into at least three links:

User interface modules;

This group is also called presentation logic. It allows users to interact with the application. Whatever form the presentation logic takes (a command-line interface, intermediary interfaces, or a complex graphical user interface), its purpose is to provide an efficient means of information exchange between the information system and the user.

Application function modules;

This group is also called business logic. The business logic determines what exactly the application does (for example, the application functions specific to the given domain). Separating an application along the boundaries between these groups provides a natural basis for distributing it across two or more computers.

Data access modules (resource management functions);

This group is also called data access logic, or simply data access. Data access algorithms are an application-specific interface to a persistent storage device such as a DBMS or a file system. The data access modules provide the application's interface to the DBMS: through it, the application manages database connections and queries (translating application-specific requests into SQL, getting results, and translating those results back into application-specific data structures).

Each of the listed links can be implemented independently of the others. For example, without changing the programs used to process and store data, you can change the user interface so that the same data is displayed in the form of tables, histograms or graphs. Simple applications often combine all three links into a single program, and the division then corresponds to functional boundaries.
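The division into three links can be sketched in a few lines of Python. Everything here is illustrative: a plain dict stands in for a DBMS, and each layer touches only the layer below it, so any one of them could be replaced independently (for example, swapping the dict for a real database without touching the other two layers).

```python
# Data access layer: knows how the data is stored (here, a plain dict
# standing in for a DBMS).
_storage = {1: {"name": "Alice"}, 2: {"name": "Bob"}}

def fetch_user(user_id):
    return _storage.get(user_id)

# Business logic layer: application rules, no storage or UI details.
def greeting_for(user_id):
    user = fetch_user(user_id)
    if user is None:
        return "Unknown user"
    return f"Welcome, {user['name']}!"

# Presentation layer: only formats the result for the user.
def render(user_id):
    return f"[screen] {greeting_for(user_id)}"

print(render(1))  # [screen] Welcome, Alice!
print(render(9))  # [screen] Unknown user
```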

In accordance with the division of functions in each application, the following components are distinguished:

  • - data presentation component;
  • - application component;
  • - resource management component.

The classic client-server architecture distributes these three main parts of the application across two physical modules. Typically, the application component resides on the server (for example, a database server), the data presentation component resides on the client side, and the resource management component is distributed between the server and client parts. This is the main drawback of the classic two-tier architecture.

In a two-tier architecture, when the data processing algorithms are separated out, developers must have complete information about the latest changes made to the system and understand them. This creates considerable difficulties in developing, maintaining and installing client-server systems, since large efforts must be spent coordinating the work of different groups of specialists. Contradictions often arise in the developers' actions, which slows down development and forces changes to ready-made, proven elements.

To avoid inconsistency between different elements of the architecture, two modifications of the two-tier architecture "Client - Server" were created: "Thick Client" ("Thin Server") and "Thin Client" ("Thick Server").

In these architectures, the developers tried to perform the data processing on one of the two physical parts: either on the client side (“Thick Client”) or on the server (“Thin Client”).

Each approach has significant drawbacks. In the first case, the network is needlessly overloaded, because unprocessed, redundant data is transmitted over it. In addition, maintaining and changing the system becomes more difficult: fixing an error or replacing a calculation algorithm requires the simultaneous replacement of all the interface programs, otherwise data inconsistency or errors may occur. If all information processing is performed on the server, the problem of describing and debugging the built-in procedures arises. A serious drawback is that a system with information processing on the server is practically impossible to transfer to another platform (OS).

If a classic two-level client-server architecture is being created, the following facts should be kept in mind.

The "Thick Server" architecture is similar to the "Thin Client" architecture

Passing a request from the client to the server, processing the request by the server, and passing the result to the client. At the same time, the architectures have the following disadvantages:

  • - implementation becomes more complicated, since languages such as SQL are not well suited for developing such software and there are no good debugging tools;
  • - the performance of programs written in languages such as SQL is very low compared to those created in other languages, which matters most for complex systems;
  • - programs written in DBMS languages, as a rule, do not function very reliably; an error in them can lead to the failure of the entire database server;
  • - the resulting programs are completely non-portable to other platforms and systems.
  • - "Thick Client" architecture is similar to "Thin Server" architecture

Request processing is performed on the client side; that is, all the raw data is transferred from the server to the client. In this case, the architecture has the following negative aspects:

  • - software updates become more complicated, because they must be rolled out simultaneously throughout the system;
  • - the distribution of privileges becomes more complicated, because access is controlled not by actions but by tables;
  • - the network is overloaded by the raw data transmitted through it;
  • - data protection is weak, since it is difficult to allocate privileges correctly.

To solve these problems, multi-level (three or more levels) client-server architectures are used.

Three-level model.

Since the mid-1990s, the three-tier client-server architecture has been popular among specialists. It divides the information system by functionality into three distinct links: data access logic, presentation logic and business logic. In contrast to the two-tier architecture, the three-tier one has an additional link: an application server designed to implement the business logic. The client is completely unloaded and only sends requests to this middleware, while all the capabilities of the servers are used to the maximum.

In a three-tier architecture, the client is, as a rule, not overloaded with data processing functions but performs its main role as a system for presenting the information coming from the application server. Such an interface can be implemented using standard web technology tools: a browser, CGI and Java. This reduces the volume of data transferred between the client and the application server, allowing client computers to connect even over slow lines such as telephone lines. Because of this, the client side can be so simple that in most cases an ordinary browser suffices. And if it does have to be changed, the change can be made quickly and painlessly.

An application server is software that forms an intermediate layer between the server and the client. Application servers are commonly divided into the following classes:

  • - message-oriented - prominent representatives are MQSeries and JMS;
  • - object brokers - prominent representatives are CORBA and DCOM;
  • - component-based - prominent representatives are .NET and EJB.

Using an application server brings many additional benefits. For example, the load on client computers is reduced, since the application server distributes the load and provides protection against failures. And since the business logic is stored on the application server, client programs are not affected in any way by changes in reporting or calculations.

There are a number of application servers from such well-known companies as Sun Microsystems, Oracle, IBM and Borland, and each differs in the set of services provided (I will not take performance into account here). These services make it easier to program and deploy enterprise-scale applications. Typically, an application server provides the following services:

  • - Web server - most often the powerful and popular Apache is included in the delivery;
  • - Web container - allows JSP pages and servlets to be executed; for Apache this service is Tomcat;
  • - CORBA Agent - can provide a distributed directory for storing CORBA objects;
  • - Messaging Service - a message broker;
  • - Transaction Service - as the name makes clear, a service for managing transactions;
  • - JDBC - drivers for connecting to databases, since it is the application server that has to communicate with databases and it must be able to connect to the database used in your company;
  • - Java Mail - a mail service via SMTP;
  • - JMS (Java Messaging Service) - processing of synchronous and asynchronous messages;
  • - RMI (Remote Method Invocation) - invocation of remote methods.

Multi-level client-server systems can be translated to web technology quite easily: to do this, the client part is replaced with a specialized or universal browser, and the application server is supplemented with a web server and small server procedure callers. These programs can be developed using either the Common Gateway Interface (CGI) or the more modern Java technology.

In a three-level system, the fastest communication channels can be used between the application server and the DBMS at minimal cost, since these servers are usually located in the same room (the server room), and transferring a large amount of information between them will not overload the network.

All of the above leads to the conclusion that the two-level architecture is greatly inferior to the multi-level one. For this reason, today it is mainly the multi-level client-server architecture that is used, in three variants: RDA, DBS and AS.

Various models of "Client-Server" technology

The very first core technology underlying LANs was the file server (FS) model. At that time, this technology was very common among domestic developers using systems such as FoxPro, Clipper, Clarion, Paradox, and so on.

In the FS model, the functions of all three components (the presentation component, the application component, and the resource access component) are combined in code that runs on the server computer (host). A client computer as such is absent in this architecture; data input and display are performed through a terminal, or through a computer working in terminal emulation mode. Applications are usually written in a fourth-generation language (4GL). One of the computers on the network serves as the file server and provides file-handling services to the other computers. It runs under a network operating system and plays the role of the component for accessing information resources. On the other PCs in the network, an application runs whose code combines the application component and the presentation component.

The interaction between the client and the server proceeds as follows: a request is sent to the file server, which transmits the required block of data to the DBMS located on the client computer. All processing is performed on the client.

The exchange protocol is a set of calls that give an application access to the file system on the file server.

The positive aspects of this technology are:

  • - ease of application development;
  • - ease of administration and software updates;
  • - low cost of workplace equipment (terminals or cheap computers with low performance in terminal emulation mode are always cheaper than full-fledged PCs).

But the disadvantages of the FS model outweigh its advantages:

A large amount of data is sent over the network, and response time is critical, because each character entered by the client at the terminal must be transmitted to the server, processed by the application, and returned to be displayed on the terminal screen. In addition, there is the problem of distributing the load between several computers.

  • - expensive server hardware since all users share its resources;
  • - no GUI.

Solving the problems inherent in file-server technology led to the appearance of a more advanced technology called client-server.

For modern DBMSs, the client-server architecture has become the de facto standard. If the designed network technology is assumed to have a client-server architecture, this means that the application programs implemented within it will be distributed: part of the application's functions will be implemented in the client program, the other part in the server program.

Differences in the implementation of applications within the "Client-Server" technology are determined by four factors:

  • - what types of software are in the logical components;
  • - what software mechanisms are used to implement the functions of logical components;
  • - how logical components are distributed by computers in the network;
  • - what mechanisms are used to connect the components to each other.

Based on this, three approaches are distinguished, each of which is implemented in the corresponding model of the Client-Server technology:

  • - the remote data access model (Remote Data Access, RDA);
  • - the database server model (DataBase Server, DBS);
  • - the application server model (Application Server, AS).

A significant advantage of the RDA model is the extensive selection of application development tools that provide rapid development of desktop applications working with SQL-based DBMSs. As a rule, these tools support a graphical user interface within the OS, as well as automatic code generation facilities that mix presentation and application functions.

Despite its wide distribution, the RDA model is giving way to the more technologically advanced DBS model.

The database server (DBS) model is a network architecture of client-server technology based on a mechanism of stored procedures that implements the application functions. In the DBS model, the concept of an information resource is narrowed down to a database, because the stored procedure mechanism is implemented in the DBMS itself, and even then not in all of them.

The advantages of the DBS model over the RDA model are obvious: the possibility of centralized administration of the various functions; reduced network traffic, because calls to stored procedures are transmitted over the network instead of full SQL queries; the possibility of sharing a procedure between two applications; and savings in computer resources through the reuse of a once-created procedure execution plan.

The application server (AS) model is a network architecture of client-server technology in which a process running on the client computer is responsible for the user interface (data input and display). The most important element of this model is the application component, called the application server; it runs on a remote computer (or two computers). The application server is implemented as a group of application functions designed as services. Each service provides some operations to all programs that are willing and able to use them.

Having examined all the models of client-server technology, we can draw the following conclusion: the RDA and DBS models are both based on a two-tier scheme of function separation. In the RDA model, the application functions are given to the client; in the DBS model, their execution is carried out by the DBMS kernel. In the RDA model, the application component merges with the presentation component; in the DBS model, it is integrated into the resource access component.

The AS-model implements a three-tier separation of functions scheme, where the application component is singled out as the main isolated element of the application, which has standardized interfaces with two other components.

The results of the analysis of the "File Server" and "Client - Server" technology models are presented in Table 1.

Despite its name, client-server technology is also a distributed computing system. In this case, distributed computing means a client-server architecture involving several servers. Applied to distributed processing, the term "server" simply means a program that responds to requests and performs the necessary actions at the client's request. Since distributed computing is a type of client-server system, users gain the same benefits, such as increased overall throughput and the ability to multitask. Also, integrating discrete network components and making them work as a whole helps increase efficiency and reduce costs.

Since processing can be performed anywhere on the network, distributed computing in a client-server architecture scales efficiently. To strike a balance between server and client, an application component should run on the server only if centralized processing is more efficient. If the logic of a program that works with centralized data is located on the same machine as the data, the data does not have to be transmitted over the network, so the demands on the network environment can be reduced.

As a result, we can draw the following conclusion: for small information systems that do not require a graphical user interface, the FS model can be used. The question of a GUI is easily solved with the RDA model. The DBS model is a very good option for database management systems (DBMSs). The AS model is the best option for creating large information systems, and also when low-speed communication channels are used.
