Open Systems Interconnection (OSI) was established as an international standard for network architecture, an initiative taken by the International Organization for Standardization (ISO). OSI has two meanings. It refers to:
- Protocols that are authorized by ISO
- OSI basic reference model
The OSI reference model divides the required functions of the network architecture into several layers and defines the function of each layer. Layering the communications process means breaking it down into smaller, more manageable, interdependent categories, each solving an important and somewhat distinct aspect of the data exchange process. The objective of this detail is to develop an understanding of the complexity and sophistication that this technology has achieved, in addition to developing a feel for the inner workings of the various components that contribute to the data communications process.
Physical Data Encoding
Information exchanged between two computers is physically carried by means of electrical signals assuming certain coding methods. These encodings can be characterized by changing voltage levels, current levels, frequency of transmission, phase changes, or any combination of these physical aspects of electrical activity. For two computers to reliably exchange data, they must have a compatible implementation of encoding and interpreting the data-carrying electrical signals. Over time, network vendors defined different standards for encoding data on the wire. One such standard is bipolar data encoding.
In bipolar encoding, binary data is simply represented by the actual signal level, in which a binary 1 is encoded using a fixed voltage level (for example, +5 volts) and a binary 0 is encoded using a negative voltage level (for example, -5 volts).
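The mapping described above can be sketched in a few lines. This is only an illustration of the idea, not an implementation of any wire-level standard; the +5/-5 volt values and the function names are assumptions taken from the example in the text.

```python
# Sketch of the encoding described above: binary 1 maps to a positive
# voltage level, binary 0 to a negative one. The +5.0/-5.0 values are
# illustrative, matching the example in the text.

def encode_bipolar(bits, high=5.0, low=-5.0):
    """Map a sequence of 0/1 bits to voltage levels."""
    return [high if b else low for b in bits]

def decode_bipolar(levels):
    """Recover bits: a positive level reads as 1, a negative level as 0."""
    return [1 if v > 0 else 0 for v in levels]

signal = encode_bipolar([1, 0, 1, 1, 0])
assert signal == [5.0, -5.0, 5.0, 5.0, -5.0]
assert decode_bipolar(signal) == [1, 0, 1, 1, 0]
```

Note that decoding only needs the sign of each level, which is what gives such schemes some immunity to moderate attenuation: a +3.2 volt sample still reads unambiguously as a binary 1.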
Physical data encoding also deals with the type of media used (fiber, copper, wireless, and so on), which is dictated by the desired bandwidth, immunity to noise, and attenuation properties. These factors affect the maximum allowable media length while still achieving a desired level of guaranteed data transmission.
Data Flow Control
Data communications processes allocate memory resources, commonly known as communications buffers, for the sake of transmission and reception of data. A computer whose communications buffers become full while still in the process of receiving data runs the risk of discarding extra transmissions and losing data unless a data flow control mechanism is employed. A proper data flow control technique calls on the receiving process to send a "stop sending" signal to the sending computer whenever it cannot cope with the rate at which data is being transmitted. The receiving process later sends a "resume sending" signal when data communications buffers become available.
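The stop/resume behaviour described above can be sketched as a receiver with a bounded buffer. The class, its capacity, and the "STOP"/"OK" signals are illustrative assumptions, not part of any particular protocol.

```python
# Sketch of receiver-driven flow control: the receiver refuses frames
# ("stop sending") when its buffers are full, and accepts again
# ("resume sending") once processing frees buffer space.

from collections import deque

class Receiver:
    def __init__(self, capacity=4):
        self.buffer = deque()        # the communications buffers
        self.capacity = capacity

    def receive(self, frame):
        if len(self.buffer) >= self.capacity:
            return "STOP"            # buffers full: ask sender to pause
        self.buffer.append(frame)
        return "OK"

    def process_one(self):
        # consuming a frame frees buffer space, allowing "resume sending"
        return self.buffer.popleft() if self.buffer else None

rx = Receiver(capacity=2)
assert rx.receive("f1") == "OK"
assert rx.receive("f2") == "OK"
assert rx.receive("f3") == "STOP"    # sender must wait
rx.process_one()                     # a buffer becomes available
assert rx.receive("f3") == "OK"      # transmission resumes
```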
Data Frame Format
For information to be exchanged between computers, the communicating processes need the following to accomplish the various aspects of the exchange:
- The receiving computer must be capable of distinguishing between an information-carrying signal and mere noise.
- There should be a mechanism by which the receiving computer can detect whether the information-carrying signal is intended for itself, for some other computer on the network, or is a broadcast (a message intended for all computers on the network).
- If the receiving end engages in the process of recovering data from the medium, it needs to be able to recognize where the data stream intended for it ends. After this determination is made, the receiver should discard subsequent signals unless it can determine that they belong to a new, impending transmission.
- After reception is complete, the receiving end must also be capable of recognizing and dealing with any corruption introduced into the information by noise or electromagnetic interference.
To accommodate the above requirements, data is delivered in well-defined packages called data frames. The frame discussed here follows the Ethernet packet format, explained in the earlier chapter on Local Area Networks. The receiving end compares the check field of the data frame against a value recomputed from the frame's contents. If the comparison is favourable, the contents of the Information field are submitted for processing. Otherwise, the entire frame is discarded. It is important to realize that the primary concern of the receiving process is the reliable recovery of the information embedded in the frame.
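The address filtering and check-field comparison described above can be sketched as follows. This is a simplified illustration, not the real Ethernet format: actual Ethernet uses a 32-bit CRC in its FCS field, and `zlib.crc32` stands in for it here; the function names and the 6-byte addresses are assumptions.

```python
# Illustrative frame handling: the sender appends a check value computed
# over the frame's contents; the receiver filters on the destination
# address, recomputes the check, and discards the frame on any mismatch.

import zlib

def build_frame(dest, src, payload):
    """dest/src are 6-byte addresses; payload is the Information field."""
    header = dest + src
    fcs = zlib.crc32(header + payload).to_bytes(4, "big")
    return header + payload + fcs

def accept_frame(frame, my_addr):
    """Return the payload if the frame is for us and intact, else None."""
    dest, src = frame[:6], frame[6:12]
    payload, fcs = frame[12:-4], frame[-4:]
    if dest != my_addr:
        return None          # intended for some other station
    if zlib.crc32(dest + src + payload).to_bytes(4, "big") != fcs:
        return None          # corruption detected: discard entire frame
    return payload

a, b = b"\x01" * 6, b"\x02" * 6
frame = build_frame(b, a, b"hello")
assert accept_frame(frame, b) == b"hello"
assert accept_frame(frame, a) is None           # not addressed to a

# flip one payload byte to simulate noise on the wire
bad = frame[:12] + bytes([frame[12] ^ 0xFF]) + frame[13:]
assert accept_frame(bad, b) is None             # corruption caught
```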
As networks grow in size, so does the traffic imposed on the wire, which in turn impacts the overall network performance, including response times. To alleviate such degradation, network specialists resort to breaking the network into multiple networks that are interconnected by specialized devices, including routers, bridges, brouters, and switches.
The routing approach calls on the implementation of various cooperative processes, in both routers and workstations whose main concern is to allow for the intelligent delivery of data to its ultimate destination. Data exchange can take place between any two workstations, whether or not both belong to the same network.
The Network Address and the Complete Address
In addition to the data link address, which should be guaranteed to be unique for each workstation on a particular physical network, all workstations must have a higher-level address in common. This is known as the network address. The network address is very similar in function and purpose to the concept of a street name. A street name is common to all residences located on that street. Unlike data link addresses, which are mostly hardwired on the network interface card, network addresses are software configurable. It should also be noted that the data structure and rules of assigning network addresses vary from one networking technology to another.
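The street-name analogy can be made concrete with IPv4-style addresses, used here purely as an illustration: stations sharing the network portion of their address (the first three octets under a /24 mask in this assumed example) belong to the same network, the way houses share a street name.

```python
# Sketch of the network-address concept: the network portion is common
# to all stations on one network; a station with a different network
# portion is reachable only through a router.

import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")   # the "street name"

a = ipaddress.ip_address("192.168.1.10")       # two stations on that street
b = ipaddress.ip_address("192.168.1.77")
c = ipaddress.ip_address("10.0.0.5")           # a station on another network

assert a in net and b in net    # same network address: direct delivery
assert c not in net             # different network: routed delivery
```

Because these addresses are software configurable, moving a machine to another network means reassigning its network address, whereas its data link address stays with the interface card.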
Inter-process Dialog Control
When two applications engage in the exchange of data, they establish a session between them. Consequently, a need arises to control the flow and direction of data between them for the duration of the session. Depending on the nature of the involved applications, the dialog type might have to be set to full duplex, half duplex, or simplex mode of communication. Even after setting the applicable communications mode, applications might require that the dialog itself be arbitrated. For example, in the case of half duplex communications, it is important that applications somehow know when to talk and for how long.
Another application-oriented concern is the capability to reliably recover from failures at a minimum cost. This can be achieved by providing a checkpointing mechanism which enables the resumption of activities from the last checkpoint. As an example, consider the case of invoking a file transfer application to have five files transferred from point A to point B on the network. Unless a proper checkpointing mechanism is in place, a failure of some sort during the transfer might require the retransmission of all five files, regardless of where in the process the failure took place. Checkpointing circumvents this requirement by retransmitting only the affected files, saving time and bandwidth.
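The five-file example can be sketched as follows. The function names, the failure simulation, and the use of a set as the checkpoint log are all illustrative assumptions; the point is only that recorded checkpoints let a restart skip work already completed.

```python
# Sketch of checkpointed transfer: each completed file is recorded in a
# checkpoint log, so a restart retransmits only the files not yet sent.

def transfer_with_checkpoints(files, send, checkpoint_log):
    """Send each file, skipping those the log records as complete."""
    for name in files:
        if name in checkpoint_log:
            continue                 # delivered before the failure: skip
        send(name)
        checkpoint_log.add(name)     # checkpoint: file confirmed sent

files = ["f1", "f2", "f3", "f4", "f5"]
log = set()
sent = []

def flaky_send(name):
    if len(sent) == 2:               # simulate a link failure mid-transfer
        raise IOError("link failure")
    sent.append(name)

try:
    transfer_with_checkpoints(files, flaky_send, log)
except IOError:
    pass                             # two files made it before the failure

# on resume, only the three remaining files are retransmitted
transfer_with_checkpoints(files, sent.append, log)
assert sent == files                 # each file crossed the wire once
```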
Whenever two or more communicating applications run on different platforms, another concern arises about the differences in the syntax of the data they exchange. Resolving these differences requires an additional process. Good examples of presentation problems are the existing incompatibilities between the ASCII and EBCDIC standards of character encoding, terminal emulation incompatibilities, and incompatibilities due to data encryption techniques.
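The ASCII/EBCDIC incompatibility mentioned above can be shown concretely. Python's built-in "cp500" codec (EBCDIC International) is used here as a representative EBCDIC variant; this is an illustration of the presentation problem, not of any particular conversion product.

```python
# The same characters have entirely different byte values under ASCII
# and EBCDIC, so a presentation-level conversion step is required before
# the two platforms can make sense of each other's text.

text = "HELLO"
ascii_bytes = text.encode("ascii")     # b'HELLO'
ebcdic_bytes = text.encode("cp500")    # same characters, different bytes

assert ascii_bytes != ebcdic_bytes

# a conversion process recovers identical characters on both sides
assert ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii")
```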