Unlocking the OSI Model Acronym: Insights for Network Mastery


Let’s dive into the history of the OSI model, a conceptual model for networking standards.

In a world of diverse networking standards, implementations, protocols, and applications, the Open Systems Interconnection (OSI) model offers a handy common-ground representation. It abstracts diverse communication systems into a single hierarchy that makes networking easier to understand, teach, and learn.

Of course, that’s just scratching the surface.  

Because our network monitoring software leverages automated network topology mapping and diagrams, we wanted to cover more about the history of the OSI model. 

Currently, our topology maps display elements from the following:

  • Layer 1 (links and physical devices)
  • Layer 2 (MAC addresses)
  • Layer 3 (IP addresses)

If you’re new to Domotz: we’re a network device monitoring tool with affordable, powerful network management features for your IT systems.

Here’s how the OSI model began, how it developed, and how it stacks up against the alternatives.


The OSI (Open Systems Interconnection) model serves as a blueprint for application communication across a network. It visually outlines how each layer of communication is constructed atop the next, beginning with physical cabling and extending up to the applications interacting with other devices on the network.

The OSI model characterizes and standardizes the communication functions of a telecommunication or computing system.

This model is layered. In other words, it’s composed of several tiers of abstraction that describe networked communication at various scales and levels of detail. For instance, whereas one layer might look at the higher-level flows in an application that sends data between the nodes in a cluster, another might zoom in on the byte-structured contents that make up the packets.

The lower you go in OSI layers, the closer you get to the hardware. The higher layers go in the opposite direction, towards the application.
The original version of the model defined seven layers. And no, it’s not a reference to some famous vision of hell – although the humor might fit depending on the state of your network.

By the late 1970s, the world recognized a growing need for standards governing how connected things communicate. Researchers from France, the UK, and the USA began two projects to develop just that, each aiming to become the standard for computer networking and interoperability among vendors and device manufacturers.

One team worked under the International Organization for Standardization (ISO); the International Telegraph and Telephone Consultative Committee (CCITT) undertook the other. Each body produced a document attempting to standardize how computer networking protocols would take shape. In 1983, the two documents were merged to form The Basic Reference Model for Open Systems Interconnection.

The OSI model was meant to be the industry standard for computer networking and the Internet, but not everything went according to plan. The idea behind OSI was to get everyone in the industry to agree on standards for interoperability across vendors. At the time, devices and networks supported many different protocols; new devices were cropping up everywhere, and many spoke different languages.

In the 1980s and 1990s, the TCP/IP model began to make headway, and there was plenty of division over which was better suited to the purpose: the OSI model or the TCP/IP model.

The OSI model never gained traction amongst vendors. While its specifications were still being written, the TCP/IP model was making significant headway, and more and more vendors adopted TCP/IP as their means of interoperability.

So how did things turn out? If it’s any clue, the OSI model is still used today. It’s commonly used as a reference for describing network protocols, training IT professionals, and interfacing with multiple architectures. Since its inception, however, IT professionals have gone back and forth over its merits compared to alternatives.

OSI is a service definition that gives an abstract meaning to how entities interact across layers. This differs from communication protocols that offer a concrete technical definition of how messages should propagate within a single layer.

Imagine you have two servers that need to share information. The message doesn’t just magically teleport from an app on the first machine to the app on the other. Instead, it transits down the layers and eventually reaches the transmission line. Once it jumps across the gap to the other device, it has to repeat the process in reverse by ascending layers until it reaches the receiving application.

For any layer N that transmits a message, the OSI model explains the journey in terms of two key concepts:

  • The protocol data unit (PDU) – the data as handled within a single layer, including that layer’s headers and footers.
  • The service data unit (SDU) – the payload a layer receives from the layer above and must deliver intact.

At each transition from some layer N to layer N-1, a layer-N PDU becomes a new layer-(N-1) SDU. This payload gets wrapped up in a layer-(N-1) PDU with the relevant headers and footers. On the opposite end, the data passes up the chain, unwrapping at each relevant stage until it’s just a payload that can be consumed by the corresponding layer-N entity.
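As a rough sketch (in Python, with made-up layer names rather than real protocols), the wrap-and-unwrap process looks like this:

```python
# A toy sketch of OSI-style encapsulation: each layer wraps the payload
# from the layer above (its SDU) with its own header to form a PDU.
# Layer names and header formats here are illustrative, not real protocols.

def encapsulate(payload: bytes, layers: list[str]) -> bytes:
    """Descend the stack: wrap the payload once per layer."""
    for layer in layers:
        header = f"[{layer}]".encode()
        payload = header + payload  # the old PDU becomes the new layer's SDU
    return payload

def decapsulate(frame: bytes, layers: list[str]) -> bytes:
    """Ascend the stack: strip one header per layer, in reverse order."""
    for layer in reversed(layers):
        header = f"[{layer}]".encode()
        assert frame.startswith(header), f"expected {layer} header"
        frame = frame[len(header):]
    return frame

stack = ["transport", "network", "data-link"]
wire = encapsulate(b"hello", stack)
print(wire)                        # b'[data-link][network][transport]hello'
print(decapsulate(wire, stack))    # b'hello'
```

Real protocols, of course, use binary headers with checksums, addresses, and length fields rather than readable tags, but the nesting principle is the same.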

Layer 1: Physical Layer

One nice thing about the OSI model’s longevity is that there aren’t a lot of different standards to keep up with. You’ll find everything you need to know in ISO 7498, which explains the model, its security aspects, naming and addressing standards, and management practices.

The physical layer defines the electrical and physical specifications for devices. This includes the media, interface, and transmission mode.

This is as close as you’ll get to the bare metal in OSI terms. At this level, it’s all about bits.

Layer 2: Data Link Layer

The data link layer is responsible for the error-free transfer of data frames from one node to another. Networks achieve Layer-2 transmission by using media access control (MAC) addresses to determine how devices should handle and access data. They also depend on logical link control (LLC) frameworks to encapsulate different protocols.

The data link layer also provides flow control and error control. Examples include the common Wi-Fi, ZigBee, and Ethernet standards.

At this level, it’s all about frames.
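To make that concrete, here’s a minimal sketch of parsing one real Layer-2 format, an Ethernet II frame header, into its MAC addresses and EtherType (the MAC values and payload below are made up):

```python
import struct

def format_mac(raw: bytes) -> str:
    """Render six raw bytes as the familiar colon-separated MAC notation."""
    return ":".join(f"{octet:02x}" for octet in raw)

def parse_ethernet_header(frame: bytes) -> dict:
    """Split an Ethernet II frame: 6-byte destination MAC, 6-byte source
    MAC, 2-byte EtherType, then the Layer-3 payload."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst_mac": format_mac(dst),
        "src_mac": format_mac(src),
        "ethertype": hex(ethertype),  # 0x800 = IPv4, 0x86dd = IPv6
        "payload": frame[14:],        # the encapsulated Layer-3 packet
    }

# A fabricated broadcast frame carrying a placeholder IPv4 payload.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"ip-packet-bytes"
hdr = parse_ethernet_header(frame)
print(hdr["src_mac"], hdr["ethertype"])   # 00:11:22:33:44:55 0x800
```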

Layer 3: Network Layer

The network layer is responsible for routing packets between nodes. It typically uses IP addresses and other logical addressing schemes to make routing decisions. Functions like multicast group management, address assignment, and routing are all associated with the network layer.

At this level, it’s all about packets.
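The core Layer-3 decision can be sketched in a few lines: given a routing table (the entries below are hypothetical), pick the most specific prefix that matches the destination address:

```python
import ipaddress

# A hypothetical routing table mapping prefixes to next hops/interfaces.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

def next_hop(destination: str) -> str:
    """Longest-prefix match: of all routes containing the destination,
    choose the one with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(destination)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.50"))   # eth2 (the /24 beats the /8)
print(next_hop("10.9.9.9"))    # eth1
print(next_hop("8.8.8.8"))     # default-gateway
```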

Layer 4: Transport Layer

The transport layer is responsible for the end-to-end delivery of data between hosts. It provides mechanisms for error-free delivery, flow control, and congestion control.

The OSI model draws distinctions between different transport layer protocols based on their fundamental features, such as connection modes, reliability, timeout retransmission capabilities, and multiplexing support. According to most experts, the TCP and UDP standards fall under Layer 4 even though their original designs didn’t conform to the OSI model.
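One concrete piece of that error-free delivery is the Internet checksum (RFC 1071) that both TCP and UDP carry in their headers. A simple sketch of the algorithm: sum the data as 16-bit words with end-around carry, then take the one’s complement.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum, as used by TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data to a whole 16-bit word
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:   # fold any carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

segment = b"example data"
checksum = internet_checksum(segment)

# A receiver recomputes the sum over data plus the transmitted checksum;
# anything other than zero means the segment was corrupted in transit.
assert internet_checksum(segment + checksum.to_bytes(2, "big")) == 0
```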

Layer 5: Session Layer

The session layer is responsible for setting up, maintaining, and terminating communication sessions between applications. It can handle multiple connection directionality modes, and in many applications, it’s implemented on a case-by-case basis to handle specific data flows, like streaming media.

Layer 6: Presentation Layer

The presentation layer is responsible for translating between different application data formats. This is where jobs like encoding, decoding, compression, decompression, encryption, and decryption take place. The presentation layer, a.k.a. the syntax layer, handles high-level semantic workloads.
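A quick sketch of those jobs in practice: serializing a made-up record to a common format (JSON), compressing it, and encoding it for transit, then reversing each step on the receiving side.

```python
import base64
import json
import zlib

# A hypothetical application record to be sent across the network.
record = {"device": "switch-01", "status": "up"}

# Sender side: serialize -> compress -> encode for transmission.
encoded = base64.b64encode(zlib.compress(json.dumps(record).encode("utf-8")))

# Receiver side: decode -> decompress -> deserialize.
decoded = json.loads(zlib.decompress(base64.b64decode(encoded)))
assert decoded == record  # the original structure survives the round trip
```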

Layer 7: Application Layer

The application layer is the highest tier in the OSI model. It provides the interface between the application and the network, and it includes the high-level protocols you know and love, such as HTTPS, NTP, IMAP, DNS, SNMP, and your preferred assortment of OS file-sharing standards. Learn more about what SNMP is and how it works.

When building on the OSI model, it’s important to remember that Layer 7 distinguishes between application logic and the layer itself. As such, you can further divide Layer 7 into two sublayers:

The common application service element sublayer (CASE) supports widely used services like reliable transfers and remote operation.

The specific application service element sublayer (SASE) includes a range of protocols particular to each application, like Remote Database Access, Distributed Transaction Processing, and Virtual Terminal.
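To ground Layer 7 in something familiar, here’s a sketch that composes a raw HTTP/1.1 request, exactly the kind of application-layer message that gets handed down to the transport layer (the host and path are made up):

```python
def build_http_request(host: str, path: str = "/") -> bytes:
    """Compose a minimal HTTP/1.1 GET request as raw bytes.

    HTTP/1.1 headers are CRLF-separated lines, and a blank line marks
    the end of the header section.
    """
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # blank line terminates the headers
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

# Hypothetical host and path, purely for illustration.
request = build_http_request("example.com", "/status")
print(request.decode())
```

In a real exchange, these bytes would be written to a TCP socket (Layer 4), which would handle segmenting, ordering, and retransmission on the application’s behalf.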

The cross-layer functions of the OSI model include critical services that affect multiple parts of the data transmission process. This means services that can impact multiple layers without being explicitly restricted to any one of them. Examples include Multiprotocol Label Switching (MPLS), the Address Resolution Protocol (ARP), and security management tools.

Cross-layer optimization eliminates the restrictions of the classic OSI model by accounting for situations where you might need to communicate between levels. For instance, you can use it to make quality-of-service tweaks in one layer based on feedback from another. Or your network monitoring system might modify data link layer behavior to minimize congestion based on what’s happening at the application layer.

One caveat to remember about cross-layer functions is that they can complicate your life as an admin. The whole point of these deviations is that they have useful side effects on other aspects of the model, but this is a double-edged sword. It pays to exercise caution – and trust in your dependency graphing tools – when designing around cross-layer functions.

The TCP/IP model is an alternative set of protocols that overlaps with much of what the OSI model does. Historically, however, it has undergone many more changes and tweaks.

Some significant differences include that:

  • The OSI model uses seven layers compared to TCP/IP’s four,
  • The OSI model is more abstract, while TCP/IP is more concrete,
  • Whereas TCP/IP began as a defense agency initiative, OSI evolved in industry and never quite gained the same popularity, and
  • The OSI model is more comprehensive, covering all networking aspects, while TCP/IP focuses on connecting computers.

How does the TCP/IP model cover everything in fewer layers? For one thing, it groups Layers 5-7 into a single application layer. Experts haven’t entirely agreed on how the two models map onto each other at the lower levels, like the link layer, but that hasn’t stopped them from using TCP/IP to build robust enterprise architectures.

Learn more about the OSI model vs. the TCP/IP model.

That sums up the history of the OSI model. Beyond the history, the model can help you rethink your approach to network architecture and management. It couches the defining characteristics of networks in standard terms anyone can understand, which is why it plays a vital role both in academic training and in bringing new admins up to speed in corporate environments.

Of course, it also helps to contextualize the feedback you get from your network. This is why many MSPs, IT professionals, and sysadmins turn to Domotz network monitoring software.

With dashboards that tie abstract data to real-world signals, Domotz empowers you to monitor networks as events unfold – no matter how you prefer to conceptualize them.
