PicoBlog

What the heck is "Media Access Control" or "MAC"????

Almost everyone knows what a “MAC address” is. At least, everyone has seen them. My nieces have seen the WiFi MAC address of their iPhone, my aging parents have seen the list of MAC addresses attached to their Comcast router. And yet, even experts who have spent decades working with them on network backbones are a little hazy about some details.

For example, what does the phrase “media access control” mean and where did it come from?

In this blogpost I’m going to answer this trivial question with a long history lesson, taking you to new heights of bafflement and confusion.

Around the 1850s we saw the birth of analog telegraphs. Around 1880 we saw the invention of digital telegraphs sending bits down the wire using electromechanical devices. The sending device had to transmit enough electrical power (current) down the wire to physically move solenoids in the receiving machine, to do something like print characters on paper.

In the 1950s the transistor was born, and we wanted to send electrical signals instead of driving current down the wire. The power needed to punch tape or print letters had to be supplied locally. In 1960, the transistor-based RS-232 standard was born to replace the older current-loop links from the 1800s.

I mention this because as of 2023 you can still buy new converters on Amazon.com that convert from 1880s links to 1960s links. Really old current-loop devices still exist in factories and the power grid. Slightly less old RS-232 links exist all over the place connecting to old equipment. If you search for an “industrial computer” on Amazon, you’ll see a device with RS-232 ports. It’s still a big deal.

The thing you are supposed to learn from this is that in the beginning, networking consisted of links between two devices that carried streams of bits, and neither device was necessarily an intelligent computer.

As Moore’s Law turned, in the 1970s we saw the invention of the microprocessor. As you know from your history class, computers progressed from mainframes in the 1960s (like IBM), to minicomputers in the 1970s (like the DEC PDP-11 and VAX), to microcomputers in the 1980s (like the IBM PC and Apple Mac).

But the reality is that in the 1970s, the mainframe was still the majority of the market. Instead of minicomputers and 8-bit microprocessors replacing anything, they were integrated into the mainframe system.

The IBM mainframe network was born around 1975 (called “SNA”). It shook the world. For the first time, you had smart devices on both sides of the link. As part of this, IBM invented something called SDLC or Synchronous Data Link Control that worked on top of links, forming streams of bits into packets, and then intelligently handling those packets in software.

The words you are supposed to learn from these acronyms are data link control. Up until this point, links produced a stream of bits. With data link control, we now had something that would send packets. Sometimes large chunks of data needed to be fragmented into smaller packets. Sometimes packets needed to be re-transmitted, such as when somebody bumped the cable causing data to be corrupted. This data link control solved such problems.
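
To make the idea concrete, here is a minimal Python sketch of what a data link control does: fragment a large chunk of data into packets and retransmit any packet that isn’t acknowledged. This is purely illustrative, not SDLC itself, and the `send_packet` and `wait_for_ack` callbacks are hypothetical stand-ins for the link hardware underneath.

```python
# A purely illustrative sketch of "data link control" (not SDLC itself):
# fragment a large chunk of data into packets and retransmit any packet
# that isn't acknowledged. `send_packet` and `wait_for_ack` are
# hypothetical stand-ins for the link hardware below.

def send_over_link(data: bytes, send_packet, wait_for_ack, packet_size=256):
    for seq, offset in enumerate(range(0, len(data), packet_size)):
        packet = (seq, data[offset:offset + packet_size])
        while True:
            send_packet(packet)        # push the fragment onto the link
            if wait_for_ack(seq):      # the far side confirms receipt
                break                  # move on to the next fragment
            # no ACK (corrupted bits, bumped cable): retransmit
```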

When SDLC was finally made an official international standard in 1979, there was much rejoicing. After years of hard work, political compromises, and debate, we had the first of what would become many standards for computer networking. The standard was very similar to SDLC and had a similar name, HDLC or High-Level Data Link Control.

But a data link wasn’t a network. It only allowed two neighboring devices to talk. People wanted a full network, where packets could be forwarded from link to link between nodes. They envisioned something like this: links connecting nodes in a network. Packets would be injected at one end, then follow a path through the network, hop-to-hop, across links between hops, until they came out the other end.

IBM called this path control, and used it to build networks centered around their mainframes, interconnecting terminals, disk drives, card readers, tape drives, printers, minicomputers that did work on behalf of the central mainframe, and so on. They had a seven-layer model to describe their mainframe network.

People didn’t like this. They didn’t like the fact that IBM monopolized networks in the real-world. Europeans also didn’t like what the American military complex was building as a network. Therefore, they decided to build their own open standard for a network. It was largely based upon IBM’s mainframe model.

At the bottom they had “physical” standards for transmitting bits, like RS-232. Then they had the “data link” standard for building packets across local links. Instead of “path control” like IBM, they called it a “network layer”. At the top of the model, they had “session”, “presentation”, and “application” layers that reflected how dumb terminals talked to centralized mainframes.

What they came up with is called the “OSI (Open Systems Interconnection) Seven Layer Reference Model”. You’ll often see it today as some theoretical framework for describing networks, but it’s bizarrely out of date and has led to much confusion. It describes IBM’s mainframe networks and not any real network today. It’s like using a model of a horse buggy to describe a Tesla. Most techies today think it helped them understand how networks work, but that’s mostly because they believe misconceptions about how networks work.

This was the 1970s. Other things were going on at the same time. Some people were envisioning networks that looked nothing like mainframes.

Specifically, Xerox was doing pioneering work in workstations. This is where the “mouse” was invented, along with “windows” on the “desktop”. Included with these inventions was a “local area network”. Instead of a wire connecting just two nearby devices, they invented a way for hundreds of machines to share a wire that spread out over kilometers — throughout an office building and across a campus.

This didn’t quite fit people’s models. It wasn’t a link between two devices, but a link between hundreds of devices. Packets could be exchanged between any of those devices like a network, but there was no node inside the network. From the beginning this was confusing. Did this match the network layer, because it was a network? Or did it match the data link layer, because it was a local link? It was both.

It was first standardized as DIX Ethernet, where DIX stood for DEC-Intel-Xerox, corporate partners working together to create a common standard. It had two natural layers: one layer simply sent bits on the wire (like RS-232), and the other layer combined those bits into packets (like SDLC). There’s another way of thinking about these natural layers. The one that dealt with bits was dumb, with some logic gates but no computer software. The other layer had simple computer software, such as software that would retransmit packets when there were collisions on the wire.
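
As an illustration of that “simple computer software”, here is a rough Python sketch of the retransmission idea Ethernet used: on a collision, back off a random number of slot times and try again (truncated binary exponential backoff). The `transmit_and_detect_collision` and `wait_slots` hooks are made-up stand-ins for the adapter hardware, not real APIs.

```python
import random

# A rough sketch of the packet layer's retransmission logic: if a
# collision is detected while transmitting, back off a random number of
# slot times and try again (truncated binary exponential backoff).
# `transmit_and_detect_collision` and `wait_slots` are made-up stand-ins
# for the adapter hardware, not real APIs.

MAX_ATTEMPTS = 16

def send_with_backoff(frame, transmit_and_detect_collision, wait_slots):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if transmit_and_detect_collision(frame):   # True means it went out cleanly
            return True
        k = min(attempt, 10)                       # cap the backoff window
        wait_slots(random.randint(0, 2**k - 1))    # wait a random number of slots
    return False                                   # give up after too many collisions
```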

When the DIX partners created their first standard, they described these natural layers in terms of the emerging OSI Model. OSI was a poor fit, but because it was being adopted by “official” standards organizations, they used the terminology.

At the Data Link Layer, they had a simple packet format: two addresses, a type field, and a checksum/FCS.
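
In other words, something like the following layout, shown here as a small Python parsing sketch. (On real hardware the trailing FCS is normally checked and stripped by the adapter before software ever sees the frame.)

```python
import struct

# A small parsing sketch of the DIX frame layout described above:
# destination address, source address, a 2-byte type field, the payload,
# and a 4-byte checksum (FCS) at the end.

def parse_dix_frame(frame: bytes):
    dst, src = frame[0:6], frame[6:12]                 # the two 6-byte addresses
    (ethertype,) = struct.unpack("!H", frame[12:14])   # the type field
    payload, fcs = frame[14:-4], frame[-4:]            # data plus trailing checksum
    return dst, src, ethertype, payload, fcs
```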

But a private consortium isn’t the same thing as an official standards body. Therefore, in 1980, the IEEE (a real standards body) created the “Local Area Networks” group to standardize this — the IEEE 802 Local Area Networking project. Such a group would accept input from everyone, not just the DIX consortium members.

IBM invaded this group and tried to hijack it to standardize a different Local Area Network they dubbed “Token Ring”. IBM didn’t like playing in other people’s sandbox; they wanted to steal all the toys and make the other kids play in its own sandbox. There was much debate and gnashing of teeth, and the committee was prevented from declaring either one a standard. It ended up having to standardize both. An early document describes the compromise, which eventually created the IEEE 802.3 Ethernet and IEEE 802.5 Token Ring standards.

IBM needed something more complex than the simple packet format above. Its protocol stack, as well as the promised future OSI mainframe protocol stack, needed something compatible with SDLC or HDLC.

Therefore, when the IEEE 802 project took over from DIX, they split the Data Link Layer into two sublayers. One was called Logical Link Control or LLC at the top, written entirely in software. The other was called Media Access Control or MAC at the bottom, right above the physical media itself, implemented partly in simple logic and partly in software.

The thing about Media Access Control is that it gets data transmitted onto the media, but does nothing to verify the data reaches the other end. It doesn’t really form a link. In contrast, the Logical Link Control does form a link. When a packet containing data arrives, the other side’s LLC layer sends back an acknowledgement, verifying the data has reached the other end. One sublayer merely accesses the media, the other forms a logical link.
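
On the wire, the split looks roughly like this in IEEE framing: the MAC sublayer supplies the addresses, and the LLC sublayer adds its own small header (DSAP, SSAP, and a control field) in front of the data. The control field is where LLC carries sequence numbers and acknowledgements; it can be one or two bytes, but one byte is shown here for simplicity. A rough parsing sketch:

```python
# A rough parsing sketch of IEEE framing: the MAC sublayer supplies the
# addresses, the LLC sublayer adds a small header (DSAP, SSAP, control)
# in front of the data. The control field can be one or two bytes; one
# byte is shown here for simplicity.

def parse_ieee_frame(frame: bytes):
    dst, src = frame[0:6], frame[6:12]                      # MAC sublayer: addresses
    length = int.from_bytes(frame[12:14], "big")            # length of LLC header + data
    dsap, ssap, control = frame[14], frame[15], frame[16]   # LLC sublayer header
    payload = frame[17:14 + length]                         # what's left is the data
    return dst, src, (dsap, ssap, control), payload
```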

What you see here is the invention of Ethernet in a non-mainframe environment that gets hijacked by IBM and OSI to look more mainframe-like.

While we got two different LAN standards, IEEE 802.3 for Ethernet and IEEE 802.5 for Token Ring, they still shared the same basic concepts. They still had LLC at the top. And they shared the same 6-byte/48-bit MAC addresses at the bottom.

The trick with these MAC addresses was they needed to be locally unique. Around this time, many other companies developed their own Local Area Network (LAN) technologies. Often, they would have simple 8-bit local addresses. When you installed a network adapter in a machine, you had to configure it with a unique address on the local network by flipping hardware switches on the card.

This was how the first Ethernet prototypes worked, each device having an 8-bit address. This was obviously not ideal. So when they wrote the official standard, they chose 6-byte/48-bit addresses that would be globally unique. Even though there would never be a global Ethernet, where every device could talk to each other (like the Internet), the addresses were still defined to be globally unique simply to guarantee that they would be unique locally, without anybody having to configure them manually. You’d just plug a device into an Ethernet network and it would work.

Each vendor producing Ethernet or Token Ring hardware would be assigned a number for the first 3 bytes of an address, and the vendor would themselves then assign the next 3 bytes. 3Com (a famous early Ethernet vendor) was assigned the code 00-60-8C, so their MAC addresses would look something like 00-60-8C-DE-AD-12. Thus, global uniqueness was assured as long as vendors didn’t make mistakes.
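
So splitting a MAC address into its vendor and device parts is trivial. A small Python sketch using the 3Com example above:

```python
# Splitting a MAC address into its vendor and device parts, using the
# 3Com example from above.

def split_mac(mac: str):
    parts = mac.split("-")
    vendor_code = "-".join(parts[:3])   # the code assigned to the vendor
    device_code = "-".join(parts[3:])   # assigned by the vendor itself
    return vendor_code, device_code

print(split_mac("00-60-8C-DE-AD-12"))   # ('00-60-8C', 'DE-AD-12')
```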

As technology progressed, new types of networking were invented. For example, FDDI was a fiber-optic technology used to create “metropolitan area networks”, spread through conduits underneath cities. Later, cable companies produced Internet-enabled cable-modems using the DOCSIS standard. Then satellites provided Internet service using standards similar to DOCSIS. There was also the development of WiFi for local home and business networks.

All of these chose to use MAC addresses. Their actual packet formats that go across the wire all look different, but they all use this same MAC address standard. You can interconnect them locally by simply translating the packet formats.

Thus, you have Ethernet and WiFi that work seamlessly together as one local network rather than two separate networks. They actually have different packet formats, but they work together as if they have the same packet format.

The effort by IBM and OSI to impose mainframe networks on everyone failed. There was also a push by the telephone companies to push their own vision, called X.25, based on making network connections look like phone calls. This also failed.

What succeeded was the American military-sponsored technology initially called “the TCP”, now known as “the Internet”.

Just like how Ethernet doesn’t quite fit the IBM/OSI mainframe model, the Internet conforms to that model even worse. It’s not a network but an inter-network. These days, educators pretend they mean the same thing, that the Internet works at layer #3 of the OSI model, but that’s trying to shove something in there that doesn’t fit. IBM and OSI envisioned a single mainframe network stack, with layers, and each layer implementing a different function. An inter-network is instead layered on top of any local network. Instead of one integrated network we have two independent networks, one layered on the other.

Both have “network” layers. Today’s Ethernet and WiFi forward packets through your local network based upon the MAC address. The local network may carry Internet traffic, or it may carry non-Internet traffic. Likewise, the Internet forwards packets across the global network based upon the IP address. Locally, between two hops, the Internet may use Ethernet, or it may use something completely different, like carrier pigeons. These networks aren’t integrated together.
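
Here is a conceptual Python sketch of that independence (the function and the addresses are made up for illustration): at every hop, the local network rewrites the MAC addresses, while the IP addresses inside ride through untouched.

```python
# A conceptual sketch of the two independent networks: at every hop the
# local network rewrites the MAC addresses, while the IP addresses inside
# ride through untouched. The addresses and the function are made up for
# illustration.

def forward_one_hop(packet, my_mac, next_hop_mac):
    packet["dst_mac"] = next_hop_mac    # the local network only looks at MACs
    packet["src_mac"] = my_mac
    # packet["src_ip"] and packet["dst_ip"] are deliberately left alone
    return packet

packet = {"src_mac": "aa-aa-aa-aa-aa-aa", "dst_mac": "bb-bb-bb-bb-bb-bb",
          "src_ip": "192.0.2.1", "dst_ip": "198.51.100.7", "data": b"..."}
packet = forward_one_hop(packet, my_mac="cc-cc-cc-cc-cc-cc",
                         next_hop_mac="dd-dd-dd-dd-dd-dd")
```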

So nowadays you use MAC addresses in anything that looks like a local connection, like Ethernet, WiFi, Bluetooth, and so on. If it’s a piece of hardware that can create a local connection, then it’s likely got a MAC address. On top of that local connection you can have anything, maybe Internet traffic, maybe not.

Note that you really don’t use the LLC or Logical Link Control sublayer. The Internet has its own technologies to resend lost packets end-to-end across the network, such as TCP and QUIC. The Internet doesn’t like the overhead of also doing it locally. Therefore, when you look at your local network, you almost never see LLC — just MAC. The name MAC was created in order to split things into two sublayers, but we still only really use the one sublayer anyway.
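
You can see this for yourself in the 2-byte field that follows the two MAC addresses: values of 0x0600 or greater are a DIX-style type field (no LLC header follows), while values of 1500 or less are an IEEE length field (an LLC header follows). Internet traffic almost always takes the first form, for example type 0x0800 for IPv4. A tiny sketch:

```python
# In the 2-byte field after the two MAC addresses, 0x0600 and above is a
# DIX-style type field (no LLC header), 1500 and below is an IEEE length
# field (an LLC header follows). Internet traffic almost always uses the
# first form, e.g. 0x0800 = IPv4.

def frame_uses_llc(frame: bytes) -> bool:
    field = int.from_bytes(frame[12:14], "big")
    return field <= 1500
```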

So this is where the name comes from. Back in the early 1970s, Xerox (later joined by DEC and Intel) developed the Ethernet technology. When it came time to write standards, they adopted the language of the emerging mainframe networking standards, which used terms like Data Link Control. Then they needed to expand that into two sublayers, which they pretty arbitrarily called Logical Link Control and Media Access Control.

Local addresses became known as MAC addresses. They have taken on a life of their own, existing in forms that look nothing like the original Ethernet. I use SpaceX Internet service. They apparently use MAC addresses internally for radio signals that have nothing to do with either Ethernet or WiFi — yet they can still interconnect with them because of these MAC addresses.


Filiberto Hargett

Update: 2024-12-02