
Storage 101: How to create, share and manage a LUN in SAN storage

Despite pronouncements from some, the days of the logical unit number (LUN) are not over. We look at the LUN in SAN management: how to create, share, provision and manage a LUN

The logical unit number (LUN) has been a bedrock concept in storage for decades. It’s the fundamental “soft” way in which physical media is partitioned in block access storage.

But in recent times, the LUN's role in SAN management has been eroded.

Server virtualisation has seen the rise of new building blocks in data storage that hide the LUN from view or do away with it altogether.

These have been adopted by storage array makers that provide so-called VM-aware storage, as well as those that provide software-defined hyper-converged products that combine compute and storage in the same hardware.

For many, however, the LUN is still a key concept, and retains its place as a fundamental unit of storage management.

Why do we need the LUN?

The short answer is that the file system, object store or virtual file system needs something to mediate between itself and the physical storage media. The LUN – sometimes called a volume – is effectively a layer of virtualisation or abstraction that takes the physical addressing of the media (flash or spinning disk) and presents it to servers and their applications as logically addressed storage.

So, the physical media in a storage array is partitioned into logically addressed portions, and each such portion is a LUN.

When an administrator provisions LUNs using the array's management software, there does not need to be a one-to-one relationship between physical media and the LUNs created.
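As a rough illustration of that abstraction, here is a minimal sketch that models a LUN as a table of logical extents, each pointing at a physical drive and an offset on it. The Lun, Extent and resolve names are hypothetical and exist only to show the idea of logical-to-physical address translation; real arrays do this in firmware.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    drive: str       # physical drive the extent lives on
    start_lba: int   # first physical block of the extent
    length: int      # number of blocks in the extent

@dataclass
class Lun:
    """A LUN presents one contiguous logical block range,
    built from extents on one or more physical drives."""
    extents: list

    def resolve(self, logical_lba: int):
        """Translate a logical block address into (drive, physical LBA)."""
        offset = logical_lba
        for ext in self.extents:
            if offset < ext.length:
                return ext.drive, ext.start_lba + offset
            offset -= ext.length
        raise ValueError("logical LBA beyond end of LUN")

# A LUN spanning two drives: logical blocks 0-999 on disk0, 1000-1999 on disk1
lun0 = Lun([Extent("disk0", 4096, 1000), Extent("disk1", 0, 1000)])
print(lun0.resolve(1500))   # -> ('disk1', 500)
```

The server only ever sees the logical block addresses; where those blocks physically live is the array's business.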


More than one LUN can be built from one drive, and these appear as two or more drives to the user. Every PC user is familiar with a C: drive and a D: drive that reside on the same disk.

Or, LUNs can be created that span several disks in a RAID array. These also appear as separate drives to users.
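A minimal sketch of that flexibility follows, using hypothetical Pool and create_lun names: the same pool of physical capacity can be split into several small LUNs, or carved into a single LUN larger than any one drive.

```python
class Pool:
    """Hypothetical capacity pool built from the usable space
    of one or more physical drives (for example, a RAID group)."""
    def __init__(self, usable_gb: int):
        self.usable_gb = usable_gb
        self.luns = {}          # LUN name -> size in GB

    def free_gb(self) -> int:
        return self.usable_gb - sum(self.luns.values())

    def create_lun(self, name: str, size_gb: int):
        if size_gb > self.free_gb():
            raise ValueError("not enough free capacity in pool")
        self.luns[name] = size_gb

# Two LUNs carved from a single 2 TB drive ...
single_drive = Pool(usable_gb=2000)
single_drive.create_lun("C_drive", 500)
single_drive.create_lun("D_drive", 1500)

# ... or one 10 TB LUN striped across a RAID group of smaller drives
raid_group = Pool(usable_gb=14000)
raid_group.create_lun("big_lun", 10000)
```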

Also, you can share LUNs between a number of servers, such as between an active server and its failover partner.

You need to ensure, however, that only authorised servers can access specific LUNs. This is where SAN zoning and masking come in.

LUN zoning and LUN masking

SAN zoning and masking provide security for iSCSI and Fibre Channel SANs.

On Fibre Channel fabrics you can decide which storage arrays and servers are visible to each other by putting them in the same zone when you configure the fabric switch. With iSCSI SANs this is done using TCP/IP segmentation, such as by setting up a VLAN.

SAN zoning enables servers to see specified ports on a storage array. In this way, bandwidth can be reserved for certain servers while traffic can be blocked between others.

There is “hard” zoning and “soft” zoning. Hard zoning assigns a device to a zone by reference to a port, while soft zoning assigns a node to a zone according to its Fibre Channel World Wide Name (WWN). The Fibre Channel switch puts designated node WWNs into a zone without reference to what port they’re connected to.
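The distinction can be sketched as follows. This is a hypothetical model, not any switch vendor's configuration syntax: a hard zone tests the switch port a device is plugged into, while a soft zone tests the device's WWN.

```python
# Hypothetical model of hard (port-based) and soft (WWN-based) zoning.
# A device is identified both by the switch port it is plugged into and
# by its World Wide Name; the two zone types test different attributes.

hard_zone = {"ports": {3, 4, 12}}                  # zone defined by switch ports
soft_zone = {"wwns": {"10:00:00:90:fa:01:02:03",   # zone defined by node WWNs
                      "50:06:01:60:47:20:1e:0f"}}

def in_hard_zone(switch_port: int) -> bool:
    # Hard zoning: membership follows the physical port; move the cable
    # to another port and the device drops out of the zone.
    return switch_port in hard_zone["ports"]

def in_soft_zone(wwn: str) -> bool:
    # Soft zoning: membership follows the WWN, regardless of which
    # port the node is connected to.
    return wwn in soft_zone["wwns"]

print(in_hard_zone(3))                              # True
print(in_soft_zone("10:00:00:90:fa:01:02:03"))      # True
```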

Meanwhile, LUN masking allows a finer level of control than zoning. After a server and storage system have been zoned together, LUNs can be masked so that the server can see only the ones you want it to see.

LUN masking can be done at the array by reference to a port, so that the LUNs presented on that port are visible only to the servers that connect through it. Or it can be done at the server, which can be configured to see only the LUNs assigned to it.
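As a rough sketch of the idea (the table and names below are hypothetical), masking is simply a per-initiator filter applied after zoning has made the array port reachable:

```python
# Hypothetical masking table on an array port: each initiator (server HBA),
# identified by its WWPN, is allowed to see only the LUNs assigned to it.
masking_table = {
    "10:00:00:90:fa:aa:bb:01": {0, 1},      # production server: LUNs 0 and 1
    "10:00:00:90:fa:aa:bb:02": {0, 1},      # its failover partner: same LUNs
    "10:00:00:90:fa:aa:bb:03": {7},         # backup server: LUN 7 only
}

def visible_luns(initiator_wwpn: str) -> set:
    """Return the LUN IDs this initiator is allowed to see; an
    initiator not in the table sees nothing."""
    return masking_table.get(initiator_wwpn, set())

print(visible_luns("10:00:00:90:fa:aa:bb:03"))   # {7}
print(visible_luns("10:00:00:90:fa:aa:bb:99"))   # set() -> masked out entirely
```

Note that the active server and its failover partner share the same LUN set, which is the shared-LUN clustering case mentioned above.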

LUNs, performance and reliability

It’s important to consider the physical media that LUNs will be created on. Performance and reliability requirements vary according to the workload. If you need very high performance, the LUNs that serve that workload may need to be on flash drives, while other workloads may warrant 15,000rpm Fibre Channel spinning disk or 7,200rpm SATA.

Here, too, the RAID configuration chosen will affect performance and reliability.
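As a back-of-envelope sketch of that trade-off (the write-penalty figures are the commonly quoted rules of thumb, not measurements of any particular array), usable capacity can be weighed against the number of physical I/Os each host write generates:

```python
# Rule-of-thumb RAID trade-offs for an 8-drive group of 2 TB drives.
# "usable" gives usable capacity for n drives of a given size;
# "write_penalty" is the commonly quoted number of physical I/Os per host write.
raid_levels = {
    "RAID 10": {"usable": lambda n, size: (n // 2) * size, "write_penalty": 2},
    "RAID 5":  {"usable": lambda n, size: (n - 1) * size,  "write_penalty": 4},
    "RAID 6":  {"usable": lambda n, size: (n - 2) * size,  "write_penalty": 6},
}

drives, drive_tb = 8, 2
for level, props in raid_levels.items():
    capacity = props["usable"](drives, drive_tb)
    print(f"{level}: {capacity} TB usable, write penalty x{props['write_penalty']}")
# RAID 10: 8 TB usable, write penalty x2
# RAID 5: 14 TB usable, write penalty x4
# RAID 6: 12 TB usable, write penalty x6
```

The general pattern: the more parity protection a RAID level provides, the more physical writes each host write costs, so LUNs for write-heavy workloads are often placed on RAID 10, while capacity-oriented LUNs sit on RAID 5 or 6.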
