
Block storage
Block storage is mainly used in a SAN, through protocols such as FC or iSCSI, which are essentially mappings of the Small Computer System Interface (SCSI) protocol over FC and TCP/IP, respectively.
A typical FC SAN looks like the following diagram:

A typical iSCSI SAN looks like the following diagram:

Data is stored in and retrieved by logical block addresses. When retrieving data, the application essentially asks for X number of blocks starting at address XXYYZZZZ. This process tends to be very fast (under a millisecond), which makes this type of storage very low latency, well suited to transactional workloads, and ideal for random access. However, it also has its disadvantages when it comes to sharing across multiple systems. Because block storage usually presents itself in its raw form, you need a filesystem on top of it that can support writes from multiple systems without corruption; in other words, a clustered filesystem.
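To make that access pattern concrete, the following is a minimal sketch in Python of reading a handful of blocks from a raw block device at a given logical block address. The device path, block size, and addresses are assumptions for illustration only:

```python
import os

# Hypothetical read of two 4 KiB blocks starting at a given logical block
# address (LBA). The device path, block size, and LBA are assumptions.
DEVICE = "/dev/sdb"      # raw block device presented by the SAN (assumed name)
BLOCK_SIZE = 4096        # logical block size in bytes (device dependent)
LBA = 0x2200             # starting logical block address
NUM_BLOCKS = 2           # "I want X number of blocks starting at address ..."

fd = os.open(DEVICE, os.O_RDONLY)
try:
    # pread addresses the device by byte offset (LBA * block size) without
    # moving a shared file offset, which is how random access stays cheap.
    data = os.pread(fd, NUM_BLOCKS * BLOCK_SIZE, LBA * BLOCK_SIZE)
finally:
    os.close(fd)

print(f"Read {len(data)} bytes starting at LBA {hex(LBA)}")
```

In practice, databases and filesystems issue these reads and writes through the kernel's block layer rather than directly, but the addressing model is the same: a block count and a starting address.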
This type of storage also has some downsides when it comes to high availability and disaster recovery. Because it is presented in its raw form, the storage controllers and managers are not aware of how the storage is actually being used, so when replicating its data to a recovery point, they only take blocks into account. Some filesystems are terrible at reclaiming or zeroing freed blocks, which means that unused blocks get replicated as well, leading to poor storage utilization.
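As a rough illustration of why unreclaimed blocks hurt thin replication, consider the following toy sketch, which is not a real replication engine: a block-level replicator that skips all-zero blocks still ends up copying blocks that a filesystem freed but never zeroed:

```python
# Toy model: a thin, block-level replicator copies only blocks containing
# non-zero data. Blocks a filesystem freed but never zeroed still look
# "used" to it and get copied anyway.
BLOCK_SIZE = 4096

def blocks_to_replicate(source_blocks):
    """Return (lba, data) pairs that a thin replicator would actually copy."""
    return [(lba, data) for lba, data in enumerate(source_blocks) if any(data)]

zeroed = bytes(BLOCK_SIZE)                # block properly zeroed after deletion
stale  = b"\xde\xad" * (BLOCK_SIZE // 2)  # deleted file data, never zeroed
live   = b"\x01" * BLOCK_SIZE             # block still in use

print(len(blocks_to_replicate([live, zeroed, zeroed])))  # 1 block crosses the wire
print(len(blocks_to_replicate([live, stale, stale])))    # 3 blocks, only 1 holds live data
```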
Because of its advantages and low latency, block storage is a perfect fit for structured databases, random read/write operations, and storing multiple VM images whose virtual disks generate hundreds, if not thousands, of I/O requests. For this, clustered filesystems are designed to support reads and writes from multiple hosts simultaneously.
However, because of these characteristics, block storage requires quite a lot of care and feeding: you need to look after the partitioning and filesystem that you put on top of your block devices. You also have to make sure that the filesystem is kept consistent and secure, with correct permissions and without corruption, across all the systems that access it. VMs add another layer of complexity because their virtual disks contain filesystems of their own: data is written to the VM's filesystem and, in turn, to the hypervisor's filesystem. Both filesystems have files that come and go, and their freed blocks need to be adequately zeroed so that they can be reclaimed in a thinly provisioned replication scenario because, as we mentioned before, most storage arrays are not aware of the actual data being written to them.
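One common workaround, where discard/TRIM is not supported end to end by the guest, hypervisor, and array, is to zero the free space inside the guest before a thinly provisioned copy or replication run. The following is a minimal sketch of that idea in Python; the mount point and chunk size are assumptions, and filling a filesystem to capacity should be done with care on a live system:

```python
import os

# Minimal sketch of the "zero the free space" trick inside a guest: fill the
# filesystem's free space with zeroes, then delete the file, so freed blocks
# read back as zero to the layers underneath. MOUNT_POINT and CHUNK size are
# assumptions for illustration.
MOUNT_POINT = "/mnt/data"
CHUNK = bytes(4 * 1024 * 1024)       # 4 MiB of zeroes per write

zero_file = os.path.join(MOUNT_POINT, "zerofill.tmp")
with open(zero_file, "wb", buffering=0) as f:
    try:
        while True:
            f.write(CHUNK)           # keep writing until the filesystem is full
    except OSError:
        pass                         # ENOSPC: the free space is now zero-filled
    os.fsync(f.fileno())             # make sure the zeroes reach the device
os.remove(zero_file)                 # the freed blocks now contain zeroes
```

On stacks that do support it, issuing discards (for example, with fstrim on Linux guests) achieves the same result more safely, because the filesystem tells the lower layers exactly which blocks are no longer in use.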