A little bit of background (those interested only in the RAID part, feel free to skip ahead to the next section)
I’ve had a Drobo5N for about 2 years and it has been a rocky journey with moments varying from GREAT to HORRIBLE.
It was my first NAS, after all, so at the time I was looking for a cheap and simple device that would store my data. The Drobo5N fit those requirements, but through it I learned what I personally do NOT want in a NAS and what I should look for in my next one.
Let’s highlight some of the Drobo5N downsides:
1. Restrictive control over RAID
While it does offer the option to protect against a single or dual disk failure, it does so with a proprietary technology they call BeyondRAID.
The problem with this is that if the NAS device itself fails, you are left holding your disks with no way to recover the data off them unless you replace the NAS. That could take days or weeks, and sometimes cost money (if your warranty has expired).
2. Long RAID rebuild time
As stated before, it protects against disk failure, but it seems like lunacy to me that if you pull a disk out (even halfway) and immediately push it back in, it takes 24-36 hours to rebuild itself.
3. Slow Performance
While the Drobo does offer a good place to store your pictures, movies, etc., it is not ideal for those wishing to run a Plex server or render/encode video files on it.
Below are 4 samples taken at 10-second intervals; the read speed fluctuates between 5501 kB/s (5.501 MB/s) and 76478.4 kB/s (76.48 MB/s) despite a constant read workload at the time.
Linux 3.2.58-2 (Drobo5N)   08/22/16   _armv7l_   (3 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.89    0.00    4.21   18.56    0.00   76.34

Device:            kB_read/s    kB_wrtn/s
sda                  5501.77      1460.39

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.24    0.00   25.59   40.72    0.00   33.46

Device:            kB_read/s    kB_wrtn/s
sda                 76478.40         1.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.44    0.00   20.43   45.04    0.00   34.10

Device:            kB_read/s    kB_wrtn/s
sda                 61974.40         0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.95    0.00   25.22   40.60    0.00   33.23

Device:            kB_read/s    kB_wrtn/s
sda                 71982.80         2.00
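For reference, the samples above were produced on the Drobo itself with the iostat utility (part of the sysstat package); something along these lines, run over SSH, reproduces them (the exact flags are an assumption on my part):

# iostat -k 10

This prints a CPU-utilization section and a per-device throughput section, in kB/s, every 10 seconds until interrupted with Ctrl+C.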
4. Single volume
The Drobo5N supports a maximum of one volume (virtual directory). This is not ideal management-wise, as some system admins may prefer to have different volumes with different RAID levels: for example, a RAID 6 volume (surviving a dual disk failure) for photos I would never want to lose, and another volume with RAID 0 where I want more performance but don't care about redundancy (something Linux software RAID handles easily, as sketched below).
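A rough sketch with mdadm, using hypothetical disk names, creating two independent arrays with different RAID levels on the same machine:

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdf /dev/sdg

Here /dev/md0 would hold the irreplaceable photos (RAID 6 survives two failed disks), while /dev/md1 would hold disposable, performance-oriented data (RAID 0 has no redundancy at all).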
5. Limited maximum storage of 16TB (Each disk having a maximum capacity of 4 TB)
This was not an issue at first, but over time the available hard drives grew in capacity and became more cost-effective to purchase than the older, smaller ones.
The reason behind this is that the Drobo has a 32-bit processor and a 4 KB page size, allowing it to address 2^32 × 4 KB = 17,179,869,184 KB = 16,384 GB = 16 TB.
At the time of writing this article, both Hitachi and Seagate had released 8 TB 3.5" hard drives, which aren't compatible according to Drobo.
On the bright side, they provided support for up to 64 TB by changing the page size to 16 KB (2^32 × 16 KB = 64 TB).
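A quick sanity check of that arithmetic from a bash shell (results in TB):

# echo $(( 2**32 * 4 / 1024 / 1024 / 1024 ))
16
# echo $(( 2**32 * 16 / 1024 / 1024 / 1024 ))
64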
Running df on the Drobo shows a volume size of 16 TB (even though the disks really provide only about 12 TB of capacity).
Now the last one has a catch, and it was the main motivation for this article: changing the page size can NOT be done dynamically/online while the data is on the volume. The user has to destroy the whole volume and rebuild from scratch.
I also have 9.7 TB of used space, which is difficult to back up to an external disk, and I can't create a separate volume (with the new page size) and slowly migrate the data to it.
So it was best to remove one disk at a time from the NAS (which it can recover from), place it inside the desktop as a blank disk, and move the data over.
However, it wouldn't be wise to go without redundancy, as one of the disks may fail during the migration. That is when I decided to create a RAID array on the desktop running Linux.
On to the main article (RAID creation, the part most of you came for)
The problem was that I had a limited number of disks (fewer than the two required for the smallest RAID level), but I still wanted to create the groundwork filesystem and add disks when I could.
After adding the new disk, we see it has no partitions:
root@timi-kali:~/Downloads# fdisk -l

Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
- Create a partition
- # fdisk /dev/sdX (where sdX is the drive to be partitioned)
- Enter ‘n‘ to create a new partition:
Command (m for help): n
- Enter ‘p‘ for Primary partition and then ‘1‘ to be the first partition.
Select (default p): p
Partition number (1-4, default 1): 1
- If you plan to use the whole disk for the RAID, press enter twice
First sector (2048-2930277167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2930277167, default 2930277167):
- Next press ‘p‘ to print the created partition:
Command (m for help): p

Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa7bdb1c2

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 2930277167 2930275120  1.4T 83 Linux
- Change the type to "Linux raid autodetect" by entering 't' and then 'fd', as seen below:
Partition type (type L to list all types): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
- Enter ‘p’ to print the partition to verify the type has changed
Command (m for help): p

Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa7bdb1c2

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 2930277167 2930275120  1.4T fd Linux raid autodetect
- If you are happy with the created partition, enter ‘w‘ to save the changes.
Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
- Repeat for any additional disks that will be part of the RAID Array.
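If you have several disks to prepare, the interactive fdisk session only needs to be done once; the finished partition table can then be copied to each additional disk with sfdisk. A sketch, assuming the prepared disk is /dev/sdb, the next disk is /dev/sdc, and both disks are the same size (this overwrites the partition table on /dev/sdc, so double-check the device names):

# sfdisk -d /dev/sdb | sfdisk /dev/sdc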
- Use mdadm to view the newly created partition (type fd below means Linux RAID autodetect, as explained before):
# mdadm --examine /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] : 2930275120 sectors at 2048 (type fd)
- Now we will create the RAID device (in our case /dev/md0)
- To create a normal RAID 1 array, we specify a RAID level of 1 and at least two disks, like so:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
- To create a RAID 1 array with one disk and a second simulated missing disk (at least temporarily, until we can get another disk), we run:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing
- Now we verify the RAID device:
# mdadm -E /dev/sdb
/dev/sdb:
...........
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 2930015024 (1397.14 GiB 1500.17 GB)
     Array Size : 1465007488 (1397.14 GiB 1500.17 GB)
  Used Dev Size : 2930014976 (1397.14 GiB 1500.17 GB)
...........
- Reviewing the created array (note that in my case there are two RAID devices but only one total device, because I set the second device as missing; as such, the array is in a degraded state):
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Aug 22 18:55:20 2016
     Raid Level : raid1
     Array Size : 1465007488 (1397.14 GiB 1500.17 GB)
  Used Dev Size : 1465007488 (1397.14 GiB 1500.17 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Aug 26 11:57:04 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
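When a second disk does become available, the missing slot can be filled and mdadm will rebuild the mirror automatically; a sketch, assuming the new disk shows up as /dev/sdc and is at least as large as the first member:

# mdadm --add /dev/md0 /dev/sdc
# cat /proc/mdstat

The second command shows the recovery progress; the state reported by mdadm --detail changes from "clean, degraded" back to "clean" once the rebuild finishes.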
- Now a filesystem has to be created
# mkfs.ext4 /dev/md0
mke2fs 1.43.1 (08-Jun-2016)
Creating filesystem with 366251872 4k blocks and 91570176 inodes
Filesystem UUID: ee30fa64-a5d0-4354-9cd1-e75cd8110a96
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
- Next we create a mount point directory, mount the RAID device on it, and list its contents:
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/
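To have the array assembled and mounted automatically after a reboot, its definition can be recorded in mdadm.conf and an entry added to /etc/fstab. A sketch, assuming a Debian-based system such as Kali (the mdadm.conf path and the initramfs command differ on other distributions):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u
# echo '/dev/md0 /mnt/raid5 ext4 defaults,nofail 0 2' >> /etc/fstab

The nofail option keeps the system booting even if the (still degraded) array fails to assemble.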