In the world of IT acronyms, nothing sounds more like a Marvel supervillain than "JBOD." No, JBOD is not a criminal mastermind bent on world domination (as far as we know) but a non-RAID storage architecture that puts every drive's full capacity to work: the total disk space is simply the sum of its drives.
So what is a JBOD? If the above sounds complicated, know that "JBOD" stands for "Just a Bunch Of Drives" (or "Disks") and is exactly what it sounds like: a collection of disks in a computer. (A JBOD could also be several stacked external disks connected to a computer, but for our purposes, we'll assume the drives are all internal.)
JBOD vs. RAID
Whereas RAID involves configuring drives into a virtual disk that can be used for redundancy and hot-swapping, JBODs have no such redundancy or backup features.
Despite the lack of inherent redundancy, JBODs do have some key advantages over RAID:
- Configuring a JBOD is pretty straightforward: install the drives, and you're done!
- You are maximizing your storage. Remember, RAID arrays achieve redundancy by copying data across multiple drives, which reduces your total usable capacity. There is no such issue with JBODs: the total storage available is simply the sum of your drives' capacities.
- There is much more flexibility, with NO limitations on drive types. If you've read our other blog posts on RAID, you'll know that RAID has some limitations. For example, if you include different-sized drives in the same RAID array, the virtual disk will be limited by the smallest drive. Additionally, you cannot mix SAS and SATA drives in the same RAID array. With JBODs, no such limitations exist: you can mix drives of different capacities and combine SAS and SATA interfaces. Drives with varying RPM speeds, different data transfer speeds, even a mix of SSDs and HDDs are all okay in a JBOD configuration. Go nuts!
- It's Cheap. Have a bunch of spare drives lying around? Install them into a multi-bay server, and now you have a JBOD.
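To put rough numbers on the capacity point, here is a quick back-of-the-envelope sketch. The drive sizes are hypothetical, and the RAID 1 and RAID 5 math is simplified (all-drive mirror and single-parity, respectively), but it shows why a JBOD wastes nothing when the drives don't match:

```shell
# Hypothetical drive sizes in GB: one 4TB, one 2TB, and two 1TB drives.
D1=4000; D2=2000; D3=1000; D4=1000
SMALLEST=1000

# JBOD: usable space is simply the sum of every drive's capacity.
JBOD=$((D1 + D2 + D3 + D4))

# RAID 1 (all four drives mirrored): every drive holds the same data,
# so usable space is limited to the smallest drive.
RAID1=$SMALLEST

# RAID 5: one drive's worth of parity, with every drive treated as
# if it were the smallest one.
RAID5=$((SMALLEST * (4 - 1)))

echo "JBOD: ${JBOD} GB | RAID 1: ${RAID1} GB | RAID 5: ${RAID5} GB"
```

Same four drives, three very different outcomes: the JBOD gives you all 8TB, while the mirrored array gives you just 1TB.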
CUSTOMER QUESTION CORNER: "Will I need a RAID controller for my server if I am planning to implement a JBOD configuration?"
Even though JBODs don't implement RAID, you will in all likelihood still need a SAS/SATA controller card. The native RAID controller in most servers can only manage a maximum of four SATA drives (and ONLY SATA, no SAS). If you plan to use more than four drives, or any SAS drives at all, you will need a dedicated SAS/SATA controller.
Despite these pretty significant JBOD advantages, it's not all sunshine and roses for JBOD storage. Here are some considerations:
- Did we mention NO redundancy? If a drive in a JBOD configuration fails, that data is gone. Kaput. It wasn't striped and mirrored anywhere. If implementing a JBOD, be sure to have a robust backup plan in place (or be really sure you are okay with losing the data).
- Performance-wise, RAID usually beats JBOD. It depends on the configuration and the specs of the RAID card, but RAID is designed to improve the performance of the drives in its array. (The possible exception is a JBOD running the right file system, such as ZFS – more on that below.)
TECH FOOTNOTE: Many JBODs utilize ZFS, a local file system and logical volume manager that can control the server's storage and data retrieval. ZFS includes its own software RAID scheme, RAIDZ (exclusive to ZFS), which can create (some) redundancy for the data in a JBOD. The ins and outs of ZFS are a blog post in and of themselves (we're working on it!), but if you want to read more about ZFS, here is an excellent primer.
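For a taste of what that looks like in practice, here is a minimal sketch of creating a single-parity RAIDZ pool, assuming a Linux or FreeBSD host with OpenZFS installed and hypothetical device names (/dev/sdb through /dev/sde – yours will differ):

```shell
# Create a RAIDZ1 pool named "tank" from four whole drives.
# RAIDZ1 survives one drive failure; usable space is roughly
# (n - 1) x the smallest drive, so mixed sizes still cost you capacity here.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Check pool health and capacity.
zpool status tank
zpool list tank
```

Note that zpool create is destructive to whatever is on those drives, so double-check your device names before running it.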
Mike's Pick for a JBOD Beast
A refurbished rack server is the ideal vessel for creating a JBOD. Why? They are designed with multiple hard drive bays that can be loaded with drives.
We put together an R730XD with all 24 of its bays loaded with 1.2TB SAS drives. The processors and RAM are mid-range spec-wise; you would need higher-performance components for a virtual machine platform such as VMware running the ZFS file system.
We also put together the same configuration with no hard drives installed, ONLY caddies. If you have spare 2.5” hard drives that you want to load into the server, this is the build for you.
Don’t forget – you can always customize through our Configure-to-Order section.
Final Thoughts
If you have several spare drives lying around and need a cheap storage solution, taking advantage of a JBOD server is the easy and affordable way to go. Just be sure to take into consideration backups and disk failure.
Do you have any JBOD questions? Send them our way! We love hearing from our customers and answering their questions!
3 comments
Hi Ronnie. Thanks for the comment below. You are correct: ZFS is open source, not proprietary software. What we were trying to get at is that RAIDZ is exclusive to ZFS. That said, we’ve rephrased this in the blog post to be clear. As for the RAID card, we normally recommend the LSI 9211-8i or 9217-8i, as long as it does not have any special firmware installed. The stock firmware should allow for pass-through. Over the past year, we have received some of these with pass-through available and some with RAID only.
ZFS isn’t proprietary. TrueNAS is probably what most would want JBODs for these days. It’s running FreeBSD and OpenZFS, both of which are open source, not proprietary. ZFS has made its way into Linux too, so you could do it that way as well. The biggest issue is getting a RAID card that can be flashed to IT mode. Most of the PERC controllers in Dell 11th- and 12th-generation servers will do. I think the 13th generation will too, however I’m not sure about that. LSI cards have historically been the go-to cards. They were bought by someone and have since shot themselves in the foot, making it so you have to get an older card to do true JBOD. (AFAIK)
In the case of a PowerEdge R730XD loaded with 24 drives, you still need an OS that can “present” that storage to your network – some kind of OS that supports things like NFS, SMB, or iSCSI. And if you boot VMware ESXi, your storage will be specific to that host only unless you get vSAN licenses to share the host’s storage with other hosts in the vSphere cluster. So a JBOD may sound great, but the outcome of such an implementation could be disappointing if you need more than a large pool of network-attached storage.