As you might have guessed, configuring enterprise-grade servers takes some serious attention and care. After sourcing the right combination of compatible processors, RAM sticks, RAID controllers, drive bays, you name it, our expert technicians put meticulous work and a whole lot of love into each and every unique server we send out.
Don't just take our word for it. Take a peek behind the scenes at our server-building process to see just what it takes to get a TechMikeNY server ready for action.
One lucky R740XD got tricked out with a mid-bay, a flex bay, and enough breathing room for massive 4Kn block size drives. This build only has one 18-core 3GHz processor, but with 2 procs, you're looking at up to 56 cores. (For you Virtual Machine nerds, that's 112 threads, or virtual CPUs.)
Now that we’ve got some Dell AMD servers on our hands, we get to put things together like this Dell PowerEdge R6415 with 6.8TB of raw storage. Hats off to expert server builder Daniel, who’s built over 3,000 servers since he joined the team at TechMikeNY. (Or, in this case, hats... on?)
This server mammoth is equipped with 1.5 TERABYTES of DDR4 memory. Maybe it's just us, but there’s really something so satisfying about watching all those sticks of RAM line up.
Celebrity Configs Love your config so much that you want to name it after yourself? Easy. Once you’ve put together your build, just type a name for it in the box, and all future TechMikeNY customers will know exactly who invented your beautiful creation.
Check out how we got this 16-core, 64GB RAM Dell PowerEdge R640 ready for rails. And before you ask, no, TechMike is not an official ULine glove partner. Yet.
What’s this? An HP server? Yes, yes it is. This custom build is a 10th generation HP DL360 with 4 bays, 64GB 2666MHz RAM, a P408i-A SR controller, and Platinum 8153 16-core CPUs. In the words of Instagrammer @evertonkernel, "🔥🔥🔥🔥".
Back in the day, we used to build some very pretty R720s. And once upon a time, R610s. We’ve been in the game long enough to develop our very own testing procedures, resulting in industry-leading return rates on parts like HDDs (including new from OEM). How's that for quality control?
The pièce de résistance: this fully loaded R730xd rocks 12 x 10TB drives for a head-spinning 120TB of storage. Ideal for backups or storing all the media humanity has ever created. 😜
Feeling inspired? Good thing there’s a whole slew of rack-ready builds right where these came from. Browse TechMikeNY’s massive inventory of Dell servers, HP servers, and more machines - fully customized to your specs and built to last.
If you have any questions, don’t be shy! Our experts are standing by. (Poetic, right?) Get a free consultation via email at info@techmikeny.com or over the phone at (718) 635-9880.
(Not sure what a Graphics Processing Unit is? Give this a read.)
The most fundamental compatibility consideration is whether the GPU will physically fit and electrically integrate with the server. When it comes to hardware compatibility, these are key questions you'll want to answer:
Physical card height/length: Will the GPU fit within the chassis?
Server GPUs come in full-height and low-profile form factors. Low-profile cards allow installation in 1U servers.
Power draw and connectors: Do you have enough power/the right supplies?
GPUs above 75W require auxiliary power cables. For supplementary GPU power, Dell and HP servers rely on riser cards to supply the physical 6-pin or 8-pin cable connections. And when adding those kinds of GPUs, we recommend 1100W power supply units (PSUs) to provide sufficient wattage headroom. Confirm the connector type and that adequate power overhead is available, especially for the more power-intensive high-end cards.
Connector type: Do you have the right connectors? (The answer is probably yes.)
PCIe slots are backwards compatible. This means a GPU with a PCIe Gen 4 connector can still physically fit and work in a server’s Gen 3 slot, though it will run at the slower Gen 3 speeds.
There’s also a newer connector standard for some NVIDIA cards: SXM (currently SXM4). Some high-end NVIDIA data center GPUs use this proprietary socketed connector for compute. As much bandwidth as it provides, it’s mostly reserved for top-of-the-line GPUs (and PCIe standard connectors aren’t going anywhere).
Motherboard support: How many PCIe lanes are you working with?
x4, x8, and x16 PCIe slots can all be adapted to connect a GPU. But only an x16 slot will allow the GPU to work at full capacity (read: maximum speeds).
Let's take the Dell R740xd and the Dell R640 for example. The R740xd can fit full-height, dual-slot-width GPUs with up to 300W power draw over a Gen3 x16 connection. But the R640 is limited to a single-slot, low-profile GPU that draws less than 75W of power. So matching the server model, chipset generation, PCIe slot bandwidth, and available power to the GPU's requirements is key.
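On a Linux box, one way to see what link a card actually negotiated is `lspci` from pciutils. A minimal sketch, assuming pciutils is installed and using a placeholder bus address (substitute the one `lspci` reports for your GPU; run as root to see full capability info):

```shell
# Sketch: check what PCIe link a card actually negotiated.
# The bus address 01:00.0 below is a placeholder.
if command -v lspci >/dev/null 2>&1; then
  # Locate the GPU's bus address first:
  lspci | grep -iE 'vga|3d controller'
  # LnkCap = what the slot/card supports; LnkSta = what was negotiated.
  # e.g. "Speed 8GT/s, Width x16" on LnkSta means the card is running Gen 3 x16.
  lspci -vv -s 01:00.0 2>/dev/null | grep -E 'LnkCap:|LnkSta:'
fi
```

If LnkSta shows a lower speed or narrower width than LnkCap, the card is being held back by the slot (or by power-saving link states at idle).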
Pro Tip: when ordering a server with a GPU, especially a power-intensive GPU, check to see whether your order comes with an "enablement kit" with auxiliary power cables / other components that don't come standard with the chassis. (Shameless self-promo: any TechMikeNY server with a GPU will automatically come with the cables included.)
Like processors, GPUs get real hot under load. So of course, making sure there's enough cooling for the GPU is essential; of all things that could throttle performance, you definitely don’t want it to be poor airflow.
Confirm that the GPU's airflow requirements match the server's fans, shrouding, etc. This might require special server configurations, like airflow baffles, to route air correctly.
With multiple GPUs, pay special attention to heat concentration in contiguous slots, as more spacing may be required. Most GPU vendors offer solid guidelines on spacing between cards.
The GPU driver package hooks up the hardware capabilities of the physical card to the OS, enabling accelerated workloads by exposing the parallel processing capabilities. To work together properly, though, both the driver and GPU hardware versions should be validated and certified. Mixing and matching the wrong driver/hardware combinations can lead to instability, crashes or limited functionality due to missing libraries. (So make sure you’re downloading the driver version that corresponds to both your specific GPU and the OS version your system is running on.)
Some tools to find compatible drivers:
NVIDIA Driver Compatibility Website (manual)
https://www.nvidia.com/download/index.aspx
NVIDIA GeForce Driver Compatibility Website (manual):
https://www.nvidia.com/en-us/geforce/drivers/
NVIDIA SmartScan Tool (checks your system automatically, requires Java):
https://www.nvidia.com/download/Scansg.aspx?lang=en-us
NVIDIA automatic SmartScan Tool for older systems:
https://www.nvidia.com/download/ScannForce.aspx?lang=en-us
For Windows users, AMD also has an auto-detect tool that checks your PC for compatible AMD Radeon™ Series Graphics, AMD Ryzen™ Chipsets, and Windows® version, then auto-downloads compatible drivers:
https://www.amd.com/en/support/kb/faq/gpu-56
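Whichever tool you use, once a driver is installed you can sanity-check that the OS actually sees the card and the expected driver version. A minimal sketch for NVIDIA cards (the check is skipped gracefully if the driver isn't present):

```shell
# Sketch: report whether the NVIDIA driver CLI is available, and if so,
# which GPU and driver version the OS sees.
driver_status() {
  for tool in nvidia-smi; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: present"
      # Prints e.g. "NVIDIA A2, 535.129.03" per GPU
      nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
    else
      echo "$tool: missing -- driver not installed or not on PATH"
    fi
  done
}
driver_status
```

A mismatch between what this reports and what your workload's libraries expect is the usual cause of the instability described above.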
Aside from baseline compatibility, you’ll also want to keep in mind exactly what you plan on using that GPU for. To make the most of your GPU, check that it aligns with the performance profile required by the target workloads. For example, for inferencing workloads (like transformers for NLP tasks), cards optimized for INT8 precision tend to provide the best throughput/TCO. Meanwhile, training or graphics workloads require high floating-point precision and bandwidth to drive large-scale parallel processing. Matching the card’s capabilities to what you’re doing is crucial – more on that coming soon.
For a long time, GPUs were designed as one-size-fits-all hardware: makers aimed to make deployment easier by standardizing compatibility. But as GPU use becomes more and more common in data centers, the tasks they’re used for are getting more and more specific. Now, as companies explore very specific integration types, the market is moving toward boutique solutions.
Thankfully, resources for GPU integration are abundant in the open source community. If you Google a GPU model + task + “Reddit”, you will find several off-duty scientists explaining their drivers, setups, and fixes.
The future is software-defined. That is to say, down the road, servers will become fully software-controlled to allow flexible GPU use - just like spinning up cloud VMs today.
Doing your homework on what GPU cards work with the server specs is a key step in successful GPU integration, regardless of whether it’s for your data center or your homelab testing environment. Checking vendor manuals is a great place to start. But of course, if you want a hand figuring out the best setup, you can always give us a ring.
Ever sat around and wondered about your RAID controller’s battery? Can’t say we have either. But turns out, Reddit is full of people who are curious to know what it’s for. So naturally, we had to talk about it.
RAID controllers come with a dedicated battery, the RAID battery, that works to preserve the controller's cache memory in the event of sudden power loss. In this blogpost, we’re talking RAID battery backup units (BBUs) - what they’re good for, how they do what they do, and why it’s so helpful to have them around.
A RAID battery (also known as a RAID cache protection battery or Battery Backup Unit) is a key component installed on a RAID controller card. These small battery units connect directly to the card in your server, workstation, or external disk array. Their sole job? To supply emergency backup power in the event of a power disruption so that no data is lost or corrupted.
Above: these batteries are often mounted directly onto the RAID controller card, attached via a short cable to supply backup power to the cache when needed. They may also be located in the enclosure holding the controller.
The battery allows the RAID controller’s memory cache to remain active in the event that main power is cut off. This cache acts as very fast temporary storage, improving speed by allowing the controller to rapidly write data to cache before flushing it to permanent disk storage.
Above: What happens with a RAID battery in a failover scenario on power loss. (If you're looking at this on mobile, press and hold to save and zoom in!)
By stepping in during outages, the battery enables that safe cache clearing process and prevents potential data corruption or gaps.
And an added benefit? As we mentioned in an earlier post in our RAID series, the fact that the battery allows the RAID controller to utilize cache also enables much faster data transfers. So disaster recovery aside, having that backup power source helps accelerate everyday system performance. (Sweet.)
If you have one of the following controllers in your server, it will already have a RAID cache battery or other cache device: Dell H710, Dell H710p, Dell H730, Dell H730p, Dell H740p, HP P440, HP P840.
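If you want to peek at the battery's health from the OS, there are a few vendor CLIs that can report it. A hedged sketch, since which command applies depends on your controller and what management tools are installed (the controller IDs below are assumptions; each check is skipped if the tool isn't present):

```shell
# Sketch: try common RAID management CLIs to show battery/BBU status.
for tool_cmd in \
  "omreport storage battery" \
  "megacli -AdpBbuCmd -GetBbuStatus -aALL" \
  "perccli /c0/bbu show status"
do
  tool=${tool_cmd%% *}   # first word of the command is the binary name
  if command -v "$tool" >/dev/null 2>&1; then
    echo "== $tool_cmd =="
    $tool_cmd
  else
    echo "($tool not installed; skipping)"
  fi
done
```

`omreport` comes with Dell OpenManage Server Administrator, while `megacli`/`perccli` cover LSI/Avago-based cards (which include the PERC line).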
… you've got an array with many disk drives. In other words, the more drives you have in your RAID, the more you should consider a controller with a battery.
…your servers are in an environment prone to sudden power failures or disruptions like outages, surges, or spikes. If your data center’s in a region that’s consistently impacted by storms or earthquakes, it will definitely be useful to have that kind of backup.
…you have critical data constantly flowing to the RAID that allows zero tolerance for halted writes or unavailable access. You probably don’t need the reminder, but RAID battery backups are great helpers for use cases like data feeds from critical monitoring systems, such as medical devices, always-on apps where uptime directly relates to revenue (like online retail sites during sale season), and transactional databases that require constant processing.
Word of Caution: Don’t let this be your end-all, be-all. BBUs work well in conjunction with other power backup solutions, but they only provide a temporary, partial backup. A properly configured UPS (Uninterruptible Power Supply) would give you enough time to shut everything down properly without losing data.
Above: NVDIMM Battery Module.
NVDIMMs (Non-Volatile Dual In-line Memory Modules) serve a similar role to RAID batteries in keeping data safe when power is disrupted. NVDIMMs combine standard DRAM with flash memory storage and capacitors on an integrated server memory module. The capacitors provide backup power to flush data from the volatile DRAM to the nonvolatile flash storage if electricity is cut off.
So while NVDIMMs also offer backup power, they focus on temporary memory rather than RAID controller caches. Both help ensure no data loss, but they work at different components of the storage infrastructure – NVDIMMs on server memory and RAID batteries on RAID card caches.
Shop RAID cards from TechMikeNY
Some RAID storage vendors have been exploring alternatives to standard batteries, like supercapacitors. Supercapacitors have a longer overall lifespan and recharge faster. The tradeoff, however, is that smaller supercapacitors may only supply enough temporary power for the controller to flush cache (not maintain performance by keeping cache active).
Larger scale supercapacitor-based solutions aim to provide both data protection and sustained caching during short outages. But for homelabs and many mid-sized IT environments, the standard RAID battery still offers the best balance of resilient backup power for cache protection and everyday cache performance gains.
Pro Tip: The RAID controller manages the battery's charge level and sometimes discharges and re-charges the battery as part of maintenance operations. So when you first get it, give the battery enough initial charge time. Read: 48 hours. (For more RAID advice, see TechMike’s Quick Fix Troubleshooting Guide – RAID.)
The bottom line is that RAID batteries are a neat little hardware add-on that makes sure temporary data gets made permanent, even if the power’s down. Not only that, but they give everyday data transfers a snappy speed boost. So here’s to RAID batteries. Hope you keep this cached. ;-)
You'll want to remove the air shroud and the fans before you start. This will allow you to properly install the cables that connect the flex bay to the Dell R740xd rear backplane.
Start by installing the flex bay controller itself. When the controller is aligned, push down and slide the controller in.
You may have already read the RAM Compatibility Checklist. (Hopefully.) But there's one crucial specification we didn't cover that determines whether your system even boots up - the max compatible RAM speed.
We know, just when you thought you had this RAM compatibility thing pinned down, there's ANOTHER set of memory specs to keep in mind. But the truth is, finding the right RAM requires looking at both your processor's limits, as well as your system-board's limits. In this blog post, we'll look at why matching the RAM speed is so important for different server generations. (Oh, and we made a handy RAM compatibility chart for different processors - you know we've got you covered.)
(This article includes some info you may want to know from TechMike’s Full RAM Compatibility Checklist.)
As a recap from last time, the key specs to check when you’re looking to see if a RAM module is compatible with your system are:
And as a reminder of the best practices when looking at RAM compatibility…
For how to identify RAM specs based on the module descriptions, check out:
Laptop & Desktop RAM Terminology
Server & Workstation RAM Terminology
In addition to making sure that all of the RAM’s relevant specs line up with that of the specific server, you’re also going to need to look at speed. Regardless of what the individual RAM module’s maximum RAM speed is, the actual fastest RAM speed you’ll be able to achieve will be limited by the max speed that both your processors and your system are able to handle.
The maximum memory speed you’ll be able to achieve in your system will depend on the specific model and generation. For Dell servers, the original system-boards for 13th generation models like the R630, R730, and R730xd support up to 2400 MT/s (DDR4 2400). For 14th generation systems like the R640, R740, and R740xd, the original system-boards support up to 2933 MT/s (DDR4 2933). So you'll want to check what generation your Dell server is and what the specs say is the maximum memory speed for that system board.
RULES OF THUMB:
In 12th and 13th gen servers, you can *technically* install faster RAM than the processors/system-board support; however, the system-board version and the procs will still limit RAM speeds to whatever their max speed is. For example, in 13th gen servers, you can *technically* install DDR4 2666MHz; however, the processors that are compatible with 13th gen servers will only run the RAM at up to 2400MHz.
In 14th gen servers with original system-boards, you cannot install faster than 2666MHz RAM with gen 1 processors (e.g., Silver 4110, Gold 6150, or Platinum 8160); well, you can, but the system won’t boot. Faster RAM here can only be used with gen 2 processors (e.g., Silver 4210, Gold 6250, Platinum 8260).
With the second version of the 14th gen system-board, you'll be able to install faster RAM, but it’ll only run at your processor's maximum RAM speed. (You can use 3200MHz RAM with a v2 14th gen system-board, but it’ll only run at 2933MHz.)
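On a running Linux system, you can see both what your modules are rated for and what they were actually clocked to. A minimal sketch using `dmidecode` (run as root; skipped if unavailable):

```shell
# Sketch: compare each DIMM's rated speed vs. its configured speed.
if command -v dmidecode >/dev/null 2>&1; then
  # "Speed" = what the module is rated for;
  # "Configured Memory Speed" = what the board/CPU actually clocked it to.
  dmidecode -t memory | grep -iE 'speed'
else
  echo "dmidecode not available; check iDRAC's memory inventory instead"
fi
```

A 2666 MT/s DIMM showing "Configured Memory Speed: 2400 MT/s" is exactly the platform-limiting behavior described in the rules of thumb above.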
That being said, the max RAM speed will also be limited by the processors you’re using. So your next move is to…
If you’re looking for processors compatible with Dell’s 13th Gen servers and HP’s Gen9 servers, that would be Intel’s Xeon E5 v3 and Intel’s Xeon E5 v4 families. Generally speaking, the max RAM speed that processors from each of the following series would actually run would be:
Intel E5 v3 Family: 2133 MHz
Intel E5 v4 Family: 2400 MHz
As for Dell’s 14th Gen servers, you’ll be looking at processors from Intel’s Scalable Processors Family. The Scalable Processors are divided up into the categories of Bronze, Silver, Gold, and Platinum (using a version of the Olympic ranking system, with Bronze at the low-end and Platinum at the high-end). The higher-end the processor, the faster the max RAM speed it can run. So below are the fastest RAM speeds that processors from each respective series would run:
BONUS - Note for Nerds:

| Processor Series | Processor Model | Max RAM Speed | Max RAM Size | Compatible RAM Types |
|---|---|---|---|---|
| Intel® Xeon® Processor E5 v3 Family | E5-2640 v3 | 1866 MHz | 768 GB | DDR4 1600/1866 |
| Intel® Xeon® Processor E5 v3 Family | E5-2690 v3 (same RAM specs as E5-2697 v3) | 2133 MHz | 768 GB | DDR4 1600/1866/2133 |
| Intel® Xeon® Processor E5 v4 Family | E5-2667 v4 | 2400 MHz | 1.5 TB | DDR4 1600/1866/2133/2400 |
| Intel® Xeon® Processor E5 v4 Family | E5-2680 v4 | 2400 MHz | 1.5 TB | DDR4 1600/1866/2133/2400 |
| Intel® Xeon® Processor E5 v4 Family | E5-4669 v4 | 2400 MHz | 1.5 TB | DDR4 2400 |
| Intel® Scalable X100 Series | Silver 4110 | 2400 MHz | 768 GB | DDR4 2400 |
| Intel® Scalable X100 Series | Gold 6150 | 2666 MHz | 768 GB | DDR4 2666 |
| Intel® Scalable X100 Series | Platinum 8170 | 2666 MHz | 768 GB | DDR4 2666 |
| Intel® Scalable X200 Series | Silver 4210 | 2400 MHz | 1 TB | DDR4 2400 |
| Intel® Scalable X200 Series | Gold 6252 | 2933 MHz | 1 TB | DDR4 2933 |
| Intel® Scalable X200 Series | Platinum 8268 | 2933 MHz | 1 TB | DDR4 2933 |
| Intel® Scalable X300 Series | Silver 4310 | 2667 MHz | 6 TB | DDR4 2667 |
| Intel® Scalable X300 Series | Gold 6354 | 3200 MHz | 6 TB | DDR4 3200 |
| Intel® Scalable X300 Series | Platinum 8352Y | 3200 MHz | 6 TB | DDR4 3200 |
If the RAM type matches up with the processor and the max system board speed, congratulations! You made everything line up juuuust right. Goldilocks would be proud.
Need help figuring out if your RAM’s compatible? Just ask us at info@techmikeny.com or (718) 635-9880.
The PowerEdge R640 sports a sleek and space-saving 1U rack-mountable chassis that can be easily installed in any standard 19-inch rack.
Inside, it supports the latest second generation Intel Xeon Scalable processors (Intel C620 chipset) with up to 28 cores per socket, 6 memory channels and 48 PCIe 3.0 lanes per socket.
Memory-wise, the R640 has 24 288-pin DIMM slots and supports speeds up to 2933MT/s, with configs up to 7.68TB using 128GB DCPMMs and LRDIMMs. (For optimal performance, balance memory channels across processors and populate all channels evenly.)
Storage includes up to 10 front-loading 2.5" drive bays for SAS/SATA HDDs and NVMe SSDs. For I/O-heavy workloads, populating all bays with high capacity NVMe SSDs takes full advantage of the dedicated PCIe lanes. Benchmarks show NVMe SSD arrays hitting over 12GB/sec sequential read speeds in this server.
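If you want to sanity-check sequential read throughput yourself, `fio` is a common benchmarking tool for this (not necessarily how the figure above was produced). A hedged sketch; the device path is a placeholder, and `--readonly` ensures no writes are issued:

```shell
# Sketch: sequential read benchmark against an NVMe device with fio.
# DEV is a placeholder -- double-check you target the right device.
DEV=/dev/nvme0n1
if command -v fio >/dev/null 2>&1 && [ -b "$DEV" ]; then
  fio --name=seqread --filename="$DEV" --rw=read --bs=1M \
      --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=30 --time_based --readonly
else
  echo "fio or $DEV not available; skipping benchmark"
fi
```

Aggregate bandwidth across several NVMe drives benchmarked in parallel is what gets you into the multi-GB/sec range.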
Equivalent servers from HP include the ProLiant DL360 Gen10, which has a similar form-factor and component options.
This PowerEdge can be equipped with high-performance Intel Xeon CPUs with up to 28 cores each and massive memory capacity of up to 7.68TB for data-intensive workloads. It has dedicated PCIe lanes for increased I/O performance.
Storage scalability is excellent with support for up to 10 SAS/SATA HDDs or SSDs, including NVMe options, in hot-swappable drive bays. The R640 can also be configured for software-defined storage with options like vSAN ready nodes.
Also see: How to Use NVMe Drives in an R640
The Dell PowerEdge R640 is managed through iDRAC (integrated Dell Remote Access Controller), Dell's version of IPMI, which allows complete local and remote control, monitoring, and updating of the server's components and firmware. It integrates with the user-friendly OpenManage Enterprise console and works with OpenManage Mobile, giving IT administrators robust control over their infrastructure.
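Because iDRAC speaks IPMI, plain `ipmitool` can run out-of-band checks against it from any workstation. A minimal sketch; the IP and credentials are placeholders, and IPMI over LAN must first be enabled in the iDRAC network settings:

```shell
# Sketch: a small wrapper for out-of-band queries against an iDRAC.
# Credentials are placeholders -- substitute your own.
idrac_ipmi() {
  # usage: idrac_ipmi <idrac-ip> <ipmitool subcommand...>
  if [ "$#" -lt 2 ]; then
    echo "usage: idrac_ipmi <idrac-ip> <command...>" >&2
    return 1
  fi
  local host=$1; shift
  ipmitool -I lanplus -H "$host" -U root -P calvin "$@"
}
# Examples:
#   idrac_ipmi 192.168.1.120 sdr type Temperature   # temperature sensors
#   idrac_ipmi 192.168.1.120 power status           # chassis power state
```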
With features like hot-plug drives, redundant power supplies and fans, Dell's fault resilient memory, and physical security options, the R640 delivers rock-solid reliability for business-critical applications.
The R640 can be configured to have a nice balance of processing power, memory density, storage capacity and I/O performance. With that in mind, the R640 excels for workloads including:
For lighter workloads, consider the PowerEdge R440, which has less expandability but starts at a lower price point. The R740 offers greater expansion room with up to 26 drives.
The Dell PowerEdge R640 squeezes impressive performance and flexibility into a compact 1U rack form factor. With its scalability, intuitive management, reliability features and competitive pricing, it packs a punch with excellent price-to-performance ratio and strikes an ideal balance for SMBs and enterprises running critical applications.
“This is my lab, 14 Dell servers, all from you guys. I would never buy a server from anyone else.
This is a bare metal Kubernetes stack that is deployed with a container OS (flatcar) and is used to simulate larger deployments. There is a lot of custom software development that is performed on this cluster.
1 server is the main router with 3 dual port 10 interfaces to simulate 3 racks and top of rack l3 switching. The infrastructure is backed by the R730 as the main deployment server with 24 x1g disks in a ZFS pool. Three servers act as the Kubernetes controller and ceph monitor servers, and the remaining 9 are used as nodes and as ceph OSD servers providing persistent volumes for the Kubernetes cluster.
Currently it is all off, as I am redesigning it to be all layer 3 BGP routed, so no l2 switching.”
"My homelab is a place to use and hone my work skills. I’m an OT Infrastructure engineer.
From the top down:
> 48 port Unifi switch with POE. This runs several cameras and all my network devices and WiFi.
> 16 port 10GB unifi switch which runs my Desktop, Storage network and VM networks.
> Pfsense Main Firewall running 2.7.0 that runs my 2GB fiber internet connection. Some items I run on here are PFBlocker (Pihole for PFSense) Suricata for IPS/IDS.
> Fortigate 60F secondary firewall and tunnel to work
> R820 with 64 cores and 512GB of RAM running XCP-NG
> R720 with 32 cores and 512GB of RAM running XCP-NG
> R820 with 32 core and 512GB of RAM running TrueNAS this machine has 2 1.2TB Intel NVME PCI cards for Cache, 8 900GB SSDs for a VM pool
> Netapp disk shelf, this has 18 10TB drives. I use this storage for bulk storage for family and friends, steam library, ISOs, and secondary storage for VMs.
> 2 Dell UPS's that need new batteries.
I use my homelab for all sorts of stuff. I host several game servers including Minecraft, ARK Survival, 7 Days to Die and more, on the nerdy side. I run a few containers for my own code repo and runners that redeploy websites when I push to my repos, of course a RSS feed VM that monitors Distrowatch for new releases of ISO. I also run AWX to run scripts against my cluster to keep them up to date."
I've got Xfinity cable internet coming into this, ARRIS SB8200, the go-to for cable internet. I've got a Sonos amp, bunch of Sonos speakers in the house. (This one is hardwired to create the Sonos net.) Patch panel up here, I have a Lorax 4k camera system in the attic, with 8 cameras around the house, those all wired in the box, and then the box into the network. 24 port UniFi switch Gen 2 with the DAC cable. UDM-SE. Run about 8 VLANs, pretty well segregated. I've got a PD power bar here that is okay.
I've got an old OptiPlex that runs ESXi 8. On here, my lab runs a Windows 2019 domain controller and a Veeam 365 server. I do need to be careful working on this, because it's both production and lab. I also have a Plex server running on a physical box. I've replaced the internal hard drives in both of these, and OptiPlexes run forever.
Got a 10TB Western Digital drive and a 4TB backup drive. This is it, and it's rock solid!"
(Got a homelab setup you want to enter? Send it to us @techmikeny on Instagram or via email at info@techmikeny.com!)
Dell servers have temperature sensors that monitor system heat. When you install a PCI card, it changes the airflow inside the server. If the new PCI card gets hot, it could reduce cooling to other components.
To prevent overheating, Dell servers automatically increase fan speeds if they detect a non-Dell PCI card. This quickly ramps up airflow to compensate for the unknown card. Dell's rationale is that non-Dell cards haven't been validated, so the servers assume they need extra cooling.
For 12th generation and 13th generation Dell servers, you have two main options to tell your fans to ~cool it~ and bring their speed back down to normal:
Dell has provided an IPMI command that disables the automatic fan response to third-party PCI cards.
To use this command, you'll need ipmitool installed and IPMI over LAN enabled in your iDRAC settings. Then, you’re ready to go.
(Note: Substitute in your own iDRAC IP, username, and password in the commands below.)
Run the following command to disable the automatic fan response:
ipmitool -I lanplus -H <iDRAC IP> -U <user> -P <password> raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00
If you want to re-enable the automatic fan response, you can run a similar command but with "0x00 0x00 0x00" at the end (instead of "0x01 0x00 0x00").
To check the current status of the automatic cooling response, run:
ipmitool -I lanplus -H <iDRAC IP> -U <user> -P <password> raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00
If disabled, the response will be: 16 05 00 00 00 05 00 01 00 00
If enabled, the response will be: 16 05 00 00 00 05 00 00 00 00
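The three raw-byte invocations above can be wrapped into one small helper so you don't have to retype them (12th/13th gen iDRACs only). The IDRAC_* values are placeholders; substitute your own:

```shell
# Sketch: wrap the disable/enable/status raw IPMI commands from above.
IDRAC_HOST=192.168.1.120   # placeholder
IDRAC_USER=root            # placeholder
IDRAC_PASS=calvin          # placeholder

fan_response() {
  # usage: fan_response disable|enable|status
  local ipmi="ipmitool -I lanplus -H $IDRAC_HOST -U $IDRAC_USER -P $IDRAC_PASS"
  case "$1" in
    disable) $ipmi raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00 ;;
    enable)  $ipmi raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00 ;;
    status)  $ipmi raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00 ;;
    *) echo "usage: fan_response disable|enable|status" >&2; return 1 ;;
  esac
}
```

For example, `fan_response status` runs the status query against the iDRAC configured at the top.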
You can also manually set a specific fan speed percentage. Enabling manual control and setting a fan speed override will bypass the automatic fan response.
To do this:
1. Enable manual control:
Using the following command, you can enable manual control of the fan speed.
ipmitool raw 0x30 0x30 0x01 0x00
2. Set a manual fan speed percentage in hex:
ipmitool raw 0x30 0x30 0x02 0xff 0x2D
In this case, 0x2D is equivalent to 45% fan speed. Adjust the hex value to your desired percentage.
Keep in mind that after making these fan adjustments, you’ll want to keep an eye on system temperatures to make sure adequate cooling is still occurring.
(Note: In the event that you absolutely need to disable manual control, here’s the command to run. Doing so, however, will return your fans to running at whatever speed Dell deems necessary: ipmitool raw 0x30 0x30 0x01 0x01)
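To save yourself the hex arithmetic, the percent-to-hex conversion and the speed-override command can be combined into a tiny helper. A sketch; the ipmitool lines are commented out so you can review before actually applying them:

```shell
# Sketch: convert a fan percentage to the hex byte ipmitool expects.
set_fan_percent() {
  local pct=$1
  if [ "$pct" -lt 0 ] || [ "$pct" -gt 100 ]; then
    echo "percentage must be 0-100" >&2
    return 1
  fi
  local hex
  hex=$(printf '0x%02x' "$pct")   # e.g. 45 -> 0x2d
  echo "Setting fans to ${pct}% (${hex})"
  # Uncomment to actually apply (local ipmitool, as in the steps above):
  # ipmitool raw 0x30 0x30 0x01 0x00        # enable manual control
  # ipmitool raw 0x30 0x30 0x02 0xff "$hex" # set the override
}
set_fan_percent 45   # prints: Setting fans to 45% (0x2d)
```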
For 14th generation PowerEdge servers, you may be out of luck, as neither of the above solutions currently work. The IPMI command was phased out in more recent iDRAC updates, and manual fan control does not override the automatic response.
To use non-Dell PCI cards in 14th generation Dell servers, you will unfortunately either need to tolerate the high fan noise… or replace the cards with Dell versions.
12th and 13th generation Dell servers give you options to control fan speed after installing non-Dell PCI cards. For 14th gen servers, there don’t seem to be any viable solutions. (Unless you’ve found a workaround, that is… in which case, we’d love to hear about it in the comments below!)
Having trouble telling your fans to cool it, got some tips for 3rd party cards, or just need some help? Drop a line down below or email us at info@techmikeny.com.
Want your boss, purchasing team, spouse, or even archnemesis to buy you that new server you've been eyeing? Well, we've added a handy "Share Cart" feature to make it happen.
Understanding that our customers often need approvals on their orders prior to purchasing them, we decided to design a feature to make the process smoother. The new "Share Cart" button allows you to seamlessly forward a link to your cart to anyone, regardless of what browser they use.
You’ll find it conveniently located directly to the right of the "Clear Cart" button on your cart page. (Make sure you don’t get those two mixed up!) When you’re ready, simply click on the button, and the link to everything in your cart will instantly be copied to your clipboard.
No more having to manually list out that perfect custom configuration you've painstakingly chosen. No more emailing an extensive parts list. Just click and share!
Whether you want to send your purchasing team your ideal setup or add something to your holiday wishlist, we hope that the "Share Cart" feature makes it just a little easier to bring new gear into your life.
Let us know if you have any other feature requests - we're always aiming to improve your shopping experience here at TechMikeNY!
A homelab is a small setup of servers, networking equipment, and other IT infrastructure that you can experiment with without having to leave the house. Having a homelab lets you create a hands-on environment for learning practical IT skills, experimenting with new technologies, and testing things out without impacting real production systems. DIY setups let you take risks and try things that you otherwise couldn't in a professional setting.
Homelabs are great for improving your skills and knowledge on your own time. The right setup allows you to develop experience with enterprise-grade hardware, virtualization, hosting services, networking, storage systems, and much more. And of course, they can also be used for fun projects like hosting game servers, web apps, and media centers for personal use. Overall, homelabs are the perfect playground for IT professionals, students, and hobbyists looking to expand their technical abilities.
The core component of any homelab setup is the server hardware that will run your virtual machines, storage systems, and applications. You don't necessarily need the latest enterprise-grade rack servers to get going – when you’re first starting out, even an old desktop PC can work. The key considerations you’ll need to have in mind are CPU processing power, enough RAM for smooth performance under load, and reliable storage.
Some general rules of thumb:
Of course, the exact configuration you’ll want to start with highly depends on what you’re planning on using your homelab to test out. If you already have a specific use case in mind and need help figuring out how to put together a build that matches (or are just curious to see what’s out there), check out our guides to different server applications and recommended configurations.
Part 1: Database Servers, Web servers, Domain Controllers, File Servers, Remote Desktop Servers
Part 2: Media Servers, Routers/Security Gateways, Gaming Servers, Administrative Servers, General Purpose Virtualization Hosts/Hypervisors
Getting Specific: Top 10 Questions to Ask When Buying for a Home Lab
Networking equipment forms the backbone that allows devices in your homelab to communicate with each other and access networks outside the home, like the internet. At minimum, you'll need a router to connect your homelab to your home network and modem. An Ethernet switch creates a high-speed local area network between wired devices like servers and desktop PCs by linking them together. This allows very fast file transfers and communication between these wired nodes.
Wireless access points can also be added to provide WiFi connectivity, allowing your mobile devices and laptops to access the homelab network wirelessly. More advanced options include enterprise-grade switches with more ports and bandwidth, as well as firewalls for network security and VLANs to segment your network into subnetworks.
The goal is having fast, reliable, and robust networking gear so that your various homelab components can communicate and transfer data efficiently. A high-performance network fabric ensures your storage, servers, clients and other devices can interoperate smoothly. Investing in capable networking equipment is definitely an important step in creating a versatile home server environment.
Just like your brain needs memory space, your homelab needs storage space - reliable and ample storage capacity is crucial for running virtual machines, hosting files, storing data, and backing up critical system configurations.
For primary storage, you can start simple with internal hard drives (either HDDs or SSDs) installed directly into your server hardware. However, this local storage is limited in capacity and lacks redundancy if a drive fails. External USB drives can supplement storage, but managing multiple separate drives can get messy quickly.
For more advanced needs, network-attached storage (NAS) appliances provide shared pools of storage that can be accessed by devices across your home network. NAS systems also support RAID configurations like RAID 5 or RAID 10 to combine multiple physical drives into a single logical volume, increasing capacity, speed, and redundancy. With RAID, if one drive fails, your data remains safe on the other drives.
More on RAID configurations
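For a concrete feel for those capacity/redundancy trade-offs, here's a rough sketch in Python (the helper name is ours, purely illustrative) of usable capacity and fault tolerance for the common RAID levels. Real arrays lose some capacity to metadata and formatting, so treat these as approximations:

```python
def raid_summary(level, drives, size_tb):
    """Approximate usable capacity (TB) and guaranteed drive-failure
    tolerance for common RAID levels, assuming equal-size drives."""
    if level == 0:
        return drives * size_tb, 0            # striping: no redundancy
    if level == 1:
        return size_tb, drives - 1            # all drives mirror one drive
    if level == 5:
        return (drives - 1) * size_tb, 1      # one drive's worth of parity
    if level == 6:
        return (drives - 2) * size_tb, 2      # double parity
    if level == 10:
        return (drives // 2) * size_tb, 1     # striped mirrors: at least one
    raise ValueError("unsupported RAID level")

# Example: four 4TB drives in each configuration
for level in (0, 1, 5, 6, 10):
    usable, tolerance = raid_summary(level, drives=4, size_tb=4)
    print(f"RAID {level}: {usable} TB usable, survives {tolerance} failure(s)")
```

Note that RAID 10 can often survive more than one failure, as long as the failed drives land in different mirror pairs; the figure above is the guaranteed minimum.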
Other advanced file systems like ZFS are also useful on NAS devices, providing protection against bit rot and data corruption. For the highest performance, you can opt for cutting-edge enterprise-grade SSDs and NVMe drives - just be prepared to pay a premium.
When choosing storage, weigh factors like your budget, capacity requirements, desired speeds, and redundancy needs. And think about the future; plan ahead when sizing and selecting storage so you don't run out of space for all of your homelab projects and VMs!
In addition to your core infrastructure of servers, networking, and storage, having dedicated workstations and client devices is super helpful for having a flexible setup.
Workstations like desktop PCs and laptops allow you to access and manage your homelab environment. They can serve as administration consoles for controlling your servers, monitoring systems, and configuring services. Powerful workstations also provide the muscle for development work, media editing, and other intensive tasks.
On the other end, client devices are invaluable for testing different user scenarios. By mimicking various real-world endpoints, you can validate how your applications, network, and infrastructure will perform for users. Useful client devices include:
The more diverse your client devices, the more thoroughly you can test your homelab systems under real-user conditions. Once again, you don’t need the latest and greatest gear here - older and cheaper devices often work perfectly well as test clients. Putting together a range of workstations and clients pays dividends in allowing comprehensive testing.
• • •
And there you have it - the core components that make up a versatile homelab environment. It can be intimidating to get started, but with the right setup, you can begin to create your very own personal playground for learning IT skills, testing applications, and so much more. Start small as you need, then expand your setup over time. The possibilities are endless when you have your own homelab to tinker with!
And that’s it! That's how to toggle between BIOS and UEFI. Now go and switch boot modes to your heart's content.
Short-term memory isn't perfect. It’s meant to store all the critical data you need right away to keep all processes running quickly and smoothly. And when it isn’t functioning properly... well, ask anyone who’s absent-minded. (Yes, we’re talking about Random Access Memory - RAM.)
RAM is crucial to keeping everything on track. So how do you keep your server's memory solid and prevent random bits from corrupting important data? Well, that depends on whether you go with Error Correcting Code (ECC) RAM or non-ECC RAM. In this blogpost, we're talking all about these two kinds of RAM, their roles in data integrity, and what to factor in when you’re deciding between them.
Short answer: You can’t. Not directly, at least. With non-ECC, the lack of error correction means you'll need to implement regular memory testing and monitoring tools to keep a close eye on memory errors creeping up over time. But for non-critical applications where speed is king, non-ECC strikes a decent cost-performance balance.
Still unsure which route to take? Our techs are always available to help point you in the right direction. Get in touch with us at info@techmikeny.com and we'll help you find the ideal memory solution for your server setup.
It’s been a while since we’ve talked about virtualization. Before we dive into the platforms and types, let’s review some of the basics.
Here are a few refreshers from our article explaining virtual machine basics:
“Think of a Virtual Machine (VM) as a computer within a computer. If you think of a traditional desktop or laptop, it runs on a single Operating System (OS), and that OS is tied to the system's hardware resources. In virtualization, a type of software called a hypervisor allows you to create a new virtual computer (the virtual machine) within that host computer …[This virtual machine can have] all the same features as a traditional computer; e.g., its own network IP or its own OS (which can be different from the host OS).” - TechMikeNY.
In essence, a VM host, known as a hypervisor, is software that enables the creation and management of virtual machines. It runs directly on physical hardware, such as a desktop, laptop, or server, and acts as a mediator between the physical hardware and the virtual machines.
A virtual machine (VM), meanwhile, is a software emulation of a physical computer system. It operates within the VM host and runs its own operating system and applications. What’s really neat is that multiple virtual machines can run within a single hypervisor on the same hardware; the hypervisor just manages the virtual machines in a way that allows the resources of the physical server to be allocated between them efficiently.
“As you can imagine, having a "computer inside a computer" allows you to run processes – and take risks – that you would typically not want to do with a system running a single OS. For example, if you experience a system crash in a virtual environment, the consequences are minimal compared to [those that would occur] if your laptop's OS crashed. (Spoiler Alert: the latter would be Game Over.)” - TechMikeNY.
Virtual machines are ideal for concurrently using different versions of applications, creating secure backups, and isolating experimental environments, but perhaps their most valuable benefit for do-it-yourself tinkering is the room they provide for trial and error.
VM hosts/hypervisors exist in two primary forms: Type 1 and Type 2.
We took a look at the key features of the most popular virtualization platforms. In this guide, we’ve included what it is that each platform is good for, a bit about its management interfaces, and a few other considerations you’ll want to keep in mind when looking for the perfect platform to run a “computer within a computer”.
Each of these platforms is meant to scale from small businesses to large enterprises and supports live migration, snapshotting, dynamic memory and cross-platform compatibility. If you see one of those mentioned as a key feature, we thought the platform’s specific execution of the feature made it distinct.
Best for Windows Workloads
Price: $24.95 to $199. Free version available (see Note 1).
Security: ★★★★☆
Management Interface: ★★★★☆
Scalability: ★★★★☆
Microsoft Hyper-V is a robust, Type 1 “bare-metal” hypervisor developed by Microsoft specifically for Windows workloads. Hyper-V enables users to create and manage virtual machines (VMs) running on Windows-based operating systems, allowing for the virtualization of various Windows applications and services as well as a wide range of Windows guest operating systems.
For managing multiple Hyper-V hosts, VMs, and advanced features, Microsoft provides the Hyper-V Manager and System Center Virtual Machine Manager (SCVMM) as management tools for performance monitoring and resource allocation (similar to vSphere’s vCenter).
Some key features of Microsoft Hyper-V:
Microsoft Hyper-V is particularly well-suited for users heavily invested in the Microsoft ecosystem; it seamlessly integrates with other Microsoft products and tools, such as Active Directory and System Center suite, allowing for a cohesive management experience. Hyper-V is able to support a significant number of VMs on a cluster of hosts, and between its live migration feature and centralized management capabilities, it scales well.
1 Note: Hyper-V Server is available to download for free from Microsoft, but you would still need licenses for the Windows VMs themselves. One Windows Server Standard license covers up to two Windows Server VMs on a Hyper-V host; one Windows Server Datacenter license allows unlimited VMs.
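As a toy illustration of the rule of thumb above (one Standard license covers up to two Windows Server VMs; one Datacenter license covers unlimited), here's a quick Python sketch. Real Windows Server licensing also depends on host core counts, so treat this purely as back-of-the-envelope arithmetic, not licensing advice:

```python
import math

def standard_licenses_needed(vm_count):
    """Standard licenses for a given number of Windows Server VMs,
    using the simplified 'one license per two VMs' rule of thumb."""
    return math.ceil(vm_count / 2)

# At some VM count, a single Datacenter license beats stacking Standard:
print(standard_licenses_needed(2))   # 1 Standard license
print(standard_licenses_needed(5))   # 3 Standard licenses
print(standard_licenses_needed(40))  # 20 -- probably Datacenter territory
```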
Security: ★★★★★
Management Interface: ★★★★★
Scalability: ★★★★★
VMware vSphere is a comprehensive virtualization and cloud computing platform developed by VMware. vSphere enables the creation and management of VMs; each VM represents a self-contained and isolated instance of an operating system and applications, running on a single physical server.
At the core of vSphere is the VMware ESXi hypervisor, a Type 1 “bare-metal” hypervisor that runs directly on the server hardware. ESXi can be installed and managed independently, without vSphere or vCenter. For managing multiple ESXi hosts, VMs, and advanced features in unison, vSphere offers a centralized management interface called vCenter Server: a single point of control through which administrators can monitor performance, allocate resources, and automate tasks.
Some key features of vSphere:
VMware vSphere can potentially support thousands of VMs on a cluster of hosts. It leverages features like Distributed Resource Scheduler (DRS) and Distributed Power Management (DPM) to optimize resource utilization and energy efficiency.
Security: ★★★★☆
Management Interface: ★★★☆☆
Scalability: ★★★★★
KVM, which stands for Kernel-based Virtual Machine, is an open-source virtualization technology that is tightly integrated with the Linux kernel. It provides a platform for virtualizing Linux-based workloads, making it an ideal choice for users running Linux applications and services.
In terms of management, KVM offers several options, including command-line tools and graphical interfaces like Virt-Manager. While the management interface for KVM may not be as feature-rich or user-friendly as some of the others on this list, it does provide essential management capabilities, such as creating, starting, stopping, and monitoring VMs.
Some key features of KVM:
KVM is particularly popular among cloud providers and Linux-focused deployments due to its performance and strong integration with the Linux ecosystem. It offers excellent support for virtualizing Linux workloads and can accommodate a large number of VMs on a single physical server. Overall, it’s a reliable and efficient choice for anyone seeking to virtualize Linux workloads, offering strong security as well as good scalability and integration with the Linux ecosystem.
(Note: while KVM itself is open source, there are various distributions and management tools available that provide additional features and support for KVM-based virtualization.)
2 If you want to get nerdy about it: Making the guest operating system aware of the virtualization layer is a concept called paravirtualization, where the guest OS is modified to collaborate effectively with the hypervisor. The modification typically involves replacing certain hardware-specific operations with calls to the hypervisor. Virtio is a powerful framework for implementing paravirtualization.
Security: ★★★★☆
Management Interface: ★★★☆☆
Scalability: ★★★★☆
Proxmox VE (Virtual Environment) is an open-source virtualization platform that combines virtualization, containerization 3, and software-defined storage in a single solution. It is based on Debian GNU/Linux and utilizes the KVM hypervisor and LXC (Linux Containers) technology to provide a robust virtualization platform, and its integration with Zettabyte File System (ZFS) is a big draw for many of its users.
Proxmox VE offers a web-based management interface, the Proxmox GUI (Graphical User Interface), which allows users to easily create, configure, and manage virtual machines and containers. Unfortunately, the GUI has no built-in Docker management, which makes deploying Docker-based applications a little more difficult. However, the interface does provide a comprehensive set of tools for monitoring and managing resources, performing backups and restores, and implementing high availability and clustering.
Some key features of Proxmox VE:
Overall, Proxmox VE provides a solid foundation for virtualization. As far as virtualization platforms go, Proxmox VE is pretty quick to get up and running; the combination of its open-source nature and the access the platform provides to a robust set of features makes it particularly popular among educational and research institutions. (Which in turn makes for excellent community support.) Its flexible architecture allows for easy integration of virtual machines and containers, enabling efficient resource management, high availability, and some top-notch scalability.
3 Good to know: Containerization is a way to package applications and all the things they need to run into small, self-contained units called containers. These containers are like little isolated bubbles that can be easily moved around and run on different computers. They make it easier for developers to build, ship, and run applications, and they help save computer resources by running more efficiently.
Xen is a powerful open-source server virtualization platform that provides efficient and secure virtualization capabilities for a wide range of workloads. Developed by the Xen Project, Xen offers a reliable solution for virtualizing server environments.
As a Type 1 hypervisor, Xen runs directly on the server hardware, enabling the creation and management of multiple VMs. It supports various guest operating systems, including Linux, Windows, and BSD, making it a versatile choice for users with diverse workload requirements.
Xen provides a management interface called XenCenter, which allows administrators to monitor and control virtual machines, allocate resources, and perform essential management tasks. Given the extensive features that some other platforms offer, we only gave XenCenter three stars, but XenCenter does still offer essential functionality for managing virtualized environments.
One important aspect of Xen is its focus on security. It ensures that each virtual machine is isolated from others, which means that even if one VM is compromised or experiences an issue, it does not affect the other VMs running on the same server. This isolation helps protect sensitive data and applications from unauthorized access or potential threats.
Some key features of Xen:
Xen is particularly well-suited for anyone looking to consolidate their server infrastructure and improve efficiency. It offers excellent security measures, ensuring isolation between VMs, and has a reputation for stability and performance. Xen's live migration and distributed architecture contribute to its outstanding scalability, a facet of Xen that has made it an appealing choice for cloud service providers, hosting providers, research environments, and financial institutions.
Note: While Xen offers a robust open-source solution, Citrix's XenServer is a popular alternative option if you’re looking for something more enterprise-grade. XenServer, based on the Xen hypervisor, provides a commercial virtualization platform with additional tools for high availability, a centralized management console, live migration, and advanced networking options. We chose to focus on Xen because it’s open-source and compatible with a wide range of operating systems, but Citrix's XenServer is a very solid option.
Security: ★★★☆☆
Management Interface: ★★★☆☆
Scalability: ★★☆☆☆
Oracle VirtualBox is a widely used open-source virtualization platform that enables users to run multiple operating systems simultaneously on a single physical machine. A Type 2, or “hosted” hypervisor, it provides a flexible solution for desktop virtualization, offering compatibility with various host operating systems, including Windows, macOS, Linux, and Oracle Solaris.
Oracle VirtualBox was intended for individual use, so in comparison to the other platforms mentioned, its management interface has fewer of the more advanced features necessary for a multiple-user environment. You can use it to configure virtual hardware settings, manage storage, and control network configurations, but as far as virtualization platforms go, it's not the most sophisticated. If you need a temporary environment for testing or for running remote applications, VirtualBox can function as a short-term solution, but we wouldn’t recommend using it long-term for larger-scale organizational use.
Some key features of VirtualBox:
Oracle VirtualBox is favored by individuals, developers, and small-scale environments due to its open-source nature and cross-platform compatibility. It is a cost-effective virtualization option great for testing software configurations, running virtualized environments, or exploring new operating systems.
Security: ★★★☆☆
Management Interface: ★★★★☆
Scalability: ★★★☆☆
Some key features of Parallels Desktop:
Cloud-based virtualization is an alternative form of virtualization; as the name implies, it's a way of creating and managing VMs in the cloud instead of using physical hardware. In cloud-based virtualization, the virtualization layer is deployed and managed by a cloud service provider, who provisions and allocates the necessary computing resources for running virtual machines.
If you’re working on virtualization in a data center or otherwise dealing with an infrastructure that is reliant on physical hardware, you may not need to focus on cloud-based virtualization right now. Some cloud service providers have a good infrastructure for resource management, and cloud environments do typically offer redundancy / failover mechanisms. However, to avoid vendor lock-in and ensure you’re taking the safest route in terms of data security, access control, and compliance requirements, we would recommend going through the more traditional, on-premises virtualization.
Virtualization is a critical component of cloud computing, data centers, and software development. Embracing virtualization makes it possible to optimize infrastructure, consolidate resources, streamline operations, and create an insulated testing environment. Platforms like VMware and Microsoft Hyper-V provide powerful tools to effectively manage virtualized environments that offer significant advantages; from increased efficiency and scalability to better disaster recovery and room for trial and error, these systems allow you to effectively leverage some pretty neat and transformative technology.
Some server configurations you may want to consider:
Processors: Total of 28 Cores (2x 2.60GHz E5-2690v4 14-Core Processors)
Memory: Total of 256GB (8x 32GB PC4-2666V RAM)
Hard Drives: Total Storage of 15.0TB (5x NEW 1TB SSD SATA 2.5'' 6G + 5x 2TB 7.2K SAS 2.5'' 12G)
RAID Controller: H730P
Processors: Total of 48 Cores (2x 2.10GHz Platinum 8160 24-Core Processors)
Memory: Total of 384GB (12x 32GB PC4-2666V RAM)
RAID Controller: H730P
Processors: Total of 48 Cores (4x 2.40GHz E5-4657Lv2 12-Core Processors)
Memory: Total of 768GB (48x 16GB PC3-12800R RAM)
Hard Drives: Total Storage of 240GB (2x 120GB SSD SATA 2.5'' 6G + 6x Caddies / Sleds)
RAID Controller: H730P
If you’re interested in learning more, many of the open-source platforms we mentioned above have solid documentation and widely used community forums. These great resources are filled with support for understanding specific platforms as well as the more general, complex concepts associated with virtualization.
Xen Project Official Website
Virtual Box Official Website
KVM Information + Forums
Kubernetes
Docker
More on containers
The R430 and the R530 have 4 built-in 1GbE RJ-45 network ports. This means you do not necessarily need to choose a network daughter card, though you still have the option to add PCIe network interface cards if the built-in quad-port 1GbE isn't sufficient. The R630, R730, and R640 do not have built-in ports, so for those, you do need to select at least a daughter card - and you can still add PCIe NICs as required.
Back of an R430 with built-in 1GbE RJ-45 ports. The one in the middle is the iDRAC Enterprise port.
Great question. Unfortunately, it’s not possible to attach a SAS drive directly to the Xbox One and use it as a common mass storage device for media. You may be able to utilize a compatible SAS to USB adapter to connect the drive to your Xbox. While you can connect a NAS to the Xbox One, Xbox does not have the option for you to use it for game storage (as games would suffer from the latency in retrieving files).
Hard Drives: Everything to Consider All in One Place
Super doable. In addition to the number of physical processor cores in your server configuration, you can divide each physical core into multiple virtual CPUs (vCPUs). Each vCPU acts as a "slice" of the core's processing power. So if you have a server with, say, a quad-core CPU, you could create four virtual machines (VMs) and assign 2 or 3 vCPUs to each. Each VM then gets 2-3 slices of a core's processing power, for 8-12 vCPUs in total.
If your services don’t need a full core’s worth of processing power, this will let you sustain 60 VMs on a budget without nearly as much hardware or as many physical cores. Ideally, you’ll want to stick to a ratio of 2:1 vCPUs to physical cores (e.g., up to 8 vCPUs for a quad-core processor). You can push it further and assign more vCPUs, but be cautious: if you go beyond the recommended ratio, you might run into performance issues. When you overcommit vCPUs, the tasks from the guest OSes have to be scheduled on the host CPU cores, and this can lead to bottlenecks*.
(*To explain it visually, imagine there are two lines of people waiting to buy tickets, but there's only one ticket seller. The ticket seller has to alternate between the lines to serve everyone. Similarly, when you have more vCPUs than physical cores, the CPU cores have to share their processing power, leading to potential slowdowns.)
To transition away from pieced-together hardware, you can achieve your goal of running 60 VMs on a budget under $3,000 – all on a single server.
How? Any 13th generation server will do the trick. The maximum 13th gen servers can support is two 22-core processors. With 2 x 22-core procs, you could run a whopping 88 VMs in total. If you prefer, you could also go with 2 x 18-core processors (like these) for 72 VMs, but you may want a little more cushion depending on the service these VMs are running.
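The arithmetic above can be sketched in a few lines of Python. The function name is ours, purely for illustration, and the 2:1 ratio is the rule of thumb from this post, not a hard limit:

```python
def max_vms(physical_cores, vcpus_per_vm=1, ratio=2):
    """How many VMs a host can sustain at a given vCPU:core
    oversubscription ratio (rule of thumb, not a hard limit)."""
    total_vcpus = physical_cores * ratio
    return total_vcpus // vcpus_per_vm

# Two 22-core processors (44 cores) at the conservative 2:1 ratio:
print(max_vms(44))                   # 88 single-vCPU VMs
print(max_vms(36))                   # 72 VMs with 2x 18-core processors
print(max_vms(44, vcpus_per_vm=2))   # 44 VMs if each VM needs 2 vCPUs
```

Beefier VMs eat into the total quickly, so size `vcpus_per_vm` for the heaviest service you plan to run, not the lightest.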
Dell introduced a new, more efficient power supply unit (PSU) form factor for the R650. The new PSU is significantly smaller and saves a good deal of space at the rear of the server. This new design seems to be what allowed for a lot of components in the back to be shifted around.
One major upgrade: the R650 has integrated BOSS (Boot Optimized Storage Solution). As the name suggests, BOSS is a dedicated storage solution designed to optimize booting the operating system. In the R650, the integrated BOSS comes in the form of an M.2 SSD (which the smaller PSU left room for). Note that there are two slots you can use for installing/running the OS, which can be set up in RAID 1 so that if one drive fails, the other keeps things moving. This allows for uninterrupted uptime in case of a BOSS SSD failure and improved overall system responsiveness.
In R640s, BOSS can be added via PCI-e slot. So the best part of having integrated BOSS in the R650? A now-free PCI slot to use for other adapter cards, networking, external storage, or whatever (compatible) device your heart desires.
The R650 comes with much better built-in support for NVMe (Non-Volatile Memory Express) storage via the H755 or H755N RAID card. NVMe is a storage protocol specifically designed for SSDs that offers several advantages over traditional storage options.
One key benefit of NVMe is its significantly faster data transfer speeds compared to traditional storage interfaces like SATA. NVMe leverages the PCIe (Peripheral Component Interconnect Express) interface, enabling direct communication between the storage device and the CPU. This direct connection minimizes latency and allows for much higher data transfer rates, making for improved storage and overall system performance.
Quick tutorial: How to use NVMe drives in an R640
It’s certainly possible to set up your R640 for NVMe use. But as opposed to the R640, the R650 was designed to fully harness the capabilities of NVMe storage; that means optimized data transfer pathways, reduced latency, and parallel processing. On the storage performance potential front, we think this addition makes the R650 outperform the R640.
The R650 takes a pretty massive leap in memory with 32 DIMM slots, compared to the R640's 24. The R640 offers up to 3TB of RAM with load-reduced DIMMs, while the R650 raises the bar with an impressive maximum memory capacity of up to 12TB. Not bad, not bad at all.
The R650 offers support for 3rd Generation Intel® Xeon® Processors, a step up from the R640 (which only supports up to 2nd gen processors). This upgrade unlocks CPUs with more cores, pins, and faster bus speeds. To take full advantage of this, the PCIe slots also needed a speed boost, so the R650 got higher bandwidth PCIe Gen4 slots to match. Gen4 doubles the maximum bandwidth for PCIe, making it possible to hook up more powerful peripheral devices like high-performance graphics cards, network adapters, storage controllers, etc.
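To put rough numbers on that doubling: PCIe Gen3 signals at 8 GT/s and Gen4 at 16 GT/s, both with 128b/130b encoding, so per-direction throughput works out as sketched below. This is back-of-the-envelope math; real-world throughput runs a bit lower due to protocol overhead:

```python
def pcie_gbps(gt_per_s, lanes):
    """Approximate per-direction PCIe throughput in GB/s:
    raw rate x 128b/130b encoding efficiency, 8 bits per byte."""
    return gt_per_s * (128 / 130) / 8 * lanes

print(f"Gen3 x16: {pcie_gbps(8, 16):.1f} GB/s")    # ~15.8 GB/s
print(f"Gen4 x16: {pcie_gbps(16, 16):.1f} GB/s")   # ~31.5 GB/s
```

That per-lane doubling is why a Gen4 x8 slot can feed a device that would have needed a full x16 slot on the R640.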
Assembly Tip: The R650’s new retention lever also makes it much easier to carefully install processors.
Shop TechMikeNY Dell PowerEdge R650
If you're curious about the full feature by feature generational comparison table, you can check it out here: Dell PowerEdge R650 Technical Guide (see page 8)
Bottom line - with its faster processors, beefed up memory, and high-speed storage, the R650's next-gen design blows the R640 out of the water. The new PSU form factor allowed for upgrades like swappable I/O that make this server built to last. So for IT pros ready to step up their infrastructure's performance, we’d say the R650 is definitely an upgrade in terms of storage and delivers way more power for running top-tier workloads.
Looking to configure an R650 but not quite sure where to start? Talk to the nerds at TechMikeNY and we’ll help you out. Just email info@techmikeny.com, call (718) 635-9880, or drop us a line in the comments below.
Let's get right into it.
And that's it! That’s how to do a full system reset on your server.
To test it out, we took ChatGPT for a brief spin and asked it what someone using a Dell workstation would need to know. In this blogpost, we’ll show you what happened, tell you about what went into choosing our prompts, and talk about some of the pros and cons of using ChatGPT when it comes to IT.
ChatGPT has been shown to be good at summarizing the information it’s been given and using human language to formulate replies to queries. On the surface, it looks like it could be a really helpful tool for IT support; it takes chat messages as inputs, it’s great at returning relevant information from long stretches of text, and it’s designed to respond in conversational language that’s easy to understand. Give it a miles-long technical guide, and it’ll tell you most anything you want to know. However, unlike Google Bard, it doesn’t have access to the internet, and its training data is limited to information available up to September 2021; that means it doesn’t always provide accurate, up-to-date information.
With its potential advantages and pitfalls in mind, TechMikeNY decided to test out ChatGPT in the context of IT. But instead of just asking it to answer the same questions that our customers ask us, we wanted to see what would happen if we asked ChatGPT to tell us what kinds of questions someone might ask about a particular machine. Why? Well, first of all, we want our customers to be as informed as possible, so we try to be prepared to answer anything; our techs are always testing things out in different configurations ~just in case someone might need to know~. But even after all of these years, we still occasionally get surprised by some of the questions that we receive. We figured that if ChatGPT could successfully identify questions that our customers ask us (without giving it any samples of what we receive), it could serve as a good way to expand what we put on our troubleshooting guides and potentially present us with new topics to research to prepare for the unexpected.
We chose to use the Dell Precision 3420 for this trial run. It is a small, robust workstation that can accommodate a GPU and NVMe drives, making it a low real-estate, high-performance beast. The Precision 3420 has been around long enough for there to be comprehensive documentation, it’s still widely in use, and it’s a machine that folks want to play around with and customize (meaning that there also exist plenty of community forums where folks have answered relatively niche questions). It’s a great workstation, and there’s plenty of information on it.
To start off the conversation, we asked the following question:
There were two intentional phrases incorporated into the question:
ChatGPT responded with the following:
As requested, ChatGPT provided us with a list of 10 categories, with 2 example questions for each category. These consisted of the following:
In order to test the accuracy of the data ChatGPT had on the 3420 (without feeding it a specifications sheet), the nature of the questions it would come up with, and the quality of the shortened answers it would provide, we selected one of the categories we have a solid sample size of – Hardware Compatibility Questions.
This is the prompt that we used:
Here are the questions that ChatGPT came up with and the answers that it provided.
Answer: Yes, the Dell Precision 3420 supports RAM upgrades. It has four DIMM slots and can support a maximum of 64GB of DDR4 ECC or non-ECC memory.
TechMike’s review: True. Good question.
Answer: Yes, the Dell Precision 3420 has multiple drive bays for adding additional storage drives. It supports 2.5-inch and 3.5-inch SATA hard drives, as well as M.2 PCIe SSDs.
TechMike’s review: Accurate.
Answer: Yes, the Dell Precision 3420 allows for graphics card upgrades. It has a PCIe x16 slot that can accommodate a wide range of dedicated graphics cards for improved graphics performance.
⚠️ TechMike’s review: Relevant question, accurate information. But if you’re thinking what we’re thinking about the limited space for graphics cards, see question 5.
Answer: Yes, the Dell Precision 3420 supports dual monitors. It has multiple video outputs, including DisplayPort and HDMI, allowing you to connect two monitors simultaneously.
⚠️ TechMike’s review: It’s true that the 3420 supports dual monitors, but this answer assumes that all 3420s come with an automatically usable HDMI port. This is inaccurate. If you have a PCIe graphics card (or any other PCIe device) installed, the onboard HDMI ports are typically disabled; they only become functional again if you remove the PCIe adapter. So while there is an HDMI port, you can’t necessarily use it to support dual monitors.
Answer: Yes, the Dell Precision 3420 has a compact form factor, and the available space for the graphics card may be limited. It is recommended to choose a graphics card that fits within the physical constraints of the workstation's chassis.
TechMike’s review: True, the available space for the graphics card is in fact *quite* limited. When we were reading through these answers we wanted to add more detail to the answer from question 3, and it looks like ChatGPT has its basics covered. It provided the answer to our follow-up question… just in question form. (Maybe if we asked it to list the questions in logical order, it would have listed 5 directly after 3.)
Answer: While the Dell Precision 3420 comes with a pre-installed CPU cooler, it is generally possible to replace it with an aftermarket cooler as long as it is compatible with the CPU socket and fits within the workstation's chassis.
⚠️ TechMike’s review: Another “it depends” situation. The 3420 has a phenomenal built-in cooling system, but that aside, it can be tricky to install an aftermarket cooler due to the limited space. (Water cooling could work here, with some DIY tweaking.) ChatGPT answered the question but did not necessarily lead the potential user down the correct path; it mentioned neither the difficulty of the task nor the quality of the built-in cooling system.
Answer: Yes, the Dell Precision 3420 supports USB 3.0. It has multiple USB 3.0 ports, which provide faster data transfer speeds compared to USB 2.0.
TechMike’s review: We’ve gotten this one from our customers before. It’s important to know. And the answer's true, it does have multiple USB 3.0 ports; ChatGPT even explained what they are and how they compare to USB 2.0. Three cheers for superspeed.
Answer: The Dell Precision 3420 does not have built-in Wi-Fi or Bluetooth capabilities. However, you can add Wi-Fi and Bluetooth functionality by using compatible expansion cards or USB adapters.
TechMike’s review: We’ve also gotten this question from our customers! This answer’s accurate, and so’s the question; it doesn’t just ask whether the Precision 3420 has built-in capabilities, but by formulating the question in terms of compatibility, ChatGPT gave itself room to provide a solution in the answer (“compatible cards/expanders”).
Answer: No, the Dell Precision 3420 does not have built-in Thunderbolt ports. It supports a range of other connectivity options, including USB, DisplayPort, and HDMI.
⚠️ TechMike’s review: Yep, true. Well, mostly. It’s the whole PCI-adapters-disable-onboard-HDMI-ports situation again; see our review of question 4.
Answer: Yes, the Dell Precision 3420 supports both ECC (Error-Correcting Code) and non-ECC memory modules. ECC memory can help detect and correct certain types of data corruption, providing increased reliability for critical tasks.
⚠️ TechMike’s review: Unfortunately, ChatGPT didn’t account for the fact that Intel Core i3/i5/i7 processors can only be paired with non-ECC memory. That covers 3 of the 4 processor options the Dell Precision 3420 supports, so the better answer here would be “Yes, in some situations, but it depends on the processor”. (The 3420 supports the Intel Xeon E3, which is compatible with ECC RAM.) Though ChatGPT did fill out the rest of its sentence requirement with accurate information on what ECC memory is good for, this would not be a helpful answer for someone actually deciding what kind of RAM to upgrade their system with.
We were happy to see that ChatGPT did an excellent job pinpointing the specific questions that someone using a 3420 would need the answers to. Our customers have definitely asked us about USB 3.0 compatibility, Thunderbolt compatibility, and graphics. Before submitting the queries to ChatGPT, a few of us were actually talking about the fact that we should let our customers know about the Dell Precision 3420’s WiFi capabilities, and we were pleased to see the question incorporated.
It was *especially* neat to see that ChatGPT picked up on the question hidden in our request. To know what kinds of questions someone might have about hardware compatibility, you need to be able to answer, “What are people seeking to do if they’re asking about hardware compatibility?”
And it didn’t disappoint. ChatGPT correctly identified that people looking for information on hardware compatibility are thinking about upgradability, expandability, and even connectivity options. Rather than offering up a Q&A straight from a specs sheet, it correctly identified the goals of someone asking about hardware compatibility and provided us with a set of questions intended to help someone expand and customize their setup. Were they new questions? Not necessarily. But modifying the prompts to incorporate “unexpected” or “uncommon” questions could yield different results.
We ended up with 5 ⚠️-flagged answers and 5 accurate ones. In the same message in which it shared the 10 questions + answers, ChatGPT made sure we knew that it might not get everything:
The disclaimer is accurate. The LLM has a solid base of information, but it can’t account for variation in the “it depends” scenarios. If you’re in a pickle, you would still want to ask your techs, who have the equipment in front of them and can test things out.
In all fairness, we asked ChatGPT to provide 2-3 sentence answers without directly sending it a specs sheet to synthesize or asking it to account for all possible scenarios (like PCI adapters disabling the onboard HDMI ports). To get more out of a language model like ChatGPT for answering your questions, some work with context injection is required – feeding it the relevant specs and constraints up front. It’s great at synthesizing information and identifying important points, and it doesn’t think anything is “too obvious”.
From an IT perspective, ChatGPT is a great tool for writing and research tasks like drafting troubleshooting guides and summarizing technical information. However, it can’t look at a photo to tell you what’s wrong, or rely on past experience to form an innovative solution. It is ultimately a tool and should be used accordingly; human interaction and editing are definitely required. If you want the nitty-gritty details and someone willing to problem-solve with you on the go, you’re probably better off asking a tech… for now.
Want us to run more experiments or share your own adventures with ChatGPT for IT? Suddenly tempted to learn more about Dell Precision 3420s? Drop a comment below or shoot us an email at info@techmikeny.com!
TrueNAS Scale is a powerful, free network-attached storage (NAS) operating system with an easy-to-understand graphical user interface, making setup a relatively intuitive process. In this guide, we’ll walk you through the process of installing and configuring TrueNAS & Plex so you can have your media server running in no time.
Due to their ease of setup and space efficiency, using a desktop is the way to go when setting up Plex on TrueNAS. Unlike rack servers, desktops are compact, fitting well in home or small office environments; you might not want a full rack in your TV room. They also require minimal technical expertise, featuring user-friendly interfaces and straightforward installation processes, so after it's set up, it's something the whole family can use. (If you want them to.) Plus, desktops are definitely going to be the quieter option, so they won’t interrupt movie night.
Importantly, desktops also offer reduced energy consumption. Media servers generally need to be turned on all the time, and using a desktop instead of a rack server definitely saves power (and your electricity bill). Desktops also happen to excel in heat dissipation, thanks to their advanced cooling technologies. That means better performance and longer-lasting components. Essentially, setting up your media server on a desktop is the more efficient, sustainable choice.
Once TrueNAS boots, you’ll be able to head into the WebUI to get TrueNAS ready to host Plex. The first step will be creating pools within TrueNAS.
A pool is a logical storage construct in TrueNAS that combines multiple physical drives into a single entity. Pools enable you to allocate drives, configure redundancy options like mirroring, and create datasets for organizing and managing your files and data. In this case, we’ll be creating datasets specifically intended to store media files and configuration files for Plex.
A note on redundancy when setting up the pool: to avoid data loss, it’s recommended to mirror at least 2 identical drives when setting up a pool (so your data survives if one fails).
Additionally, if you want your entire pool to be encrypted, you must check the encryption box at this stage during the pool creation process. You will later be able to choose whether or not you want your child datasets encrypted, but you won’t be able to encrypt the entire pool if you don’t do so here.
Once the pool has been created, we need to set up two datasets on that pool. One will be a designated space for your media, and the other will hold the configuration files for Plex.
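For reference, those WebUI steps correspond roughly to the following ZFS commands. This is a sketch only; the pool name `tank` and the device names are placeholders, and TrueNAS expects you to manage pools through the WebUI so its middleware stays in sync.

```shell
# Create a mirrored pool from two identical drives (placeholder device names)
zpool create tank mirror /dev/sda /dev/sdb

# One dataset for media files, one for Plex's configuration files
zfs create tank/media
zfs create tank/plex-config

# Confirm the layout
zfs list -r tank
```

These commands require root and a system with ZFS installed; on TrueNAS, stick to the GUI for the real thing.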
ALTERNATIVE METHOD: If you don’t want to set up an SMB share, a simpler way of getting your media to “Media” is by installing and using File Browser within TrueNAS. File Browser is a web-based file manager app. In our case, we would use it to add media to “Media”.
For more on how to set up and use File Browser within TrueNAS, check out this helpful video.
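If you went the SMB route instead, mounting the share from a Linux client looks roughly like this. The server IP, share name, and username are placeholders for your own values:

```shell
# Mount the TrueNAS "Media" SMB share (placeholder IP and credentials)
sudo mkdir -p /mnt/media
sudo mount -t cifs //192.168.1.50/Media /mnt/media -o username=youruser

# Copy media in, then unmount when finished
cp ~/Videos/example.mkv /mnt/media/
sudo umount /mnt/media
```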
Now you’re ready to install Plex! Well, almost; you need to have a Plex account set up and ready before proceeding. (Here's how to create an account.)
When Plex says it’s Active under the “Installed Applications” tab, you’ll need to log into the Web Portal to set up your library.
And you’re all done. Congratulations! You now have a centralized media server. Feels good, right?
We hope you’ve found this guide for setting up Plex on TrueNAS helpful. If you have any questions, don’t be a stranger – drop us a line in the comments below or at info@techmikeny.com.
A CPU acts as the "brain" of a computer, handling general-purpose tasks such as running operating systems, executing software, and managing system resources. It excels in sequential processing, making it ideal for single-threaded applications and general computing tasks.
GPUs, on the other hand, are specialized processors designed for rendering and accelerating graphics-related tasks. They are optimized for parallel processing, making them highly effective in handling large amounts of data simultaneously. GPUs are commonly used in applications like gaming, video editing, 3D modeling, and scientific simulations.
Some CPUs also have GPU cores that enhance the CPU’s capacity for parallel processing and rendering graphics. However, in terms of their functional hierarchy, it’s a one-way street; you can have a CPU without a GPU, but you can’t conventionally utilize a GPU without a CPU.
CPUs typically consist of a few cores (ranging from 2 to 64+), each capable of executing multiple threads. These cores are designed with complex instruction sets and cache hierarchies to maximize single-threaded performance.
GPUs feature a larger number of simpler cores (ranging from hundreds to thousands) that work together to process data in parallel. While individual cores are less powerful than CPU cores, their sheer number allows for massively parallel processing capabilities, making GPUs well-suited for tasks that can be divided into smaller, independent operations.
CPUs excel in tasks that require high single-threaded performance, such as general-purpose computing, web browsing, office applications, and database management. They are also crucial for server tasks like handling multiple virtual machines, file serving, and complex computational workloads that don't benefit significantly from parallel processing.
GPUs, on the other hand, shine in highly parallelizable workloads, including graphics-intensive applications, machine learning, artificial intelligence, scientific calculations, and cryptocurrency mining. These tasks involve processing large datasets and performing repetitive calculations, making GPUs significantly faster compared to CPUs in such scenarios.
CPUs tend to consume less power compared to GPUs and are often designed to operate within a limited thermal envelope. They are generally easier to cool, making them suitable for systems with limited cooling capabilities like rack servers and compact workstations.
GPUs consume more power due to their parallel processing architecture and require robust cooling solutions. Tower servers and desktop workstations are better equipped to handle the higher power requirements and cooling demands of GPUs.
In most modern computing systems, you can have both a CPU and a GPU working together. In fact, it’s pretty common to find systems that utilize both processors to maximize performance and efficiency. This configuration is commonly referred to as "heterogeneous computing" or "CPU-GPU integration."
Adding a GPU when gaming allows for dedicated graphics processing, relieving the CPU of rendering duties and enabling it to focus on game logic, physics, and AI. This synergy results in a seamless gaming experience with lifelike graphics, immersive visuals, and smooth gameplay.
Integrating a GPU in machine learning and AI tasks empowers the CPU's training and inference algorithms with massive parallel processing power. By leveraging the GPU's capabilities, the training of deep neural networks becomes more efficient, enabling tasks such as image recognition, natural language processing, and data analysis to be performed faster and with improved accuracy.
Incorporating a GPU in scientific simulations and computations complements the CPU's handling of complex calculations and data management. The GPU's parallel processing prowess accelerates the processing of large datasets, leading to faster simulations and enhanced research capabilities in various scientific domains.
No surprises here; graphical processing units are incredible for, you guessed it, graphics. Adding a GPU to the 3D modeling/CAD workflow enables accelerated rendering and visualization of intricate 3D models. With the GPU's parallel processing capabilities, it efficiently handles the complex calculations required for generating and manipulating high-resolution graphics. This integration results in real-time visualization, reduced rendering times, and overall improved performance, allowing designers and engineers to work seamlessly with complex models, make adjustments in real-time, and overall just have a much more efficient workflow.
CPUs and GPUs are distinct processors with different strengths. CPUs excel in single-threaded performance and general-purpose computing, while GPUs stand out for their parallel processing abilities and specialized focus on graphics-intensive applications. So, basically? For tasks requiring massive parallel processing and high-performance graphics, adding in a GPU is the way to go.
Want to learn more? Need help configuring your server? Don't be shy. Comment below or send your questions to info@techmikeny.com !
Why use NVMe drives instead of an ATA interface? Simple answer: because it gives you the throughput of the PCIe bus straight to the drives. And why does that matter? Because the PCIe bus is considerably faster than what you would get with SAS or SATA.
In this blog post, we’ll walk you through everything you need to know to use NVMe drives in a Dell PowerEdge R640, from ensuring that your server is equipped with the necessary hardware components to properly configuring the server's BIOS.
NVMe (Non-Volatile Memory Express) drives are a type of solid-state drive (SSD) that leverage the PCIe (Peripheral Component Interconnect Express) interface to deliver exceptional speed and performance. Unlike traditional storage protocols like SATA, NVMe drives provide a direct connection to the PCIe bus, allowing for significantly faster data transfer rates and lower latency.
With NVMe drives, you’ll come across two common form factors: M.2 and U.2. M.2 drives are small, compact drives that are typically installed directly onto the motherboard of a compatible system, making for a convenient and space-saving solution. The M.2 form factor is more commonly used in consumer desktops, laptops, and smaller form factor systems.
In rack servers, including the PowerEdge R640, NVMe SSDs often utilize the U.2 form factor. U.2 drives, also known as 2.5-inch NVMe drives (and formerly known as SFF-8639), are slightly larger and physically resemble traditional hard drives. They are designed to fit into drive caddies, allowing for easy installation, removal, and maintenance.
(If you’re super duper new to NVMe and want to know more, check out our article all about NVMe SSDs.)
Dell servers are not natively compatible with NVMe M.2 drives. Dell BOSS (Boot Optimized Server Storage) cards are for M.2 SATA only. For M.2 NVMe, you would need to use a third-party PCIe adapter, which may or may not be supported by the Dell server hardware.
First, you'll need to make sure that the PowerEdge R640 has a PCIe NVMe expander card. Most R640 servers have at least two PCIe adapter slots (SFF-8654 slots). NVMe backplane cables are also required to connect the PCIe NVMe Extender Controller Adapter Card to the system board. You’ll also need 2 CPUs installed to be able to see the NVMe drives.
(Pictured: 10-bay R640, PCIe adapter slots.)
You will need to physically install the NVMe U.2 drives into the drive caddies and then into the NVMe bays. Ensure that the drives are installed correctly and properly seated in the slots. NOTE: The NVMe drive bays are numbered from right to left.
(Pictured: Cable management for an R640 with NVMe drives configured.)
Note: The NVMe drives also support hot-swapping. Read more about hot-swapping.
Access the server BIOS through “System Setup” by hitting F2. The NVMe drives should show up under “Device Settings” in this initial menu. You can also navigate to the "System BIOS" section, and if the "NVMe Settings” sub-menu is visible, that’s a good indicator that the server is compatible. Additionally, make sure that the "Boot Support" option is set to "UEFI," as NVMe drives require UEFI boot mode.
Once the NVMe drives are installed and the BIOS settings are configured correctly, you can proceed with installing your operating system (OS). Make sure that your OS supports NVMe drives, and that you have the necessary drivers installed. Nowadays, Windows and Linux distros have their own built-in NVMe drivers, but you can research your OS to double check if you need any additional drivers.
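Once the OS is up, you can sanity-check that the drives are visible over PCIe. On Linux, for example (assuming the `nvme-cli` package is installed for the `nvme` command):

```shell
# Each U.2 drive should show up as a PCIe non-volatile memory controller
lspci | grep -i "non-volatile"

# Per-drive model, serial, and firmware (requires nvme-cli)
sudo nvme list

# The drives appear as block devices named nvme0n1, nvme1n1, ...
lsblk -d -o NAME,SIZE,MODEL
```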
If you want to enhance data performance and protect against disk failures by configuring RAID, you will need to use software RAID, such as Windows Server's storage management or the ZFS filesystem. For the R640, hardware RAID on the NVMe drives is not an option.
When it comes to NVMe drives, there are 3 basic RAID configurations you can explore: RAID 0 (striping), RAID 1 (mirroring), or RAID 5 (striping with parity*). For more detailed explanations of the standard RAID configurations and what they mean, here’s a handy cheat sheet.
*Good to know: Parity is additional data generated by a mathematical algorithm (bitwise XOR) from the data being written, which is then distributed across the drives in the RAID array. In RAID 5, parity information is spread across all of the drives in the array, rather than kept on a single dedicated parity drive (that would be RAID 4). In the event that one drive fails, this configuration allows data to be reconstructed; the parity information is used to recalculate the missing data from the remaining drives.
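Parity arithmetic is simple enough to demo in a few lines of shell. This is a toy illustration with two "data blocks" as small numbers; real arrays do this bit-by-bit across whole stripes:

```shell
# Two data blocks and the parity computed from them
d1=178; d2=203
parity=$(( d1 ^ d2 ))      # 178 XOR 203 = 121

# The drive holding d1 dies; XOR the survivors to get it back
rebuilt=$(( parity ^ d2 )) # 121 XOR 203 = 178, the original d1

echo "parity=$parity rebuilt=$rebuilt"   # prints: parity=121 rebuilt=178
```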
When choosing a RAID configuration for your NVMe drives, consider your specific requirements. RAID 0 is suitable for applications that prioritize speed and performance but lacks redundancy. RAID 1 is ideal for those seeking data redundancy and fault tolerance, at the cost of reduced storage capacity. RAID 5 strikes a balance between performance and data protection, making it a popular choice for many environments.
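The capacity tradeoff is easy to work out. For illustration, assume identical 2TB drives (four of them for RAID 0/5, a mirrored pair for RAID 1):

```shell
n=4; s=2  # assumption: four drives, 2TB each

echo "RAID 0: $(( n * s ))TB usable"        # striping: all capacity, no fault tolerance
echo "RAID 1: $(( s ))TB usable"            # mirrored pair: one drive's capacity, fully redundant
echo "RAID 5: $(( (n - 1) * s ))TB usable"  # one drive's worth of space holds parity
```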
You’ll need to evaluate your needs and backup strategies to determine which configuration is best for you, or if you even want to use RAID at all. Regardless of the RAID configuration you choose (or don’t), it’s crucial to have a solid backup plan in place.
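As a sketch of the software-RAID options mentioned above (device, pool, and array names are placeholders; this assumes a Linux install with ZFS or `mdadm` available):

```shell
# ZFS: a mirrored pool (RAID 1 equivalent) across two NVMe drives
zpool create nvmepool mirror /dev/nvme0n1 /dev/nvme1n1

# Or mdadm: RAID 5 across three NVMe drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```

Either way, the RAID logic runs in the OS rather than on a controller card, which is exactly why it works where hardware RAID on NVMe isn't an option.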
And that's it! Following these steps should have you using NVMe drives in your R640 in no time. And as always, if you have any questions, just drop us a line.
Note: If you’re still in the stage of selecting a server, you may also want to consider network infrastructure and network speed. Depending on your existing network infrastructure, if any, you may want to upgrade to a minimum 10Gbps network speed for a smoother user experience.
Purpose: A media server is a specialized server that is designed to store and deliver multimedia content such as videos, music, and photos to other devices on a network. Media servers often have large amounts of storage capacity and are optimized for streaming media to multiple devices simultaneously. If you have a lot of movies, photos, or music downloaded from various sources and want to consolidate them all in one place to make them available to your home network, a media server is the way to go. It’s also what you’d want to use if you have a business with a bunch of training videos to distribute amongst your company staff, etc.
Media servers need…
| Low | Middle | High |
---|---|---|---|
CPU | 2.0GHz to 2.2GHz + 8 Cores per CPU | 2.1GHz to 2.6GHz + 12 to 16 Cores per CPU | 2.1GHz to 2.6GHz + 16 to 28 Cores per CPU |
RAM | 32GB to 64GB | 96GB to 128GB | 192GB to 256GB |
Storage | 2x 500GB SSD, 4x to 8x 2TB | 4x to 8x 4TB to 8TB | 8x to 12x 8TB to 10TB |
Network Speed | 1GB NIC | 10GB NIC | 10GB NIC |
For example... | Dell PowerEdge R720 Server 2.00Ghz 16-Core 32GB 2x NEW 500GB SSD 4x 3TB H710 | Dell PowerEdge R730 Server 2x E5-2670v3 2.30Ghz 24-Core 128GB 8x 4TB 12G H730 | Dell PowerEdge R730xd Server 2x E5-2699v3 2.30Ghz 36-Core 256GB 12x 8TB 12G H730 |
Purpose: A router or security gateway is a network device that connects multiple networks together and helps to manage and control the flow of data between them. Routers can be used to provide security by enforcing policies that limit access to certain parts of the network, or by filtering out malicious traffic. Security gateways have pretty low processing & memory demands. That makes them great to learn on in a homelab setting before full-scale implementation. Otherwise, setting up a router at home might be a little over the top, but you’ll certainly get a *really* secure home network.
Routers/security gateways need…
| Low | Middle | High |
---|---|---|---|
CPU | 1.6GHz to 2.0GHz + 4 Cores per CPU | 2.2GHz to 2.6GHz + 8 Cores per CPU | 2.6GHz to 3.2GHz + 8 Cores per CPU |
RAM | 8GB | 16GB | 32GB |
Storage | 2x 240GB SSD | 2x 480GB SSD | 2x 960GB SSD |
Network Speed | 1GB NIC | 10GB SFP and 10GB RJ-45 | 25GB/40GB QSFP+ |
For example... | HP ProLiant DL360 G9 Server 2x E5-2609v3 1.90Ghz 12-Core 8GB 2x 240GB SSD P440ar | HP ProLiant DL360 G9 Server 2.40Ghz 12-Core 16GB 2x NEW 500GB SSD P440ar | Dell PowerEdge R630 Server 2x E5-2667v4 3.20Ghz 16-Core 32GB 2x NEW 1TB SSD H730 |
Purpose: For fun and games! A gaming server is a server that is dedicated to hosting online games. Gaming servers typically have high processing power and low latency to ensure a smooth gaming experience for however many players are simultaneously involved in the game.
Gaming servers need…
| Low | Middle | High |
---|---|---|---|
CPU | 1.8GHz to 2.4GHz + 6 to 8 Cores per CPU | 2.2GHz to 2.6GHz + 12 to 18 Cores per CPU | 2.1GHz to 2.6GHz + 18 to 28 Cores per CPU |
RAM | 32GB | 64GB | 128GB |
Storage | 2x 480GB SSD | 4x 960GB SSD | 6x 1.92TB SSD |
Network Speed | 1GB NIC | 10GB NIC | 10GB NIC |
For example... | Dell PowerEdge R430 Server 1.90Ghz 12-Core 32GB 2x NEW 500GB SSD H730 | Dell PowerEdge R630 Server 2x E5-2699v3 2.30Ghz 36-Core 64GB 4x NEW 1TB SSD H730 | Dell PowerEdge R630 Server 2x E5-2699v3 2.30Ghz 36-Core 64GB 4x NEW 1TB SSD H730 |
Purpose: Administrative servers, also known as management servers, are servers that are used to manage other servers and network devices. They typically run software that allows administrators to monitor and configure various aspects of the network, such as user accounts, security policies, and device settings.
Administrative servers need…
| Low | Middle | High |
---|---|---|---|
CPU | 1.8GHz to 2.4GHz + 6 to 8 Cores per CPU | 2.1GHz to 2.4GHz + 12 to 16 Cores per CPU | 2.1GHz to 2.4GHz + 18 to 22 Cores per CPU |
RAM | 32GB | 64GB | 128GB |
Storage | 2x 240GB SSD | 2x 480GB SSD | 2x 960GB SSD |
Network Speed | 1GB NIC | 10GB NIC | 10GB NIC |
For example... | HP ProLiant DL360 G9 Server 1.90Ghz 12-Core 32GB 2x 240GB SSD P440ar | HP ProLiant DL360 G9 Server 2.30Ghz 24-Core 64GB 2x NEW 500GB SSD P440ar | HP ProLiant DL360 G10 Server 2.10Ghz 44-Core 128GB 2x NEW 1TB SSD E208i-a |
Purpose: A virtualization host is a server running a hypervisor that is optimized for running virtual machines. Virtualization hosts provide hardware resources, such as CPU, RAM, and storage, to multiple virtual machines, allowing them to share the same physical hardware while maintaining isolation from one another. These servers are used in virtualization environments to run multiple operating systems and applications on a single physical server.
Type 1 or “bare-metal” hypervisors (ex: VMware ESXi, Microsoft Hyper-V, and Citrix XenServer): | Type 2 or “hosted” hypervisors (ex: Oracle VirtualBox, VMware Workstation, and Parallels Desktop): |
---|---|
Type 1 hypervisors, also known as bare-metal hypervisors, run directly on the host machine's hardware. They provide a layer of abstraction between the physical hardware and the virtual machines, and they have direct access to the underlying physical resources, such as the CPU, memory, and storage. Type 1 hypervisors are typically used in server environments and are designed for high availability, scalability, and security. | Type 2 hypervisors run on top of the host machine’s operating system and provide a virtualized environment for guest operating systems to run. A Type 2 hypervisor will typically be used in a desktop environment, and is generally easier to install and use, as it can be installed as a regular software program on an existing operating system. However, since it is running within the host operating system, it may have some limitations and performance overhead. |
Physical servers acting as virtualization hosts need…
| Low | Middle | High |
---|---|---|---|
CPU | 1.8 to 2.2 GHz, 8 to 10 cores per CPU | 2.1 to 2.6 GHz, 12 to 16 cores per CPU | 2.1 to 2.6 GHz, 18 to 28 cores per CPU |
RAM | 96-128GB | 192-256GB | 384-512GB+ |
Storage | 2x 600GB SAS or 2x 1TB SATA HDD | 2x 1.2TB SAS or 2x 2TB SATA HDD | 2x 1.92TB SSD or 2x 4TB SATA HDD |
Network Speed | 1GB NIC | 10GB NIC | 10GB NIC |
For example... | Dell PowerEdge R730 Server 1.80Ghz 16-Core 96GB 2x 600GB 15K 12G H730 | Dell PowerEdge R730 Server 2x E5-2670v3 2.30Ghz 24-Core 256GB 2x 1.2TB 12G H730 | Dell PowerEdge R740xd Server 2.10Ghz 44-Core 512GB 2x 1.9TB SSD H730P |
Have questions about these use cases or a server you're trying to configure that we didn't cover? Drop us a line in the comments or at info@techmikeny.com!
You asked, we answered! We recently wrote about the ins and outs of NICs and were reminded of a question we frequently receive: how on earth do you tell the difference between Dell’s various NDCs? As our friend Mike O’Dell put it,
“One thing that is daunting is the expansive list of NICs & NDC boards that look equivalent (eg 4x1G RJ45) but have very different prices and only Dell bingo numbers to distinguish them. Given the evolution of the Ethernet chips, knowing which part is on the card is *golden*. Is there an xref somewhere for the Dell novitiates?”
Short answer: Unfortunately, no, there isn’t. And searching through countless documents and forums with only a part number in hand definitely doesn’t make it easy to tell the difference between the boards. To save you some time, we put together a few handy charts of Dell’s NDCs for 12th and 13th/14th gen servers, complete with part number, model name, and port configuration.
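One practical shortcut when a card is already in hand: the PCI vendor:device ID identifies the actual Ethernet chipset regardless of which Dell part number is on the sticker. On a Linux box with the card installed:

```shell
# List network controllers with their PCI vendor:device IDs in brackets;
# look the bracketed ID up to pin down the exact Ethernet chip
lspci -nn | grep -i ethernet
```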
Compatible with Dell R620, R720, R820.
Part | Chipset | Port Configuration |
---|---|---|
DELL_0JC10M | Broadcom 57840S | Quad-Port 10GB SFP+ |
DELL_0XGRFF | Broadcom 57840S | Quad-Port 10GB SFP+ |
DELL_0F6PCP | Emulex OneConnect | Quad-Port 10GB SFP+ |
DELL_0XVVY1 | QLogic QL41164 | Quad-Port 10GB SFP+ |
DELL_0C63DV | Intel X520-DA2 + Intel i350-T2 | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_0165T0 | Broadcom 57800S | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_0MT09V | Broadcom 57800S | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0Y36FR | Broadcom 57800S | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0G8RPD | Broadcom Netxtreme II 57800-T | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_099GTM | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_098493 | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0P71JP | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0FM487 | Broadcom 5720 | Quad-Port 1GB RJ-45 |
DELL_0R1XFC | Intel i350-T4 | Quad-Port 1GB RJ-45 |
All of the cards in our Dell 12th gen chart are compatible with Dell 13th/14th generation servers, as well as a few newer chipsets. However, the 13th and 14th Gen NDCs are not compatible with the 12th Gen servers.
Part | Chipset | Port Configuration |
---|---|---|
DELL_0R887V | Mellanox ConnectX-4 | Dual-Port 25GB SFP28 |
DELL_0JC10M | Broadcom 57840S | Quad-Port 10GB SFP+ |
DELL_0XGRFF | Broadcom 57840S | Quad-Port 10GB SFP+ |
DELL_0F6PCP | Emulex OneConnect | Quad-Port 10GB SFP+ |
DELL_068M95 | Intel X710-DA4 | Quad-Port 10GB SFP+ |
DELL_0XVVY1 | QLogic QL41164 | Quad-Port 10GB SFP+ |
DELL_0C63DV | Intel X520-DA2 + Intel i350-T2 | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_0165T0 | Broadcom 57800S | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_06VDPG | Intel x710-T2 + I350-T2 | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_05V6Y4 | Qlogic QL41264 | Dual-Port 10GB SFP+ + Dual-Port 1GB RJ-45 |
DELL_0NWMNX | Broadcom 57412 | Dual-Port 10GB SFP+ |
DELL_064PJ8 | Intel X550-T4 | Quad-Port 10GB RJ-45 |
DELL_0CD2VM | Intel X550-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_01224N | Broadcom 57416 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0MT09V | Broadcom 57800S | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0Y36FR | Broadcom 57800S | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0G8RPD | Broadcom NetXtreme II 57800-T | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_099GTM | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_098493 | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0P71JP | Intel X540-T2 + Intel i350-T2 | Dual-Port 10GB RJ-45 + Dual-Port 1GB RJ-45 |
DELL_0FM487 | Broadcom 5720 | Quad-Port 1GB RJ-45 |
DELL_0R1XFC | Intel i350-T4 | Quad-Port 1GB RJ-45 |
DELL_0CF4P0* | Broadcom 57416 | Dual-Port 10GB SFP+ |
*DELL_0CF4P0 is only compatible with R440s.
Data loss is universally acknowledged as a nightmare scenario. And unfortunately, hard drive errors are a common culprit behind it. The consequences can be devastating, so your system will probably try to warn you before disaster strikes. But how can you tell what all those different sounds and lights mean?
We know, we know, we said that we already gave you everything to consider about hard drives, all in one place. But it wouldn’t be fair to tell you what’s so great about hard drives without discussing what can go wrong and how to prevent it.
In this post, we'll explore some of the most common hard drive errors and what you can do to mitigate them. We'll also share some tips on how to replace a failing drive without losing your precious data. So if you're ready to learn more about hard drive health and maintenance, stick around.
If you hear grinding noises coming from your hard drive, run... to replace your hard drive. That sound usually means that the read/write head is physically scraping or grinding against the surface of the disk, indicating a mechanical failure. This is a serious problem that could lead to data loss, and the hard drive should be replaced as soon as possible.
If you’re not seeing any lights, that means that there is no drive connected to the backplane or no power to the backplane. Check the connection between the drive and backplane, as well as the one from the backplane to the motherboard. More specifically, make sure the connector of the drive lines up with the edge of the caddy. (Also, it's always good to make sure you’re not putting the drive in backwards. Been there, done that.)
Green lights are good. But what if they’re only green when the server boots, and turn amber when the RAID controller initializes?
The change in color when the RAID controller initializes indicates that the server does see the drives, but the drives are not compatible with that RAID controller. Make sure you verify the compatibility of the drive with the RAID controller you’re using. Check everything, including block size, drive speed, and connecting interfaces (i.e., SAS/SATA).
Amber lights, or flashing amber and green lights, indicate that the server does recognize the drive but is detecting an error.
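To sum up all that indicator talk, here’s a quick lookup table in Python. The state names and wording are our own shorthand, not official Dell or HP terminology, so check your server’s manual for the exact blink patterns on your model:

```python
# Hypothetical summary of the front-panel indicator states described above.
LED_DIAGNOSIS = {
    "no_light": (
        "No drive detected: check the drive-to-backplane and "
        "backplane-to-motherboard connections (and drive orientation)."
    ),
    "solid_green": "Drive detected and healthy.",
    "green_then_amber_on_raid_init": (
        "Server sees the drive, but it isn't compatible with the RAID "
        "controller: verify block size, drive speed, and interface (SAS/SATA)."
    ),
    "amber_or_flashing_amber_green": (
        "Drive recognized, but the controller is reporting an error: "
        "investigate and consider replacing the drive."
    ),
}

def diagnose(state: str) -> str:
    """Return a human-readable diagnosis for a known LED state."""
    return LED_DIAGNOSIS.get(state, "Unknown state: consult your hardware manual.")
```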
GOOD TO KNOW: Because of TechMikeNY's quality metrics and testing, our hard drives last. And it shows: our RMA rate (“Return Merchandise Authorization,” or the rate at which customers request returns) on hard drives is extremely low, holding steady at <1% for the past several years.
If the drive fails while the server is running, then the RAID controller should rebuild it automatically. For this process to start on its own, however, you must have one of the following controllers configured with a redundant virtual disk (RAID 1, 5, 6, or 10): H310, H710, H330, H730, H740, H240, P440, P840. RAID 0 provides no redundancy, so while you could replace the drive while the server’s still running, you would lose any and all data stored on it.
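For a back-of-the-envelope feel for what those RAID levels cost you in capacity, here’s a simplified Python sketch (it ignores controller and formatting overhead, so real-world numbers come out slightly lower):

```python
def raid_usable_capacity(level: int, drives: int, drive_tb: float) -> float:
    """Usable capacity (TB) for the redundant RAID levels mentioned above.

    RAID 1 and 10 mirror everything, so half the raw space is usable;
    RAID 5 spends one drive's worth of space on parity; RAID 6 spends two.
    """
    if level in (1, 10):
        return drives * drive_tb / 2
    if level == 5:
        return (drives - 1) * drive_tb
    if level == 6:
        return (drives - 2) * drive_tb
    raise ValueError(f"RAID {level} offers no redundancy (or isn't modeled here)")

# Four 2TB drives in RAID 5 leave 6TB usable and survive one drive failure.
```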
Pictured: RAID controller inside of an R640.
If the server is in a reboot state or the drive fails while the server is shut down, the server boot process will halt to alert you of the failed drive. In that case, you should replace the drive as soon as possible. In the halted state, you can still access the RAID utility and replace the bad drive with a new one, but the controller will most likely not start the restoration process on its own. To get that going, you may need to first set the new drive as a RAID drive and then set it as a hot spare. Upon doing so and, importantly, saving the changes, the RAID controller will then start restoring the data.
Searching for replacement drives? Look no further.
We hope we’ve helped you solve the cause of your hard drive’s mystery symptoms. In case you still can’t get to the root of the malady or just want to learn more about hard drives, here are some more resources you can check out:
As always, feel free to reach out with any questions. Comment down below or drop us a line at info@techmikeny.com.
When it comes to types of server hard drives, there are so many variations out there that it can become challenging to keep track of all the considerations. "Can my server read a 4Kn drive?" "Can I mix SAS and SATA drives in the same machine?" "What about SSDs versus HDDs?"
While we've written extensively on hard drives, we thought it would be helpful to summarize all the types, formats, and interfaces in one place. Consider it our "Hard Drives Greatest Hits!"
We could write a whole blog post on SSDs vs. HDDs (oh, and we have).
The abridged version: HDDs have been around for decades, and the technology has been improved and perfected. SSDs are a newer storage medium based on Flash technology (which is now ubiquitous in those little USB thumb drives).
Keep these very high-level principles in mind when considering between SSDs and HDDs:
Advanced Format (AF) drives take advantage of larger data sectors to maximize storage volume. Whereas traditional drives use 512-byte sectors that each require gap and overhead space between them (so some of the drive's capacity is consumed by empty, non-data space), AF drives minimize those gaps with larger sectors. Fewer gaps mean more room on the drive for storing data.
Advanced Format Drives come in two configurations: 4Kn (4K = 4096 bytes; n = native) and 512e (e = emulation). 4Kn drives utilize 4096-byte sectors (hence, '4K'); 512e drives have 4K bytes per physical sector but maintain 512 bytes per logical sector. The advantage of 512e formatted drives is they will work in devices running an OS that does not support 4Kn sectored drives.
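If you like seeing the math, here’s a tiny Python sketch of why bigger sectors help. The 65-byte per-sector overhead is just a stand-in figure we picked for illustration; real gap/sync/ECC overhead varies by drive and era:

```python
def format_efficiency(sector_bytes: int, overhead_bytes: int = 65) -> float:
    """Fraction of platter space left for user data, assuming a fixed
    per-sector overhead (gap + sync + ECC). The 65-byte default is a
    stand-in; real overhead varies by drive."""
    return sector_bytes / (sector_bytes + overhead_bytes)

# Larger sectors amortize the per-sector overhead over more data, which is
# why a 4Kn drive wastes less of its platter than a 512-byte-sector drive.
```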
Advanced Format drives are unquestionably a performance improvement over the 512-byte predecessors. However, proceed with caution. While 14th Gen Dell PowerEdge servers and 10th Gen HPE ProLiant servers are AF compatible, previous generations are not, especially when you account for RAID controller compatibility. (Read our blog post on 4Kn drives referenced below for a full breakdown!)
RESOURCES! ALWAYS! The above is a barebones explanation of Advanced Format Drives. If you want to deep-dive, check out our blog post: Compatibility Pain Points: 4Kn Hard Drives and Backwards Compatibility with RAID Controllers.
Perhaps no other interface gets more customer questions than SAS vs. SATA. It's an understandable query since SATA and SAS connectors/drives look virtually identical to the naked eye (see below).
Here are some high-level SAS vs. SATA principles:
You can check out our blog post on SAS vs. SATA drives for an in-depth analysis.
We would be remiss not to share our Hard Drive product page, where you can view all of TechMikeNY's in-stock hard drives!
Some customers have asked our sales reps before pulling the trigger on an order, "are your refurbished drives SMR or CMR?"
SMR (Shingled Magnetic Recording) drives are a type of drive technology that maximizes the drive's platter density. To put it in simple terms, 'shingled' drives overlap their data tracks the way a roof's overlapping shingles do, fitting more tracks onto the same platter. The main advantage of SMR drives is higher capacity (due to their higher density). They are most often found in very large drives used in NAS machines.
CMR (Conventional Magnetic Recording) drives are a more traditional hard drive technology that does not utilize a shingled or overlapping storage design.
A couple of main points to consider when it comes to CMR vs. SMR:
Hard drives are understandably a big support topic. At TechMikeNY, we believe an informed customer is a happy customer! Hopefully, with these high-level summaries (and deep-dive links), you can feel equipped with knowledge.
Here are more resources that may come in handy for your hard drive purchases:
Do you feel we missed any essential hard drive topics? We'd love to hear from you! Leave a comment below!
A flex bay is a type of modular bay designed for… you guessed it, flexibility! Using a flex bay, you can house different types of storage, such as hard disk drives (HDDs) or solid-state drives (SSDs). It's a neat feature to have if you want to be able to easily upgrade your server's capabilities as your needs grow (without the hassle of setting up a totally new machine). In this blog post, we’ll go over what makes flex bays so great and why you might need one with your server.
Form Factors: Nowadays, flex bays most commonly come in one of two form factors: 3.5-inch and 2.5-inch. (This matches the current standard for HDD and SSD form factors.)
Connectors: Flex bays typically use standard connectors, such as SAS or SATA, which allow the builder to connect compatible SSDs or HDDs to the server.
SAS and SATA drive ports require compatible connectors. (For a more detailed explanation of the difference between the two, see SAS vs SATA.)
The above images show a rear Dell flex bay in an R730XD. Note that unlike the R730XD, the R740XD has an option for 2x 3.5" bays or 4x 2.5" bays.
The above image shows a mid bay in an R730XD. Here, having the mid bay requires the Dell flex bay to be installed.
Once installed, one method for configuring your drives is through hardware RAID, a method where each of the drives is connected to a RAID controller such as a PERC card for DELL or an HP RAID controller. (If you’re planning on taking this route, here’s a tutorial on how to set up a RAID array in a Dell PowerEdge.) Alternatively, you can use software RAID (managed through an OS-based utility software, such as ZFS) or direct access.
TL;DR: Dell flex bays make it easier to upgrade or replace drives without having to reinstall operating systems or move drives around within a server. Hard drive bays are a valuable feature for anyone who wants to ensure that their server is adaptable, efficient, and future-proof!
The idea of buying refurbished tech may not initially sound like the most glamorous option. After all, isn’t that just a fancy term for used, beat-up gear? Well… not quite.
When we say “buying refurbished”, we don’t mean scouring Craigslist for sketchy, half-broken devices. We're referring to the process of purchasing custom gear from reputable refurbishers who offer reliable, high-quality tech that's less likely to break down and more sustainable for your budget.
In this blogpost, we’ll talk about the benefits that make buying refurbished just as good (if not better) than buying new. So, if you're ready to save some cash and potentially upgrade your gear in the process, let's dive into the advantages of buying refurbished tech.
The moment you drive your brand new car out of the dealer’s lot, it’s already lost 30-40% of its value. It’s still the same car, of course. It just became that much cheaper because technically, it’s no longer “new”. Refurbished technology products behave similarly; a piece of equipment labeled as “refurbished” just means that it’s been previously returned to a manufacturer or vendor for any reason, whether or not it’s been sold in the market. Regardless of how well the products actually work or the length of their use, you’ll see products get immediately marked down the moment they leave the original equipment manufacturer (OEM). If you manage your company’s purchasing, or are someone who buys and upgrades their technology often, buying refurbished could save you thousands upon thousands of dollars without sacrificing quality. If you’re just making a one-time purchase for yourself, maybe looking at a product and some accessories, you could still save hundreds of dollars; working with spare parts is really that much cheaper. No matter who you are, buying refurbished will result in saving cash one way or another.
More importantly, buying refurbished allows you to afford much higher/better specs on a much lower budget. So if you don’t care about labels, and you understand how to evaluate different refurbishers, you’ll be able to put a limited budget to much better use by tailoring your server. One server could have thousands of possible configurations, so why be limited by what the OEM or distributor has to offer? When you buy a refurbished server, all of these options are made available to you at a fraction of the cost, not to mention a fraction of the time; going straight to a custom configure-to-order page avoids the hassle of contacting an OEM-certified distributor to get a quote or check on availability.
Yep. Purchasing refurbished products from a reputable company means that the products have been thoroughly tested and any damage fixed. Since every single component gets re-tested, you may even be less likely to experience a malfunction. In the event that you do run into any issues, reputable refurbishers will offer you a warranty and a return policy in case any of the products fail (so be sure to check out the warranty). And as opposed to returns for new-in-box products, which can get costly and complicated, these returns are usually paid for by the refurbisher (to ship) and rarely come with a restocking fee.
When buying a new server, the lead times are often painfully long, even when stock is readily available. That brings us to another beauty of purchasing refurbished servers; due to the nature of these kinds of orders, your purchase is usually ready to ship within one business day, and often the same day you purchase it. Even when configuring your own specs, these built-to-order servers are usually available to ship within a day or two at the most.
Plus, purchasing from refurbishers means they’ll have plenty of spare parts on hand. If you need them, replacement parts are readily available and can ship to you as fast as you are able to purchase them.
As the volume of electronics produced each year grows, so does the towering mountain of discarded electronic waste. Each year, the United States alone disposes of over one million electronic devices that end up harming our soil and water supplies. Based on current trends, reports predict that the total amount of e-waste generated worldwide will reach 74 million metric tonnes (roughly 82 million US tons) by 2030, almost doubling the figure from 2014.
That’s where electronic refurbishers come in. Refurbishing electronics reduces the demand for new products, which decreases the need for the extraction of raw materials and the energy required to produce and transport new products, resulting in a smaller carbon footprint.
The largest refurbishing companies also have certifications in place that require annual audits to ensure that all electronic waste is being handled properly. Refurbishing electronics is an effective way to reduce the demand for new products, and give old electronics a second life. By choosing to buy refurbished, you’re also taking an important step towards reducing the amount of e-waste that ends up in landfills. Way to promote a sustainable approach to technology!
Many sellers across various channels in the US list their refurbished products with free shipping. Yes, free. That matters, because servers are large and heavy, and shipping and handling on a new purchase can be quite considerable; those costs also vary widely depending on the location of the third-party distributor you use. In the refurbished servers world, however, shipping is either provided for free or quoted at cost (with no margin added by the reseller).
Of course, you’ll want to make sure you research the refurbisher you are buying from. Here are some qualities you will want to look for:
Full disclosure - A good refurbisher will be transparent with its customers.
Certified - Refurbishers that are certified by the original manufacturer are more likely to provide good quality products.
Warranty/Extended - A refurbisher that offers their customers extended warranties (90+ days) is a refurbisher that stands by the quality of their products.
Return Policy - A return policy of 90+ days is a good one. If the refurbisher accepts their products back, it speaks to the confidence they have in the products they sell.
For a comprehensive comparison of new vs. refurbished servers, check out this handy table:
TechMikeNY is an established brand with over 20 years’ experience refurbishing and reselling high-quality servers. With R2v3, ISO 14001, ISO 9001, and ISO 45001 certifications, we pride ourselves on providing the best-quality refurbished products at an affordable price.
Looking for good deals on used servers? You’re in the right place.
Continuing our series on Server Anatomy 101, we’re going to go over the basics of Network Interface Cards (NICs)! These bad boys are the vital components responsible for connecting your server to a network. When you’re building your own server, understanding the basics of NICs is the key to ensuring your server is able to handle the demands of your network. In this blog post, we’ll go over how to choose the perfect NIC for your server build.
A Network Interface Card (NIC) is a hardware component that provides a connection between a computer/server and a network, and then manages communication across that connection. The primary difference between NICs and Network Daughter Cards (NDCs) is that NDCs are usually specific to certain server or computer models, whereas NICs are standalone PCI cards that can be added to a system to provide network connectivity.
When selecting your NIC, you’ll want to make sure that it aligns with the type of cable interface you’ll be using to connect your server to the network.
The most common type of connectivity for NICs is Ethernet, usually over Cat5e, Cat6, or Cat6a cable. Cat5e cables typically support speeds up to 1Gbps and are widely used to connect NICs in both home and business networks; Cat6 cables are capable of speeds up to 10Gbps and are generally meant for higher-performance networks (such as data centers). Cat6a cables also top out at 10Gbps, but sustain that speed over longer distances than Cat6. Other, non-Ethernet connectivity options for high-speed, low-latency networks include fiber optic connectors and InfiniBand.
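To put those link speeds in perspective, here’s a rough Python calculation of transfer times. The 94% efficiency figure is an assumption standing in for protocol overhead; real throughput varies with MTU, protocol, and workload:

```python
def transfer_seconds(payload_gb: float, link_gbps: float,
                     efficiency: float = 0.94) -> float:
    """Rough time to move a payload over an Ethernet link.

    `efficiency` is an assumed fraction of line rate actually achieved
    after protocol overhead.
    """
    bits = payload_gb * 8 * 10**9
    return bits / (link_gbps * 10**9 * efficiency)

# Moving a 100GB dataset takes roughly 14 minutes at 1Gbps,
# but only about a minute and a half at 10Gbps.
```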
NICs have different kinds of ports for different connectivity. For example, if your network uses Cat5e, Cat6, or Cat6a cable, you’ll need to select a NIC with an RJ45 jack that is compatible with an 8P8C modular connector. (You’ll also need to check that the connector you’re using is designed for the cable you’re using; for example, some RJ45 connectors are designed for use with stranded cable, while others are meant for use with solid core cable.)
If your network uses fiber optic cables, a NIC with a Small Form-factor Pluggable (SFP) or QSFP connector will generally fit the bill. Fiber optic cables support the connection speeds above 10Gbps (25Gbps/40Gbps/100Gbps). If you’re connecting to an InfiniBand network, use an InfiniBand connector (InfiniBand links typically run at SDR, DDR, or QDR data rates). Using a faster NIC may also draw noticeably more power, so be sure to keep your power supply in mind.
Note: SFP ports can have modules that allow you to use copper cables (such as Cat5e, etc.), but RJ45 ports are not compatible with SFP ports (or fiber optic cables).
This may seem intuitive, but for optimal performance, the NIC you’re using should match the speed of your network. If you’re building a server that needs to support higher speeds but are stuck with a NIC that can’t handle it, you might end up with lower overall performance and potential data loss, as a NIC not meant to handle high-speed data transmission could drop or corrupt packets. To ensure that you won’t have to replace your NIC later as your network evolves, consider its potential growth; think bandwidth requirements of your applications, the number of users accessing your network, and the overall network traffic.
Another factor to pay attention to is your NIC’s Quality of Service Capabilities (known as QoS). QoS is a network management technique that helps prioritize network traffic to ensure that more important data is transmitted faster and more reliably than other data. It’s kind of like a real-time triage. In other words, QoS ensures that your network resources are utilized in a way that provides a consistent level of service for different types of network traffic.
Some NICs come with built-in QoS capabilities, while others rely on routers or network switches. A NIC with built in QoS functionality gives you more control; having better visibility into how your network traffic is being managed allows you to make more informed decisions. These are also more cost effective than purchasing a separate network switch with QoS capabilities, especially for smaller networks. However, if you’re looking for more centralization of, say, a larger network, you may be okay with relying on a network switch.
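Here’s a toy Python illustration of that real-time triage idea. The traffic classes and priority numbers are made up for the example, not taken from any real QoS standard:

```python
import heapq

# Made-up traffic classes for the example; lower number = higher priority.
PRIORITY = {"voip": 0, "video": 1, "bulk": 2}

def transmit_order(packets):
    """Return payloads in QoS priority order (stable within a class).

    `packets` is a list of (traffic_class, payload) tuples in arrival order.
    """
    queue = [(PRIORITY[cls], i, payload)
             for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(queue)
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

arrivals = [("bulk", "backup-chunk"), ("voip", "call-frame"),
            ("video", "stream-frame")]
# The VoIP frame goes out first, even though the bulk chunk arrived earlier.
```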
A NIC can either have 1, 2, or 4 ports. Having multiple ports, as in the case of 2-port and 4-port NICs, provides you with the option to connect your server to multiple networks or to bond two ports together (to provide increased bandwidth). If you want a higher level of network connectivity and redundancy, you’ll want to use a NIC with more ports.
Last, but definitely not least: compatibility. In order for your NIC to work as intended (or work at all), it needs to be compatible with both your server and the operating system that is installed on your server. Some NICs may require additional software or drivers to function properly. If the NIC is incompatible, you may run into connectivity issues, slow speeds, or issues pertaining to the server’s communication with other devices, so make sure you research compatibility before purchasing. In general, older NICs will work on newer servers, but may not work with newer OS versions; older servers may or may not accept newer NICs.
Network interface cards are a critical component for the smooth functioning of your entire network infrastructure. For the best results, it's important to research your system requirements and carefully consider your speed and quality-of-service needs when choosing the right NIC for your network.
Unsure of what kind of NIC you might need? Contact us at info@TechMikeNY.com, and one of our techs will happily walk you through it. That’s what we’re here for!
1. Recycled Motherboard Necklace
For something wearable, try turning your old circuit-boards into custom jewelry! This one’s a great option for the tech lover who wants to display their passion on their sleeve (or on their chest).
Materials
Tools
Take an old circuit board and find the part of it you’d like to turn into a necklace. Carefully cut it using metal snips or shears, and then smooth any rough edges with a file. Fill the resin mold with prepared resin, and place the component in. To make sure the surface is smooth, use a heat gun to remove air bubbles.
Once the resin is fully cured, you’ll have a piece you can secure inside of the pendant or medallion. Place the pendant on whatever necklace chain you’d like, and ~boom~. Tiffany’s, who?
(Depending on your decorative casing of choice, you may need to drill a hole through the piece of resin coated motherboard to attach the two together.)
2. Custom Knife Block / Magnetic Organizer
Tools:
Placed between two wooden planks, the magnets from hard drives can be transformed into a knife mount! If your significant other isn’t much of a chef, you can use this as a mount for organizing drill bits, turn it into a magnetic rack with some small j-hooks, or really, whatever your heart desires. No power tools required.
Cut two identical lengths of wooden plank for the two sides of the bar. Add spacers by cutting four narrow strips and attaching them to one of the planks with wood glue. With superglue, secure your hard drive magnets to the underside of that same plank, then switch back to wood glue to seal your block with the other plank.
When you’re ready to put it up on the wall, attach keyhole brackets to the back of your mount, leave room for hanging screws, and secure it to the wall. (We would recommend checking if your recipient likes the gift before making it a permanent fixture in their home.)
(*Hard drive magnets are ideal for this, but if your planks are thin enough, a different strong magnet would also do the trick.)
3. Hard Drive Persistence of Vision Clock (Ian Smith)
This one’s definitely more ambitious and elaborate. It’ll cost you about $60, but if you want an unforgettable homemade gift, this hard drive clock will steal the show.
Persistence of vision (POV) is a phenomenon where an image briefly lingers in your sight after it’s gone. A POV clock takes advantage of this by spinning a hard drive platter at high speed and strobing LEDs at precisely timed angles, so the flashes blend into a steady image that displays the time.
To make a POV clock, you will need:
One of the methods to build a hard drive POV clock relies on using a hard drive’s spindle motor. Spindle motors rotate the platters in a hard drive and are driven by a motor controller that regulates speed. To make the clock, you’ll need to disassemble the hard drive, attach the new motor to the spindle, wire the LEDs and the circuit to the microcontroller, program the microcontroller to display the time, and mount the components in the casing.
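To get a feel for the timing involved, here’s a quick Python calculation of the LED strobe budget. This is purely illustrative arithmetic, not figures from any particular build:

```python
def microseconds_per_degree(rpm: float) -> float:
    """Time the platter spends sweeping through one degree of rotation.

    This sets the LED strobe timing budget: to draw with one-degree
    angular resolution, the microcontroller must update the LEDs at
    least this often."""
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 360.0 * 1e6

# At 7200 RPM a revolution takes about 8.3 ms, i.e. roughly 23 µs per degree.
```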
Alternatively, you can use the hard drive’s data access time. By measuring the time it takes for the hard drive’s read/write head to move from track to track, you can calculate the current time. (Note: you would have to modify the hard drive's firmware to calculate the time based on the head's position.)
Whichever method you choose, building a hard drive clock can be a very rewarding project. For a more detailed walk-through, check out this great instructables tutorial.
4. MP3 Mixtape
Have an old MP3 player you’re not using anymore? Maybe an iPod nano you haven’t touched in over a decade? Before you take it straight to an electronic recycling center (never a landfill, of course), make a mixtape! That way, instead of just sending a Spotify link or a QR code, you can give a physical gift with much more of a personal touch.
Okay, fair, this one’s more of a “reuse” than an “upcycle”, but we thought we’d include it because it’s sweet and simple. Clear the device of its old data, curate a playlist that reminds you of your sweetheart, and load the songs you want onto your old MP3 player!
5. Hard Drive Caddy Picture Frame
Pictured: https://techmikeny.com/products/dell-1-8in-r-series-caddy
If you have a hard drive caddy that’s a different size from your new gear, consider using it as a picture frame. Measure the caddy, size and print out a picture of you and your special someone, secure the picture with a bit of glue or some pins, and voila! Caddies can be beautiful pieces of gear, but if yours doesn’t match the Valentine’s mood, you can customize it by writing a message on the frame or painting it a new color.
6. Motherboard Custom Placemat / Coaster Set
For a more practical decorative gift, consider making a custom motherboard placemat or set of coasters. Though simpler and relatively niche, you’ll be left with a very durable piece that is likely far more unique than something store-bought.
Materials:
Clean your motherboard thoroughly and remove any components that may get in your way, such as stand-off screws, heat sinks, and connectors. Then, cut the cork to a size that is slightly larger than the motherboard. Cut the foam backing to match the size of the cork, then glue the foam to the cork and attach the motherboard facedown to the other side. For an even more refined, polished look, you could coat the final product with epoxy resin, but make sure to use it safely. And for coasters, simply cut smaller pieces, with similar ratios of motherboard size to cork/foam size.
7. Belt Buckle
Making this one follows the same principle as the motherboard necklace, but with a practical twist. The trick here is to attach your component of the motherboard to an existing belt buckle and use screws to keep it in place. (The motherboard, that is. We can’t really speak for a foolproof method for keeping your belt in place, aside from maybe adjusting the notches.)
None of these doing the trick? If you still want to find a Valentine's Day gift for your techie that’s good for the planet, just get them something from TechMikeNY that’ll *drive* them mad. Or that will make them *short circuit*. Or that they *can’t resist*. You get the idea.
This week, we're going back to basics with a video tutorial on an important step in building your own server: applying thermal paste! From proper heatsink removal techniques to our preferred application method, we've got you (and your processor) covered.
Note: this tutorial demonstrates thermal paste application on a 12th generation Dell server. To apply thermal paste onto a 14th gen server, you would need to follow a modified procedure to remove the heatsinks.
1. Start out by removing the heatsinks.
Each heatsink is held in place by 4 screws. Take a screwdriver with a standard Phillips head and unscrew each one.
2. Take some rubbing alcohol and a rag and clean the processors of debris and old thermal paste. (You don’t want to get gunk stuck in between the processor & the heat sink, and you also don’t want to mix old and new thermal paste.)
3. From here, you’re going to take your thermal paste of choice and apply a pea-sized amount of thermal paste to the center of the processor.
There are different methods of applying thermal paste, but the pea-sized works best for us.
4. Make sure that you do not apply too much thermal paste, as excess paste can seep underneath the processor and disrupt the contact between the CPU and the socket pins.
5. Once everything is in place, set the heatsinks back into their slots and screw them back in, working in a star pattern to keep the mounting pressure even.
And now you’re all done. Happy cooling! What is your go-to method for applying thermal paste? What tutorials would you like to see next?