
The Intelligent Way to Construct Your Own Personal Computer


Even the most seasoned vendors and system builders, with the best resources at their disposal, don’t always get it right. There has never been a better time to build your own PC, but there have also never been so many options or so many ways to go wrong. Today’s computers are intricate systems made up of many interdependent parts, and the weakest component often sets the performance ceiling for the entire system. In other words, even with the latest graphics card (GPU), a high-performance computer’s graphics performance will suffer if the central processing unit (CPU) isn’t strong enough to keep the GPU pipelines busy with work, or if the system’s memory isn’t fast enough.

In light of this, you should take the same steps when planning and building a computer of any size. Pick parts that you know will work together and complement one another. Then test and benchmark your design thoroughly to make sure it performs as intended; an untimely breakdown is the worst possible outcome. I’ve pulled the applicable sections from our best-practices documentation and internal build and design process to help you create your own.

There is far too much to cover on this topic to do so in a single article, so I’ve broken it up into two articles. The focus of this article is on the components that make up a PC’s brain, including:

The Central Processing Unit (CPU, or processor)
Memory (RAM)
The Motherboard (main board)
The Power Supply (PSU)
The second installment of this series will cover the remaining aspects of the PC.

Storage (hard disk, or HDD)
The Graphics Processing Unit (GPU)
The Case
Cooling (HSF, or heatsink & fan)
Should You Build Your Own?
The trade-offs of doing it yourself:

Pros

You have the best idea of what you want so that it will be built precisely to your specifications.
You can customize the parts to your specifications and compare prices to find the best deal.
If something goes wrong with something you built, you will know how to fix it, which could save you time.
It has the potential to be entertaining!
Cons

If you make poor component selections, you may have to live with the final product or sell it for a loss.
Component retailers offer limited assistance in the event of component instability or incompatibility.
Retailer recommendations for optimal component choice vary widely in quality and are sometimes downright dubious and self-serving.
Prepare for long hours and challenges with little help because you are the designer, builder, installer, tester, and support engineer.
You could spend a long time learning things you didn’t care about.
Drivers, drivers, drivers…. that’s all I’ve got to say.
Perhaps you were hoping I would list cost as a pro. I haven’t, because in most cases it no longer holds water: many pre-built machines sell for only slightly more than what you would pay for the individual parts, and once you factor in your own time and labor, buying pre-built often beats building from scratch.

Plan, Select, Standardize, Optimize, and Build…

If I haven’t scared you away, let’s dive into what you must do. Some parts merit a brief look back in time because, despite popular belief, we are still dealing with the results of design and construction choices made twenty or more years ago.
The Central Processing Unit (CPU)

The central processing unit (CPU) is the brain of the machine, involved in every aspect of its operation. A modern processor consists of millions of transistors working together to carry out the instructions of the operating system and any installed programs. A processor with a clock speed of 1GHz runs through one billion clock cycles per second, and each instruction takes a certain number of cycles to execute. As impressive as that sounds, it pales next to the millions upon millions of instructions in even the most basic modern app or game.
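The arithmetic behind that paragraph is simple enough to sketch. The cycles-per-instruction (CPI) figures below are illustrative assumptions, not numbers from the article; real CPUs vary widely by workload.

```python
# Rough instruction-throughput arithmetic for a given clock speed.
# CPI (average cycles per instruction) is an assumed, illustrative value.

def instructions_per_second(clock_hz: float, cpi: float) -> float:
    """Approximate instruction throughput: clock rate divided by average CPI."""
    return clock_hz / cpi

# A 1GHz core averaging exactly one cycle per instruction would retire
# one billion instructions per second.
print(instructions_per_second(1e9, 1.0))  # 1000000000.0
# The same clock averaging 4 cycles per instruction:
print(instructions_per_second(1e9, 4.0))  # 250000000.0
```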

Since the early 1970s, the exponential growth in computing performance has been accurately described by a concept known as Moore’s Law. A computer bought three years from now will likely be twice as powerful as its modern-day counterpart.
As a result, raising the processor’s clock speed, allowing it to execute more instructions in less time, has historically been the manufacturers’ standard way to boost performance. This is why the earliest Intel processors from the 1980s ran at 5MHz with around 20,000 transistors, while the best single-core Pentium in 2006 ran at 3.8GHz with 55 million transistors. At that point Intel’s progress stalled: above these frequencies the silicon technology of the day stops working reliably, chiefly because of the heat produced during operation and the power that leaks across the transistor junctions. This is also why CPU heatsinks have grown larger, and fans louder and more powerful, over time.
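The heat problem has a textbook explanation: dynamic power in CMOS logic scales roughly as P ≈ C·V²·f. This formula is standard background, not from the article, and the capacitance and voltage figures below are illustrative assumptions; the point is that power grows faster than clock speed, because higher clocks usually demand higher voltage too.

```python
# Standard CMOS dynamic-power approximation: P ~ C * V^2 * f.
# Capacitance and voltage values are illustrative assumptions.

def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    """Approximate dynamic switching power for a block of logic."""
    return c_farads * volts**2 * freq_hz

base = dynamic_power(1e-9, 1.2, 3.0e9)    # notional 3.0GHz part at 1.2V
faster = dynamic_power(1e-9, 1.3, 3.8e9)  # ~27% higher clock, slightly higher voltage
print(faster / base)  # power rises ~49%, outpacing the clock-speed gain
```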

Intel’s Core 2 Duo and Core 2 Quad designs sidestep this limit by placing multiple processor cores on a single silicon die. This handles the load presented by games and applications by processing it in parallel rather than sequentially (known as multi-threading). Until recently, the 65nm manufacturing process used for the last Pentiums was also used for the multi-core processors. Then, in the first quarter of 2008, Intel began mass-producing 45nm processors based on the same Core 2 designs, codenamed Yorkfield, using the newer Hafnium Hi-K semiconductor technology, which runs cooler and more efficiently than the older silicon process. Nehalem, a new processor architecture, arrived in the fourth quarter of 2008: the faster QuickPath Interconnect (QPI) supplanted the FSB, along with a new socket (LGA1366) and an integrated memory controller. The Westmere-codenamed processors will shrink the die to 32nm by 2010. After that, the roadmap becomes a little hazier; for more information, check Intel’s website.

Examining Intel’s and AMD’s processor technology roadmaps is essential to determine which company can provide the highest-performance computing. With high-performance cooling, Intel’s new Core i7 and Yorkfield processors have pushed quad-core clock speeds past 3GHz (a 33%+ performance gain) and up to 4GHz when overclocked (this article is from the first quarter of 2009). The Core i7 is a big, hot CPU with more going on inside it than ever before thanks to its integrated memory controller; without efficient, effective cooling and the delivery of clean, stable power, you won’t be able to reach its performance ceiling. For comparison, a standard PC’s highest factory-installed clock speed is typically 3.2GHz.
Memory (RAM)

Mainstream system builders rarely give memory, a potential performance bottleneck, any real thought. Memory speeds range from PC2-3200 to PC3-16000 and beyond: PC2 or PC3 stands for DDR2 or DDR3, and 3200 or 16000 for the bandwidth in megabytes per second. Whether you choose dual- or triple-channel DDR2 or DDR3, you should use the highest-bandwidth memory you can afford. Memory speed does matter if you intend to use your custom-built PC for video editing, photography, computer-aided design (CAD), 3D graphics, or gaming. We also carefully consider several other factors that have been shown to have a significant impact on memory performance:

Core clock speed, or the rate at which the memory bus operates (as modified for DDR2/3)
Core memory bus speed multiplier for data rates (DDR, DDR2, and DDR3)
Memory can be made to run at higher clock speeds but with greater latency (access-cycle delays), occasionally making it slower than high-quality memory running at lower frequencies with tighter timings. Lower-latency memory such as PC2-6400 at 800MHz and 4-4-3-5 will typically outperform PC2-8500 at 1066MHz and 5-5-5-15.
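One crude way to see why is to convert timings from cycles into nanoseconds. Summing the four primary timings and dividing by the memory bus clock (half the data rate for DDR) is only a rough proxy for real access time, since actual access patterns differ, but it illustrates the trade-off with the two modules mentioned above.

```python
# Crude latency comparison: total of primary timings in nanoseconds.
# For DDR memory the bus clock is half the quoted data rate.

def rough_access_ns(data_rate_mhz: float, timings: tuple) -> float:
    """Sum of primary timings converted to nanoseconds (rough proxy only)."""
    bus_clock_hz = (data_rate_mhz / 2) * 1e6
    return sum(timings) / bus_clock_hz * 1e9

pc2_6400 = rough_access_ns(800, (4, 4, 3, 5))    # ~40 ns
pc2_8500 = rough_access_ns(1066, (5, 5, 5, 15))  # ~56 ns
print(pc2_6400 < pc2_8500)  # True: the lower-clocked module wins on latency
```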

Many companies now sell computers with standard memory packages conforming to the PC2-5300 (667MHz) specification with average latency, usually because a warehouse surplus is being sold off. PC2-8500 (1066MHz) memory is recommended as a bare minimum; with its lower latency and packaging that improves heat dissipation, it can even outperform some of the fastest DDR3 RAM. In many cases, the highest-specification memory exceeds what the JEDEC standards allow. To be prepared for the future, it is best to get some high-quality DDR3 RAM (say 1600MHz C8).
You should also make sure you have enough of it. The recommended minimum is 2GB for dual-channel boards and 4GB for triple-channel boards (DDR3 only). The practical limit for a 32-bit operating system (Windows) is around 3GB, so if you can upgrade to 64-bit, go for it.
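The bandwidth figures in those module names follow directly from the channel width: each channel is 64 bits (8 bytes) wide, so PC2-6400 means 800 MT/s × 8 bytes = 6400 MB/s. A quick sketch of peak theoretical bandwidth per configuration (real-world throughput is lower):

```python
# Peak theoretical memory bandwidth: channels * data rate * 8 bytes per transfer.
# This is the headline number implied by module names like "PC2-6400".

def peak_bandwidth_gb_s(channels: int, data_rate_mt_s: float) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count and data rate."""
    return channels * data_rate_mt_s * 8 / 1000  # MB/s -> GB/s

print(peak_bandwidth_gb_s(2, 800))   # dual-channel DDR2-800:    12.8 GB/s
print(peak_bandwidth_gb_s(3, 1066))  # triple-channel DDR3-1066: ~25.6 GB/s
```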
The Motherboard (main board)

The motherboard is the base of the computer, where all the other parts are installed and connected. The motherboard chipset (typically nVidia- or Intel-based, split into Northbridge and Southbridge) hosts all the interfaces: the PCI bus (PCIe 2.0, for the graphics and sound cards), networking and peripherals (USB2, Firewire IEEE1394, WiFi and Ethernet), storage (IDE, SATA-II, RAID), BIOS configuration, bus clock management, the memory controller, hardware management and monitoring, and power regulation for the CPU and memory. The chipset and socket (such as Intel LGA775 or AMD’s equivalents) determine which processors can be used and over what range of FSB (front side bus) speeds. This has changed with Intel’s Nehalem and the X58 chipset, which move the memory controller from the motherboard onto the CPU; the resulting increase in memory bandwidth is staggering.

A BIOS that allows precise control and monitoring of system components is essential for serious performance tuning. Manufacturers typically opt for motherboards and chipsets that are at least a year old because of compatibility and support concerns. You can take the safe route and stick with proven hardware, or take a chance on cutting-edge gear. If you intend to overclock your CPU, choose a board with a good reputation in this regard (you will need flexible base clock speeds for Core i7). Make sure it has a modern CPU socket, high-bandwidth storage and PCI buses, a configurable BIOS, and fast memory, preferably DDR3; a good DDR2 board, however, is now price-competitive with some DDR3 boards.
Pay attention to the PCI Express lanes. The number of lanes available in your chipset limits what your motherboard’s PCI Express (PCIe) slots can do: since data is transferred to and from the card in parallel, the more lanes a slot has, the faster it can run. Here are some of the most popular chipsets available now, along with the number of lanes they support:

P45, 16 lanes (2 of PCIe x8)
P55, 16 lanes (2 of PCIe x8)
X48, 32 lanes (2 of PCIe x16)
X38, 32 lanes (2 of PCIe x16)
X58, 32 lanes (2 of PCIe x16, or 4 of PCIe x8)
nVidia 680i, 46 lanes (2 of PCIe x16, 1 of PCIe x8, 6 of PCIe x1)
nVidia 750i, 32 lanes (2 of PCIe x16)
nVidia 780i, 48 lanes (2 of PCIe x16, 1 of PCIe x16 1.0)
nVidia 790i, 48 lanes (2 of PCIe x16, 1 of PCIe x16)
You’ll need as many PCIe slots with x16 lanes as possible to run a high-powered SLI configuration. A board with two PCIe x16 expansion slots is a minimum requirement for future upgrades.

The Power Supply (PSU)

Increasing your computer’s performance has the unavoidable consequence of increasing the amount of electricity it consumes. The power supply matters not only for delivering steady power but also for delivering transient current at the precise moment it is needed. The ATX 2.3 standard specifies what a power supply can and should provide, and it is surprising how many units commonly used by major manufacturers fail such a fundamental check. The 300-400W rating of many commercially available power supplies is also woefully insufficient. Factor in the CPU (which can draw over 100W), high-power graphics cards (up to 200W each), multiple disk drives, PCI adapters, USB devices, and possibly a water-cooling system, and you can see how quickly a computer approaches 1 kilowatt (1000W). It is surprising how much energy a high-end computer’s graphics cards, processor, and storage can consume.
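A back-of-envelope budget makes the point concrete. The wattage figures below combine the article’s rough numbers with assumptions (the second GPU and the drive/fan totals are illustrative), and the 30% headroom factor is a common rule of thumb rather than anything from the ATX standard.

```python
# Back-of-envelope PSU sizing. Component draws are rough/illustrative;
# the 30% headroom multiplier is a common rule of thumb.

loads_watts = {
    "cpu": 120,               # article: "can be over 100W"
    "gpu_1": 200,             # article: "up to 200W each"
    "gpu_2": 200,             # assumed second card in an SLI setup
    "drives_and_fans": 80,    # assumed total for disks, fans, USB devices
    "motherboard_ram": 60,    # assumed
}

total = sum(loads_watts.values())
recommended_psu = total * 1.3  # ~30% headroom for transients and upgrades
print(total, round(recommended_psu))  # 660 858
```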

Aim for 600-800W or more, preferably exceeding the ATX standard requirements, to give yourself some headroom for future upgrades. Multi-rail switch-mode power supplies are commonplace now because they are simpler and cheaper to manufacture; however, a single rail capable of delivering more than 100A gives you more design freedom than having to allocate loads carefully across rails according to their current capacities. If you value silence, choose a power supply with a large 120-140mm fan: bigger fans move more air at lower speed, reducing cooling noise.

Alan leads the technology team at UK-based computer manufacturer Cryo Performance Computers (http://www.cryopc.co.uk). He is responsible for developing new methods for designing PCs for games and other resource-intensive contexts. Cryo PC is a provider of high-end, specialized computers for various uses.
