Misconceptions about computer hardware


(9 minute read)

Today we are going to talk about 5 common misconceptions that people have about CPUs.
With technology advancing so rapidly around us, sometimes misconceptions can work their way into our common understanding.
Without further delay, let’s get started.

#1 You can compare CPUs by core count and clock speed

If you've been around technology for a while, at some point you may have heard someone make the following comparison: "CPU A has 4 cores and runs at 4 GHz. CPU B has 6 cores and runs at 3 GHz. Since 4*4 = 16 is less than 6*3 = 18, CPU B must be better".

Taken on their own and all else being equal, a processor with 6 cores will be faster than the same design with 4 cores. Likewise, a processor running at 4 GHz will be faster than the same chip running at 3 GHz. However, once you start adding in the complexity of real chips, the comparison becomes meaningless.
There are workloads that prefer higher frequency and others that benefit from more cores.
One CPU may consume so much more power that the performance improvement is worthless.
One CPU may have more cache than the other, or a more optimized pipeline.
The list of traits that the original comparison misses is endless. Please never compare CPUs this way.
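One way to see why the cores-times-clock "score" misleads is Amdahl's law: how much a workload speeds up with more cores depends on what fraction of it can run in parallel. Here is a small Python sketch; the workload fractions are illustrative assumptions, not measurements of any real software:

```python
# Amdahl's law: relative runtime of a workload whose parallel fraction
# can be spread across `cores` cores on a CPU clocked at `ghz`.
def runtime(cores, ghz, parallel_frac):
    serial = 1 - parallel_frac
    return (serial + parallel_frac / cores) / ghz

# The naive score says CPU B (6 cores @ 3 GHz, score 18) always beats
# CPU A (4 cores @ 4 GHz, score 16) -- but the workload decides.
mostly_serial = 0.50    # only half the work parallelizes
mostly_parallel = 0.95  # almost all of the work parallelizes

print(runtime(4, 4.0, mostly_serial), runtime(6, 3.0, mostly_serial))
# CPU A finishes the mostly-serial workload first...
print(runtime(4, 4.0, mostly_parallel), runtime(6, 3.0, mostly_parallel))
# ...while CPU B finishes the mostly-parallel one first.
```

Lower runtime is better here, and which chip "wins" flips depending on the workload, which is exactly why a single multiplied-out number can't rank CPUs.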


#2 Clock speed is the most important indicator of performance

Certainly, clock speed has an impact, but past a certain point, other factors play a much bigger role. CPUs can spend a lot of time waiting on other parts of the system, so cache size and memory architecture are extremely important. A well-designed cache hierarchy reduces that wasted time and increases the processor's throughput.

The broad system architecture can also play a huge part.
It's entirely possible that a slow CPU can process more data than a fast one if its internal architecture is better optimized.
If anything, performance per watt is becoming the dominant metric in newer designs.
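A rough way to quantify the waiting-on-memory point is the classic average-CPI model, where cache misses inflate the cycles each instruction costs. The chip figures below are made-up illustrations, not real processor data, but they show a 3 GHz chip with a better cache out-processing a 4 GHz one:

```python
def instr_per_sec(ghz, base_cpi, miss_rate, miss_penalty_cycles):
    # Effective CPI = base CPI plus the average stall cycles paid
    # per instruction waiting on cache misses.
    cpi = base_cpi + miss_rate * miss_penalty_cycles
    return ghz * 1e9 / cpi

# 4 GHz chip with a small cache: 2% of instructions miss, 200-cycle penalty.
fast_small_cache = instr_per_sec(4.0, 1.0, miss_rate=0.02, miss_penalty_cycles=200)
# 3 GHz chip with a big cache: only 0.5% of instructions miss.
slow_big_cache = instr_per_sec(3.0, 1.0, miss_rate=0.005, miss_penalty_cycles=200)

print(fast_small_cache / 1e9)  # 0.8 billion instructions per second
print(slow_big_cache / 1e9)    # 1.5 billion instructions per second
```

Nearly twice the throughput at three quarters of the clock speed: the "slower" chip simply spends far fewer cycles stalled.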


#3 The main chip powering your device is a CPU


This is something that used to be absolutely true, but is becoming increasingly less true every day.
We tend to group a bunch of functionalities into the phrase "CPU" or "processor" when in reality, that's just one part of a bigger picture.
The current trend, known as heterogeneous computing, involves combining many computing elements together into a single chip.

[Image: A SoC]

Generally speaking, the chips on most desktops and laptops are CPUs. For almost any other electronic device though, you're more than likely looking at a system on a chip (SoC).
A desktop PC motherboard can afford the space to spread out dozens of discrete chips, each serving a specific functionality, but that's just not possible on most other platforms. Companies are increasingly trying to pack as much functionality as they can onto a single chip to achieve better performance and power efficiency.
In addition to a CPU, the SoC in your phone likely also has a GPU, RAM, media encoders/decoders, networking, power management, and dozens more parts. While you can think of it as a processor in the general sense, the actual CPU is just one of the many components that make up a modern SoC.

#4 Technology node and feature size are useful for comparing chips


There's been a lot of buzz recently about Intel's delay in rolling out their next technology node.
When a chip maker like Intel or AMD designs a product, it will be manufactured using a specific technology process.
The most common metric used is the size of the tiny transistors that make up the product.

This measurement is made in nanometers, and common process names include 14nm, 10nm, 7nm, and 3nm. You might expect a 7nm process to fit four transistors in the area of one 14nm transistor, since area scales with the square of feature size, but that's not what happens in practice. There is a lot of overhead, so transistor count, and therefore processing power, doesn't really scale with the nominal technology size.
Another, potentially larger, caveat is that there is no standardized way to take this measurement. The major manufacturers used to measure their processes the same way, but their definitions have since diverged, and each now measures something slightly different. This is all to say that feature size shouldn't be a primary metric when comparing chips. As long as two chips are within roughly a generation of each other, the one with the smaller number isn't going to have much of an advantage.
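To make the scaling caveat concrete: if node names really were linear feature sizes, density would grow with the square of the shrink, and even that ideal geometry overstates what real processes deliver. A quick sketch:

```python
def ideal_density_gain(old_nm, new_nm):
    # If the node number truly were a linear feature size, transistor
    # area (and thus density) would scale with its square.
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(14, 7))  # 4.0 -- four times the transistors, ideally
print(ideal_density_gain(10, 7))  # ~2.04x for a one-generation shrink
```

Real density gains fall well short of this ideal, and since "7nm" and friends are now essentially marketing labels rather than measured dimensions, even the inputs to this math differ between manufacturers.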

#5 Processors will always keep getting faster


One of the most famous observations about the technology industry is Moore's Law: the number of transistors on a chip roughly doubles every 2 years. It held for the past 40 years, but we are at its end, and scaling isn't happening like it used to.
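The doubling compounds dramatically over time. Sticking with the article's "double every 2 years" figure:

```python
def moores_law_factor(years, doubling_period=2):
    # Transistor-count growth implied by doubling every `doubling_period` years.
    return 2 ** (years / doubling_period)

print(moores_law_factor(40))  # 2**20 = 1,048,576x growth over four decades
```

A million-fold increase in four decades is why the end of that curve is such a big deal: no incremental tweak replaces exponential growth.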
If we can't pack transistors more densely, one thought is that we could just make chips bigger. The limitation here is getting enough power into the chip and then removing the heat it generates. Modern chips draw hundreds of amps of current and dissipate hundreds of watts of heat.
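The hundreds-of-amps figure follows directly from P = V × I and the low supply voltages modern chips run at. The voltage and power numbers below are illustrative assumptions, not specs for any particular chip:

```python
def supply_current(power_watts, core_voltage):
    # From P = V * I: the current needed to deliver `power_watts`
    # at the chip's core supply voltage.
    return power_watts / core_voltage

print(supply_current(250, 1.0))   # a 250 W chip at 1.0 V draws 250 A
print(supply_current(300, 0.75))  # 400 A at a lower 0.75 V supply
```

Because core voltages sit around a volt, every extra watt of power translates almost directly into another amp that the motherboard's power delivery has to supply and the cooler has to remove as heat.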

Today's cooling and power delivery systems are struggling to keep up and are close to the limit of what can be powered and cooled. That's why we can't simply make a bigger chip.
If we can't make a bigger chip, couldn't we just make the transistors on the chip smaller to add more performance? That concept has been valid for the past several decades, but we are approaching a fundamental limit of how small transistors can get.
With new 7nm and future 3nm processes, quantum effects start to become a huge issue and transistors stop behaving properly. There's still a little more room to shrink, but without serious innovation, we won't be able to go much smaller. So, if we can't make chips much bigger and we can't make transistors much smaller, can't we just make those existing transistors run faster? This is yet another area that has given benefits in the past, but isn't likely to continue.
While processor clock speed increased with every generation for years, it has been stuck in the 3-5 GHz range for the past decade. Several factors contribute to this. Pushing the clock higher obviously increases power usage, but the main issue again comes down to the limitations of smaller transistors and the laws of physics.
As we make transistors smaller, we also have to make the wires that connect them smaller, which increases their resistance. We have traditionally been able to make transistors go faster by bringing their internal components closer together, but some are already separated by just an atom or two. There's no easy way to do any better.
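The wire-resistance effect follows from the basic formula R = ρL/A: shrink a wire's width and height while keeping its length, and resistance climbs with the square of the shrink. The dimensions below are illustrative, not real process geometry (and at these scales, surface scattering makes real copper even worse than this bulk-resistivity math suggests):

```python
def wire_resistance(resistivity, length_m, width_m, height_m):
    # R = rho * L / A for a wire with a rectangular cross-section.
    return resistivity * length_m / (width_m * height_m)

RHO_COPPER = 1.68e-8  # ohm-meters, bulk copper at room temperature

# The same 1-micron wire run, before and after halving its cross-section.
r_old = wire_resistance(RHO_COPPER, 1e-6, 100e-9, 100e-9)
r_new = wire_resistance(RHO_COPPER, 1e-6, 50e-9, 50e-9)
print(r_new / r_old)  # 4.0 -- halving both dimensions quadruples resistance
```

Higher resistance means slower signals and more heat in the very wires that were supposed to make the chip faster, which is part of why shrinking no longer buys free clock speed.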


Putting all of these reasons together, it's clear that we won't be seeing the kind of generational performance upgrades of the past, but rest assured there are lots of smart people working on these issues.


