The device you are reading this article on, whether a smartphone or a computer, has a chip at its core. These chips are made of billions of transistors. Every year we get devices that are faster and perform better than the year before. Apple’s Bionic or Qualcomm’s Snapdragon, Intel’s or AMD’s processors, NVIDIA’s new graphics cards – social media and YouTube are full of discussions, criticisms, judgments, and analyses of how fast these things are.
Transistors are at the root of all this. Apple’s latest mobile chip, the A13 Bionic, has about 8.5 billion transistors. How can so many transistors fit in such a small chip? And what happens when two transistors interfere with each other? Let’s find out.
Transistors and process nodes
Before we talk about Moore’s Law, let’s clear up a few basic ideas.
A transistor is an electrical component that controls the flow of electricity. A processor or chip is made up of a combination of transistors.
Transistors transmit signals through the flow of electricity and carry out our instructions. A chip runs on machine language, which is binary: 0 and 1. All of its work is done with 0s and 1s. Each transistor holds one of these values: 0 means no current flows, 1 means current flows. Beyond that, every chip design has many additional features that set chips apart from one another.
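The idea of a transistor as an on/off switch can be sketched in a few lines of code. This is a minimal conceptual model, not real circuit simulation: two switches in series behave like an AND gate, two in parallel like an OR gate, and a single transistor can be wired as an inverter. The function names are illustrative.

```python
# A minimal sketch: modeling transistors as on/off switches
# that carry a 0 (no current) or a 1 (current flows).

def series(a: int, b: int) -> int:
    """Two switches in series: current flows only if both
    are closed, which behaves like an AND gate."""
    return a & b

def parallel(a: int, b: int) -> int:
    """Two switches in parallel: current flows if either
    is closed, which behaves like an OR gate."""
    return a | b

def inverter(a: int) -> int:
    """A single transistor wired as an inverter: NOT."""
    return a ^ 1

print(series(1, 1))    # 1: both transistors conduct
print(parallel(0, 1))  # 1: one path conducts
print(inverter(1))     # 0: output is pulled low
```

Every binary operation a chip performs is ultimately built out of gates like these, replicated billions of times.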
Now let’s talk about process nodes.
Processors are made using a technology called photolithography: the processor’s architectural design is printed onto silicon, and the manufacturing method used to do this is called a process node. The process node’s value reflects how small manufacturers can make each transistor.
Basically, the process node value refers to the size of the transistors and the distance between them. Smaller process nodes are more energy efficient, so they can do more computational work without overheating. And because the transistors are smaller, many more of them fit in the same amount of space, which dramatically increases performance.
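A back-of-the-envelope sketch shows why smaller nodes pack in so many more transistors. This is an idealized model, not real foundry data: it simply assumes transistor area scales with the square of the feature size, so halving the node roughly quadruples density.

```python
# Idealized density scaling (illustrative only): if transistor
# area shrinks with the square of the feature size, a smaller
# node fits quadratically more transistors in the same area.

def relative_density(node_nm: float, reference_nm: float = 14.0) -> float:
    """Transistor density of `node_nm` relative to a reference
    node, assuming area scales with feature size squared."""
    return (reference_nm / node_nm) ** 2

print(relative_density(7.0))   # 7 nm vs 14 nm: 4x the density
print(relative_density(5.0))   # 5 nm vs 14 nm: ~7.8x
```

Real processes do not scale this cleanly, and node names have become partly marketing labels, but the square-law intuition explains the performance jump between generations.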
For example, AMD’s new third-generation Ryzen processors are built on a 7-nanometer process node and deliver more performance with less heat than Intel’s 14-nanometer parts at the same amount of current.
Although a processor’s performance does not depend on the process node alone, it is a big factor. Node designs also differ between manufacturers: TSMC’s 7-nanometer process is roughly comparable to Intel’s upcoming 10-nanometer one. But what happens when the process node shrinks toward zero?
In 1965, Gordon Moore, who would go on to co-found Intel, published a paper observing that the number of transistors on a chip was doubling every year; he later revised the pace to every two years. For roughly half a century his prediction held more or less true, and Moore’s Law became widely known for its success.
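Moore’s Law is simply compound doubling, which is easy to sketch in code. The figures below are illustrative, not exact history: we start from the Intel 4004’s 2,300 transistors in 1971 and double every two years.

```python
# Moore's Law as compound doubling (illustrative projection,
# not actual product data): start from the Intel 4004's
# 2,300 transistors in 1971 and double every two years.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300) -> int:
    """Transistor count projected by doubling every two years."""
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2019):
    print(year, projected_transistors(year))
# 1971 -> 2,300; 1991 -> ~2.4 million; 2011 -> ~2.4 billion
```

The projection lands in the right ballpark for decades, which is exactly why the law became famous, and why its recent slowdown is so noticeable.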
In fact, our technology has become so powerful and so cheap that we sometimes forget how much innovation it represents. Think about automobile or airplane technology: how far has it come, how much further can it go, and what happens when a technology reaches its limit?
It is clear that chip technology is approaching its peak. Since around 2000, the pace predicted by Moore’s Law has been slowing. The first microprocessor, the Intel 4004, had 2,300 transistors, each about 10 microns in size – enormous by today’s standards. At present, the mainstream process node is around 14 nanometers, packing roughly 100 million transistors per square millimeter. 7-nanometer chips have hit the market, and 5-nanometer parts may be unveiled next year. But slowly, we are approaching the 1-nanometer mark. Then what? Before long, we may find that we can no longer shrink transistors. We will run into physical limits.
Electricity is the flow of electrons, and transistors work by controlling that flow. But as their numbers increase and they are packed closer together, they drift toward instability. Each processor has a thermal limit it can operate within, typically around 100 degrees Celsius.
Many people are now familiar with the term processor overclocking.
Overclocking means pushing a processor beyond its rated capacity, usually by gradually increasing the processor core voltage. Excessive heat can cause the processor to burn out, and that damage happens at the transistor level: when heat and voltage climb too high, neighboring transistors begin to interfere with one another and fail.
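A short calculation shows why raising the core voltage generates so much extra heat. Dynamic power in CMOS circuits scales roughly as P = C × V² × f (capacitance times voltage squared times switching frequency). The numbers below are illustrative, not specifications for any real processor.

```python
# A hedged sketch of overclocking heat: dynamic CMOS power
# scales roughly as P = C * V^2 * f, so power (and heat)
# rises much faster than the clock speed does.
# All values below are illustrative, not real chip specs.

def dynamic_power(capacitance: float, voltage: float,
                  frequency_hz: float) -> float:
    """Approximate dynamic power dissipation in watts."""
    return capacitance * voltage ** 2 * frequency_hz

base = dynamic_power(1e-9, 1.2, 4.0e9)         # stock settings
overclocked = dynamic_power(1e-9, 1.4, 4.6e9)  # +0.2 V, +600 MHz
print(round(overclocked / base, 2))  # ~56% more heat for 15% more clock
```

The quadratic voltage term is why a modest voltage bump demands much better cooling: in this sketch a 15% clock increase costs over 50% more power.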
The heat generated is managed with high-quality coolers, and keeping some distance between transistors also helps with overclocking. At present, though, there is not much overclocking headroom left.
According to Moore’s Law, we were supposed to reach around 7 GHz by now, but clock speeds have stalled at around 5 GHz. The new 7-nanometer AMD processors can barely be overclocked, because manufacturers already ship them close to their maximum capability. If the process node shrinks further, even less overclocking headroom can be expected.
The death of Moore’s Law seems certain, so we have to find another way. One concept for life beyond the 1-nanometer limit is quantum computing. But quantum computers operate only at temperatures near absolute zero, so they are not suitable for consumer use. That may change in the future.
New architecture design and optimization
Each chip maker has its own processor design. Every company’s processors handle things differently, including how they execute instructions and make decisions. This means that Intel and AMD chips with the same number of transistors can deliver different performance. Before the arrival of AMD’s Ryzen, few people worried much about multicore.
The multicore idea is simple: once you get maximum performance from one core, you add another core to get more. The problem is that performance is not growing at the rate at which core counts are growing, because software is not fully optimized to use the extra cores. For example, AMD’s current top desktop processor, the Threadripper 3990X, has 64 cores and should deliver extraordinary performance, yet only a handful of programs can actually exploit it. Likewise, core optimization in current triple-A games is limited to 4 or 6 cores, and single-core performance is still what matters most in gaming. If our programs and processors become better optimized in the future, less of that performance will go to waste.
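The diminishing returns described above are captured by Amdahl’s law: overall speedup is limited by the fraction of a program that must run serially. Here is a small sketch; the 90%-parallel figure is an illustrative assumption, not a measurement of any real workload.

```python
# Amdahl's law: why 64 cores rarely give 64x performance.
# The parallel_fraction value below is illustrative.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (4, 8, 64):
    print(cores, round(amdahl_speedup(0.90, cores), 2))
# 4 cores -> ~3.1x, 8 cores -> ~4.7x, 64 cores -> only ~8.8x
```

Even with 90% of the work parallelized, 64 cores yield under a 9x speedup, which is why a chip like the 3990X shines only with heavily parallel software.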
AI chips can play a significant role in performance. The application of machine learning is increasing day by day.
Graphics cards have been used for AI training for almost a decade. When a processor is created, how it will work is fixed in advance. Experts believe that if a processor could adapt its own behavior based on how it is used, its performance would keep improving. An example is NVIDIA’s DLSS (Deep Learning Super Sampling), a technology that enhances gaming performance through AI.
Another alternative is to build chips from a material other than silicon, or to control them with a signal carrier other than electrons. These are not new ideas, and scientists are already researching them: photons instead of electrons, graphene instead of silicon. Extensive research is also being done on DNA-based and spintronic transistors.
But it may take a few more decades to come to market commercially.
Quantum computers are a fantastic addition to our world of technology. Google and Microsoft have started using quantum computers in their work, but the field is still in its infancy and has many limitations. It takes time for something like that to reach consumers.
Finally, after the demise of Moore’s formula, we will have to leave the electron-and-silicon world for another. The good news is that we have options. Needless to say, future computing technology will be even more exciting.
References are added below:
1. https://towardsdatascience.com/moores-law-is-dead-678119754571
2. https://www.techrepublic.com/article/moores-law-is-dead-three-predictions-about-the-computers-of-tomorrow/
3. https://techterms.com/definition/transistor
Featured Image: Getty Images