
Intel's announcements suggest its gaming graphics processors will not be competitive

Intel initially announced that its upcoming graphics processors will support hardware-accelerated ray tracing, joining NVIDIA, AMD and the home consoles in supporting this technology. That is good news: it shows the company is determined not to fall behind on modern features.

The bad news is that the company has chosen the fast and easy route to building these graphics processors: instead of designing a genuinely new architecture from scratch, it is reusing what it already has. All Intel plans to do is take its existing integrated GPU cores, improve them a little, then cram as many of them as possible into one chip and call the result a discrete graphics processor.


First of all, we should not underestimate Intel's efforts to improve its integrated graphics processors, which now come close to the entry-level parts from AMD and NVIDIA. But these processors are limited in capability and clock speed, have very few cores, and therefore pose no great challenge when it comes to writing drivers for them.

What Intel intends to do with this hardware is the real problem. The company wants to cram an extremely large number of these cores (1024) into one chip and coordinate them in software, through the driver, instead of tying them together in hardware the way NVIDIA and AMD do. For those who do not know, hardware coordination of the cores is the foundation of consistent high performance, and the fastest and most reliable way to achieve it.

Hardware scheduling means, in short, that the graphics processor contains dedicated hardware units that distribute work across the cores and keep their utilization as close to 100% as possible. This is known as hardware scheduling or hardware scoreboarding. Its advantage is that it copes with all kinds of workloads, expected and unexpected, and it is much faster.

Without that hardware distributor, the processor needs a software scheduler implemented in the driver, which must predict the workload in advance and hand it out to the cores in a fixed pattern. That keeps core utilization permanently below 100%, and often far below it, because graphics workloads are hard to predict and change constantly from one frame to the next.
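
To make the difference concrete, here is a minimal, purely illustrative Python sketch (not Intel's, NVIDIA's or AMD's actual logic, and with invented numbers): it compares a driver-style static distribution of unevenly sized work against a hardware-style dynamic dispatch that always feeds the first idle core.

```python
import random

# Toy model of scheduling unevenly sized "draw" tasks across GPU cores.
# All numbers are invented for illustration; real GPUs are far more complex.
random.seed(1)
NUM_CORES = 8
tasks = [random.randint(1, 50) for _ in range(24)]  # unpredictable per-frame workloads

def software_schedule(tasks, num_cores):
    """Driver-style: assign tasks to cores in a fixed round-robin pattern, decided up front."""
    core_time = [0] * num_cores
    for i, t in enumerate(tasks):
        core_time[i % num_cores] += t
    return core_time

def hardware_schedule(tasks, num_cores):
    """Hardware-style: each task goes to whichever core becomes free first."""
    core_time = [0] * num_cores
    for t in tasks:
        idx = core_time.index(min(core_time))
        core_time[idx] += t
    return core_time

def utilization(core_time):
    # The frame is only done when the most heavily loaded core finishes.
    return sum(core_time) / (max(core_time) * len(core_time))

print(f"static (driver) scheduling utilization:  {utilization(software_schedule(tasks, NUM_CORES)):.0%}")
print(f"dynamic (hardware) scheduling utilization: {utilization(hardware_schedule(tasks, NUM_CORES)):.0%}")
```

The static schedule commits to a distribution before it knows how long each task really takes, so some cores finish early and sit idle; the dynamic dispatcher reacts to the actual load and keeps utilization higher.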

Unfortunately, this is exactly the path Intel has taken. The previous Gen11 generation contained a weak hardware scheduler, but the company removed it entirely in the new generation and replaced it with a software scheduler. Not content with that, it then multiplied the number of cores that this software scheduler has to feed, which multiplies the burden of writing capable drivers in turn.

For comparison, take a trip back in time to NVIDIA's GTX 580 and AMD's HD 6970. The GTX 580 came with only 512 graphics cores against 1536 in the HD 6970, so the AMD card had three times as many cores, yet it still delivered noticeably less performance than the GTX 580. Why? Because the GTX 580 relied on a hardware scheduler, while the HD 6970 depended on software scheduling. A rough illustration of that arithmetic follows below.
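
The core counts above are from the article; the utilization percentages in this tiny sketch are invented purely to show the arithmetic of "more cores times worse utilization can still mean less performance", not measured values.

```python
# Effective throughput ~ core count x fraction of cores the scheduler keeps busy.
# Utilization percentages here are made up for illustration only.
cards = {
    "GTX 580 (hardware scheduling)": (512, 0.90),
    "HD 6970 (software scheduling)": (1536, 0.28),
}
for name, (cores, util) in cards.items():
    print(f"{name}: {cores} cores x {util:.0%} utilization = {cores * util:.0f} effective cores")
```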

This is the danger Intel is putting itself in: designing a graphics processor with an extremely large number of cores fed only by a software scheduler, and sending it up against the giant processors from NVIDIA and AMD, which rely on hardware schedulers.

But the worst is yet to come. 1024 cores is the most Intel can cram into a single GPU die, and even at that count the distribution efficiency is already very poor. So Intel has reached for yet another quick solution: putting more than one graphics processor on a single package. The company has already announced exactly that; the upcoming graphics processors will be built from small dies glued together, up to 4 tiles, each tile containing 1024 cores and connected to the others by a dedicated bus, for a total of 4096 cores.

And that is very bad. The graphics processor will effectively behave as two, three or four small graphics processors stuck together inside one large one, much like multiple graphics cards in an SLI or CrossFire array. Those arrays depend entirely on driver support, to the point that any flaw in the drivers ruins the array's performance, and in the worst case it behaves like a single card. The consequence is that Intel's 4096-core graphics processor may literally behave like a 1024-core one, with very erratic performance, as the rough model below illustrates.
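
A back-of-the-envelope sketch of the tiling problem: the tile and core counts come from the article, while the scaling percentages are assumptions, since how well the driver splits a frame across tiles is precisely the unknown.

```python
# 4 tiles x 1024 cores = 4096 cores on paper. How many of them actually help
# depends on how well the driver scales the workload across the extra tiles.
CORES_PER_TILE = 1024
TILES = 4

def effective_cores(tiles, scaling):
    """First tile always counts; each extra tile contributes only `scaling` of its cores."""
    return CORES_PER_TILE * (1 + (tiles - 1) * scaling)

for scaling in (1.0, 0.7, 0.3, 0.0):  # assumed driver scaling efficiencies
    eff = effective_cores(TILES, scaling)
    print(f"driver scaling {scaling:.0%}: ~{eff:.0f} of {CORES_PER_TILE * TILES} cores effectively used")
```

At 0% scaling the package falls back to a single tile's 1024 cores, which is the worst case the article describes for SLI/CrossFire-style setups.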

In short, Intel has doubled the driver-writing burden on itself twice over: once by relying on a software scheduler, and again by gluing small graphics processors together into one large one. It has chosen the easiest and fastest path to a big GPU, first by hastily cramming small cores into it, then by just as hastily cramming whole graphics processors into the package. That saves Intel time and effort, but it is not enough to face NVIDIA and AMD, whose giant graphics processors are designed from scratch, with thousands of cores grouped behind a hardware scheduler in one large chip, which guarantees them reliable performance and full utilization of every unit of the GPU even under the worst conditions.

The most important aspect that still seems absent from Intel's thinking is the drivers themselves: the software needed to make the card work, to support the DirectX and OpenGL APIs, and to remain compatible with thousands of older games as well as the ones to come. It is an extremely difficult and arduous task, which is a big part of why no competitor has seriously challenged NVIDIA or AMD in decades.

Writing graphics drivers is fundamentally cumulative knowledge built up over many years, knowledge Intel has little of. Drivers demand special skills to write; some have described the work as arcane dark magic, and the NVIDIA driver reportedly contains more lines of code than the Windows operating system itself. Nothing illustrates the scale and complexity of this work better than the contrast between AMD and NVIDIA: the latter spent lavishly on polishing its DirectX 11 drivers and turned them into a reference example, an achievement that AMD, a veteran of the graphics world, has still not matched, which is why it has focused instead on APIs that depend less on the driver, such as DirectX 12 and Vulkan.

So how will it go for a newcomer like Intel? How will it go when the company has multiplied the driver effort required of it far beyond what NVIDIA and AMD face, while remaining, in all of this, a nervous novice?

Writing drivers that are fully compatible with thousands of existing games is the biggest obstacle Intel will ever face, and it is an obstacle that will keep its products below AMD and NVIDIA, especially given the lazy, slapdash way it is building these graphics processors.

Even the architecture itself does not measure up to NVIDIA's and AMD's. In polygon throughput and tessellation, Intel sits at the bottom with 2 polygons per clock, while AMD manages 4 per clock and NVIDIA leads with 6 per clock. On top of that, we still know nothing about the pixel fill rate, texel fill rate or texture filtering rate, all of which are also expected to trail NVIDIA and AMD, nor do we know how Intel accelerates ray tracing or how fast it is at it, though that performance, too, is expected to be well below NVIDIA's.
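
To put the per-clock geometry figures in perspective, here is the peak-rate arithmetic; the per-clock numbers are the ones quoted above, while the 1.5 GHz clock is simply an assumed round figure used to make the units concrete.

```python
# Peak triangle setup rate = triangles per clock x GPU clock speed.
# The 1.5 GHz clock is an assumption for illustration; per-clock rates are from the article.
clock_ghz = 1.5
for vendor, tris_per_clock in (("Intel", 2), ("AMD", 4), ("NVIDIA", 6)):
    peak = tris_per_clock * clock_ghz  # billions of triangles per second
    print(f"{vendor}: {tris_per_clock} tri/clock x {clock_ghz} GHz = {peak:.1f} Gtri/s")
```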

Intel will reap what it sows. It is walking onto the grown-ups' battlefield with a weak, incomplete integrated-graphics architecture, building a large graphics processor on top of it through hasty stacking with the least possible effort, and relying entirely on drivers it has only limited experience writing. That approach may be fine for building GPUs for artificial intelligence, cloud computing and scientific workloads, but it is not at all suitable for gaming GPUs, which need every last drop of performance, cannot afford to lose fractions of a second as data travels between small graphics processors glued together, and cannot tolerate the slowness and stalls of software schedulers that are far less efficient than hardware ones.

For all these reasons and more, Intel will lose heavily in its first attempt at a gaming graphics processor, and on top of everything it will arrive late, in 2021, perhaps even at the end of that year, making this its third consecutive failed attempt. Intel may improve afterwards, or it may not. The company is in trouble: it needs a fast graphics processor to answer NVIDIA's sweeping advance in the artificial intelligence and compute markets. But NVIDIA achieved that with what is essentially a gaming processor, and Intel simply cannot repeat that feat. Its best realistic hope right now is a GPU that is decent at compute but extremely modest in games, falling short of its NVIDIA and AMD rivals.
