UPDATE: Intel cans consumer LRB: Compatibility, Performance and nVidia?

In our extensive coverage of Intel’s project Larrabee, one question persisted: is it worth it for Intel to invest billions of dollars in a market it cannot dominate? Intel Larrabee as a desktop card for consumers is dead. As a consequence of a chain of events, Intel’s executive management decided to stop pouring millions of dollars into a bird that failed to fly.

As of Friday, December 4, 2009, Intel decided to stop investing in Larrabee as a consumer project. In a statement given to us by Nick Knupffer, Intel’s spokesman for Larrabee, it was stated that “Larrabee silicon and software development are behind where we had hoped to be at this point in the project. As a result, our first Larrabee product will not be launched as a standalone discrete graphics product, but rather be used as a software development platform for internal and external use.”

The news comes after the negative reaction from analysts and the press to the last two public presentations, at IDF Fall 2009 in San Francisco [CA] and SC09 in Portland [OR]. After a lot of effort and overclocking, Larrabee did manage to reach 1 TFLOPS in the SGEMM performance test.
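For context, SGEMM is the single-precision general matrix multiply routine from BLAS, and throughput figures like the 1 TFLOPS above are conventionally derived from its operation count of 2·n³ for n×n matrices. A minimal sketch of that arithmetic follows; the matrix size and kernel time in it are hypothetical, chosen only to show how such a headline number falls out.

```cpp
#include <cstdio>

// SGEMM computes C = alpha*A*B + beta*C; for n x n matrices the
// conventional operation count is 2*n^3 floating-point operations.
// The dimension and timing below are assumed values for illustration.
int main() {
    const double n       = 8192.0;  // hypothetical square matrix size
    const double flops   = 2.0 * n * n * n;
    const double seconds = 1.1;     // hypothetical measured kernel time
    std::printf("SGEMM throughput: %.1f GFLOPS\n",
                flops / seconds / 1e9);  // ~999.5, i.e. ~1 TFLOPS
    return 0;
}
```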

However, the problems with Larrabee were all too great. As we wrote in our detailed analysis, Intel sank over three billion dollars into the project [an estimate; the grand total will probably never be known], and according to our highly-positioned sources, it needed another billion to billion and a half to make it work. But even the sudden departure of the executive that led the project would not solve the quintessential problem: AMD and nVidia have not just created GPUs that support the IEEE 754-2008 specification, but ones that are also unbelievably fast.
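For readers wondering what IEEE 754-2008 support means in practice: the headline addition of that revision is the fused multiply-add, which computes a*b + c with a single rounding step. Below is a minimal C++ sketch of the difference it makes; the specific values are chosen only to expose the double rounding of the unfused path, and this is an illustration, not the GPUs’ actual code path.

```cpp
#include <cmath>
#include <cstdio>

// fma(a, b, c) evaluates a*b + c with one rounding; the naive
// expression rounds after the multiply and again after the add,
// which can lose the low-order bits entirely.
int main() {
    const double eps = std::ldexp(1.0, -27);    // 2^-27
    const double a = 1.0 + eps, b = 1.0 - eps;  // a*b = 1 - 2^-54 exactly
    const double two_step = a * b - 1.0;        // rounds twice: yields 0
    const double fused = std::fma(a, b, -1.0);  // single rounding: -2^-54
    std::printf("two-step: %g\nfused:    %g\n", two_step, fused);
    return 0;
}
```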

According to the benchmark used, the AMD Radeon HD 5850 puts out 750 GFLOPS, bringing it very close to LRB figures. We don’t have a number for the Radeon HD 5870 because we simply don’t have any 5870 boards in our BSN* Labs at the moment, and naturally, our HD 5970 runs off a single GPU at 775 MHz, so it is not exactly the top performer. This is due, in part, to the fact that the benchmark does not support multiple GPU cores.
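The article doesn’t name the benchmark’s API, but the single-GPU limitation is easy to picture: a dual-GPU board like the HD 5970 shows up to the runtime as two devices, and a benchmark that simply binds to the first one leaves the second idle. A hypothetical host-side sketch, assuming an OpenCL runtime is installed:

```cpp
#include <CL/cl.h>
#include <cstdio>

// A dual-GPU board such as the Radeon HD 5970 is exposed as two
// devices. A benchmark that grabs only the first GPU, as sketched
// here, exercises half the board.
int main() {
    cl_platform_id platform;
    cl_uint num_devices = 0;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices);
    std::printf("GPU devices visible: %u\n", num_devices);  // 2 on a 5970

    // Typical single-GPU benchmark behaviour: take device 0 only.
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    char name[256];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("Benchmarking on: %s\n", name);
    return 0;
}
```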

When it comes to NV100-class hardware, we should expect around 80% efficiency and around 1.2 TFLOPS, i.e. 20% faster than a heavily overclocked Larrabee. Meanwhile, Larrabee’s release date had slipped to late 2010, regardless of promises given to large OEMs who openly doubted Intel’s execution. What the response of high-level executives who now feel misled will be remains to be seen.
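As a sanity check, that estimate works out if you plug in the Fermi configuration rumored at the time; note that the 512-core count and 1.5 GHz shader clock below are assumptions from the rumor mill, not confirmed specifications.

```cpp
#include <cstdio>

// Back-of-envelope check of the "~80% efficiency, ~1.2 TFLOPS"
// figure for NV100-class hardware, using period-rumored specs.
int main() {
    const double cores         = 512.0;  // assumed scalar core count
    const double flops_per_clk = 2.0;    // one single-precision FMA
    const double clock_ghz     = 1.5;    // assumed shader clock
    const double peak  = cores * flops_per_clk * clock_ghz;  // GFLOPS
    const double sgemm = 0.80 * peak;    // the article's efficiency figure
    std::printf("peak: %.0f GFLOPS, ~80%% SGEMM: %.0f GFLOPS\n",
                peak, sgemm);            // 1536 and ~1229, i.e. ~1.2 TFLOPS
    return 0;
}
```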

Of course, everything said above relates to computational performance. When it comes to graphics, this was a very painful sequence for Intel. The core of the problem was Intel’s insistence on pushing ray tracing and criticizing rasterization at a time when all games use rasterization.

There is also a potentially large issue with infringement of nVidia IP. Unlike ATI’s IP, which is freely accessible to Intel as a consequence of the AMD-Intel cross-license agreement, Intel is currently embroiled in a fierce legal battle with nVidia, with potentially dire consequences. We are trying to get more information from both sides, and as soon as we collect the answers, we’ll run an additional story.

But as far as Larrabee’s existence on the consumer desktop goes, whether as an add-in card, integrated with the CPU on a multi-chip module, or inside the architecture itself [Haswell?], Larrabee ended up in the same place as those nVidia chipsets for Lynnfield and Nehalem – on ice.

Update #1, December 6, 01:56AM GMT – Following our story, we spoke with Tim Sweeney, the author of the Unreal Engine and one of the most hands-on programmers on the face of the planet. In a brief discussion, Tim explained to us that Larrabee had a lot of merit but was perhaps approached the wrong way:

“I see the instruction set and mixed scalar/vector programming model of Larrabee as the ultimate computing model, delivering GPU-class numeric computing performance and CPU-class programmability with an easy-to-use programming model that will ultimately crush fixed-function graphics pipelines. The model will be revolutionary whether it’s sold as an Express add-in card, an integrated graphics solution, or part of the CPU die.

To focus on Teraflops misses a larger point about programmability: Today’s GPU programming models are too limited to support large-scale software, such as a complete physics engine, or a next-generation graphics pipeline implemented in software. No quantity of Teraflops can compensate for a lack of support for dynamic dispatch, a full C++ programming model, a coherent memory space, etc.”
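To make Tim’s point concrete, here is a minimal sketch of the kind of code he is describing: heterogeneous objects stepped through virtual calls in one simulation loop, which is trivial for an x86-based design like Larrabee but had no expression in the shader programming models of 2009. All types here are hypothetical illustrations, not Unreal Engine code.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Dynamic dispatch over mixed object types in one physics-style
// loop: natural in full C++ with a coherent memory space,
// inexpressible in 2009-era GPU shader models.
struct Body {
    virtual void integrate(float dt) = 0;  // resolved at runtime
    virtual ~Body() = default;
};

struct RigidBody : Body {
    float x = 0.0f, v = 1.0f;
    void integrate(float dt) override { x += v * dt; }
};

struct ClothNode : Body {
    float y = 0.0f;
    void integrate(float dt) override { y -= 9.81f * dt * dt; }
};

int main() {
    std::vector<std::unique_ptr<Body>> world;
    world.push_back(std::make_unique<RigidBody>());
    world.push_back(std::make_unique<ClothNode>());
    for (auto& b : world)
        b->integrate(0.016f);  // one 60 Hz step across mixed types
    std::printf("stepped %zu bodies\n", world.size());
    return 0;
}
```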

As you can read for yourself, Tim was quite happy with how Larrabee looked from a developer standpoint, as it is a very flexible platform. If Intel fixes the issues in the second or third generation of the architecture and hopefully builds a second generation of silicon with graphics in mind, project Larrabee may well play a larger role. However, only time will tell what will happen with the development of discrete graphics parts from nVidia and AMD. We thank Tim for giving up part of his Saturday to speak with us.

Original Author: Theo Valich

