Well, it’s finally done. AMD has digested ATI, or at least to the outside world there will be less of a bulge in its belly. While reading about new Radeon benchmarks, I noticed the announcement that the ATI brand is no more, and everything will be just AMD from now on.
I don’t know if people actually care that much. Unlike the marketing department, which has to justify its existence by throwing around “proven” ideas to rename this into that, or to change a color or a font face of some logo at a cost of a few million bucks (instead of improving, say, drivers, which still suck pretty badly), regular people see the facts. They are buying a high-performance card from “ATI”, be it labeled ATI, AMD, BlahblahVideo or anything else. So I’d say “save your dollars”, because this probably just makes NVidia smile.
What else probably makes NVidia smile (at least for now) is the idea of embedded graphics. While they seemingly had a falling-out with Intel, and were thus cut off from any chance of producing integrated chipsets for modern Intel CPUs, NVidia seems to be doing okay with ION I/II (despite the second platform’s drawbacks, being slower in some cases) and quite well with discrete graphics.
Embedded GPUs are perfect for business: underpowered graphics that is a couple of steps behind discrete models, a bit cheaper (never underestimate the desire to save a buck at the expense of usability), and easier to service. Meanwhile, gaming enthusiasts would continue to enjoy the speed and feature advantage of discrete cards.
Although I may be wrong. The real idea is to convince everyone that graphics and CPU on one chip is better, and then force gaming enthusiasts to swap out the whole silicon monster, instead of just throwing the not-quite-the-latest video card into grandma’s computer. If the idea that the computer industry develops in a spiral is right, next we’ll see more and more memory integrated into the same chip too. That way, when a new game demands a video card with DirectX 12, Bézier curve acceleration (No More Triangular Faces on your game characters!), built-in sub-surface scattering, a vector-displacement distortion engine, plus at least 8GB of VRAM for textures, you’ll have to throw away your previous AMD Viginthon Plus XX and plug in the next revision.
That is, until Intel and NVidia announce the new revolutionary idea of an even more powerful dedicated holo-video processor and physics processor, connected by a CompuQube bus, using all sides of the chip, inserted snugly into a cube-like socket. And then the rage will be to have newer, faster graphics that lives by itself, uses its own memory, etc., etc.