[wplug] video card power consumption -- performance comparison ...

Bryan J. Smith b.j.smith at ieee.org
Mon Sep 10 10:40:34 EDT 2007


On Mon, 2007-09-10 at 09:35 -0400, Patrick Wagstrom wrote:
> Actually, offloading video work to the cards is accomplished using XvMC, 
> something that Intel, NVidia, and Via Unichrome chips support.

There are different types of Xv* support.  Some do a better job of
overlaying than others.
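
To make that concrete, here's a quick sketch of how you'd ask the X
server what Xv adaptors a driver actually exposes.  This is just my own
illustration, off the top of my head and untested, using plain libXv
calls (build with: cc xvprobe.c -o xvprobe -lXv -lX11):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    unsigned int ver, rev, req, ev, err, nadaptors, i;
    XvAdaptorInfo *ai;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* Is the Xv extension there at all? */
    if (XvQueryExtension(dpy, &ver, &rev, &req, &ev, &err) != Success) {
        fprintf(stderr, "no Xv extension\n");
        return 1;
    }
    printf("Xv %u.%u\n", ver, rev);

    /* List adaptors: names, port counts, and whether each one does
     * XvImage (scaled YUV images) and/or overlay video. */
    if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy), &nadaptors, &ai)
            == Success) {
        for (i = 0; i < nadaptors; i++)
            printf("adaptor %u: %s  ports=%lu  image=%s video=%s\n",
                   i, ai[i].name, ai[i].num_ports,
                   (ai[i].type & XvImageMask) ? "yes" : "no",
                   (ai[i].type & XvVideoMask) ? "yes" : "no");
        XvFreeAdaptorInfo(ai);
    }
    XCloseDisplay(dpy);
    return 0;
}

Run that against an Intel, an nVidia, and a Unichrome box and you'll
likely see the adaptor lists (and therefore the overlay quality you can
expect) differ quite a bit.  XvMC capabilities are queried separately,
via libXvMC.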

> Also, for what it's worth, NVidia was the 3rd player to get AIGLX driver 
> support.  First was Intel because RH modified the driver themselves, 
> then the open source ATI, then about a month later came Nvidia.

I never said nVidia was first.  I just said that the second AIGLX
became commonly available in Red Hat betas, nVidia supported it with a
beta driver of their own.  And nVidia's support was excellent.  nVidia
got very much behind supporting it on the commercial front, instead of
pushing a full GLX approach.

The Intel driver is a bit lacking in areas, IMHO, including not
supporting a couple of major composite functions.  ATI didn't write
their open source drivers; the DRI project did (under sponsorship of
either the Weather Channel or the Weather Service, I cannot remember
which).

> I'll also point out that the open source ATI driver supports 3d while the 
> open source Nvidia driver does not

Actually, this is _not_ true.

First off, based on nVidia's original source code release, there was an
open source driver for the NV0x (TNT/GeForce) series, although it isn't
supported much beyond NV10/11 (GeForce2).

The ATI drivers were based on the R100 (Radeon 7500-8000), written by
DRI (under sponsorship), not ATI themselves.  They were later adapted
for R200 (Radeon 8500-9200) and people have hacked in some support for
R300+ after ATI closed up the specifications.

Secondly, saying "3D" is like saying "Firewall."  What does it do?  Is
it a "FreeD" (simple 3D framebuffer) or does it actually support
advanced functions, and to what level?
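
If you want to put a number on "what level," it's easy enough to ask
GLX directly.  Here's a quick sketch -- my own illustration, untested --
that prints the GLX version and whether the driver advertises
GLX_EXT_texture_from_pixmap, the extension that AIGLX-style compositing
leans on (build with: cc glxprobe.c -o glxprobe -lGL -lX11):

#include <stdio.h>
#include <string.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int major = 0, minor = 0;
    const char *exts;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    if (!glXQueryVersion(dpy, &major, &minor)) {
        fprintf(stderr, "no GLX\n");
        return 1;
    }
    printf("GLX %d.%d\n", major, minor);

    /* Extensions advertised for this screen (server and client both). */
    exts = glXQueryExtensionsString(dpy, DefaultScreen(dpy));
    printf("GLX_EXT_texture_from_pixmap: %s\n",
           (exts && strstr(exts, "GLX_EXT_texture_from_pixmap"))
               ? "yes" : "no");

    XCloseDisplay(dpy);
    return 0;
}

Two drivers can both claim "3D" and still give you very different
answers here, which is exactly my point.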

Intel's open source driver support is not only pathetic, but Intel
itself _hoards_ IP.  nVidia, for the longest time, could _not_ release
the AGPgart code to the kernel because it contained Intel IP.  That's
why only nVidia's closed source driver came with the AGPgart for its
nForce chipsets.  Once PCIe arrived, Intel stopped treating AGP like a
"trade secret," and the AGPgart for nForce chipsets went into the
kernel.  Intel does _not_ provide a kernel driver, which kills
performance.  Beyond that, they have _failed_ to support various
features in their i945/955 GPUs.

ATI's is tolerable for an R200 (i.e., 5 year old) technology level.  But
that's it.  It's severely lacking.

Again, saying "3D support" is like saying "firewall."  If you're using
Intel or ATI open source drivers, it's like using ipchains without
stateful filtering/tracking.

> (nouveau is getting there, but not quite yet).  Of course, I'm a
> pragmatist here, and realize that most likely I'll be using nvidia's
> closed driver.

The problem I have is that people don't differentiate between "open
standard" and "open source."  They use some examples that were "closed
standard" and then say "that's the problem with nVidia" when nVidia has
the _best_ "open standards" support of anyone.

That's the reason why 98% of the CAM/EDA world didn't go Windows/DirectX
at the turn of the century.  Because Linux offered nVidia drivers, ports
were made overnight from Irix/GLX, Solaris/GLX, etc., to Linux/GLX.
That right there is where 80% of Linux's current, corporate desktop
usage came from.

Had nVidia not offered that, I wouldn't have been supporting Linux in
the CAM/EDA space, and I'd have had far less exposure to Linux in
general.  But because nVidia offered it, and stuck with "open
standards," we still have open standards-based software.

nVidia _tried_ to release the source code to its drivers.
Unfortunately, not only Microsoft and SGI, but _Intel_ as well, barred
them from doing that, because of 3rd party IP.  If it comes down to a
"crippled" open source driver due to IP issues versus a "full featured,"
closed, but "standards compliant" driver, I'll take the latter.

People also forget that it's largely the "kernel" driver that's the
problem, and that's where the Intel IP is heaviest.  The user-space X11
driver is 100% license compatible even as closed source.  ;)

> You really should check out the newest Intel drivers.  Support for XvMC, 
> 3d acceleration good enough for AIGLX, and full xrandr 1.2 support -- 
> which means on the fly output detection.

That's still _nothing_ for 3D.  "Good enough" for AIGLX means "basic"
AIGLX support, not full 3D AIGLX feature support.

Also, there are varying levels of Xv* support.

> I saw keithp demo this last 
> year at the Boston GNOME Summit and it was hella cool.  Just plug in a 
> new monitor and X recognizes it, configures it, and extends your desktop 
> without having to restart your session.  Heck, with it, you don't need 
> that ugly xorg.conf file anymore.

Yes, I know, I've done that with Intel drivers too.  But if the scan
rate is screwed up, I have to run a 3rd party utility.
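
For what it's worth, the hotplug part of that demo is just RandR 1.2
output state underneath.  Here's a quick sketch of listing outputs and
whether a monitor is attached -- again my own untested illustration with
libXrandr, nothing from keithp's demo (build with: cc outprobe.c -o
outprobe -lXrandr -lX11):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XRRScreenResources *res;
    int i;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* Requires RandR 1.2 on the server side. */
    res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    if (!res) {
        fprintf(stderr, "RandR 1.2 not available\n");
        return 1;
    }
    for (i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        printf("%s: %s\n", out->name,
               out->connection == RR_Connected ? "connected"
                                               : "disconnected");
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}

Plug in the projector and the output flips to "connected," which is
basically what "on the fly output detection" boils down to; the
scan-rate fiddling is a separate problem.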

But it's been like that with nVidia for years now (probably closing in
on 6 years), and even their MIT-licensed 2D driver has been fairly well
supported (because it doesn't contain 3rd party IP).  I know, I had one
of the first GeForce Go chipsets (NV17) in a notebook in spring 2001.  ;)

At the presentation, you saw I was able to bring up the nVidia control
panel and do anything.  In fact, the "issue" I was having was that my
nVidia driver automagically used the maximum resolutions of both
outputs, and I just wanted to match my LCD's output to the projector (as
my LCD was much higher resolution).

> Also, afaik, the Intel cards are the 
> only ones that support true hardware rotation.  It's nice being able to 
> rotate a 1680x1050 display to portrait mode.

Huh?  I've been doing that on ATI and nVidia too.  Although I haven't
tried it on a chipset-integrated version, so maybe those don't support
it.

And I don't know what you mean by "true hardware rotation."  Maybe I'm
ignorant of something.
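
For reference, at the RandR level a driver just advertises which
rotations it supports; as far as I know there's no flag that says
whether they're done in hardware or in software.  A quick sketch of
asking (my own, untested; build with: cc rotprobe.c -o rotprobe
-lXrandr -lX11):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    Rotation current, supported;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* Pre-1.2 screen-config interface; RandR 1.2 also reports
     * per-CRTC rotations, but this works everywhere. */
    supported = XRRRotations(dpy, DefaultScreen(dpy), &current);
    printf("supported rotations: %s%s%s%s\n",
           (supported & RR_Rotate_0)   ? "0 "   : "",
           (supported & RR_Rotate_90)  ? "90 "  : "",
           (supported & RR_Rotate_180) ? "180 " : "",
           (supported & RR_Rotate_270) ? "270 " : "");
    XCloseDisplay(dpy);
    return 0;
}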

> Yup, it is overkill.  But my current card 6600 is fanless and didn't 
> cost much, which made it a good candidate for my MythTV box.  The fact 
> of the matter is that GeForce 4mx will run high-def for MythTV just 
> fine.  However, just try finding one of those in a PCI-Express interface.
> The integrated 8000 series GPUs aren't out yet.  In fact, there still 
> are very few boards with the 7050 integrated.  Anyway, performance is not
> an issue, but power consumption and heat are the issue.  There used to 
> be a fanless 5000 series PCI-E nvidia card that some folks had, 
> unfortunately, I can't find it anymore.  It seems like that may be 
> ideal.  However, there is always the lure of Quake 4 in the living room.
> Anyway, while that's a good regurgitation on the current state of video 
> cards and drivers, it doesn't get me any closer to getting information 
> on power consumption.

True.

I was just pointing out I've done video and overlay playback on various
chipsets, and nVidia kills everyone on Linux.  And that's before we
touch on 3D.

To AMD's credit, they just got new drivers out for Linux that are
finally "serious."  And it will be interesting to see what their
specification release (the first since R200 series, which was far from
complete) does.

-- Bryan

P.S.  I'm obviously biased because I've been running CAM/EDA solutions
on Linux for a good 10 years.  Once nVidia's drivers hit about 8 years
ago, they were a godsend.  They also headed off many ports that were
moving towards commodity Windows/DirectX, as Linux was also a commodity
solution -- and supported GLX _immediately_ (no 3D porting required ;).
People forget what nVidia has done for Linux, and continues to do
(especially on their chipsets).


-- 
Bryan J. Smith         Professional, Technical Annoyance
mailto:b.j.smith at ieee.org   http://thebs413.blogspot.com
--------------------------------------------------------
        Fission Power:  An Inconvenient Solution



