This is not a bad product. It does have all the nice Tonga features (especially FreeSync) and good tessellation performance, for whatever that's worth. But the price is a little higher than what would make it a great deal. At $190, for example, this card would be the best card in mid-range territory, in my opinion. We'll have to see how it plays out, but I suspect this card will find its place in a few months, after a price drop.
Yeah, it's like every AMD GPU...overpriced for what it is. They need to drop prices across the entire line by about 15% just to become competitive. The OC versions of the 380X are selling for only dollars less than some GTX 970s, which use less power, are more efficient, are around 30% faster, and arguably have better drivers and compatibility.
To my understanding, the most significant reason for the decreased power consumption of Maxwell 2 cards (the 950/960/970, etc.) was the removal of certain hardware from the chips themselves, specifically pertaining to double precision. Nvidia seems to recommend the Titan X for single precision but the Titan Z for DP workloads. I bring this up because so many criticize AMD for being "inefficient" in terms of power consumption, but if AMD did the same thing, would they not see similar results? Or am I simply wrong in my assumption? I believe AMD may not be able to do this currently because of the way their hardware and architecture are configured for GCN, but I may be wrong about that as well, since I believe their 32-bit and 64-bit "blocks" are "coupled" together. Obviously I am not a chip designer or any sort of expert in this area, so please forgive my lack of knowledge; that's exactly why I'm asking, in hopes that someone with greater knowledge on the subject can educate me and the many others interested.
It's more complex than that (AMD has used high-density libraries and has very aggressively clocked its GPUs), but yes, reducing DP performance could improve performance per watt. I will note, however, that this was done on the Fury X; it's just that it was bottlenecked elsewhere.
At the end of the day, is AMD making GPU's for gaming or GPU's for floating point\double precision professional applications?
The answer is both. The problem is, they have multiple mainstream architectures with multiple GPU designs\capabilities in each. Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused more on professional applications than it should have.
NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64\Double Precision, while the others eliminate the FP64 die-space for more practical, mainstream applications.
@Samus: "At the end of the day, is AMD making GPU's for gaming or GPU's for floating point\double precision professional applications?"
Both
@Samus: "The answer is both."
$#1+
@Samus: " Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused too much on professional applications than they should have."
They tried the gaming only route with the 6xxx series. They went back to compute oriented in the 7xxx series. Which of these had more success for them?
@Samus: "NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64\Double Precision, while the others eliminate the FP64 die-space for more practical, mainstream applications."
This would make a lot of sense save for one major issue. AMD wants the compute capability in their graphics cards to support HSA. They need most of the market to be HSA compatible to incentivize developers to make applications that use it.
HSA and FP64 capacity have nothing in common. People constantly confuse GPGPU capability with FP64 support. Nvidia GPUs have been perfectly GPGPU capable, and in fact they are even better than AMD's for consumer calculations (FP32). I would like you to name a single GPGPU application that you can use at home that makes use of 64-bit math.
You asked a question that's already answered in the post you replied to. AMD wants to influence the market to support FP64 compute because it's ultimately more capable. The lack of consumer programs using FP64 compute is exactly why AMD is trying so hard to release cards capable of it: to influence the market.
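For anyone wondering what the FP32/FP64 distinction actually means in practice, here's a quick illustration (a minimal pure-Python sketch, not tied to any GPU API): single precision simply runs out of digits much sooner than double. Python's `float` is IEEE double precision, and the `struct` round-trip emulates rounding to single precision.

```python
import struct

def to_f32(x):
    # Round a Python float (64-bit) to IEEE single precision and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# float32 has ~7 significant decimal digits; float64 has ~16.
# An increment of 1e-8 is below float32's resolution near 1.0, so it vanishes.
x32 = to_f32(to_f32(1.0) + to_f32(1e-8))
x64 = 1.0 + 1e-8  # plain Python floats are double precision

print(x32 == 1.0)  # True: the increment was rounded away in single precision
print(x64 == 1.0)  # False: double precision still resolves it
```

That lost increment is the kind of error that compounds in long scientific computations, which is why professional workloads pay for FP64 hardware while games get by fine on FP32.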
It's not just DP, it's also a lot of bits that go towards enabling HSA. Stuff for memory mapping, async compute etc. AMD is not just building a gaming GPU, they want something that plays well in compute contexts. Nvidia is only being competitive thanks to the CUDA dominance they have built and their aggressive driver tuning for pro applications.
@FriendlyUser: "It's not just DP, it's also a lot of bits that go towards enabling HSA. Stuff for memory mapping, async compute etc. AMD is not just building a gaming GPU, they want something that plays well in compute contexts."
This. AMD has a vision where GPU's are far more important to compute workloads than they are now. Their end goal is still fusion. They want the graphics functions to be integrated into the CPU so completely that you can't draw a circle around it and you access it with CPU commands. When this happens, they believe that they'll be able to leverage the superior graphics on their APUs to close the performance gap with Intel's CPU compute capabilities. If Intel releases better GPU compute, they can still lean on discrete cards.
Their problem is that there isn't a lot of buy-in to HSA. In general, there isn't a lot of buy-in to GPU compute on the desktop. Sure there are a few standouts and more than a few professional applications, but nothing making the average non-gaming user start wishing for a discrete graphics card. Still, they have to include the HSA (including DP compute) capabilities in their graphics cards if they ever expect it to take off.
HSA in and of itself is a great concept, and eventually I expect it will gain favor and come to market (perhaps by another name). However, it may be ARM chip manufacturers and phones/tablets that gain the most benefit from it. There are already some ARM manufacturers who have announced plans to build chips that are HSA compatible. If HSA does get market penetration in phones/tablets first, as it looks like it may, I have to wonder where all the innovative PC programmers went that they couldn't think of a good use for it with a several-year head start.
The HSA Foundation was partially founded by ARM, which means they are already working on it (but as you said, there isn't much motivation to make HSA-enabled apps). AMD is the only high-profile, headline-grabbing member, so they tend to get the most press thanks to clickbait articles. And a lot (if not all) of Nvidia's efficiency improvements do come from lower transistor density (also the main reason they can claim such a low TDP, since the chip has a larger surface area with which to dissipate the same amount of heat that the same chip built on AMD's high-density libraries would produce), improvements to the memory subsystem, and reductions in DP capability.
As everyone has noted, the cards are uncomfortably close to the higher tier (390 and 970). So the 380X is not overpriced with respect to the competition from Nvidia, but with respect to the 390. The jump in performance is so great that we should either hope the 390 drops to $300 (practically eliminating the 380X) or that the 380X completely dominates the sub-$200 territory.
@Samus: "They need to drop the prices across the entire line about 15% just to become competitive."
That wouldn't fix the biggest pricing problem shown in this review. The 380X is priced too closely to the 390 given the performance difference. Drop them both by 15% and the 380X is still priced too closely to the 390. I'll leave the rest of you to argue performance vs premium cooler value on the high end and 390/390X vs GTX970/GTX980 performance per dollar, but I submit that a flat 15% drop is too simple an answer to the problem due to competition within their own lineup.
At this point, NVIDIA or AMD, I'm not sure I would get anything other than an ASUS cooling system. I have the STRIX version of the GTX 970 and it really is fantastic.
It depends on what you need. Stock blower coolers keep hot air out of the case, so for small form factor builds you're not going to want Asus's coolers, since they dump the hot air back into the case.
I have a mATX case with water cooling and internal padding all around to keep the noise down, and my ASUS Strix GTX 960 is not making a sound; the temp in the case does not go above 50-52 degrees Celsius even after hours of playing. The problem with GPUs that suck air in from the rear and blow the same air out is that they have to generate all of the airflow themselves, which always gets really noisy compared to using the air already passing through a case.
I had the Asus GTX970 Turbo and it had the grindiest ball bearing fan I've ever heard. It brought me back to the Athlon's YS Tech and Delta days. The "Titan" cooler on my old GTX770 was virtually silent in comparison.
So Asus has their duds, but the Strix seems to be a great cooler if you don't need a blower...though many of us do. In a bit of a snub toward Asus, I replaced their Turbo with a PNY 970 (also a blower); the PNY feels cheaper, but cools better and makes less noise.
Don't get me wrong here, I really like ASUS stuff - but they have let me down several times on cheapo video card cooling systems. Nasty sleeve bearing fans on half-height Radeon 6580s that vibrate then seize, which was really cheeky considering the box had a "high quality fan omg!!" thing as part of its marketing material.
Ended up replacing the half-height card with a passively cooled one - and a nearby 80 mm case fan - so I wouldn't have a crappy onboard fan, since every other card on the market seemed to be carrying the same stupid POS fan. I couldn't even spend more to get a better one!
EVGA is great, but they don't make Radeon cards. It's important to point out, as well, that EVGA is actually NOT NVidia's OEM partner. PNY is. PNY makes a ton of cards based off NVidia's reference designs, which I think are the best. The Titan cooler used on reference 770/780/970/980 GPU's, specifically the vapor-chamber variant, is unsurpassed by any other partners'. That's why almost every partner makes at least one variant of these GPU's with the Titan cooler. They don't make many, because the rumor is NVidia charges $30 for the vapor chamber cooler and it is more expensive to manufacture the cards because of the installation (GPU binding) technique.
But EVGA has probably the best, easiest to deal with warranty. Unfortunately I've had to use it.
From what I understand, Sapphire started with vapor chamber coolers on a few of their Radeons 6 years ago. Interesting that Nvidia went that route. I'd never heard of any other company doing it before and didn't know they had that on their high-end coolers.
Fair enough, I am generalizing based on an observation pool of 2, which I shouldn't do, but I really enjoy having a silent GPU that doesn't go over 65C! It seems that cooling technology has progressed across the board, which is great news for everyone.
I agree they have a nice cooling system. They may even have the best at the moment. That said, I do believe they have some good competition in this area. MSI impressed me with their Twin Frozr design back before Asus had a DirectCU design out, and they've been constantly improving since then. Sapphire (much as I dislike them) released some very appealing vapor chamber designs. EVGA had pretty decent blower coolers, but nothing really standout until the second revision of their non-blower design (ACX 2.0). The ACX 2.0+ is copper heaven. I don't really favor designs that just throw another fan at the problem without giving much thought to the heatsink, like Gigabyte's Windforce cards. I feel like MSI set the bar with their original Twin Frozr cards, and since then MSI, Asus, Sapphire (sigh), and now EVGA have been vying for dominance in the cooling department.
@jeffrey (Thursday, July 02, 2015): "Ryan Smith, any update on GTX 960?"
@Ryan Smith (Thursday, July 02, 2015): "As soon as Fury is out of the way."
Looks like you missed a comment, because there was one at one point that said that there will not be a 960 review. I don't have a direct link to it because I'm not obsessed with the topic, but yeah, it was said.
You're right, I don't know when that was said. I waited months for that review, because I trust the unbiased reviews here and wanted to buy a new graphics card based on info I could count on. And I just really don't like when you keep promising things, stringing your readers along, and then never deliver. I don't care if the review gets posted; it's just how it was handled. I was raised to believe a person's word means something (he told me personally on Twitter that he would do it). And I'm sure you could go on about how this is the internet, etc., but when you've been reading a site since its start, its content means something, at least to me. I guess it's why we're all here to some extent. If you go back on your word, then you should at least let people know a bit more officially than in the comment section of, I would assume, another article.
The problem with reviewing the GTX 960 is that the drivers have been optimized around improving its performance all year, and every single card performs so differently. The GTX 960 overclocks incredibly well; some people hit 1400MHz if the board has the right power configuration. This is why you hear people talk about the GTX 960 completely trumping the R9 28x/38x cards, when in reality both GPUs trade blows in various games at stock.
But when overclocked, the GTX 960 is a bit faster than an overclocked R9 28x/38x. I think this causes a lot of reviewers to tip-toe around these cards. And when you consider that a GTX 960 with 4GB is over $200 and a GTX 970 with 4GB (er, technically 3.5GB) is $260-$280 after rebate, it becomes muddled.
I own the GTX 960. Mine hits 1450MHz easily. I think they all do because temps and power consumption barely budge. I also own the R9 380 in 4GB configuration.
Drivers have barely improved the 960's performance, and the 380 is faster in almost every game I own. Overclocking gets the 960 to parity, or slightly ahead, but only to an extent where you can't really tell the difference.
Of the two I'd take the 960 simply due to the efficiency, but driver updates have really made no difference to anything other than the games they were optimised for. Library titles have seen no improvement.
I also waited a while, checking the site often. I think I spotted some 960 benchmarks slipped into some analysis article on here many months later, but no post to say they'd been done, let alone a full review.
It's not just the graphics section either. I really don't get why there are so many announcements and so few reviews on here these days. It's a shame because reviews like this one here are the reason I like AnandTech so much.
I wanted the GTX 960 reviewed because it seems to be pretty good for an HTPC, since the power consumption is lower than most chips. I know the card is good, but I wanted an AnandTech review :)
Power difference is irrelevant in desktop PCs. A 75W difference over 20 hours of usage at full load is ~20p in the UK.
A thousand hours of gaming (a year's worth?) costs an extra £10.
And that doesn't factor in that some of this usage is in the winter months, when the extra heat generated reduces the amount of heat required from other sources, offsetting other heating costs.
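The back-of-the-envelope math above works out like this (assuming a tariff of roughly £0.13/kWh, in line with UK rates at the time; adjust for your own tariff):

```python
# Running-cost estimate for a 75 W power-draw difference between two cards.
extra_watts = 75
price_per_kwh = 0.13  # GBP; assumed UK tariff, not an exact figure

def extra_cost(hours):
    # Energy in kWh times price per kWh.
    kwh = extra_watts * hours / 1000.0
    return kwh * price_per_kwh

print(f"20 hours:   £{extra_cost(20):.2f}")    # ~£0.20, the "20p" above
print(f"1000 hours: £{extra_cost(1000):.2f}")  # ~£9.75, call it a tenner
```

Even doubling the tariff only gets you to ~£20 a year, which supports the point: for a gaming desktop, the power delta is noise next to the purchase price.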
Cards idle so low now that it doesn't make much of a difference. Even leaving my 4-year-old monster on 24/7 is costing me maybe $10-15/year, and with the improved idle power draw on newer cards (I am running a 570), it would probably cut that in half.
Not to say that you should go crazy and leave things on all the time because it 'does not matter'... but unless you are running something with a 24/7 load like a render box or a server, power cost is not a true consideration. Heat generation from an inefficient card may be a consideration, but not the price of the power used.
Too close in price to the 970/390. Anyone spending that much will stretch the extra few $ for the much faster card. Price/performance isn't good enough - needs to be $200.
Except when you literally can't afford those extra $ - IE in the UK, the 380X starts around £190, the 970 starts around £250 (using Overclockers.co.uk as a reference).
The cost difference there, if all you're doing is upgrading a GPU, is significant enough where you can't really say 'ooh, it's only a little more' - if we were talking £190 and £220, that'd be different.
Likewise, if you're configuring a whole system, and aren't an *avid* gamer (IE, as a survey of one, I mostly dick about in Serious Sam 3 and Metro 2033/Last Light - both of which are far better with a chunky GPU if you like your shiny goodness) then the £60 difference is better spent elsewhere, like RAM, storage, or a larger monitor.
Horses for courses, but if you're trying to eke out as much overall value as possible for a machine without horribly compromising on performance, AMD make a hell of a lot of sense.
Me? I'm waiting for Fiji to come down below £200. And, you know, to get a new job. Which'd probably help, natch.
Re: "Finally, we’re also unable to include compute benchmarks for R9 380X at reference clocks, as AMD’s drivers do not honor underclocking options with OpenCL programs."
Would someone please be so kind as to explain "underclocking options with OpenCL programs" to me please? Why do the cards need to be underclocked when running OpenCL programs?
The card we received is the STRIX R9 380X OC, which comes with a factory overclock of 1030MHz, versus 970MHz for a reference card. We underclock this to get reference performance, however underclocking doesn't work with OpenCL programs.
My guess is that these cards are factory OC'd, which means that they would need to be underclocked to run an apples-to-apples comparison at true 'stock' settings.
Just for you, I tested it using my i3-2100/HD7750/W10 test mule. VSync globally disabled in CCC, VSync disabled in Dota 2, Frame Target set to 60fps. Steam overlay shows 60fps and I see no signs of tearing or stuttering. To my knowledge, it never stopped working.
Hmm... it should have some tearing, because it doesn't really sync with the monitor anyway, mate. Can you set it to 65, 70, 75? Mine doesn't work in LoL; I set it to 60, but it always fires up to over 150fps.
LoL does have its own fps limiter, so perhaps that's causing a mix-up in the software. Also, LoL might be running in fake fullscreen mode whereas the catalyst fps limiter specifies it will "Reduce power consumption by running full-screen applications at reduced frame rates." I'm gonna go try a round of LoL now because you have me curious.
Tonga is an epic disaster. It is less than 10% more efficient than Tahiti in terms of performance per watt, and in terms of performance per transistor (fps per mm^2) it appears to be actually worse. Meanwhile, Nvidia released Maxwell, which outperforms Kepler on both of these metrics not by some paltry 10% or less, but by a very wide margin.
The whole GCN architecture is a disaster. With the TeraScale architecture, AMD could fight with smaller dies and fewer watts for slightly less performance. With GCN, AMD has to compete using larger, power-hungry dies that have pushed the graphics division into the red too, whereas with the older TeraScale it could at least stay on par. GCN is simply not up to the level of the competition's architecture. FP64 presence is not the problem, as AMD has kept reducing its influence with every GCN step (starting from a 1/4 FP rate and ending at 1/24) with no real results in power consumption terms. They could spare a few mm^2 on the die, but they are way behind on memory compression (I can't really believe they never thought about that), and their buses are way too big, expensive, and power hungry. The whole architecture is a failure. And DX12 is not going to solve anything: even if they ever raise their performance 10% over the competition, they are still way behind in efficiency, both in terms of watts and die size.
Nope. V-sync is off, and I can vouch that the instantaneous framerate does go over 60fps. That's just an amusing case of cards at this performance segment coming very close to averaging 60fps.
For the price matchup table on the first page, the 4GB 960 starts at $220 vs $180 for the 2GB model. Nvidia might not be splitting them apart by model number, but pricewise it has cards in both slots.
Current cheapest 4gb GTX 960 on newegg (USA) is $180 w/o rebates. Next cheapest is $185 w/ additional $20 MIR. Next cheapest is $199 w/o rebate. Next cheapest is $210.
There are plenty of 4gb GTX 960 cards for much less than $220.
Irrelevant: 1. Most 4K TVs that can do 4K60 4:4:4 over HDMI 2.0 can also accept 1080p120 native input. 2. Games like Dota 2, LoL, and nearly all games prior to 2013 can be played at 4K no problem by GPUs like this. 3. 4K60 video (YouTube, GameStream, etc.).
This card is no better, and probably actually worse, than my three-year-old GTX 770 4GB, and at best is equal to a GTX 960, which can easily be had for under $200.
The 380X is often 15% faster than the 960, and sometimes 30% faster... for average FPS. When it comes to the 99th percentile or minimum framerates there's just no comparison, 380x lays the smackdown on the 960.
Sometimes the 960 can do quite well, but it usually loses by quite a bit.
Your 770 is slower than the 960 in some games, a bit faster in others. It is not as fast as the 380X, which has approximately the same performance as AMD's old 7970, a nearly four-year-old card.
My 770 is at 1400 MHz core / 7940 MHz memory; trust me, neither the GTX 960 or this 380x are beating me and I'm not digging into my wallet until Pascal comes out. It was tough when the GTX 980 Ti was released, but I'm sticking to my guns.
At 1080p, which is where the 960 and 380x should be competing (because if you buy either of these for 1440p+, you're a moron), if they had gotten a 960 4GB for comparison, there wouldn't be much difference. You can get a 960 4GB, which is a one year old card, for less than $200 and it's essentially just as good at stock. The few frames the 380x wins in this review is mostly due to the VRAM limit on the 960 2GB.
Plus you can overclock a 960 to insane levels, so why spend $229 on the 380x when you can spend $180 on a GTX 960 4GB and overclock it if you want more speed?
The sad thing is how you all make comparisons with this kind of technology. GPUs scale well when made fat, so raw "performance" is really a moot point in comparisons. It's like saying the 750 Ti is the same as a GTX 480 because they perform similarly. This card (like all of the new AMD 300 series) is simply a fat, bloated, clocked-at-its-limit GPU sold below cost to compete with smaller, more efficient architectures from the competition (which sells them at premium prices). This 380X is a complete failure at helping AMD advance in its fight. The competition has done marvelous things meanwhile: they came out with a GPU, GM206, which is half of GK104 in terms of size and power consumption yet delivers the same performance. That is the progress the competition made while AMD went from GCN 1.0 to GCN 1.2, which has only a few tricks and hacks but nothing really capable of bringing that already obsolete architecture up to the new level of competition. Sorry, but if you are excited by this kind of "evolution" and do not understand where it has brought "your favorite company," you really deserve to stay a generation behind in terms of innovation. And be happy with this Tonga, which will be sold for a few bucks in a few months and be completely forgotten when Pascal annihilates it in its first iteration.
Comparing a 2.5-year-old card that cost $450-500 against a $230 card... and complaining about whether AMD is even trying... your bias is showing, sir. You shouldn't feel the need to upgrade yet, in my opinion; unless, of course, your card is being crippled by Nvidia's drivers, whoops!
GTX 770 launched at $399, not $450. Interestingly, the GTX 770 was a smaller chip and drew less power. So, tossing the consumer economics aside, SpartyOn raises a good point.
The power consumption of the 380X under load is lower with FurMark than with Crysis 3, while the opposite is true for the GTX 960. Any thoughts on that?
So after all this time, this graphics card has the same performance as the now two-year-old GTX 760? Right... I'm beginning to think the 760 was the best purchase of my life.
Same performance? You may need to re-check benchmarks across the web. The R9 380X is more than 40% faster than a GTX 760 2GB. TPU has it 43% faster at 1080p and 45% faster at 1440p: http://www.techpowerup.com/reviews/ASUS/R9_380X_St...
If you only have a 2GB version of the 760, you are also reducing texture quality in many games like Titanfall, Shadow of Mordor and have choppiness in Watch Dogs, AC Unity, Black Ops 3, and simply cannot even enable highest textures in some games like Wolfenstein NWO.
R9 380X isn't anything special when we've seen GTX970/290/290X/390 for $250-270 but it beats your card easily by 35-40%.
The 380X was a pointless launch. For $50 less you can just get the 380, which is only 10% slower; or for $50 more, the 390, which blows the 380X away. This card targets a very narrow range and wasn't really needed, imo.
The 380X may come with extra features over the 7970, but has TrueAudio ever truly been tested? Its addition was meant to help reduce CPU usage, and it would be a shame if it went unused in favour of the motherboard sound.
Ah, I'm still undecided what to do with respect to replacing my SLI GTX 680s. I'm in Canada so we're getting murdered by the exchange rate (GTX 970 is $380-$450, R9 390 is $450, the first R9 380X cards are $330...).
Guess I'll just hold on to my 680s a while longer.
I don't think the 380X is a bad card, by any means. It just needs to replace the 380 4GB in that price slot, and/or the OEMs should bin the 380 with only 2GB of RAM and leave 4GB for the 380X alone. Currently, although understandable given the "novelty" factor, I saw the Asus 380X Strix OC in my country at the same price as some (discounted or not) GTX 970 or R9 390 cards (including an Asus Strix 970 OC :-) ), which is hilarious. Also about 50% more than the Sapphire Nitro 380 4GB that I bought on Black Friday. Or AMD could simply replace the 380 with the 380X if the yields are good enough...
I'm not sure I agree. I'm still rocking an HD 7750 (perhaps "wheeling," as in "wheelchair," might be more appropriate at this point). It was mid-tier when I got it, but now it's not really sufficient. I'd like to play games at 1920x1080, but I don't really care; 1360x768 is good enough for me. But my current card can't even deliver that at 60 fps any longer (I actually think the card has deteriorated, because I played Dragon Age: Inquisition last year on decent settings, and now I can't run it at 15 fps on Low at 1024x768). Anyway, I imagine most people using mid-tier cards are more like me. We don't purchase an upgrade as soon as one is available; we want to get the best mid-tier card we can, which involves waiting. If I'd replaced my 7750 two months ago, today I would have an inferior mid-tier card, which will certainly matter a year from now when I'm trying to get just a bit more performance out of it. 10% doesn't matter today, but 18 months from now that 10% will mean the difference between playable and unplayable. The rules of high-end gaming aren't applicable in the mid-tier range, because we don't buy an upgrade simply because it's an upgrade.
More proof that the graphics industry badly needs a new manufacturing process. The possibilities of what can be achieved at 28nm (GPU-wise) seem to be exhausted. It will get interesting with the new process + HBM2; until then, it's going to be still waters.
Unless someone was able to snag the 380X at a significant discount, I would have trouble justifying not spending the extra money to jump to the 390. That really looks like it would be $60 well spent.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
101 Comments
Back to Article
FriendlyUser - Monday, November 23, 2015 - link
This is not a bad product. It does have all the nice Tonga features (especially FreeSync) and good tesselation performance, whatever this is worth. But the price is a little bit higher than what would make a great deal. At $190, for example, this card woule be the best card in middle territory, in my opinion. We'll have to see how it plays, but I suspect this card will probably find its place in a few months and after a price drop.Samus - Monday, November 23, 2015 - link
Yeah, it's like every AMD GPU...overpriced for what it is. They need to drop the prices across the entire line about 15% just to become competitive. The OC versions of the 380X is selling for dollars less than some GTX970's, which use less power, are more efficient, are around 30% faster, and you could argue have better drivers and compatibility.SunnyNW - Monday, November 23, 2015 - link
To my understanding, the most significant reason for the decreased power consumption of Maxwell 2 cards ( the 950-60-70 etc.) was due to the lack of certain hardware in the chips themselves specifically pertaining to double precision. Nvidia seems to recommend Titan X for single precision but Titan Z for DP workloads. I bring this up because so many criticize AMD for being "inefficient" in terms of power consumption but if AMD did the same thing would they not see similar results? Or am I simply wrong in my assumption? I do believe AMD may not be able to do this currently due to the way their hardware and architecture is configured for GCN but I may be wrong about that as well, since I believe their 32 bit and 64 bit "blocks" are "coupled" together. Obviously I am not a chip designer or any sort of expert in this area so please forgive my lack of total knowledge and therefore the reason for me asking in hopes of someone with greater knowledge on the subject educating myself and the many others interested.CrazyElf - Monday, November 23, 2015 - link
It's more complex than that (AMD has used high density libraries and has very aggressively clocked its GPUs), but yes reducing DP performance could improve performance per watt. I will note however that was done on the Fury X; it's just that it was bottlenecked elsewhere.Samus - Tuesday, November 24, 2015 - link
At the end of the day, is AMD making GPU's for gaming or GPU's for floating point\double precision professional applications?The answer is both. The problem is, they have multiple mainstream architectures with multiple GPU designs\capabilities in each. Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused too much on professional applications than they should have.
NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64\Double Precision, while the others eliminate the FP64 die-space for more practical, mainstream applications.
BurntMyBacon - Tuesday, November 24, 2015 - link
@Samus:: "At the end of the day, is AMD making GPU's for gaming or GPU's for floating point\double precision professional applications?"Both
@Samus: "The answer is both."
$#1+
@Samus: " Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused too much on professional applications than they should have."
They tried the gaming only route with the 6xxx series. They went back to compute oriented in the 7xxx series. Which of these had more success for them?
@Samus: "NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64\Double Precision, while the others eliminate the FP64 die-space for more practical, mainstream applications."
This would make a lot of sense save for one major issue. AMD wants the compute capability in their graphics cards to support HSA. They need most of the market to be HSA compatible to incentivize developers to make applications that use it.
CiccioB - Tuesday, November 24, 2015 - link
HSA and DP64 capacity have nothing in common. People constantly confuse GPGPU capability with DP64 support.
Nvidia GPUs have been perfectly GPGPU capable, and in fact they are even better than AMD's for consumer calculations (FP32).
I would like you to name a single GPGPU application that you can use at home that makes use of 64bit math.
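To put rough numbers on the FP32/FP64 split being argued here, a back-of-the-envelope sketch in Python. The shader count and clock approximate a reference 380X (2048 stream processors at 970MHz, per the review); the DP ratios are illustrative assumptions, not spec-sheet values:

```python
# Peak throughput model: shaders x 2 FLOPs/clock (FMA) x clock (GHz) = GFLOPS.
# The DP ratios below are assumed for illustration; real parts vary by SKU.
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_ghz

fp32 = peak_gflops(2048, 0.97)  # ~3973 GFLOPS single precision
for label, divisor in [("1/4-rate DP", 4), ("1/16-rate DP", 16), ("1/24-rate DP", 24)]:
    print(f"{label}: {fp32 / divisor:.0f} GFLOPS double precision")
```

The point stands either way: consumer (FP32) GPGPU throughput is unaffected by how far DP is cut down.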
Rexolaboy - Sunday, January 3, 2016 - link
You asked a question that's been answered in the post you replied to. AMD wants to influence the market to support FP64 compute because it's ultimately more capable. No consumer programs using FP64 compute is exactly why AMD is trying so hard to release cards capable of it: to influence the market.
FriendlyUser - Tuesday, November 24, 2015 - link
It's not just DP, it's also a lot of bits that go towards enabling HSA: stuff for memory mapping, async compute, etc. AMD is not just building a gaming GPU; they want something that plays well in compute contexts. Nvidia is only being competitive thanks to the CUDA dominance they have built and their aggressive driver tuning for pro applications.
BurntMyBacon - Tuesday, November 24, 2015 - link
@FriendlyUser: "It's not just DP, it's also a lot of bits that go towards enabling HSA. Stuff for memory mapping, async compute etc. AMD is not just building a gaming GPU, they want something that plays well in compute contexts."

This. AMD has a vision where GPU's are far more important to compute workloads than they are now. Their end goal is still fusion. They want the graphics functions to be integrated into the CPU so completely that you can't draw a circle around it and you access it with CPU commands. When this happens, they believe that they'll be able to leverage the superior graphics on their APUs to close the performance gap with Intel's CPU compute capabilities. If Intel releases better GPU compute, they can still lean on discrete cards.
Their problem is that there isn't a lot of buy-in to HSA. In general, there isn't a lot of buy-in to GPU compute on the desktop. Sure there are a few standouts and more than a few professional applications, but nothing making the average non-gaming user start wishing for a discrete graphics card. Still, they have to include the HSA (including DP compute) capabilities in their graphics cards if they ever expect it to take off.
HSA in and of itself is a great concept, and eventually I expect it will gain favor and come to market (perhaps by another name). However, it may be ARM chip manufacturers and phones/tablets that gain the most benefit from it. There are already some ARM manufacturers who have announced plans to build chips that are HSA compatible. If HSA does get market penetration in phones/tablets first, as it looks like may happen, I have to wonder where all the innovative PC programmers went that they couldn't think of a good use for it with several years' head start.
Asomething - Tuesday, November 24, 2015 - link
The HSA Foundation was partially founded by ARM, which means they are already working on it (but as you said, there isn't much motivation to make HSA-enabled apps). AMD is the only high-profile, headline-grabbing member of it, so they tend to get the most press because of clickbait articles. And a lot (if not all) of Nvidia's efficiency improvements do come from the lower transistor density (also the main reason they can say their TDP is so low, since the chip has a larger surface area with which to dissipate the same amount of heat as the same chip made using AMD's high-density libraries would have), improvements to the memory, and reductions in DP capabilities.
tamalero - Tuesday, November 24, 2015 - link
Can anyone explain to me why everyone says the new GPUs are overpriced? Their price points seem to match the performance of the Nvidia cards.
The table on the first page shows it clearly.
Even the review shows the 970 and the AMD 390 trading blows at the same price point.
So, what did I miss? Why do fanboys suddenly demand a 15% reduction to "become competitive"?
FriendlyUser - Tuesday, November 24, 2015 - link
As everyone has noted, the card is uncomfortably close to the higher tier (390 and 970). So the 380X is not overpriced with respect to the competition from Nvidia, but with respect to the 390. The jump in performance is so great that we should either hope the 390 goes to $300 (practically eliminating the 380X) or that the 380X completely dominates the sub-$200 territory.

Anyway, overall it's a very good product.
just4U - Friday, November 27, 2015 - link
Well.. here in Canada that's not quite the case. A 380/960 with 4GB of memory sells for $300ish, the 380X for $330. A 970 runs you $450-500 and a 390 $430+. No way are they priced similar to the 970/390.
BurntMyBacon - Tuesday, November 24, 2015 - link
@Samus: "They need to drop the prices across the entire line about 15% just to become competitive."

That wouldn't fix the biggest pricing problem shown in this review. The 380X is priced too closely to the 390 given the performance difference. Drop them both by 15% and the 380X is still priced too closely to the 390. I'll leave the rest of you to argue performance vs premium cooler value on the high end and 390/390X vs GTX970/GTX980 performance per dollar, but I submit that a flat 15% drop is too simple an answer to the problem, due to competition within their own lineup.
Azix - Wednesday, January 13, 2016 - link
But people were fine with the 960 at the same price...
zeeBomb - Monday, November 23, 2015 - link
Ryan Smith blessed us with a great graphics card review.
maecenas - Monday, November 23, 2015 - link
At this point, NVIDIA or AMD, I'm not sure I would get anything other than an ASUS cooling system. I have the STRIX version of the GTX 970 and it really is fantastic.
jasonelmore - Monday, November 23, 2015 - link
It depends on what you need. The stock blower coolers keep hot air out of the case, so for small form factor builds, you're not going to want Asus's coolers, since they dump the hot air back into the case.
theduckofdeath - Tuesday, December 1, 2015 - link
I have a mATX case with water cooling and internal padding all around to keep the noise down, and my ASUS Strix GTX 960 is not making a sound, and the temp in the case does not go above 50-52 degrees Celsius even after hours of playing. The problem with GPUs sucking air in from the rear and blowing the same air out is that they have to generate all of the airflow themselves, which always gets really noisy compared to using the air passing through a case.
Samus - Monday, November 23, 2015 - link
I had the Asus GTX970 Turbo and it had the grindiest ball-bearing fan I've ever heard. It brought me back to the Athlon's YS Tech and Delta days. The "Titan" cooler on my old GTX770 was virtually silent in comparison.

So Asus has their duds, but the Strix seems to be a great cooler if you don't need a blower... but many of us do. In a bit of a snub toward Asus, I replaced their Turbo with a PNY 970 (also a blower), and the PNY feels cheaper but cools better and makes less noise.
evilspoons - Tuesday, November 24, 2015 - link
Don't get me wrong here, I really like ASUS stuff - but they have let me down several times on cheapo video card cooling systems. Nasty sleeve-bearing fans on half-height Radeon 6580s that vibrate then seize, which was really cheeky considering the box had a "high quality fan omg!!" thing as part of its marketing material.

Ended up replacing the half-height card with a passively cooled one - and a nearby 80 mm case fan - so I couldn't have a crappy onboard fan, since every other card on the market seemed to be carrying the same stupid POS fan. I couldn't even spend more to get a better one!
Margalus - Monday, November 23, 2015 - link
I wouldn't get anything other than an EVGA cooling system.. I have the ACX 2.0 versions of a 970 and a 980 Ti, and they are really fantastic... lol
Samus - Tuesday, November 24, 2015 - link
EVGA is great, but they don't make Radeon cards. It's important to point out, as well, that EVGA is actually NOT NVidia's OEM partner; PNY is. PNY makes a ton of cards based off NVidia's reference designs, which I think are the best. The Titan cooler used on reference 770/780/970/980 GPUs, specifically the vapor-chamber variant, is unsurpassed by any other partner's. That's why almost every partner makes at least one variant of these GPUs with the Titan cooler. They don't make many, because the rumor is NVidia charges $30 for the vapor chamber cooler and it is more expensive to manufacture the cards because of the installation (GPU binding) technique.

But EVGA has probably the best, easiest-to-deal-with warranty. Unfortunately, I've had to use it.
tamalero - Tuesday, November 24, 2015 - link
I use Sapphire's Dual-X and Tri-X for the AMD camp, imho. I'm still with my trusty 7950 Dual-X OC, and it works wonders!
just4U - Friday, November 27, 2015 - link
From what I understand, Sapphire started with the vapor chamber type cards on a few of their Radeons 6 years ago. Interesting that Nvidia went that route. I'd never heard of any other company doing it before and didn't know they had that on their high end coolers.
maecenas - Monday, November 23, 2015 - link
Fair enough, I am generalizing based on an observation pool of 2, which I shouldn't do, but I really enjoy having a silent GPU that doesn't go over 65C! It seems that cooling technology has progressed across the board, which is great news for everyone.
BurntMyBacon - Tuesday, November 24, 2015 - link
@maecenas: I agree they have a nice cooling system. They may even have the best at the moment. That said, I do believe they have some good competition in this area. MSI impressed me with their Twin Frozr design back before Asus had a DirectCU design out. They've been constantly improving since then. Sapphire (much as I dislike them) released some very appealing vapor chamber designs. EVGA had pretty decent blower coolers, but nothing really standout until the second revision of their non-blower design (ACX 2.0). The ACX 2.0+ is copper heaven. I don't really favor designs that just throw another fan at it without giving much thought to the heatsink design, like Gigabyte's Windforce cards. I feel like MSI set the bar with their original Twin Frozr cards, and since then, MSI, Asus, Sapphire (sigh), and now EVGA have been vying for dominance in the cooling department.
just4U - Friday, November 27, 2015 - link
Strix is nice, but MSI's cooling solution is just as good.
olivaw - Monday, November 23, 2015 - link
Did I miss the GTX 960 review???
funkforce - Monday, November 23, 2015 - link
Haha, funny you should ask that... Check the comments...
January 2015
http://www.anandtech.com/show/8923/nvidia-launches...
http://www.anandtech.com/show/9547/nvidia-launches...
http://www.anandtech.com/comments/9390/the-amd-rad...
"jeffrey - Thursday, July 02, 2015 - link
Ryan Smith, any update on GTX 960
REPLY
Ryan Smith - Thursday, July 02, 2015 - link
As soon as Fury is out of the way. "
http://www.anandtech.com/comments/9621/the-amd-rad...
extide - Monday, November 23, 2015 - link
Looks like you missed a comment, because there was one at one point that said that there will not be a 960 review. I don't have a direct link to it because I'm not obsessed with the topic, but yeah, it was said.
funkforce - Monday, November 23, 2015 - link
You're right, I don't know when that was said. I waited months for that review, because I trust the unbiased reviews here and wanted to buy a new graphics card based on info I could count on.

And I just really don't like when you keep promising things, stringing your readers along, and then never delivering. I don't care if the review gets posted; it's just how it was handled. I've been raised that a person's word means something (he told me personally on Twitter that he would do it). And I'm sure you could go on that this is the internet etc., but when you've been reading a site since its start, its content means something, at least to me. I guess it's why we're all here to some extent. If you go back on your word, then you should at least let people know a bit more officially than in the comment section of, I would assume, another article.
Samus - Monday, November 23, 2015 - link
The problem with reviewing the GTX 960 is that the drivers have been optimized to improve its performance all year, and every single card performs so differently. The GTX 960 overclocks incredibly well; some people hit 1400MHz if the board has the right power configuration. This is why you hear people talk about the GTX 960 completely trumping the R9 28x/38x, when in reality both GPUs give and take blows in various games at stock.

But when overclocked, the GTX 960 is a bit faster than an overclocked R9 28x/38x. I think this causes a lot of reviewers to tip-toe around these cards. And when you consider that a GTX 960 with 4GB is over $200 and a GTX 970 with 4GB (er, technically 3.5GB) is $260-$280 after rebate, it becomes muddled.
OrphanageExplosion - Tuesday, November 24, 2015 - link
I own the GTX 960. Mine hits 1450MHz easily. I think they all do, because temps and power consumption barely budge. I also own the R9 380 in the 4GB configuration.

Drivers have barely improved the 960's performance, and the 380 is faster in almost every game I own. Overclocking gets the 960 to parity, or slightly ahead, but to such a small extent that you can't really tell the difference.
Of the two I'd take the 960 simply due to the efficiency, but driver updates have really made no difference to anything other than the games they were optimised for. Library titles have seen no improvement.
dananski - Monday, November 23, 2015 - link
I also waited a while, checking the site often. I think I spotted some 960 benchmarks slipped into some analysis article on here many months later, but no post to say they'd been done, let alone a full review.

It's not just the graphics section either. I really don't get why there are so many announcements and so few reviews on here these days. It's a shame, because reviews like this one are the reason I like AnandTech so much.
olivaw - Monday, November 23, 2015 - link
I wanted the GTX 960 reviewed because it seems to be pretty good for an HTPC, since the power consumption is lower than most chips'. I know the card is good, but I wanted an AnandTech review :)
drwhoglius - Monday, November 23, 2015 - link
From the Steam Hardware Survey (http://store.steampowered.com/hwsurvey/directx/), October 2015 results:
GTX 970 3.80%
GTX 960 2.16%
R9 200 Series 1.06%
R9 300 Series not yet measurable (or too new to be measured as GTX 950 isn't measured either)
Tikcus9666 - Monday, November 23, 2015 - link
The power difference is irrelevant in desktop PCs: a 75W difference over 20 hours of usage at full load costs ~20p in the UK, so 1000 hours of gaming (a year's worth?) costs an extra £10.
This does not factor in that some of this usage is in the winter months, so the extra heat generated reduces the amount of heat required from other sources, thus reducing other heating costs.
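The arithmetic above can be sketched out; the ~13.3p/kWh unit price is an assumption implied by the "20 hours ≈ 20p" figure, not a quoted tariff:

```python
# Extra running cost of a 75 W power-draw gap, at an assumed UK rate of ~13.3p/kWh.
def extra_cost_pounds(extra_watts, hours, pence_per_kwh=13.3):
    kwh = extra_watts / 1000 * hours   # energy gap in kilowatt-hours
    return kwh * pence_per_kwh / 100   # pence -> pounds

print(f"20 h at full load: ~{extra_cost_pounds(75, 20):.2f} pounds")   # ~20p
print(f"1000 h of gaming:  ~{extra_cost_pounds(75, 1000):.2f} pounds") # ~10 pounds
```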
jasonelmore - Monday, November 23, 2015 - link
What about idle power consumption, which applies 24/7, 365 days a year? It can add up.
CaedenV - Monday, November 23, 2015 - link
Cards idle so low now that it does not make much of a difference. Even leaving my 4-year-old monster on 24/7 is costing me maybe $10-15/year, and with the improved idle power load on newer cards (I am running a 570), it would probably cut that in half.

Not to say that you should go crazy and leave things on all of the time because it 'does not matter'... but unless you are running something with a 24/7 load like a render box or a server, then power cost is not a true consideration. Heat generation due to an inefficient card may be a consideration, but not the price of the power used.
rviswas - Monday, November 23, 2015 - link
I was gonna say, the difference between the GTX 860 and this card is less than 5W at idle; it says so in the review itself, look at power consumption.
rviswas - Monday, November 23, 2015 - link
GTX 960, I mean
Chaser - Monday, November 23, 2015 - link
"Use your AMD GPU to help heat your home." Spoken like a true AMD apologist. LOL
looncraz - Monday, November 23, 2015 - link
Except he's running an nVidia card.
Dribble - Monday, November 23, 2015 - link
Too close in price to the 970/390. Anyone spending that much will stretch the extra few $ for the much faster card. Price/performance isn't good enough - needs to be $200.
Beany2013 - Tuesday, November 24, 2015 - link
Except when you literally can't afford those extra $ - i.e. in the UK, the 380X starts around £190, the 970 starts around £250 (using Overclockers.co.uk as a reference).

The cost difference there, if all you're doing is upgrading a GPU, is significant enough that you can't really say 'ooh, it's only a little more' - if we were talking £190 and £220, that'd be different.
Likewise, if you're configuring a whole system, and aren't an *avid* gamer (IE, as a survey of one, I mostly dick about in Serious Sam 3 and Metro 2033/Last Light - both of which are far better with a chunky GPU if you like your shiny goodness) then the £60 difference is better spent elsewhere, like RAM, storage, or a larger monitor.
Horses for courses, but if you're trying to eke out as much overall value as possible for a machine without horribly compromising on performance, AMD make a hell of a lot of sense.
Me? I'm waiting for Fiji to come down below £200. And, you know, to get a new job. Which'd probably help, natch.
AndrewJacksonZA - Monday, November 23, 2015 - link
Re: "Finally, we’re also unable to include compute benchmarks for R9 380X at reference clocks, as AMD’s drivers do not honor underclocking options with OpenCL programs."Would someone please be so kind as to explain "underclocking options with OpenCL programs" to me please? Why do the cards need to be underclocked when running OpenCL programs?
Thank you.
Ryan Smith - Monday, November 23, 2015 - link
The card we received is the STRIX R9 380X OC, which comes with a factory overclock of 1030MHz, versus 970MHz for a reference card. We underclock this to get reference performance; however, underclocking doesn't work with OpenCL programs.
AndrewJacksonZA - Monday, November 23, 2015 - link
OK, got it, thanks Ryan.
CaedenV - Monday, November 23, 2015 - link
My guess is that these cards are factory OC'd, which means that they would need to be underclocked to run an apples-to-apples comparison at true 'stock' settings.
Zeus Hai - Monday, November 23, 2015 - link
Can anyone confirm that AMD's Frame Limiter still doesn't work on Windows 10?
nathanddrews - Monday, November 23, 2015 - link
That's news to me.

Just for you, I tested it using my i3-2100/HD7750/W10 test mule. VSync globally disabled in CCC, VSync disabled in Dota 2, Frame Target set to 60fps. Steam overlay shows 60fps and I see no signs of tearing or stuttering. To my knowledge, it never stopped working.
Zeus Hai - Monday, November 23, 2015 - link
Hmm.., it should have some tearing because it doesn't really sync with the monitor anyway, mate. Can you set it to 65, 70, 75? Mine doesn't work in LoL; I set it to 60, but it always fires up over 150fps+.
Dirk_Funk - Monday, November 23, 2015 - link
LoL does have its own fps limiter, so perhaps that's causing a mix-up in the software. Also, LoL might be running in fake fullscreen mode, whereas the Catalyst fps limiter specifies it will "Reduce power consumption by running full-screen applications at reduced frame rates." I'm gonna go try a round of LoL now because you have me curious.
Asomething - Tuesday, November 24, 2015 - link
Mine does; I was just benching my new 290X and forgot to turn it off, so my results were skewed by the 75fps frame cap I set.
nirolf - Monday, November 23, 2015 - link
There's an "ASUS R9 Fury OC" mentioned in the first table in the Overclocking section.
Ryan Smith - Monday, November 23, 2015 - link
Thanks.
Shadowmaster625 - Monday, November 23, 2015 - link
Tonga is an epic disaster. It is less than 10% more efficient than Tahiti in terms of performance per watt, and in terms of performance per transistor (fps per mm^2) it appears to be actually worse. Meanwhile, Nvidia released Maxwell, which outperforms Kepler on both of these metrics not by some paltry 10% or less, but by a very wide margin.
CiccioB - Tuesday, November 24, 2015 - link
The whole GCN architecture is a disaster.

With the TeraScale architecture, AMD could fight with smaller dies and fewer watts for a bit less performance.
With GCN, AMD has to compete using larger, more power-hungry dies that have driven the graphics division into the red, while with the older TeraScale it could at least be on par.
GCN is an architecture that is not up to the level of the competition.
DP64 presence is not the problem, as AMD has kept reducing its influence with every GCN step (starting from 1/4 FP and ending at 1/24 FP) with no real results in terms of power consumption. They probably could only spare a few mm^2 on the die, but they are way too far behind on memory compression (I can't really believe they never thought about that) and their buses are way too big, expensive, and power hungry.
The whole architecture is a fail. And DX12 is not going to solve anything: even if they raise their performance 10% over the competition, they are still way behind in efficiency, both in terms of watts and die size.
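For what it's worth, the two efficiency metrics being argued over here (performance per watt and per mm^2 of die) are easy to sketch. The die areas and board powers below are rough approximations treated as assumptions, and the FPS values are placeholders rather than benchmark results:

```python
# Efficiency-metric sketch: fps/W and fps/mm^2 for two hypothetical cards.
# Die area (mm^2) and typical board power (W) are rough assumed values;
# the fps figures are placeholders, not measurements.
def efficiency(fps, watts, die_mm2):
    return {"fps_per_watt": fps / watts, "fps_per_mm2": fps / die_mm2}

tonga_like = efficiency(fps=60, watts=190, die_mm2=359)   # 380X-ish assumptions
gm206_like = efficiency(fps=55, watts=120, die_mm2=227)   # GTX 960-ish assumptions

for name, e in [("Tonga-like", tonga_like), ("GM206-like", gm206_like)]:
    print(f"{name}: {e['fps_per_watt']:.3f} fps/W, {e['fps_per_mm2']:.3f} fps/mm^2")
```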
Kalessian - Monday, November 23, 2015 - link
The Crysis 3 numbers don't make sense to me; did vsync get left on or something?
Ryan Smith - Monday, November 23, 2015 - link
Nope. V-sync is off, and I can vouch that the instantaneous framerate does go over 60fps. That's just an amusing case of cards at this performance segment coming very close to averaging 60fps.
DanNeely - Monday, November 23, 2015 - link
For the price matchup table on the first page, the 4GB 960 starts at $220 vs $180 for the 2GB model. NVidia might not be splitting them apart by model number, but pricewise it has cards at both slots.
tviceman - Monday, November 23, 2015 - link
The current cheapest 4GB GTX 960 on Newegg (USA) is $180 w/o rebates. Next cheapest is $185 w/ an additional $20 MIR. Next cheapest is $199 w/o rebate. Next cheapest is $210.

There are plenty of 4GB GTX 960 cards for much less than $220.
nathanddrews - Monday, November 23, 2015 - link
So... how much longer is AMD going to pretend that HDMI 2.0 doesn't exist? DP adapters are still MIA.
extide - Monday, November 23, 2015 - link
Next gen GPUs, dude...
medi03 - Monday, November 23, 2015 - link
Cause dat 4K resolution is golden on a card that can barely push 1440p...
nathanddrews - Monday, November 23, 2015 - link
Irrelevant:
1. Most 4K TVs that can do 4K60 4:4:4 over HDMI 2.0 can also do 1080p120, native input.
2. Games like Dota 2, LoL, and nearly all games prior to 2013 can be played at 4K no problem by GPUs like this.
3. 4K60 video (YouTube, GameStream, etc.)
SpartyOn - Monday, November 23, 2015 - link
This card is no better, and is actually probably worse, than my three-year-old GTX 770 4GB, and at best is equal to a GTX 960, which can be had for easily under $200.

Is AMD even trying anymore?
looncraz - Monday, November 23, 2015 - link
Did you read the same review I did?

The 380X is often 15% faster than the 960, and sometimes 30% faster... for average FPS. When it comes to the 99th percentile or minimum framerates, there's just no comparison; the 380X lays the smackdown on the 960.
Sometimes the 960 can do quite well, but it usually loses by quite a bit.
Your 770 is slower than the 960 in some games, a bit faster in others. It is not as fast as the 380X, which has approximately the same performance as AMD's old 7970, a nearly four-year-old card.
SpartyOn - Monday, November 23, 2015 - link
My 770 is at 1400 MHz core / 7940 MHz memory; trust me, neither the GTX 960 nor this 380X is beating me, and I'm not digging into my wallet until Pascal comes out. It was tough when the GTX 980 Ti was released, but I'm sticking to my guns.

At 1080p, which is where the 960 and 380X should be competing (because if you buy either of these for 1440p+, you're a moron), if they had gotten a 960 4GB for comparison, there wouldn't be much difference. You can get a 960 4GB, a one-year-old card, for less than $200, and it's essentially just as good at stock. The few frames the 380X wins by in this review are mostly due to the VRAM limit on the 960 2GB.
Plus you can overclock a 960 to insane levels, so why spend $229 on the 380x when you can spend $180 on a GTX 960 4GB and overclock it if you want more speed?
Sushisamurai - Monday, November 23, 2015 - link
Errr... Isn't the 960 a rebadge of the 770?
Sushisamurai - Monday, November 23, 2015 - link
Note: rebadge in the sense that the hardware is super similar, minus the Maxwell gen 2 features
Sushisamurai - Monday, November 23, 2015 - link
Oops, I lied. The 770 is not comparable to the 960; I'm assuming it's better. Mind you, the 280X and 770 were comparable back in the day.
silverblue - Monday, November 23, 2015 - link
Yep, as the 770 is essentially a tweaked 680, which traded blows with the 7970/7970GE.
CiccioB - Tuesday, November 24, 2015 - link
The sad thing is how you all make comparisons with this kind of technology. GPUs scale well when made fat, so the point of "performance" is really moot when doing comparisons. It's like saying that the 750 Ti is the same as a GTX 480 because they perform similarly.

This card (like all the new AMD 300 series) is simply a fat, bloated GPU, clocked at its limit and sold under cost to compete with the smaller, more efficient architectures created by the competition (which sells them at premium prices).
This 380X card is a complete fail in trying to help AMD advance in its fight. The competition has done marvelous things meanwhile: they came out with a GPU, the GM206, which is half the GK104 in terms of size and power consumption, and has the same performance. This is the progress the competition made while AMD went from GCN 1.0 to GCN 1.2, which has only a few tricks and hacks but nothing really good to bring that already obsolete architecture up to the new level of competition.
Sorry, but if you are excited by this kind of "evolution" and do not understand where it has brought "your favorite company", you really deserve to stay a generation back in terms of innovation. And be happy with this Tonga, which will be sold for a few bucks in a few months and be completely forgotten when Pascal annihilates it in its first iteration.
britjh22 - Monday, November 23, 2015 - link
Comparing a 2.5-year-old card that cost $450-500 against a $230 card... and complaining about whether AMD is even trying... your bias is showing, sir. You shouldn't feel the need to upgrade yet, in my opinion - unless of course your card is being crippled by NVIDIA's drivers, whoops!
tviceman - Monday, November 23, 2015 - link
The GTX 770 launched at $399, not $450. Interestingly, the GTX 770 was a smaller chip and drew less power. So, tossing the consumer economics aside, SpartyOn raises a good point.
britjh22 - Monday, November 23, 2015 - link
The 770 2GB launched at $399, but the 4GB launched at anywhere from $450 to $500 depending on the model.
200380051 - Monday, November 23, 2015 - link
The power consumption of the 380X under load is lower with FurMark than it is with Crysis 3, while it is the opposite with the GTX 960. Any thoughts on that?
Ryan Smith - Monday, November 23, 2015 - link
The power demands on the CPU are much more significant under a game than under FurMark.

Also, that specific GTX 960 is an EVGA model with a ton of thermal/power headroom. So it's nowhere close to being TDP limited under Crysis.
Edit: My apologies to one of our posters. It looks like I managed to delete your post instead of replying to it...
The True Morbus - Monday, November 23, 2015 - link
So after all this time, this graphics card has the same performance as the now 2-year-old GTX 760?

Right... I'm beginning to think the 760 was the best purchase of my life.
RussianSensation - Monday, November 23, 2015 - link
Same performance? You may need to re-check benchmarks across the web. The R9 380X is more than 40% faster than a GTX 760 2GB. TPU has it 43% faster at 1080p and 45% faster at 1440p:
http://www.techpowerup.com/reviews/ASUS/R9_380X_St...
If you only have a 2GB version of the 760, you are also reducing texture quality in many games like Titanfall, Shadow of Mordor and have choppiness in Watch Dogs, AC Unity, Black Ops 3, and simply cannot even enable highest textures in some games like Wolfenstein NWO.
R9 380X isn't anything special when we've seen GTX970/290/290X/390 for $250-270 but it beats your card easily by 35-40%.
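For anyone checking such figures, "X% faster" is just a ratio; a quick sketch (the FPS inputs are placeholders, not the TPU numbers):

```python
# "Card A is X% faster than card B" = (fps_a / fps_b - 1) * 100.
def percent_faster(fps_a, fps_b):
    return (fps_a / fps_b - 1) * 100

print(f"{percent_faster(57.2, 40.0):.0f}% faster")  # e.g. 57.2 vs 40.0 fps -> 43%
```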
Laststop311 - Monday, November 23, 2015 - link
The 380X was a pointless launch. For 50 dollars less you can just get the 380, which is only 10% slower; or for 50 more dollars, get the 390, which blows the 380X away. This card targets a very narrow range and wasn't really needed, imo.
Makaveli - Monday, November 23, 2015 - link
I believe the difference in Shadow of Mordor between the 7970 and the 380X at 1080p may only be clock speed and not a difference between Tahiti and Tonga!
silverblue - Monday, November 23, 2015 - link
The 380X may come with extra features over the 7970; however, has TrueAudio ever truly been tested? Its addition was to help reduce CPU usage, and it would be a shame if it went unused in favour of the motherboard sound.
silverblue - Monday, November 23, 2015 - link
Slight correction: it was to provide better effects, though I imagined that it would help a little with CPU usage anyway.
Makaveli - Monday, November 23, 2015 - link
The only difference between them that counts is GCN 1.0 vs 1.2. TrueAudio has to be supported by the game, and Mordor doesn't support it.
Cryio - Monday, November 23, 2015 - link
You guys REALLY need to switch to a Skylake i7 at 4.5 GHz with DDR4-3000+ for benching GPUs.

That Ivy Bridge at 4.2 GHz is certainly holding back the AMD GPUs: core parking issues, not-as-fancy drivers and all.
Ryan Smith - Monday, November 23, 2015 - link
The GPU testbed is due for a refresh. We'll be upgrading to Broadwell-E in 2016 once that's available.
godrilla - Monday, November 23, 2015 - link
Newegg has the PowerColor R9 390 for $265.99 as a pre-Black Friday deal; for a bit more you get 20% more performance and double the VRAM!
r3loaded - Tuesday, November 24, 2015 - link
Good to see my 7970 is still holding up very well even after almost four years!
Enterprise24 - Tuesday, November 24, 2015 - link
The 7970 from 2011 can still compete with the 380X in 2015.
CiccioB - Tuesday, November 24, 2015 - link
Even the GTX 480 can compete with the GTX 750 Ti... something to be proud of, isn't it?
evilspoons - Tuesday, November 24, 2015 - link
Ah, I'm still undecided about what to do with respect to replacing my SLI GTX 680s. I'm in Canada, so we're getting murdered by the exchange rate (a GTX 970 is $380-$450, an R9 390 is $450, and the first R9 380X cards are $330...).

Guess I'll just hold on to my 680s a while longer.
Mugur - Wednesday, November 25, 2015 - link
I don't think the 380X is a bad card, by any means. It just needs to replace the 380 4GB in that price slot, and/or the OEMs should bin the 380 with only 2GB of RAM and leave 4GB for the 380X only. Currently, although understandable given the "novelty" factor, I saw in my country the Asus 380X Strix OC at the same price as some (discounted or not) GTX 970 or R9 390 cards (including an Asus Strix 970 OC :-) ), which is hilarious. Also at around 50% more than the Sapphire Nitro 380 4GB that I bought on Black Friday. Or AMD could simply replace the 380 with the 380X if the yields are good enough...
SolMiester - Wednesday, November 25, 2015 - link
LOL, a bit late to the party, isn't it? Anyone buying mid-range has probably already purchased this generation...
aria - Monday, November 30, 2015 - link
I'm not sure I agree. I'm still rocking an HD 7750 (perhaps "wheeling," as in "wheelchair," might be more appropriate at this point). It was mid-tier when I got it, but now it's not really sufficient. I'd like to play games at 1920x1080, but I don't really care; 1360x768 is good enough for me. But my current card can't even deliver that at 60 fps any longer (I actually think the card has deteriorated, because I played Dragon Age: Inquisition last year on decent settings, and now I can't run it at 15 fps on Low at 1024x768). Anyway, I imagine most people using mid-tier cards are more like me. We don't purchase an upgrade as soon as one is available; we want to get the highest mid-tier card we can, which involves waiting. If I'd replaced my 7750 two months ago, today I would have an inferior mid-tier card, which will certainly become important a year from now when I'm trying to get just a bit more performance out of it. 10% doesn't matter today, but 18 months from now that 10% will mean the difference between playable and unplayable. The rules of high-end gaming aren't applicable at the mid-tier range, because we don't buy an upgrade simply because it's an upgrade.
HollyDOL - Thursday, November 26, 2015 - link
Another proof that the graphics industry badly needs a new manufacturing process. The possibilities of what can be achieved at 28nm (GPU-wise) seem to be exhausted. It will get interesting with the new process + HBM2; until then it's going to be still waters.
lazymangaka - Tuesday, December 1, 2015 - link
Unless someone was able to snag the 380X at a significant discount, I would have trouble justifying not spending the extra money to jump to the 390. That really looks like it would be $60 well spent.
Faultline1 - Friday, December 18, 2015 - link
What causes the 390 to be below the 380s in the Vantage Pixel fill benchmark test?