    Tested: Microsoft Surface Studio Review

    We test and review Microsoft's new Surface Studio all-in-one PC, putting it in front of cartoonists and graphic designers to see how the 28-inch touchscreen compares with digitizers like Wacom's Cintiq. Here's what we think about the Surface Studio's display, compact computer hardware, and unique hinge that connects them.

    Tested: Microsoft Surface Book Performance Base Review

    While Microsoft didn't announce a proper successor to its Surface Book this holiday, it released an update to the laptop with a Performance Base model. We test the Surface Book with increased battery capacity and a new discrete GPU, and update you on what the past year has been like using the Surface Book as a primary work laptop.

    Hands-On with Epic Games Robo Recall for Oculus Touch

    Epic Games--the makers of Unreal--have just announced their first full VR game: Robo Recall. We playtest a demo of this virtual reality shooter at Oculus Connect, using the new Touch motion controllers. After chatting with Epic Games about the gameplay design ideas in Robo Recall, Jeremy and Norm share their impressions.

    Tested Tours VR Projects at USC's Mixed Reality Lab

    At USC's Institute for Creative Technologies, computer scientists and engineers have been tinkering with virtual reality, augmented reality, and everything in between. We're given a tour of ICT's Mixed Reality Lab, where projects explore the intersections of VR and accessibility, avatars, and even aerial drones.

    Tested: Nvidia GTX 1060 Rains on the RX480

    AMD dreamt of mid-range glory when it shipped the Radeon RX480. The RX480 offered a great little package: performance matching high-end cards from past generations, lower power utilization, and a compact design suitable for most cases.

    True to form, Nvidia came along and crushed AMD's dreams.

    AMD announced its intent to pursue the ordinary gamer's heart months ago. Perhaps AMD's true high-end part, code-named Vega, wouldn't be ready. Maybe AMD realized Nvidia would try to capture the high end first. Either way, AMD laid its strategy bare for the world to see – including a certain Santa Clara-based GPU company.

    So it should surprise no one that Nvidia launched the GTX 1060 a scant three weeks after the RX480 hit the street. At first, it seemed Nvidia's new mainstream card might not really be mainstream: initial pricing suggested prices closer to $300, based on Nvidia's own "Founder's Edition" card, which the company offers directly to users. Several weeks after the launch, however, pricing parity has arrived. Prices for GTX 1060s range from $249 to $329 depending on clock frequencies and cooler configurations. Radeon RX480 8GB cards run from $239 to $279, while 4GB cards run right around $200. Availability of both the GTX 1060 and the RX480 remains spotty, suggesting demand still runs pretty high weeks after launch.

    So which should you buy? As always, let's look at the numbers.

    Tested: Radeon RX480 Video Card Review

    Going for second place seems like a weird business strategy, but the RX480 GPU fits in with AMD's CPU strategy of trying hard to stay in second place in a race where only two major players exist.

    It's also a smart strategy, at least on the GPU end. Make the most of what you have, and go for the mass market. The potential volume for $200 graphics cards dwarfs that of cards like the GTX 1070, which costs about twice as much.

    So can a $240 graphics card deliver the performance necessary for modern DX12 gaming? Let's take a look at the numbers – first the GPU specs, then performance.

    By the Numbers

    Nvidia's GTX 970 looks to be AMD's main target for the RX480 when it comes to performance. So let's take a look at the specs of the two GPUs side-by-side (chart below).

    Nvidia's shader ALUs (which the company calls CUDA cores) and AMD's shader cores (which AMD refers to as stream processors) differ architecturally, so you can't really compare performance based on ALU counts alone. The clock frequency of the RX480 disappoints a little – I'd expect more from 14nm FinFET logic. The good news lies with the die size: at 230mm², AMD likely has some pricing flexibility.

    I also appreciate the fact that AMD finally dumped the DVI port. Owners of older displays may be disappointed, but it's really time to move beyond DVI to a more modern interface. An owner of a DVI-only monitor will need to buy an adapter, however, unless they're willing to replace said monitor.

    Beyond the raw specs, AMD offers several interesting features Nvidia can't quite match. The Polaris GPU includes native support for FP16 (16-bit floating point, aka half precision), which can be useful in certain GPU compute applications but is unlikely to factor much into games. Nvidia's Pascal converts FP16 to FP32 and operates on the converted format, which reduces FP16 performance a bit.
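    You can get a feel for half precision without any GPU at all. Here's a quick numpy sketch of FP16's limited precision – illustrative CPU code only, nothing vendor-specific:

    import numpy as np

    # FP16 keeps only ~11 bits of mantissa, so small increments vanish
    # as the accumulator grows. FP32 handles the same sum easily.
    acc16 = np.float16(0.0)
    for _ in range(10_000):
        acc16 += np.float16(0.0001)
    print(acc16)                        # stalls well short of 1.0
    print(np.float32(0.0001) * 10_000)  # very close to 1.0

    That precision ceiling is why FP16 suits some compute workloads (and saves bandwidth) but isn't a headline feature for games.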

    The geometry engine includes features supporting small, instanced objects, such as an index cache. That will help games that use instancing, mostly real-time or turn-based strategy titles that might throw hundreds of similar objects onto the screen.
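    To see why instancing support matters, consider the memory math: one shared mesh plus a small per-instance transform, instead of a full copy of the geometry per object. A conceptual sketch, with made-up mesh sizes chosen purely for scale:

    # One shared mesh plus per-instance 4x4 float transforms, vs. a
    # full copy of the geometry per object.
    verts_per_mesh = 5_000
    bytes_per_vert = 32          # position + normal + UV, a common layout
    instances = 500              # e.g. 500 similar units in an RTS scene

    duplicated = instances * verts_per_mesh * bytes_per_vert
    instanced = verts_per_mesh * bytes_per_vert + instances * 64

    print(f"duplicated: {duplicated / 1e6:.1f} MB")   # 80.0 MB
    print(f"instanced:  {instanced / 1e6:.2f} MB")    # 0.19 MB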

    Fun With Thermal Imaging and Graphics Cards

    I've recently been using the Seek Thermal Compact IR camera to shoot photos of graphics cards under load. I received one of these nifty devices after beta testing it, but it languished on a shelf for months before I dug it out. What follows is by no means a rigorous assessment of GPU thermal patterns, but it's certainly interesting to see what the thermal output "looks" like.

    The Seek Compact uses a 206 x 156 pixel autofocus sensor that also captures temperature data in either Fahrenheit or Celsius, with a stated range of -40 to 626 degrees F (-40C to 330C). The iOS version I use attaches to the phone via a Lightning connector; the Android version uses a micro-USB connector. Seek requires an app, which you download for free; the list price of the Compact is $249. The app shoots still images or video and lets you select the color palette, which I left at the default orange-red setting. The app also scales the images up to a more useful 824 x 464 pixels.
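    That upscaling step is nothing exotic. Here's a Pillow sketch of roughly the same operation – the filenames are hypothetical, and the Seek app handles the actual capture:

    from PIL import Image

    # The raw frame is 206 x 156; the app upscales to 824 x 464
    # for display.
    raw = Image.open("thermal_raw.png")
    big = raw.resize((824, 464), Image.BILINEAR)
    big.save("thermal_big.png")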

    Despite the relatively small size of the resulting images, you can still pick up definite patterns. I captured these images while running the Rise of the Tomb Raider benchmark at 4K with 4x MSAA and all the graphical quality settings maxed out. I attempted to capture the thermal images at roughly the same point in the benchmark, during the second scene inside an enormous cavern with a running waterfall.

    I shot these thermal images looking at the back of each card, so not all images are directly comparable. For example, some of the Nvidia-based cards include a backplate, which spreads out the heat a bit more than a bare PCB. And while I attempted to collect a temperature reading at the hottest point, each image is just a momentary capture in time. I may shoot thermal video another time.

    So with all these caveats in mind, let's take a look at the images. I present these in order of oldest to newest GPU.

    Tested: eVGA GeForce GTX 1070 Video Card

    I'd crown the new GTX 1070 as the new God-Emperor of gaming GPUs, except that this card is really the baby sister to the GTX 1080, which offers even better performance. On the other hand, eVGA's GeForce GTX 1070 SC costs $439 -- $10 shy of Nvidia's own "Founder's Edition" -- while delivering clock frequencies roughly 6% higher than reference. Audible noise levels seem slightly lower as well.

    While I ran the usual set of benchmarks on the card, I've also been living with eVGA's GTX 1070 in my main system for nearly a week, running games on my 3440 x 1440 pixel Dell U3415w display. Subjectively, I could tell little difference between this card and the GTX 1080 Founder's Edition I'd been running. I did have to dial back ambient occlusion a bit in Tom Clancy's The Division. Doom, Mirror's Edge Catalyst, XCOM2, and several VR titles on the HTC Vive all seemed to run with excellent frame rates at gorgeously high settings.

    So What's a GTX 1070?

    Take a part that starts out life as a potential GTX 1080 GPU, disable one graphics processing cluster, and voila! You now have a GTX 1070 chip. Each graphics processing cluster consists of 5 graphics compute cores (which Nvidia dubs "streaming multiprocessors," or SMs for short). Let's break down the differences with the reference design -- er, Founder's Edition -- in the table below.

    The GTX 1070 uses less exotic GDDR5 memory, clocking it at a pretty serious 4GHz – effectively faster than the 7Gbps memory used in previous generations. So the GTX 1070 includes fewer shader cores, slightly lower clock frequencies, and slower memory, and should cost roughly $300 less.
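    The shader-core math works out neatly. A back-of-envelope sketch, using the published Pascal GP104 figures of 4 clusters of 5 SMs with 128 CUDA cores per SM:

    # Disabling one graphics processing cluster (5 SMs) turns the
    # 20-SM GTX 1080 layout into the 15-SM GTX 1070.
    sms_per_gpc, cores_per_sm = 5, 128
    cores_1080 = 4 * sms_per_gpc * cores_per_sm   # 2560
    cores_1070 = 3 * sms_per_gpc * cores_per_sm   # 1920
    print(cores_1080, cores_1070)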

    Nvidia suggests some third-party cards will be priced as low as $379, though all currently available GTX 1070 cards seem to cost more than $400. Availability remains tight, but cards from MSI and Gigabyte seem to be available. Supply will no doubt catch up with demand after several months.

    Why You Need a Dedicated Testbed

    I admit to a certain laziness in my advanced years. When I was editor of ExtremeTech years ago, I maintained a certain rigor about keeping dedicated testbeds for graphics and CPU testing. A dedicated test system requires some specific tender loving care to ensure you get reproducible results. In addition, you want the test system to let the component under test shine. So several key aspects need to be maintained (a quick sanity-check sketch follows the list):

    • Keep the OS fairly clean. In the era of Windows 10 and SSDs, the OS doesn't need to be pristine, but you need to be sure you don't have a lot of background stuff running. That may seem obvious, until you realize that performance testing these days often requires always-connected service applications such as Steam or Adobe Creative Cloud. If any of this class of apps starts downloading updates in the background, it directly affects performance testing.
    • The same goes for Windows updates. If you're testing on Windows 10, you really need Windows 10 Pro so you have some control over update scheduling.
    • Check that no extraneous applications are running while you run performance tests. For example, AMD's Radeon graphics drivers often happily default to recording gameplay video while you play, which will adversely affect benchmarking.
    • Check for VSYNC anomalies, such as Nvidia drivers with adaptive or fast sync enabled.

    I've been out of the benchmarking game long enough that I found myself running performance tests on my production system, which is not ideal. While I've been careful to disable or mitigate performance-sucking background apps, doing so proved tedious every time I needed to run a performance test. An interesting side effect of swapping out numerous graphics cards: the need to re-authenticate Steam and Origin, as well as Chrome and some other online apps.

    So I'm building a duplicate system as a dedicated testbed. I believed with all sincerity that modern PCs running Windows 10 would produce highly similar benchmark results on identical hardware, even if one system carried application cruft. I discovered I was wrong: the clean system generates performance test results roughly 5-7% better than the older system. On the other hand, I felt some relief when I found that the relative order of the results remained unchanged; the differences were 5-7% across the board.

    Hands-On with Razer OSVR HDK 2 Virtual Reality Headset

    We're at E3 this week checking out new virtual reality games and hardware. First up is Razer's new OSVR Hacker Development Kit 2. We learn about its display and lens system, how Razer is making this more of a consumer device, and get a hands-on demo. Here's why we're hopeful but cautious about this $400 headset.

    Tested: ODROID C2 $42 Computer

    More tiny computers! This week, Patrick Norton stops by the Tested office to review the Odroid C2, a tiny ARM-based computer that can run Linux and has several advantages over the Raspberry Pi 3. We talk about the importance of USB and Ethernet throughput for these computers, and what projects you can use them for.

    Testing: GeForce GTX 1080 Compute Performance

    Can Nvidia's new flagship compute? Sure it can. But how well?

    Out of idle curiosity, I ran a couple of OpenCL compute-oriented benchmarks on the GTX 1080 and three other GPUs. Bear in mind that this is quick-and-dirty benchmarking, not rigorously repeated to validate results. The results look interesting, however, and the question of compute performance on new GPUs bears further investigation.

    The Setup

    These tests ran on my existing production system, a Core i7-6700K with 32GB DDR4 running at the stock 2,133MHz effective. I used four different GPUs: GTX 1080, Titan X, GTX 980, and an AMD Radeon Fury Nano. The GTX 1080 used the early release drivers, while the other GPUs ran on the latest WHQL-certified drivers available from the GPU manufacturer's web site.

    As you can see from the table below, all four GPUs ran at the reference frequencies, including memory. When I show the results, I don't speculate on the impact of compute versus memory bandwidth or quantity. As I said: quick and dirty.

    GPU              | GTX 1080 | Titan X  | GTX 980  | Radeon Fury Nano
    Base Clock       | 1.6GHz   | 1.0GHz   | 1.126GHz | 1.0GHz
    Boost Clock      | 1.73GHz  | 1.075GHz | 1.216GHz | 1.05GHz
    Memory Type      | GDDR5X   | GDDR5    | GDDR5    | HBM
    Memory Bandwidth | 320GB/s  | 336GB/s  | 224GB/s  | 512GB/s

    CompuBench CL

    The first benchmark, CompuBench CL from Hungary-based Kishonti, actually consists of a series of tests, each focusing on a different compute problem. Because the compute tasks differ substantially, CompuBench doesn't try to aggregate them into a single score, so I show separate charts for each test. CompuBench CL 1.5 desktop uses OpenCL 1.1.
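    Before running CompuBench-style tests, it's worth confirming what OpenCL actually sees on the system. A minimal sketch using the pyopencl package (pip install pyopencl) – any OpenCL query tool would do the same job:

    import pyopencl as cl

    # Enumerate every OpenCL platform and device the drivers expose.
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            print(dev.name.strip())
            print("  version       :", dev.version)
            print("  compute units :", dev.max_compute_units)
            print("  global memory :", dev.global_mem_size // 2**20, "MB")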

    Maker Faire 2016: Pocket CHIP $49 Portable Computer

    Last year, we were impressed by Next Thing Co's $9 CHIP computer. At Maker Faire 2016, we checked out their PocketCHIP housing, which puts CHIP into a portable console package that runs Linux and the Pico-8 indie game console. Here's what you can do with the $49 system!

    Tested: Mechanical Gaming Keyboards

    What makes a good mechanical keyboard? And why are peripheral companies releasing new gaming keyboards so frequently? Patrick and Norm discuss the state of this essential accessory, and how the switches in new keyboards from Corsair, Razer, and Logitech compare. Which type of switch do you prefer?

    Tested: Nvidia GeForce GTX 1080 Video Card

    GTX 1080 seems like such an odd product name, since it brings up the specter of gaming on a 1080p display. The GTX 1080 kills 1080p gaming dead, makes 1440p gaming the new normal, and finally puts 4K gaming within reach of a single GPU. While the GTX 1080 offers great performance, other attributes make the new GPU attractive for gamers. Let's be clear: the GTX 1080 represents the fastest single GPU graphics card you can buy, but performance may not be the primary reason to buy this card.

    By the Numbers

    Let's first touch on base specifications. Nvidia builds the GTX 1080, based on its latest Pascal GPU architecture, on a 16nm FinFET process at Taiwan's TSMC fab. This represents the first process shrink for an Nvidia GPU in two architectural generations, since the original Kepler-based GTX 680 moved to 28nm. FinFET technology incorporates transistors that extend vertically (the "fin"), which reduces current leakage and enables greater power efficiency. This allows Nvidia to build monster GPU chips without creating space heaters, if you will.

    That process technology allows Nvidia to create a 7.2 billion transistor GPU on a 314mm² die, considerably smaller than the GTX 980 die while stuffing in an additional two billion transistors. This smaller, denser chip runs a 1.6GHz base clock and 1.73GHz in boost mode; the GPU looks like it offers substantial overclocking headroom, if that floats your boat.
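    The density gain is easy to put in numbers. A quick sketch; the GM204 (GTX 980) figures of 5.2 billion transistors on a 398mm² die are the published specs, included here for comparison:

    # Transistor density, Pascal GP104 vs. Maxwell GM204.
    gp104 = 7.2e9 / 314
    gm204 = 5.2e9 / 398
    print(f"GP104: {gp104 / 1e6:.1f}M transistors/mm^2")  # ~22.9M
    print(f"GM204: {gm204 / 1e6:.1f}M transistors/mm^2")  # ~13.1M
    print(f"density gain: {gp104 / gm204:.2f}x")          # ~1.75x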

    In addition to all the process technology goodness, the GTX 1080 uses Micron's shiny new GDDR5X memory technology, which transfers data at 10 gigatransfers per second, boosting memory bandwidth roughly 43% over the GTX 980 and within striking distance of the memory bandwidth of the massive Titan X while using a narrower, 256-bit memory bus. Pascal also improves on Maxwell's memory compression with its fourth-generation delta color compression. Depending on the game title, the new color compression techniques improve effective bandwidth by 15-30%.
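    Those headline numbers are easy to sanity-check: peak bandwidth is just the effective transfer rate times the bus width in bytes. A quick sketch, using the per-pin rates quoted above:

    # Peak bandwidth (GB/s) = transfer rate (GT/s) x bus width / 8.
    def bandwidth_gbs(gigatransfers, bus_bits):
        return gigatransfers * bus_bits / 8

    print(bandwidth_gbs(10, 256))  # GTX 1080, GDDR5X: 320.0 GB/s
    print(bandwidth_gbs(7, 256))   # GTX 980, GDDR5:   224.0 GB/s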

    The bottom line: the GTX 1080 has almost as many shader cores as the GTX 980 Ti, runs them 60% faster, and can move data almost as quickly. Based on these numbers alone, we'd expect a serious performance uptick.

    The State of Hard Drives in the SSD Age

    If you're a PC performance enthusiast without severe budget constraints, you're probably running an SSD in your system. Solid state drive prices continue to plummet, dropping below $0.22 per gigabyte on some 1TB models. While older systems may continue to run a secondary hard drive with rotating platters, most users building newer systems can get by with a single 1TB SSD.

    Alas, I'm not "most users". I just bought a Western Digital Black 6TB hard drive, which spins at 7,200RPM and includes a 128MB cache. The 6TB drive actually replaces two other drives, an aging 4TB WD drive and a really old 2TB Western Digital model. Why on earth do I need a 6TB drive? In reality, I don't — the 4TB drive alone would be adequate; I had two drives for historical reasons that no longer apply.

    On the other hand, the 4TB drive looked like it might need replacing. Adobe Lightroom occasionally rebuilds its catalog when you exit the program. I recently postponed the rebuild because my 4TB drive began making really weird noises during the catalog rebuild, which also seemed to take forever. A quick CHKDSK revealed no serious errors, but the noise and rebuild time worried me. So I bought a new drive, and as I did, I thought to myself, why not 6TB? (This despite the fact that combining the 2TB drive contents onto the 4TB drive still leaves me with almost 2TB free.)

    My pictures folder contains 1.07TB worth of photos, the documents folder holds 242GB, and the downloads folder houses 1.3TB. Pruning some stuff out of the downloads folder would likely save 500GB, but that's about it. I'm a digital packrat, with terabytes to fill. The 6TB drive also runs faster than the older 4TB, at least according to Storage Review, so that's a factor in its favor.

    The interesting thing about the drive, though, is its cost: $269, which translates to less than a nickel per gigabyte. It's going to be some time before SSDs approach that price point. I also use hard drives in my NAS: a Drobo 5N attached to my network, containing five Western Digital Red 3TB NAS drives. Those drives cost even less, at a scant $0.04 per gigabyte. I'd hate to try to build a 15TB Drobo array with SSDs.
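    The packrat arithmetic, as a quick sketch (using decimal drive-maker gigabytes; the $220 SSD price is the ballpark implied by the $0.22/GB figure above):

    # Cost per gigabyte for the drives discussed above.
    def dollars_per_gb(price, terabytes):
        return price / (terabytes * 1000)

    print(f"{dollars_per_gb(269, 6):.3f}")   # 6TB WD Black: ~$0.045/GB
    print(f"{dollars_per_gb(220, 1):.3f}")   # 1TB SSD:       $0.220/GB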

    Nvidia Announces GeForce GTX 1080 and 1070

    This may be the video card VR early adopters have been waiting for. Last Friday, Nvidia announced the highly anticipated consumer release of its Pascal GPU architecture in two GTX 1000-series video cards. Priced at $380 and $600, the GeForce GTX 1070 and 1080 each theoretically outperform last year's Titan X, an incredible feat given that Nvidia's previous flagship was priced at $1000. The Maxwell-based Titan X, if you recall, was the first video card I tested on which you could comfortably play games at 4K resolution without any graphical compromises, which bodes well for these new cards. That performance is due to Pascal's architecture (with technologies like Simultaneous Multi-Projection and CUDA optimizations), which Nvidia claims is twice as efficient as Maxwell, while also benefitting from TSMC's 16nm FinFET process (Maxwell was built on a 28nm process). The GTX 1080 will run at a staggering 1607MHz core clock, up from the GTX 980's 1126MHz. The higher-end card will also be the first to utilize GDDR5X memory.

    Nvidia is also clearly aware that there's a huge potential customer base in virtual reality early adopters with the 1000 series cards. Aside from sheer pixel-pushing performance--which VR applications are more than happy to gobble up--the cards are supposed to be optimized for VR rendering tasks like lens distortion correction and stereo rendering. GeForce Experience will also have a new photo mode called Ansel, which will allow gamers to control a free-moving camera in-engine to take high-resolution 360-degree stereo screenshots for viewing in VR headsets. I can't wait to test these cards out, and it'll be interesting to see how AMD positions its upcoming Polaris graphics cards against Pascal.

    The Full-Tower PC Case is a Dinosaur

    I'd like to touch on something I ranted about a bit in the April 19 Improbable Insights podcast. The full-tower case is a dinosaur.

    Look, I know some of you out there love your triple-GPU, overclocked, liquid-cooled monster PCs. I love that you love building and using these lumbering beasts, and more power to you. However, most people don't game on triple-4K displays, and the headaches of managing SLI and CrossFire to get a good gaming experience give me heartburn just thinking about them. I know, because I've run SLI rigs, only to be disappointed by lackluster game support, awful image artifacts, and all that heat. I suppose it's a good thing that DX12 offers improved support for multiple GPUs, but game publishers still see multi-GPU setups as fringe cases. (Haha, see what I did there?)

    Unless you're dead set on running three GPUs, you don't need a full-size ATX motherboard. Most higher-end micro-ATX boards implement SLI and CrossFireX support, so you can run your twin graphics cards if you so desire. Micro-ATX mobos typically have four expansion slots; with the right slot setup, you could have your dual GPUs plus another card, be it a PCIe SSD or sound card. You can find a rich selection of micro-ATX motherboards offering serious overclocking support, amenities for liquid cooling, and other high-end features. Only a few years ago, just a handful of paltry micro-ATX boards existed, mainly serving price-conscious buyers. Not so today.

    Mini-ITX motherboards allow you to build even smaller systems, as I did with my itty-bitty gaming rig. As with micro-ATX, the selection of mini-ITX boards has expanded substantially over the past few years, and even includes boards aimed at high-end gaming, though you're still limited to a single graphics card.

    How To Choose Your PC Processor

    Choosing the right PC processor lies at the intersection of what you need, what you can afford, what you want to accomplish, and your self-image.

    The focus here is on desktop processors – in particular, desktop systems you plan on building yourself. Since laptop CPUs ship inside complete systems, that's a topic for another day. Note also that these are my rules of thumb; you may see things differently. When I've written these articles for other publications, I've tried to be dispassionate, but this time it's all about my choices.

    Let's run down each of these intersecting elements, shall we?

    Need

    I used to believe understanding your need was the most important factor. I'm not convinced that's true any longer, mostly because even relatively low-end processors offer outstanding performance these days. Entry-level quad-core AMD processors can be had for under $100, while Intel's lowest-cost quad-core CPUs cost just a bit under $190, even going back to Ivy Bridge, now three generations old. I'd steer away from dual-core desktop processors these days, since even web browsers now spawn multiple threads.