Latest Stories: PCs
    AMD's CPUs You Should Consider For Your Next PC Build

    After floundering for the last five years with its Bulldozer architecture and its derivatives, AMD is releasing processors based on a new architecture called Zen. The Ryzen CPUs, starting with the high-end chips launching this March, are built to tackle Intel head-on.

    On March 2nd AMD is releasing three high-end CPUs aimed at gamers, content creators, and enthusiasts, all with 8 cores and 16 threads. The Ryzen 7 1800X is the flagship, with a base clock of 3.6GHz, a boost clock of 4.0GHz, a 95W TDP, and a $500 retail price. AMD claims this chip will outperform Intel's Core i7-6900K by 9% in multithreaded work while staying dead even in single-threaded performance. The 6900K is also an 8-core/16-thread CPU, with a 3.2GHz base clock and a 3.7GHz turbo. It'll also run you about $1050.
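    AMD's pitch is largely about price/performance, and the arithmetic is easy to sanity-check. Here's a quick sketch using AMD's own claimed numbers (not independent benchmarks):

```python
# AMD's claimed figures from the launch materials -- not independent benchmarks
ryzen_price, intel_price = 500, 1050   # 1800X vs. Core i7-6900K, USD
mt_ratio = 1.09                        # claimed multithreaded advantage of the 1800X

# Relative multithreaded performance per dollar, 1800X vs. 6900K
perf_per_dollar = (mt_ratio / ryzen_price) / (1.0 / intel_price)
print(round(perf_per_dollar, 2))       # roughly 2.29x
```

    If the claims hold up in independent testing, the 1800X delivers well over twice the multithreaded throughput per dollar of the 6900K.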

    In the middle is the 1700X, with a 3.4GHz base clock and a 3.8GHz boost clock, also at a 95W TDP. AMD claims it will outperform the Core i7-6800K, which has two fewer cores, by 39% in multithreaded workloads. The 1700X will cost slightly less at $400, compared to about $425 for the 6800K.

    Finally, the 1700 rounds out the high end. For $330 you get a CPU with a 3.0GHz base clock, a 3.7GHz boost clock, and a TDP of only 65W. Intel's Core i7-7700K ($350), the chip AMD chose for comparison, has only 4 cores and a 91W TDP. The i7's 4.2GHz base clock and 4.5GHz turbo will be faster in single-threaded work, but AMD claims up to 46% better performance in multithreaded applications.

    Later this year AMD will also release the Ryzen 5 class, which sits in the middle of the lineup, and the budget-oriented Ryzen 3 class. Leaked benchmarks of the Ryzen 5 1600X, a 6-core/12-thread CPU, show it outperforming many i7 processors, so that's definitely something to look out for in a few months.

    First Look at Dell's Canvas 27-Inch Display

    We get up close to Dell's Canvas, a 27-inch touchscreen and Wacom pen-enabled display that's meant to be used in place of your desktop keyboard. Here's how Dell expects artists to use the device in Windows 10, how it works with Dell's rotating dial, and why the company thinks it's different from a Wacom Cintiq.

    Hands-On: HTC Vive Tracker and Deluxe Audio Strap

    We go hands-on with HTC's new Vive Tracker, which allows developers to make positionally-tracked wireless accessories for Virtual Reality. We test tracked rifles, baseball bats, and even a firehose. Plus, we put on HTC's new Deluxe Audio Strap, which makes the Vive much more comfortable to wear.

    Razer's "Project Valerie" 3-Screen Gaming Laptop Prototype

    We check out Razer's Project Valerie, a concept gaming laptop that has three 17-inch 4K screens built into its chassis. Running an Nvidia GTX 1080, we see Battlefield One running across all three displays and chat with Razer about why they built this insane prototype.

    Tested: HP Omen 17 Gaming Laptop

    We've been testing the HP Omen 17, the first laptop we've tested running Nvidia's GeForce GTX 1070--a full-powered Pascal GPU. That makes this a true desktop replacement: a portable powerhouse that can run full roomscale virtual reality off a single AC power outlet. But there are some tradeoffs that allow this fast gaming PC to be priced at just $1500.

    Tested: Microsoft Surface Studio Review

    We test and review Microsoft's new Surface Studio all-in-one PC, putting it in front of cartoonists and graphic designers to see how the 28-inch touchscreen compares with digitizers like Wacom's Cintiq. Here's what we think about the Surface Studio's display, compact computer hardware, and the unique hinge that connects them.

    Tested: Microsoft Surface Book Performance Base Review

    While Microsoft didn't announce a proper successor to its Surface Book this holiday, it released an update to the laptop in the form of a Performance Base model. We test the Surface Book with increased battery capacity and a new discrete GPU, and update you on what the past year has been like using the Surface Book as a primary work laptop.

    Hands-On with Epic Games Robo Recall for Oculus Touch

    Epic Games--the makers of Unreal--have just announced their first full VR game: Robo Recall. We playtest a demo of this virtual reality shooter at Oculus Connect, using the new Touch motion controllers. After chatting with Epic Games about the gameplay design ideas in Robo Recall, Jeremy and Norm share their impressions.

    Tested Tours VR Projects at USC's Mixed Reality Lab

    At USC's Institute for Creative Technologies, computer scientists and engineers have been tinkering with virtual reality, augmented reality, and everything in between. We're given a tour of ICT's Mixed Reality Lab, where projects explore the intersections of VR and accessibility, avatars, and even aerial drones.

    Tested: Nvidia GTX 1060 Rains on the RX480

    AMD dreamt of mid-range glory when it shipped the Radeon RX480. The RX480 offered a great little package: performance matching high-end cards from past generations, lower power consumption, and a compact design that fits in most cases.

    True to form, Nvidia came along and crushed AMD's dreams.

    AMD announced its intent to pursue the ordinary gamer's heart months ago. Perhaps AMD's true high end, code-named Vega, wouldn't be ready. Maybe AMD realized Nvidia would try to capture the high end first. Either way, AMD laid its strategy bare for the world to see – including a certain Santa Clara-based GPU company.

    So it should surprise no one that Nvidia launched the GTX 1060 a scant three weeks after the RX480 hit the street. At first, it seemed Nvidia's new mainstream card might not really be mainstream: initial pricing suggested something closer to $300, based on Nvidia's own "Founder's Edition" card, which the company offers directly to users. Several weeks after launch, however, pricing parity has arrived. GTX 1060 prices range from $249 to $329 depending on clock frequencies and cooler configurations. Radeon RX480 8GB cards run from $239 to $279, while 4GB cards sit right around $200. Availability of both the GTX 1060 and the RX480 remains spotty, suggesting demand still runs high weeks after launch.

    So which should you buy? As always, let's look at the numbers.

    Tested: Radeon RX480 Video Card Review

    Going for second place seems like a weird business strategy, but the RX480 GPU fits in with AMD's CPU strategy of trying hard to stay in second place in a race where only two major players exist.

    It's also a smart strategy, at least on the GPU end. Make the most of what you have, and go for the mass market. The potential volume for $200 graphics cards dwarfs that of cards like the GTX 1070, which costs about twice as much.

    So can a $240 graphics card deliver the performance necessary for modern DX12 gaming? Let's take a look at the numbers – first the GPU specs, then performance.

    By the Numbers

    Nvidia's GTX 970 looks to be AMD's main target for the RX480 when it comes to performance. So let's take a look at the specs of the two GPUs side-by-side (chart below).

    Nvidia's shader ALUs (which the company calls CUDA cores) and AMD's shader cores (which AMD refers to as stream processors) differ architecturally, so you can't really compare performance based on the number of ALUs. The RX480's clock frequency disappoints a little – I'd expect more from 14nm FinFET logic. The good news lies in the die size: at 230mm², AMD likely has some pricing flexibility.
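    To see why raw ALU counts mislead, consider how theoretical peak throughput is computed: each shader ALU can retire one fused multiply-add (two FLOPs) per clock, so peak FP32 rate is simply ALUs × 2 × clock. A sketch, using shader counts and boost clocks taken from public spec sheets rather than from the chart:

```python
def peak_fp32_gflops(shader_alus: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput: one FMA (2 FLOPs) per ALU per clock."""
    return shader_alus * 2 * clock_ghz

# Illustrative figures from public spec sheets
print(peak_fp32_gflops(2304, 1.266))   # RX480:  ~5834 GFLOPS
print(peak_fp32_gflops(1664, 1.178))   # GTX 970: ~3920 GFLOPS
```

    On paper the RX480's peak is nearly 50% higher, yet real-game results land much closer, which is exactly why cross-vendor ALU comparisons tell you so little.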

    I also appreciate the fact that AMD finally dumped the DVI port. Owners of older displays may be disappointed, but it's really time to move beyond DVI to a more modern interface. An owner of a DVI-only monitor will need to buy an adapter, however, unless they're willing to replace said monitor.

    Beyond the raw specs, AMD offers several interesting features Nvidia can't quite match. The Polaris GPU includes native support for FP16 (16-bit floating point, aka half precision), which can be useful in certain types of GPU compute applications but is unlikely to factor in much with games. Nvidia's Pascal converts FP16 to FP32 and operates on the converted format, which reduces FP16 performance a bit.
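    Half precision trades accuracy and range for storage and bandwidth. Python's struct module supports the IEEE 754 half-precision format (format character 'e'), which makes the rounding easy to demonstrate:

```python
import struct

def roundtrip_fp16(x: float) -> float:
    """Round a Python double to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 has an 11-bit significand, so integers are exact only up to 2048
print(roundtrip_fp16(2048.0))   # 2048.0 -- exactly representable
print(roundtrip_fp16(2049.0))   # 2048.0 -- rounds away; the gap between values here is 2
print(roundtrip_fp16(0.1))      # 0.0999755859375
```

    That loss is tolerable for many compute workloads (and most game effects), which is why native FP16 support is a throughput feature rather than a precision one.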

    The geometry engine includes features that support small, instanced objects, such as an index cache. That will help games that use instancing – mostly real-time or turn-based strategy games that throw hundreds of similar objects onto the screen.

    Fun With Thermal Imaging and Graphics Cards

    I've recently been using the Seek Thermal Compact IR camera to shoot photos of graphics cards under load. I received one of these nifty devices as a beta tester, but it languished on a shelf for months before I recently dug it out. What follows is by no means a rigorous assessment of GPU thermal patterns, but it's certainly interesting to see what the thermal output "looks" like.

    The Seek Compact uses a 206 x 156 pixel autofocus sensor that captures temperature data in either Fahrenheit or Celsius, with a stated range of -40 to 626 degrees F (-40 to 330 degrees C). The iOS version I use attaches to the phone via a Lightning connector; the Android version uses micro-USB. The Compact lists for $249 and requires a free companion app. The app shoots still images or video and lets you select the color palette, which I left at the default orange-red setting. The app also scales images up to a more useful 824 x 464 pixels.

    Despite the relatively small size of the resulting images, you can still pick out definite patterns. I captured these images while running the Rise of the Tomb Raider benchmark at 4K with 4x MSAA and all graphical quality settings maxed out. I attempted to capture the thermal images at roughly the same point in the benchmark, during the second scene inside an enormous cavern with a running waterfall.

    I shot these thermal images looking at the back of each card, so not all images are directly comparable. For example, some of the Nvidia-based cards include a back plate, which spreads out the heat a bit more than a bare PCB. And while I attempted to take a temperature reading at the hottest point, each image is just a momentary capture. I may shoot thermal video another time.

    So with all these caveats in mind, let's take a look at the images. I present these in order of oldest to newest GPU.

    Tested: eVGA GeForce GTX 1070 Video Card

    I'd crown the new GTX 1070 as the new God-Emperor of gaming GPUs, except that this card is really the baby sister to the GTX 1080, which offers even better performance. On the other hand, eVGA's GeForce GTX 1070 SC costs $439 -- $10 shy of Nvidia's own "Founder's Edition" -- while delivering clock frequencies roughly 6% higher than the reference clocks. Audible noise levels seem slightly lower as well.

    While I ran the usual set of benchmarks on the card, I've also been living with eVGA's GTX 1070 in my main system for nearly a week, running games on my 3440 x 1440 pixel Dell U3415w display. Subjectively, I could tell little difference between this card and the GTX 1080 Founder's Edition I'd been running. I did have to dial back ambient occlusion a bit in Tom Clancy's The Division. Doom, Mirror's Edge Catalyst, XCOM2, and several VR titles on the HTC Vive all seemed to run with excellent frame rates on gorgeously high settings.

    So What's a GTX 1070?

    Take a part that starts out life as a potential GTX 1080 GPU, disable one graphics processing cluster, and voila! You now have a GTX 1070 chip. Each graphics processing cluster consists of 5 graphics compute cores (which Nvidia dubs "streaming multiprocessors," or SMs for short). Let's break down the differences with the reference design -- er, Founder's Edition -- in the table below.

    The GTX 1070 uses less exotic GDDR5 memory, but clocks it at a pretty serious 4GHz – an effective 8Gbps per pin, faster than the 7Gbps memory used in previous generations. So the GTX 1070 includes fewer shader cores, slightly lower clock frequencies, and slower memory than the GTX 1080, and should cost roughly $300 less.

    Nvidia suggests some third-party cards will be priced as low as $379, though all currently available 1070 cards seem to cost more than $400. Availability remains tight, but cards from MSI and Gigabyte seem to be in stock. Supply will no doubt catch up with demand after several months.

    Why You Need a Dedicated Testbed

    I admit to a certain laziness in my advanced years. When I was editor of ExtremeTech years ago, I maintained a certain rigor about keeping dedicated testbeds for graphics and CPU testing. A dedicated test system requires some specific tender loving care to ensure you get reproducible results. In addition, you want the test system to let the component under test shine. So several key aspects need to be maintained:

    • You need to keep the OS fairly clean. In the era of Windows 10 and SSDs, the OS doesn't need to be pristine, but you need to be sure you don't have a lot of background stuff running. That may seem obvious, until you realize that performance testing these days often requires always-connected service applications such as Steam or Adobe Creative Cloud. If any of these apps starts downloading updates in the background, it directly affects performance testing.
    • The same goes for Windows updates. If you're testing on Windows 10, you really want Windows 10 Pro so you have some control over update scheduling.
    • Check to make sure no extraneous applications are running while you're running performance tests. For example, AMD's Radeon graphics drivers often default to recording game videos while you play, which will adversely affect benchmark results.
    • Check for VSYNC anomalies, such as Nvidia drivers with adaptive or fast sync enabled.
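    Part of this checklist can be automated. A minimal sketch: grab a process listing (from tasklist on Windows, or ps -e elsewhere) before each run and flag known benchmark-polluters. The process names here are illustrative, not an exhaustive or authoritative list:

```python
# Illustrative names of background apps that can skew benchmark runs
POLLUTERS = {"steam.exe", "origin.exe", "creative cloud.exe", "radeonsettings.exe"}

def flag_polluters(running_processes):
    """Return any known benchmark-polluting apps present in a process listing."""
    found = {name.lower() for name in running_processes}
    return sorted(found & POLLUTERS)

# Feed this the parsed output of `tasklist` (Windows) or `ps -e` before each run
print(flag_polluters(["explorer.exe", "Steam.exe", "game_bench.exe"]))  # ['steam.exe']
```

    Running a check like this before every benchmark pass is a lot less tedious than eyeballing Task Manager each time.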

    I've been out of the benchmarking game long enough that I ended up running performance tests on my production system, which is not ideal. While I've been careful to disable or mitigate performance-sucking background apps, doing so proved tedious every time I needed to run a performance test. One interesting side effect of swapping out numerous graphics cards: I repeatedly needed to re-authenticate Steam and Origin, as well as Chrome and some other online apps.

    So I'm building a duplicate system as a dedicated testbed. I believed with all sincerity that modern PCs running Windows 10 would produce highly similar benchmark results on identical hardware, even if one system carried application cruft. I discovered I was wrong — the clean system generates performance results roughly 5-7% better than the older system. On the other hand, I felt some relief when I found that the relative order of the results remained unchanged — the differences were a consistent 5-7% across the board.
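    That kind of sanity check (are the deltas consistent, and does the ranking survive?) is easy to script. A sketch with hypothetical frame-rate numbers, not my actual results:

```python
# Hypothetical average fps from the same benchmarks on two identically specced builds
clean = {"Tomb Raider": 62.0, "The Division": 48.0, "Doom": 90.0}
cruft = {"Tomb Raider": 58.5, "The Division": 45.2, "Doom": 84.3}

# Percent advantage of the clean system, per title
gaps = {t: (clean[t] - cruft[t]) / cruft[t] * 100 for t in clean}

# The relative ordering of results should be unaffected by the cruft
same_order = sorted(clean, key=clean.get) == sorted(cruft, key=cruft.get)
print(gaps, same_order)
```

    If the gaps cluster in a narrow band and the ordering holds, the dirty system shifts absolute numbers but not conclusions, which matches what I saw.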

    Hands-On with Razer OSVR HDK 2 Virtual Reality Headset

    We're at E3 this week checking out new virtual reality games and hardware. First up is Razer's new OSVR Hacker Development Kit 2. We learn about its display and lens system, how Razer is making this more of a consumer device, and get a hands-on demo. Here's why we're hopeful but cautious about this $400 headset.

    Tested: ODROID C2 $42 Computer

    More tiny computers! This week, Patrick Norton stops by the Tested office to review the Odroid C2, a tiny ARM-based computer that can run Linux and has several advantages over the Raspberry Pi 3. We talk about the importance of USB and Ethernet throughput for these computers, and what projects you can use them for.

    Testing: GeForce GTX 1080 Compute Performance

    Can Nvidia's new flagship compute? Sure it can. But how well?

    Out of idle curiosity, I ran a couple of OpenCL compute-oriented benchmarks on the GTX 1080 and three other GPUs. Bear in mind that this is quick-and-dirty benchmarking, not rigorously repeated to validate results. The results look interesting, though, and compute performance on new GPUs bears further investigation.

    The Setup

    These tests ran on my existing production system, a Core i7-6700K with 32GB DDR4 running at the stock 2,133MHz effective. I used four different GPUs: GTX 1080, Titan X, GTX 980, and an AMD Radeon Fury Nano. The GTX 1080 used the early release drivers, while the other GPUs ran the latest WHQL-certified drivers available from each manufacturer's web site.

    As you can see from the table below, all four GPUs ran at the reference frequencies, including memory. When I show the results, I don't speculate on the impact of compute versus memory bandwidth or quantity. As I said: quick and dirty.

    GPU               GTX 1080   Titan X    GTX 980    Radeon Fury Nano
    Base Clock        1.6GHz     1.0GHz     1.126GHz   1.0GHz
    Boost Clock       1.73GHz    1.075GHz   1.216GHz   1.05GHz
    Memory Bandwidth  320GB/s    336GB/s    224GB/s    512GB/s
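    The bandwidth row falls straight out of bus width and per-pin data rate: bits per transfer, times transfers per second, divided by 8 to get bytes. A quick check of the table, using bus widths and memory data rates taken from public spec sheets (the table above lists only the totals):

```python
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

# Bus widths (bits) and per-pin data rates (Gbps) from public spec sheets
cards = {
    "GTX 1080":         (256, 10.0),  # GDDR5X
    "Titan X":          (384, 7.0),   # GDDR5
    "GTX 980":          (256, 7.0),   # GDDR5
    "Radeon Fury Nano": (4096, 1.0),  # HBM
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")
```

    The computed figures match the table, and they highlight how the Fury Nano's enormously wide HBM bus makes up for its slow per-pin rate.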

    CompuBench CL

    The first benchmark, CompuBench CL from Hungary-based Kishonti, actually consists of a series of benchmarks, each focusing on a different compute problem. Because the compute tasks differ substantially, CompuBench doesn't try to aggregate them into a single score. So I show separate charts for each test. CompuBench CL 1.5 desktop uses OpenCL 1.1.

    Maker Faire 2016: Pocket CHIP $49 Portable Computer

    Last year, we were impressed by Next Thing Co's $9 CHIP computer. At Maker Faire 2016, we were able to check out their PocketCHIP housing, which puts CHIP into a portable console package that runs Linux and indie game console Pico-8. Here's what you can do with the $49 system!