    Tested Explains: How Google's Project Ara Smartphone Works

    Project Ara is real, and Google has its fingers on the pulse of the technologies required to make modular smartphones a reality. Given the overwhelming public response to the Phonebloks concept, it's something that users seem to want, too. But whether or not Project Ara modular phones have a future in the smartphone marketplace will largely depend on whether or not there's a strong hardware ecosystem to support it. The custom PC market wouldn't have flourished a decade ago if component manufacturers weren't making user-friendly video cards, storage drives, motherboards, and power supplies--the building blocks of a PC. That's the point of this week's Ara Developers Conference: getting partners excited and educated about how they can build hardware to support that vision for a modular phone.

    The two-day conference, which was also streamed online, coincided with the release of the Project Ara MDK, or Module Developers Kit. The MDK provides the guidelines for designing Ara-compatible hardware, and along with the technical talks presented at the conference, it offers the first clear look at the technologies that make Ara possible, if not completely practical. I attended the conference and read through the MDK to get a high-level understanding of Google's plans for Ara, which go a long way toward addressing the concerns we and experts have had about the modular phone concept. I'm not yet a believer, but at least this clearly isn't a pipe dream. The following are what I consider the important takeaways from what Google has revealed so far.

    A brief note: the conference was also the first public showing of a working Project Ara prototype (past photos have been of non-functioning mockups), though the unit was unable to boot up and had a cracked screen. Somewhat fitting, given that both the main processing unit and the screen are replaceable modules.

    Project Ara consists of two core components: the Endoskeleton and the Module

    On the hardware side, Google has laid out specific guidelines for how Project Ara phones can be built. The most important piece of hardware is the chassis, or what Project Ara leads are calling the "Endoskeleton." Think of this as an analogue to a PC case--it's where all the modular components will attach. In fact, it reminds me a lot of the design of Razer's Project Christine, in that a central "spine" traverses the length of Project Ara phones, with "ribs" branching out to split the phone into rectangular subsections. In terms of spatial units, the Endoskeleton (or Endo) is measured in terms of blocks, with a standard phone being a 3x6 grid of blocks. A mini Ara phone spec would be a 2x5 grid, while a potential large phone size would be a 4x7 grid.

    Fitting into the spaces allotted by the Endo's structure would be the Project Ara Modules, the building blocks that give the smartphone its functionality. These modules, which can be 1x1, 2x1, or 2x2 blocks, are what Google hopes its hardware partners will develop and sell to Project Ara users. Modules can include not only basic smartphone components like the display, speakers, microphone, and battery, but also accessories like IR cameras, biometric readers, and other interface hardware. The brains of a Project Ara phone--the CPU and memory--live in a primary Application Processor module, which takes up a 2x2 space. (In the prototype, the AP was running a TI OMAP 4460 SoC.) While additional storage can be attached in separate modules, you won't be able to split up the AP--processor, memory, SD card slot, and other core operational hardware go hand-in-hand.
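    To make the grid arithmetic above concrete, here's a minimal sketch of the Endo and module sizes as data. The names and helper functions are my own illustration for this article, not anything defined in the MDK:

```python
# Endo grid sizes (width x height in blocks) and module footprints,
# as described at the Ara Developers Conference. Names are hypothetical.
ENDO_GRIDS = {"mini": (2, 5), "standard": (3, 6), "large": (4, 7)}
MODULE_FOOTPRINTS = [(1, 1), (2, 1), (2, 2)]

def endo_blocks(size):
    """Total number of blocks available on an Endo of the given size."""
    w, h = ENDO_GRIDS[size]
    return w * h

def fits(module, size):
    """Whether a module footprint fits on the Endo grid at all,
    in either orientation."""
    mw, mh = module
    ew, eh = ENDO_GRIDS[size]
    return (mw <= ew and mh <= eh) or (mh <= ew and mw <= eh)

print(endo_blocks("standard"))   # a standard 3x6 Endo has 18 blocks
print(fits((2, 2), "mini"))      # the 2x2 AP module fits even a mini Endo
```

Any real placement logic would of course also have to respect the spine-and-rib layout and reserve a 2x2 slot for the mandatory AP module.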

    Awesome Jobs: Meet Linda Gormezano, Polar Bear Poop Tracker

    Understanding the changing dietary habits of polar bears is the key to seeing how climate change and shrinking polar ice is affecting their lifestyles. And the best way to know what’s happening with their diet? Look at their poop, of course! Linda Gormezano, an ecologist at the American Museum of Natural History in New York City, has trained her dog Quinoa to help her find the best samples left by bears as they cross the frozen Canadian tundra. Gormezano chatted with us about why poop is such a useful scientific specimen and what it’s like to spend months living in a camp in the heart of polar bear country.

    A grouping of adult male polar bears along the coast of western Hudson Bay in summer (photo credit: Robert F. Rockwell)

    What’s ecology and how does it apply to polar bear research?

    Ecology is the interaction between animals and the environment. What we’re studying is how polar bears behave on land with respect to available food -- what they eat and where they eat it. What I’m particularly interested in is how they hunt other animals and how the calories they gain from consuming them are going to affect their annual energy budget as their access to ice becomes more limited.

    We collect scat and hair samples non-invasively. After consuming food on the ice or on land some bears leave scat. Also some bears rest right along the coast, bedding down in sand and grass where they leave hairs behind, while others head further inland and leave hair in dens.

    Linda Gormezano and her dog, Quinoa. (photo credit: AMNH)

    What, exactly, is an energy budget?

    Nobody really knows how often polar bears in western Hudson Bay capture seals, but they get a certain amount of energy from consuming seals they hunt out on the ice and that energy allows them to survive on land for 4-5 months each year. If the ice melting earlier each year causes polar bears to have less time to hunt seal pups in spring, they may be taking in fewer calories over the course of the year.

    What we want to know is, now that they’re eating more of certain types of foods on land, what kind of energetic benefits might polar bears be experiencing? Up until now many have thought what they were eating on land wasn’t really helping them at all. To evaluate this, we are examining the energetic costs and benefits of capturing and consuming those foods as well as how often the behavior occurs. Only then can we determine whether these foods could help alleviate nutritional deficits that polar bears may come ashore with.

    Almost Human: Trauma Mannequins for Medic Training

    They breathe and they bleed, but they're not real human beings. These robots, built by the special effects and fabrication experts at Kernerworks, are incredibly lifelike trauma mannequins used by the military to train field medics. We visit Kernerworks' workshop to learn how these robots are built and get a demo of their trauma simulation capabilities. See photos from our visit here.

    The Low-Budget Movie Gimmicks of Cinema Past

    With so many people watching movies at home with Blu-ray or through streaming services, Hollywood has been desperate to bring people back to theaters. This is why we’ve had the big 3D revival. With the success of films like Gravity, IMAX has also been a hot ticket. And overseas, 4D cinema has been very successful as well.

    4D is a cinema technology that can encompass many different experiences, and one that used to be most associated with the gimmick of Smell-O-Vision. In Asia, some theaters pump scents into the auditorium, providing the audience with the extra "dimension" of smell. There has been some effort to bring theaters like this to the States, and Robert Rodriguez tried a similar version with scratch-and-sniff cards, unsuccessfully, for Spy Kids: All the Time in the World. (Perhaps he shouldn’t have made a soiled diaper one of the scents.)

    As silly as this gimmick may sound, when you look back at cinema history, it was something that was attempted way back in 1960. In fact, there have been many gimmicks that tried to give audiences much more than a regular movie could provide, often with a much smaller budget and fewer resources than the major studios had to play with.

    As we’ve previously reported, the first 3D feature film, Bwana Devil, was an attempt to get people into theaters again, because a brand new technological innovation, television, was keeping a lot of people at home. In fact, the ads for Bwana Devil promised you would be seeing something “Newer than television!”

    And even in the case of 3D, it was a cheaper technology because it was trying to give audiences something spectacular that was much less expensive than Cinerama widescreen, which required major reworking of theaters to support. With other gimmicks that followed, a lot of filmmakers have tried to bring audiences into theaters for cut-rate prices, and many of these innovations are amusing to look back on today. Here are some of my favorites.

    Tested Explains: What Does it Mean to Call on a "Secure Line"?

    If decades of televised White House dramas and Hollywood espionage thrillers have taught us anything, it's that barking "Get me a secure line!" into your phone is about all it takes to establish a private, encrypted call.

    Alas, security is rarely so simple – and for decades, encrypting phone conversations actually took a great deal of work. Only in recent years has encryption become more accessible, and it's still a lot more effort than pop culture would have you believe.

    The secure line's earliest days can be traced back to the development of a machine called SIGSALY at Bell Telephone Laboratories during World War II. It was meant to replace the seemingly scrambled, high-frequency radio communication then employed by the Allies – which, it turned out, eavesdropping Axis forces had already managed to decrypt.

    So what was SIGSALY? "Consisting of 40 racks of equipment, it weighed over 50 tons, and featured two turntables which were synchronized on both the sending and the receiving end by an agreed upon timing signal from the U.S. Naval Observatory," according to the National Security Agency's historical account of the device.

    The two turntables played identical copies of randomly generated noise that was mixed into a call. "One would mix in noise, and the other would basically subtract out that noise. And anybody listening would just hear noise," explained Matthew Green, an assistant research professor at the Johns Hopkins Information Security Institute. "But somebody who subtracted out the noise would hear the phone call."
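    Green's description maps directly onto what cryptographers call a one-time pad. Here's a toy sketch of the idea in code; the mod-6 arithmetic echoes SIGSALY's six-level quantization of speech, but the function names and structure are my own illustration, not the real system:

```python
import random

def make_key(n, seed):
    """The shared phonograph 'record': pregenerated random noise.
    Both ends must hold an identical copy."""
    rng = random.Random(seed)
    return [rng.randrange(6) for _ in range(n)]

def mix(samples, key):
    """Sender adds the noise into the quantized speech levels."""
    return [(s + k) % 6 for s, k in zip(samples, key)]

def unmix(mixed, key):
    """Receiver subtracts the identical noise back out."""
    return [(m - k) % 6 for m, k in zip(mixed, key)]

speech = [0, 3, 5, 1, 2, 4]             # quantized voice levels
key = make_key(len(speech), seed=1944)  # the shared noise record
ciphertext = mix(speech, key)           # an eavesdropper hears only noise
assert unmix(ciphertext, key) == speech
```

The scheme is only secure if the key is truly random, as long as the message, and never reused -- which is exactly why SIGSALY's records had to be distributed, synchronized, and discarded with such care.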

    The system, of course, had its flaws. There were only a handful of SIGSALY machines scattered around the globe, and synchronization between the records at the two ends required millisecond precision. That was assuming, of course, that the person you wanted to call had the most up-to-date record, or key; key delivery, understandably, was "always a problem," the NSA recounts.

    "It was basically what we call a one-time pad," says Gord Agnew, an associate professor at the University of Waterloo's school of electrical and computer engineering, where his past research has focused on communication and cryptography.

    Testing: DJI Phantom 2 Vision+ Quadcopter Drone

    For the past week and a half, I've been testing DJI's new Phantom 2 Vision+ quadcopter. The RC quad, which was officially announced yesterday at the annual NAB (National Association of Broadcasters) convention in Las Vegas, is the first real prosumer quadcopter I've flown. And in my testing time with it, I've become completely addicted to flying it. Days and nights are now framed in my mind in terms of when I can find time to take it out to fly, and how many battery recharge cycles I can fit into an afternoon. Sunny weather is quad flying weather, and I'm constantly combing through my visual memory of San Francisco and Bay Area geography to think about where I can take the quadcopter flying.

    I wasn't kidding when I teased it in last week's podcast: not since the original iPhone and Oculus Rift have I been so impressed with a new consumer technology and its potential mass-market appeal. This isn't just an extremely fun toy for hobbyists and early-adopters: quadcopter technology is at a tipping point where it's ready for mainstream users to fly, hack, and utilize to do amazing things. We've been told that drones are going to change the world, but this is the first product I've used that really makes me believe it.

    We're going to talk about our experience with DJI's $1300 Phantom 2 Vision+ and its underlying technologies in-depth in a video this week, but I wanted to flesh out the salient points from that conversation and explain why I'm so excited about the quadcopter. I've also included a few videos shot with the Vision+'s onboard camera, as well as some stills comparing its image quality with that of the GoPro Hero 3's 1080p video. Let's get started!

    Bits to Atoms: Designing and 3D-Printing Tested Nametags

    Sometimes a project pops into your head and keeps popping up--on the subway, at work, during meetings, while making dinner, while lying in bed trying to sleep--until you just have to do it in order to purge it. It occurred to me that the Tested logo would be perfect for a 3D print! With its simple geometric parts as well as the opportunity to demonstrate a variety of printing techniques, I couldn’t resist. I had made name badges before for my booth at Maker Faire and thought the same idea would work for the Tested logo--the guys need to represent!

    The first step was to sketch out how the logo would break down into parts for printing. Since the Tested logo is made up of simple geometric shapes, the breakdown and modeling were relatively straightforward.

    In the TARDIS article I mentioned using a backdrop picture to build on top of, and Norm supplied me with some Tested logo files, not knowing what they would be used for! A dimmed-down version of the logo was used in the top view, and the geometry was built right on top of it. Since mechanical precision wasn’t needed, a simple cube was stretched out and modified by eye to match up with each piece.

    The ‘Tested’ text could easily be built from scratch since it’s so blocky, but there’s an even easier option if you can find the actual font, which is free at one of my favorite sources, dafont. Most modeling programs will have a text tool that will allow the letters to be extruded into 3D models which saves a ton of time.

    Tested: We Buy a Bitcoin!

    Driven by curiosity, Will and Norm do something incredibly foolish while at SXSW: they buy a Bitcoin from an ATM kiosk. Watch them fumble through the transaction, try to buy real goods with that Bitcoin in Austin, and then attempt to cash out before flying back home. (Here's the follow-up.)

    Show and Tell: Custom LEGO Creations

    For this week's Show and Tell, we're joined by special guest Carl Merriam, a professional LEGO builder who shares several of his most recent creations. Carl talks about competing in the "Iron Builder" challenge, and announces an awesome new job. Check out more of Carl's work here!

    The Problem with High-PPI Windows Display Scaling

    High-resolution screens are getting more and more common. You can get a pretty good 27-inch 2560x1440 screen for $300, and cheap 4K displays are on the horizon--although you shouldn't actually buy one yet. And then there are all those 13-, 14-, and 15-inch laptops with high-resolution screens, from the 2560x1440 panels on the Acer Aspire S7 and Toshiba Kirabook to the 3200x1800 displays on the Samsung Ativ Book 9 Plus and Lenovo Yoga 2 Pro. And, of course, the Retina MacBook Pro, the granddaddy of the genre, in its 2560x1600 13-inch and 2880x1800 15-inch variants.

    These high-res ultrabooks are beautiful when they work properly, but I don't think you should actually buy one. Especially not one to run Windows desktop applications.

    At 27 inches, a 2560x1440 display is amazing, because you can use it without scaling the Windows UI at all. You can comfortably fit two 1280px-wide windows side-by-side, which means you can get the benefit of two smaller monitors in one. Here's what that looks like. There's so much room for activities!

    But that's 2560x1440 at 27 inches, or about 109 pixels per inch. Cramming the same number of pixels (or more!) into a 13.3-inch screen ups your pixel density to 221PPI. It's a much sharper image, but if you leave Windows display scaling at 100%, the desktop user interface becomes unbearably tiny--just look at those taskbar icons.
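    The pixel-density figures above fall out of simple geometry: divide the diagonal pixel count by the diagonal size in inches. A quick sketch of the calculation:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch, from a screen's resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_in

# The same 2560x1440 resolution at two very different sizes:
print(round(ppi(2560, 1440, 27.0)))   # 27-inch desktop monitor -> 109
print(round(ppi(2560, 1440, 13.3)))   # 13.3-inch ultrabook     -> 221
```

Roughly double the density at the same resolution, which is exactly why an unscaled Windows desktop that's comfortable on the monitor becomes unusably tiny on the laptop.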

    Teaching Hollywood How To Hack

    It took a police raid in 1987 to finally scare Dave Buchwald straight. Well, mostly straight. He wouldn't claim to have completely reformed after the incident – remaining hatless, his words – but he certainly wasn't interested in ever going to jail.

    Buchwald was a member of the hacker group Legion of Doom in his late teens, and the raid was meant to scare him. It was a warning, a slap on the wrist. But while Buchwald was never actually charged, others weren't so lucky. The following year, the Chicago Tribune reported on a hacker named Shadow Hawk who was alleged to have stolen "an artificial intelligence program that had not even hit the market." It reads today like something out of a William Gibson novel.

    Shadow Hawk pleaded guilty. He was fined and served his time. Being a hacker in the eyes of the government, especially as the 1980s drew to a close, wasn't exactly something that earned you a gold star. But for a young screenwriter named Rafael Moreu, that animosity was exactly what made the misunderstood community that Buchwald and others were a part of a story he wanted to tell.

    By the early 1990s, Buchwald was working for a private investigator, using the skills he picked up hacking in a more, let’s say, constructive way. He was attending monthly hacker meetings, organized by Emmanuel Goldstein, co-founder of a hacker quarterly magazine called 2600. And it was there that Buchwald met Moreu. Buchwald, it just so happened, was looking for consulting gigs, and Moreu happened to have some work.

    When the film Hackers was released in 1995 – an oddball tale of a still-nascent net, starring a then-unknown Angelina Jolie in one of her first Hollywood films – Hacking Consultant was Buchwald's credit on the film.

    Contrary to what the quality of popular film and television might have you think, yes, these people do exist. They are people, like Buchwald, who have worked behind the scenes to ensure that Hollywood gets its depiction of hackers, computers and cybersecurity mostly right – to translate the technical complexity of science and technology into something that casual audiences can understand.

    And it's a good thing, too, because there are countless memorable cases where film and television get things so terribly wrong.

    The 3D Godzilla Movie That Almost Was

    The next big remake of Godzilla is just around the corner, and the buzz from the trailer is pretty good so far. Many fans, myself included, are hoping that this is the Godzilla remake that will finally get it right. Many of the right elements are there: nuclear testing, the monster towering over and knocking down skyscrapers (instead of weaving between them), and even hints at other monsters like Rodan--things that the 1998 film never got. Another difference is that this will be a Godzilla movie released in 3D. But if the news of an American incarnation of Godzilla in 3D sounds familiar for some reason, you might recall that back in 1983 there was an attempt to make a big U.S. version of the big G in 3D, one that was in development for several years before it finally fell apart.

    Reports of a 3D Godzilla first started gaining traction in the summer of ’83, when 3D was making a minor comeback. That summer saw a big influx of movies in the format, such as Jaws 3D and Friday the 13th Part 3D, which at the time was the highest-grossing 3D movie in history. In fact, the director of that third Friday, Steve Miner, was also going to helm the 3D Godzilla film.

    Sculptor Shawn Nagle's diorama using William Stout's Godzilla designs.

    As Miner told writer Steve Ryfle, “I had always been a fan of Godzilla since I was a kid. Once seeing it as an adult, I realized that this could be remade as a good movie. I had just done Friday the 13th in 3D, and wanted to do a good movie in 3D.” The screenplay for this version of Godzilla was written by Fred Dekker, who later directed The Monster Squad and RoboCop 3. Dekker was honored to get the assignment (it was his first big Hollywood job), but he wasn’t a huge Godzilla fan, and he wanted to elevate the monster genre to a higher level. For everyone involved, the whole idea was to treat this movie seriously, and make it on a big, Spielberg-blockbuster level instead of lowballing it.

    Artist William Stout, who was a production designer on Conan the Barbarian and who also designed the poster for Ralph Bakshi’s Wizards, did extensive storyboards for Miner’s Godzilla, and he’s very proud of his work on it to this day. (Stout calls this incarnation of Godzilla “the greatest film project that never happened.”)

    William Stout's concept art for the proposed 1983 Godzilla 3D.

    Miner wanted a lot of “presentation art” for Godzilla, so the studios could get a good idea of what the finished movie would look like. (A great deal of “presentation art” had to be created for Fox to understand Star Wars.) Stout was very impressed with the screenplay he was helping bring to life, telling us, “We were working from a great script, I think Fred Dekker really outdid himself with it.”

    10 Things You Should Know about DirectX 12

    It’s about efficiency, not new features

    The new DirectX will add a few new rendering features, but those new features aren’t as important as efficiency. Direct3D 12 has a thinner abstraction layer between the operating system and the hardware. Game developers will have more control over how their code talks to graphics hardware. Overhead is reduced substantially. Time for threads to complete has been reduced by 2-5x in some cases.

    Only Direct3D 12 has been discussed

    Most gamers tend to think of Direct3D when DirectX is mentioned, and the focus of the recent announcement is indeed on D3D. There was no discussion of audio, game controller interfaces, Direct2D, or other aspects of DirectX. Part of the reason for this early peek at DirectX 12 is that AMD’s Mantle, a direct-to-metal API with similar ambitions, was starting to get some traction. Microsoft no doubt worried that API fragmentation would return game development to the bad old days, when you couldn’t count on any given game running on any given graphics card.

    DirectX 12 is now a console API

    Direct3D 12 will run on the Xbox One. The execution environment has been described as “console-like,” which probably means the layers are thinner. This makes sense today, since most modern GPUs are highly parallel and highly programmable. What this means for future versions of Windows is unknown, since Direct3D is the rendering API for Windows 8 and beyond. I hope Microsoft doesn’t “freeze” the Windows API at D3D 8. We’d once again be in a situation where application developers might end up using different APIs than game developers.

    What Astronauts Do When There’s Nothing to Do

    Whether it’s stoplights, your doctor’s office, or a popular restaurant on Saturday night, waiting is an inescapable aspect of modern life. For many of us, the pain of waiting is rarely much worse than being behind some indecisive couple at the Redbox kiosk. But even that trivial torment can be eased with time-killing apps on your phone. Now imagine that you have a few hours to kill before fulfilling your life’s greatest ambition, with practically nothing to do, all while firmly strapped to a fully reclined seat atop a few million pounds of highly explosive fuel…and no smartphone to check Twitter. That was the situation that many Space Shuttle astronauts found themselves in. That stoplight doesn’t seem so bad now, does it?

    Much has been written about the experience of riding a spaceship to orbit…but what about the wait to get started? (photo credit NASA)

    When astronauts arrived at the launch pad via the gleaming Astrovan motor home, there was much more to do than just pile into the shuttle and light the engines. There could have been up to seven astronauts on any given flight, and just strapping them into their seats took nearly an hour. Then the entry hatch had to be closed, sealed, and pressure checked…along with a laundry list of other vital tasks. When all was said and done, an astronaut could find themselves in that seat for as long as five hours before liftoff. I don’t even want to sit in my La-Z-Boy for that long, much less be shackled with a five-point harness to a rigid seat that was designed for lightness above all else.

    While physical comfort (or lack thereof) is one element of sitting on the launch pad, the mental aspect of processing the pending, and rather dramatic, events must have been equally unsettling. Whether their primary emotion was excitement, fear, or something else entirely, I don’t see how anyone could dismiss the fact that launching into space is a very big deal. The last few hours of the countdown were likely among the least frenetic periods since the crew had begun training for the flight months--or years--earlier. The ways in which astronauts coped with this forced inactivity while perched at the edge of such a rare and dynamic human experience are surely as varied as the people themselves.

    Hands-On: Virtuix Omni Treadmill with Oculus Rift

    We strap on a harness and step on board the Virtuix Omni motion tracker at this year's Game Developers Conference. The Kickstarter-backed treadmill system pairs with an Oculus Rift development kit to simulate walking and running in a first-person shooter. It took a little getting used to, but the experience was unlike anything we've tried before.

    Testing Maxwell: Nvidia's Six-Inch Hammer

    Nvidia recently unleashed its latest graphics architecture on the world, Maxwell. The first iteration is the GTX 750, a GPU that will be the core of a graphics card whose asking price will typically be under $150. Two variants of the GTX 750 will be shipping, the GTX 750 and the GTX 750 Ti. I’ll get into the differences shortly.

    Once upon a time, the first iteration of a new GPU architecture would show up in powerful, power-hungry beasts of graphics cards. That changed a bit when Nvidia introduced its first Kepler card aimed at gamers, the GTX 680. The GTX 680 took the gaming world by surprise, delivering leading-edge performance while staying miserly on power consumption and low on fan noise, particularly when compared to Nvidia’s earlier GTX 580 and AMD’s Radeon HD 7970.

    Still, the GTX 680 was a high-end card, even though it broke the mold a bit as to what a high-end card should be. The GTX 750 Ti’s typical asking price is $150. The overclocked EVGA GTX 750 Ti I’ll be looking at is $169, but it’s both overclocked and ships with a 2GB frame buffer. You can find GTX 750 Ti cards from several manufacturers running at reference clocks for $149, and GTX 750 1GB cards can be had for as little as $119. I’ll take a look at the performance of the EVGA GTX 750 Ti SC and an Nvidia GTX 750 Ti reference card, which also has a 2GB frame buffer but runs at standard core clocks.

    But first, let’s look at Maxwell.

    Star Wars and the Explosion of Dolby Stereo

    When Star Wars came out on May 25, 1977, the cinema experience was changed forever. While some critics feel that Star Wars changed movies for worse by contributing to the modern blockbuster syndrome, there’s no doubt that for technology and special effects Star Wars was a huge leap forward. In particular, the way Star Wars cemented Dolby Stereo’s dominance in sound transformed the way we would listen to movies in theaters and at home.

    Sean Durkin, the director of corporate communications at Dolby, gives us a sense of what it was like before that day. “When you think about the ’70s, it represents a new era for film. When people think of Star Wars, they think of really iconic moments, and one of them comes early in the film, with the massive Imperial destroyer chasing the rebel ship. That was the first Dolby experience for a lot of people. It gave people a different way to think about sound in a movie, and filmmakers and sound designers now had the ability to deliver these big experiences.”

    Image credit: SFgate.com

    As legendary sound designer Walter Murch (Apocalypse Now) said in the book Easy Riders, Raging Bulls, “Star Wars was the can opener that made people realize not only the effect of sound, but the effect that good sound had at the box office. Theaters that had never played stereo were forced to do it if they wanted Star Wars.” The executives at Dolby said, “We need our own Jaws” to make Dolby a force to be reckoned with, and it turned out to be Star Wars, because it took a movie that big to push the technology through and finally make it stick.

    Bay Area filmmakers like Francis Ford Coppola and George Lucas were always fascinated with the possibilities of sound. Coppola worked closely on his films with Murch, and Lucas’s sound wizard was Ben Burtt, who created R2-D2’s beeps, Darth Vader’s heavy breathing, the hum of the light sabers, and more.

    Stephen Katz began working at Dolby in 1974, and he was also a sound consultant on Star Wars. He remembered the day Ioan Allen called, telling him to come up to San Francisco to meet with a producer and director who were interested in using Dolby in their movie. Katz flew up and met George Lucas and Gary Kurtz, who said they initially wanted to use Sensurround in Star Wars, a short-lived gimmick that was used in Earthquake and several other films.