    Tested Explains: How Google's Project Ara Smartphone Works

    Project Ara is real, and Google has its fingers on the pulse of the technologies required to make modular smartphones a reality. Given the overwhelming public response to the Phonebloks concept, it's something that users seem to want, too. But whether or not Project Ara modular phones have a future in the smartphone marketplace will largely depend on whether or not there's a strong hardware ecosystem to support it. The custom PC market wouldn't have flourished a decade ago if component manufacturers weren't making user-friendly video cards, storage drives, motherboards, and power supplies--the building blocks of a PC. That's the point of this week's Ara Developers Conference: getting partners excited and educated about how they can build hardware to support that vision for a modular phone.

    The two-day conference, which was also streamed online, coincided with the release of the Project Ara MDK, or Module Developers Kit. The MDK provides the guidelines for designing Ara-compatible hardware and, along with the technical talks presented at the conference, offers the first clear look at the technologies that make Ara possible, if not yet completely practical. I attended the conference and read through the MDK to get a high-level understanding of Google's plans for Ara, which go far toward addressing the concerns we and experts have had about the modular phone concept. I'm not yet a believer, but at least this clearly isn't a pipe dream. The following are what I consider the important takeaways from what Google has revealed so far.

    A brief note: the conference was also the first public showing of a working Project Ara prototype (past photos have been of non-functioning mockups), though the unit was unable to boot up and had a cracked screen. Somewhat appropriate, given that both the main processing unit and the screen are replaceable modules.

    Project Ara consists of two core components: the Endoskeleton and the Module

    On the hardware side, Google has laid out specific guidelines for how Project Ara phones can be built. The most important piece of hardware is the chassis, or what Project Ara leads are calling the "Endoskeleton." Think of this as an analogue to a PC case--it's where all the modular components attach. In fact, it reminds me a lot of the design of Razer's Project Christine, in that a central "spine" traverses the length of a Project Ara phone, with "ribs" branching out to split the phone into rectangular subsections. The Endoskeleton (or Endo) is measured in blocks, with a standard phone being a 3x6 grid of blocks. A mini Ara phone spec would be a 2x5 grid, while a potential large phone size would be a 4x7 grid.

    Fitting into the spaces allotted by the Endo's structure are the Project Ara Modules, the building blocks that give the smartphone its functionality. These modules, which can be 1x1, 2x1, or 2x2 blocks, are what Google hopes its hardware partners will develop to sell to Project Ara users. Modules can include not only basic smartphone components like the display, speakers, microphone, and battery, but also accessories like IR cameras, biometric readers, and other interface hardware. The brains of a Project Ara phone--the CPU and memory--live in a primary Application Processor module, which takes up a 2x2 slot. (In the prototype, the AP was running a TI OMAP 4460 SoC.) While additional storage can be attached in separate modules, you won't be able to split up the AP--processor, memory, SD card slot, and other core operational hardware go hand in hand.
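    To make the geometry concrete, here's a minimal Python sketch of the grid math described above. The Endo sizes and module footprints come straight from the MDK summary; the placement-checking logic and all the names (`fits`, `ENDO_SIZES`, and so on) are our own illustration, not anything from Google's kit.

```python
# Hypothetical model of the Endo/module geometry. Grid sizes and legal
# module footprints are taken from the MDK summary above; everything
# else is an illustrative sketch.

ENDO_SIZES = {"mini": (2, 5), "medium": (3, 6), "large": (4, 7)}
MODULE_FOOTPRINTS = {(1, 1), (2, 1), (2, 2)}

def fits(endo, placements):
    """Check that each (row, col, height, width) placement uses a legal
    footprint, stays inside the Endo, and overlaps no other module."""
    rows, cols = ENDO_SIZES[endo]
    used = set()
    for r, c, h, w in placements:
        if (h, w) not in MODULE_FOOTPRINTS and (w, h) not in MODULE_FOOTPRINTS:
            return False
        for dr in range(h):
            for dc in range(w):
                cell = (r + dr, c + dc)
                if cell in used or not (r + dr < rows and c + dc < cols):
                    return False
                used.add(cell)
    return True

# A 2x2 Application Processor module plus a 2x1 battery on a standard Endo:
print(fits("medium", [(0, 0, 2, 2), (0, 2, 2, 1)]))  # True
```

    The same check rejects layouts where modules overlap or spill off the grid, which is roughly the constraint a module designer would work within.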

    Tested Explains: What Does it Mean to Call on a "Secure Line"?

    If decades of televised White House dramas and Hollywood espionage thrillers have taught us anything, it's that barking "Get me a secure line!" into your phone is about all it takes to establish a private, encrypted call.

    Alas, security is rarely so simple – and for decades, encrypting phone conversations actually took a great deal of work. Only in recent years has encryption become more accessible, and it's still a lot more effort than pop culture would have you believe.

    The secure line's earliest days can be traced back to the development of a machine called SIGSALY at Bell Telephone Laboratories during World War II. It was meant to replace the seemingly scrambled, high-frequency radio communication then employed by the Allies – which, it turned out, eavesdropping Axis forces had already managed to decrypt.

    So what was SIGSALY? "Consisting of 40 racks of equipment, it weighed over 50 tons, and featured two turntables which were synchronized on both the sending and the receiving end by an agreed upon timing signal from the U.S. Naval Observatory," according to the National Security Agency's historical account of the device.

    The two turntables played identical copies of randomly generated noise that was mixed into a call. "One would mix in noise, and the other would basically subtract out that noise. And anybody listening would just hear noise," explained Matthew Green, an assistant research professor at the Johns Hopkins Information Security Institute. "But somebody who subtracted out the noise would hear the phone call."

    The system, of course, had its flaws. There were only a handful of SIGSALY machines scattered around the globe, and synchronization between the records on the two ends required millisecond precision. That was even assuming, of course, that the person you wanted to call had the most up-to-date record, or key – delivery of those keys was, understandably, "always a problem," recounts the NSA.

    "It was basically what we call a one-time pad," says Gord Agnew, an associate professor at the University of Waterloo's school of electrical and computer engineering, where his past research has focused on communication and cryptography.
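    The add-noise/subtract-noise scheme Green and Agnew describe maps neatly onto a modern one-time pad. SIGSALY itself was analog, mixing phonograph records of noise into the audio; the digital equivalent of that add/subtract step is XOR with a random key of the same length as the message. The function names below are our own, a sketch of the principle rather than SIGSALY's actual machinery.

```python
import secrets

def mix_in_noise(message: bytes):
    """Generate one-time random 'noise' (the shared key) and mix it into
    the signal. XOR plays the role of SIGSALY's analog noise addition."""
    noise = secrets.token_bytes(len(message))  # the shared 'record'
    on_the_wire = bytes(m ^ n for m, n in zip(message, noise))
    return on_the_wire, noise

def subtract_noise(on_the_wire: bytes, noise: bytes) -> bytes:
    """Only a listener holding the identical noise can recover the call."""
    return bytes(c ^ n for c, n in zip(on_the_wire, noise))

ciphertext, key = mix_in_noise(b"get me a secure line")
print(subtract_noise(ciphertext, key))  # b'get me a secure line'
```

    An eavesdropper without the key sees only uniformly random bytes, which is exactly the "anybody listening would just hear noise" property Green describes -- and, as with SIGSALY's records, the key can never be reused.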

    Rise of the Patent Troll

    Kirby Ferguson, the filmmaker behind the excellent "Everything is a Remix" video series, produced and directed this new short explaining the US patent system and the rise of patent trolling companies that target small businesses and individuals in costly litigation. It's an important PSA about a topic most people don't think about, even though patent trolling threatens the growth of our economy and stifles innovation. Adam Carolla's podcast show is currently being sued for allegedly infringing on a patent for a "system for disseminating media content representing episodes in a serialized sequence"--essentially podcasting. Carolla has a legal defense campaign set up to fight this lawsuit, and you can find out more about how to encourage government to reform the patent system here.

    In Brief: Gender Responses to Virtual Reality Simulations

    While the internet has a laugh over the White Guys Wearing Oculus Rifts Tumblr, there's some genuine discussion about how biological factors may affect a person's experience of virtual reality. To put it bluntly, there's the possibility that women may not be as responsive to current virtual reality tech as men. Danah Boyd, a researcher at Microsoft Research and Assistant Professor at New York University, recently shared the results of a 2000 study she conducted on how individuals respond to the 2D cues that virtual reality systems use to simulate 3D space. Boyd, who had a poor experience with her university's CAVE system, found that biological men were more likely to prioritize one type of VR cue--motion parallax--while women were more susceptible to shape-from-shading as a spatial cue. VR tech relies heavily on motion parallax, which could broadly explain why Boyd and other female research subjects got disoriented more easily in her tests. The results aren't by any means conclusive about gender differences in VR use, but Boyd's point is that more research should be conducted by companies like Oculus so that they can take these factors into consideration.

    10 Ways To Lifehack Your Next Flight

    There are few things quite as time-consuming and annoying as airline travel. Sure, it’s miraculous to jet through the air at over 500 miles an hour, but all the stuff that goes along with it can really get old. Here’s a guide to using your scientific and technological know-how to lifehack the airline experience into something much more bearable.

    Where We Went Wrong Buying a Bitcoin from an ATM

    If you've watched our video from today, you've caught a glimpse of the saga that was our attempt to buy and then subsequently sell a Bitcoin at SXSW. In retrospect, it wasn't a very bright idea. But we were curious, not only about the prospect of using cryptocurrency as a fungible good for making purchases, but also about the promised ability to turn Bitcoin into real cash dollars. Both of those goals were theoretically possible that week in Austin, which had hosted the recent Texas Bitcoin Conference--spurring several local businesses (read: food trucks) to start accepting Bitcoin as a novel marketing tactic. Austin was also home to one of the first Bitcoin ATM operators in the nation, with no fewer than three places in the city to make automated in-person transactions. Yes, here was a machine that promised not only to slurp up your dollars and transfer fractions of a Bitcoin to your digital wallet, but also to let you cash out of virtual currency for Uncle Sam-backed bills.

    Oh, if only it was that easy.

    Something we didn't really explain in the video (because we frankly still don't completely understand it ourselves) is how the Bitcoin ATM system worked. The ATMs are built by a company called Robocoin, a Las Vegas-based startup founded by two brothers who were previously making Bitcoin-for-cash transactions locally, in person. According to a Wired report, Mark and John Russell wanted to find a way to automate the process using a machine, while still working within the still-evolving regulatory guidelines set by the US government for Bitcoin transactions. Naturally, they teamed up with a Nevada slot machine maker to start making prototypes. Honestly, the warning signs were all there.

    Because of those tricky (and still muddy) regulatory requirements, Robocoin doesn't actually run its kiosks. It just makes them and sells them to operators for $20,000 a pop. Its first customers set up shop in Canada, where Bitcoin trading regulations are more lax--there, the machines don't need identity verification to take or dispense cash. Austin-based Bitcoin Agents was the first operator to install Robocoin machines in the States (a Hacker Dojo in Mountain View wasn't far behind), putting machines in three locations timed to open with the Texas Bitcoin Conference and SXSW. Handlebar was where we ended up buying our Bitcoin, and where I spent the next few days hanging out trying to get it to give our money back.

    I still have that .97 Bitcoin in my digital wallet.

    Tested: We Buy a Bitcoin!

    Driven by curiosity, Will and Norm do something incredibly foolish while at SXSW: they buy a Bitcoin from an ATM kiosk. Watch them fumble through the transaction, try to buy real goods with that Bitcoin in Austin, and then attempt to cash out before flying back home. (Here's the follow-up.)

    The Problem with High-PPI Windows Display Scaling

    High-resolution screens are getting more and more common. You can get a pretty good 27-inch 2560x1440 screen for $300, and cheap 4K displays are on the horizon--although you shouldn't actually buy one yet. Then there are all those 13-, 14-, and 15-inch laptops with high-resolution screens, from the 2560x1440 panels available on the Acer Aspire S7 and Toshiba Kirabook to the 3200x1800 displays on the Samsung Ativ Book 9 Plus and Lenovo Yoga 2 Pro. And, of course, the Retina MacBook Pro, the granddaddy of the genre, with its 2560x1600 13-inch and 2880x1800 15-inch form factors.

    These high-res ultrabooks are beautiful when they work properly, but I don't think you should actually buy one. Especially not one to run Windows desktop applications.

    At 27 inches, a 2560x1440 display is amazing, because you can use it without scaling the Windows UI at all. You can comfortably fit two 1280px-wide windows side-by-side, which means you can get the benefit of two smaller monitors in one. Here's what that looks like. There's so much room for activities!

    But that's 2560x1440 at 27 inches, or about 109 pixels per inch. Cramming the same number of pixels (or more!) into a 13.3-inch screen ups your pixel density to 221PPI. It's a much sharper image, but if you leave Windows display scaling at 100%, the desktop user interface becomes unbearably tiny--just look at those taskbar icons.
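    The pixel-density figures above fall out of a one-line formula: diagonal resolution divided by diagonal screen size. A quick sketch (the function name is ours):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: length of the pixel diagonal over the
    physical diagonal of the panel."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1440, 27.0)))   # 27-inch desktop monitor -> 109
print(round(ppi(2560, 1440, 13.3)))   # 13.3-inch ultrabook panel -> 221
```

    Same pixel count, half the diagonal, roughly double the density -- which is exactly why an unscaled desktop UI that's comfortable at 27 inches becomes unreadable at 13.3.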

    10 Ways To Browse The Internet Anonymously

    Electronic privacy is one of the most contentious issues of the modern age, with both private corporations and the government having an excessive interest in what we do online. If you’re starting to get paranoid, there’s hope. Here are ten methods to get on the Internet without disclosing personal info.

    10 Things You Should Know about DirectX 12

    It’s about efficiency, not new features

    The new DirectX will add a few new rendering features, but those features aren’t as important as the efficiency gains. Direct3D 12 has a thinner abstraction layer between game code and the graphics hardware, so developers will have more control over how their code talks to the GPU. Overhead is reduced substantially: the time for rendering threads to complete has dropped by 2-5x in some cases.

    Only Direct3D 12 has been discussed

    Most gamers tend to think of Direct3D when DirectX is mentioned, and the focus of the recent announcement is indeed on D3D. There was no discussion of audio, game controller interfaces, Direct2D, or other aspects of DirectX. Part of the reason for this early peek at DirectX 12 is that AMD’s Mantle, a direct-to-metal API with similar ambitions, was starting to get some traction. Microsoft no doubt worried that API fragmentation would return game development to the bad old days, when you couldn’t run just any game on any graphics card.

    DirectX 12 is now a console API

    Direct3D 12 will run on the Xbox One. The execution environment has been described as “console-like,” which probably means the layers are thinner. This makes sense today, since most modern GPUs are really highly parallel and highly programmable. What this means for future versions of Windows is unknown, since Direct3D is the rendering API for Windows 8 and beyond. I hope Microsoft doesn’t “freeze” the Windows API at D3D 8. We’d once again be in a situation where application developers might end up using different APIs than game developers.

    Hands-On: Virtuix Omni Treadmill with Oculus Rift

    We strap on a harness and step on board the Virtuix Omni motion tracker at this year's Game Developers Conference. The Kickstarter-backed treadmill system pairs with an Oculus Rift development kit to simulate walking and running in a first-person shooter. It took a little getting used to, but the experience was unlike anything we've tried before.

    The Art of Photogrammetry: Replicating Hellboy’s Samaritan Pistol!

    We’ve gone over the basic concepts and photography techniques on how to capture ideal images for photogrammetry 3D scanning. Now let's get into the meat of the subject and start processing our data so we can see some results. The case study we're going to use is a replica prop from the movie Hellboy, which I found at the Tested office. I spent an afternoon photographing the prop, and processed it using PhotoScan software. Here's how that process went, and what you can learn from it.

    Step 1: Inspecting Your Photos

    For this photogrammetry scan, I used the turntable method to capture photos of the prop pistol, the “Samaritan” from the movie Hellboy. Since the prop is an irregular shape, I didn't put it on an actual turntable or Lazy Susan. Instead, I propped it up on the end of a C-stand pole, which allowed me to turn it a few degrees between shots. I took one "ring" of pictures from slightly above the prop, and another from slightly below. That gave me about 45 photos total. Click here for an example of how one full rotation of photos looked.

    Since the front and the back of the gun aren’t visible from the main sequence, I took another set of photos of the front, and another of the back of the pistol.
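    If you want to plan a capture like this in advance, the arithmetic is simple: divide the photo budget across the rings, and divide 360 degrees by the frames per ring to get the rotation step. A small sketch, assuming the rough numbers from the text (two rings, ~45 frames); the function and its name are ours:

```python
# Back-of-the-envelope planner for a turntable photogrammetry capture.

def capture_plan(rings: int, total_photos: int):
    """Return (frames per ring, degrees to rotate between shots)."""
    per_ring = total_photos // rings
    step_deg = 360 / per_ring
    return per_ring, step_deg

per_ring, step = capture_plan(rings=2, total_photos=44)
print(per_ring, round(step, 1))  # 22 frames per ring, ~16.4 degrees apart
```

    More frames per ring means more overlap between neighboring shots, which generally helps the alignment step later -- at the cost of a longer session and more processing time.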

    Inside MIT's Tangible Media Group

    Science Friday visits MIT's Tangible Media Group to get a better look at two projects being developed by PhD candidates at the school. We first heard about the inFORM project last November, but this is the best look we've had of how it works so far. A second project, "jamSheets," uses pneumatic pumps to change the hardness and texture of paper and fabric surfaces.

    Research Robots That Get Us Excited (and Terrified)

    Ah, robots – what would we do without them? Mechanical automation makes almost every aspect of the scientific process easier. But research robots are becoming increasingly complex and powerful. Today, we’ll spotlight ten amazing examples that will thrill and horrify you.

    Bits to Atoms: 3D Modeling Best Practices for 3D Printing

    So you’ve managed to build your first 3D creation using modeling software. You send it to the printer and it comes out looking like something sent through a cosmic spatial anomaly. What the heck happened? Building your model on the computer is just the first step to ensure a proper 3D print. Today, we'll go over best practices for modeling and how to prep those models for a good print.

    Photo credit: Tony Buser

    Neatness Counts

    Taking the time to sculpt a neat, clean computer model will prevent headaches down the road. This is particularly true of polygon models, where deleting an edge, face, or vertex can quickly make a model unprintable. Boolean operations (adding and subtracting parts) are handy while building models, but they can lead to messy geometry, since two pieces of the mesh are being combined or subtracted from one another.

    Sloppy modeling can easily occur just in the process of figuring out how to build something. I will often build a quick, rough model to work through the layout, what parts need to be made, and how to build them, then rebuild the whole thing as a much cleaner model based on the rough version. One of the best pieces of advice I got from my modeling mentor is, ‘don’t be afraid to rebuild something’. It sounds like a drag, but rebuilding a model from scratch always goes quicker than the original build did, and it results in a cleaner model that uses what was learned from the first version.

    If modeling with polygons, it’s in your interest to keep the mesh in quads (each face is four-sided) and avoid “n-gons” (in modeling, any polygon that is not four-sided). Modeling with quads makes adjusting the model much easier, whereas n-gons can behave unpredictably when the mesh is subdivided or deformed. In general, any modeling program will make it easy to model in quads, since any primitive (cube, sphere, cone, torus, etc.) it creates will automatically be made out of quads.
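    A quick mesh-hygiene check along these lines is easy to script. The sketch below represents each face as a tuple of vertex indices (a common convention in OBJ-style mesh data) and flags every face that isn't a quad, using the article's definition of an n-gon; the mesh and function name are made-up examples:

```python
# Flag any face that isn't four-sided, per the quads-only rule above.

def non_quad_faces(faces):
    """Return indices of faces whose vertex count is not 4."""
    return [i for i, face in enumerate(faces) if len(face) != 4]

mesh = [
    (0, 1, 2, 3),        # quad -- fine
    (4, 5, 6),           # triangle
    (7, 8, 9, 10, 11),   # five-sided n-gon -- will subdivide badly
]
print(non_quad_faces(mesh))  # [1, 2]
```

    Running a check like this before export is a cheap way to catch the messy geometry that Boolean operations tend to leave behind.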

    Hands-On: Oculus Rift Development Kit 2 Virtual Reality Headset

    New Oculus VR hardware! We get our hands on the second development kit for the Oculus Rift at GDC 2014 and chat with Oculus VR's Nate Mitchell about the roadmap to the final consumer release (plus their thoughts on Sony's VR efforts). Here's how DK2 differs from past prototypes, our impressions of it with new tech demos, and why you should still hold off until the final product.

    In Brief: Sony Announces Project Morpheus VR Headset

    I think this is something most people saw coming, but Sony today announced that it has been working on a virtual reality headset for its PlayStation 4 console. And thankfully it's not the virtual reality headset prototype we saw at CES, which was little more than a motion tracker slapped on Sony's HMZ-T3 display. Unveiled at a GDC press conference, Project Morpheus (as in Dream of the Endless, not the character from The Matrix) is a head-tracking HMD that's much more Oculus Rift than personal movie theater, with specs that look very similar to Oculus VR's Crystal Cove prototype. Morpheus uses a 5-inch 1080p display (LCD instead of Oculus' AMOLED), tracks with a combination of accelerometer, gyro, and camera, and has optics that display games with a 90-degree FOV. Sony also carted out the keywords that will be familiar to anyone following modern VR work: presence, low latency, and 3D audio. It's apparently something Sony has been exploring since 2010, when its labs attached Move controllers to an HMD. This being GDC, Sony of course announced plenty of software partners for its VR initiative, including some developers that have already shown work on the Oculus Rift. There's no launch timeframe for Project Morpheus, but I bet that this holiday season is going to be really interesting for fans of virtual reality. What do you guys think of Sony's announcement?

    How Smart Cars and Traffic will Change the World

    Google’s self-driving car started out as a way for the company to collect street view photographs for its Google Maps application and evolved into something completely different – a machine intelligence that could change the way people interact with their automobiles forever. Today, we’ll take a trip into the not so distant future to examine what autonomous cars would mean for you.