We test MaceTech's programmable LED glasses at this year's Bay Area Maker Faire. The glasses are basically a printed circuit board with single-color LED lights, which can be programmed to animate and display any message you want.
This year’s Google I/O came and went without a new version of Android, and there was much griping on the internet. Even though Google isn’t required to announce anything of consumer interest at a developer event, the past few I/O conferences have made it clear this is Google’s big software show. Everyone watches and waits on the big reveal, but this year we got nothing--or did we?
While it may appear at first blush that Google I/O 2013 was a bust, it was actually an incredibly important step for Google. This is the event when Google finally beat fragmentation.
If you paid close attention to the developer talks and API announcements, there were some enticing tidbits about the future of Android. For example, Google made it clear that Bluetooth Low Energy (AKA Bluetooth SMART) was coming to Android, but not under existing OS versions. No, this Bluetooth 4.0 implementation would be part of the platform in API level 18. Jelly Bean 4.2 is API level 17. There were also various server log and benchmark leaks -- the kind of stuff we always see when a new OS is imminent.
Hints like this indicate there is a newer version of Android that is far enough along to have a finalized Bluetooth stack and is being tested on internal Google devices. Rumors can’t always be trusted, but the word is that Google was prepared to announce Android 4.3 at Google I/O, but decided to hold back and make a point. What point? Simply, Google doesn’t need a new version of Android to roll out new services to users.
Look at the Android announcements that did happen: Hangouts, Google Play Games, app data sync, Play Music All Access, and synced notifications. Those are neat features, but no one is going to convince Android fans it’s as sexy as a new version of the platform. However, the impact might be even greater than if Google had announced Android 4.3.
Imagine that Google had shown off a new version of Android; let’s even say that it was extremely impressive. After hearing the news, most Android users would look at their Galaxy S3 or Droid RAZR Maxx HD, and feel a mixture of annoyance and apathy. When Google announces a new version of Android, it only has an immediate effect on Nexus owners, which make up a small percentage of total Android users. The new services Google announced affect almost every Android phone in the world.
Phones running Gingerbread or higher got these new features. According to the latest platform numbers, that’s nearly 95% of active Android devices. Google is proving that it can improve the Android experience without waiting for every OEM and carrier to get device updates deployed. That's worth a small delay.
It’s safe to say that Nvidia is really competing with itself at this point. The current GeForce GTX 680 is roughly even in performance with AMD’s Radeon HD 7970 GHz Edition, but it’s much quieter and uses less power. The GeForce Titan outperforms AMD’s single-GPU flagship by a wide margin, but it costs a cool grand, putting it out of reach of most users.
Enter the GeForce GTX 780. At first blush, it seems like a “Baby Titan”, but that would be inaccurate. Let’s look at the base specs, compared to both the Titan and the GTX 680.
| Feature | GTX 680 | GTX 780 | GTX Titan |
| --- | --- | --- | --- |
| Memory Type | GDDR5 (6 Gbps) | GDDR5 (7 Gbps) | GDDR5 (6 Gbps) |
| Transistors | 3.5 billion | 7.1 billion | 7.1 billion |
| Core Clock Speed (ref) | 1006 MHz | 863 MHz | 836 MHz |
| Boost Clock | 1058 MHz | 900 MHz | 876 MHz |
| Noise Under Load (ref) | 46 dBA | 43 dBA | 46 dBA |
Given that the GTX 780 uses the same GPU chip as the GTX Titan, but with roughly 15% fewer shader cores and half the memory, the GTX 780 offers about 80% of the gaming performance of a Titan, as we’ll see shortly. Take a look at that memory speed, too: 7000 MHz (effective), or 1 Gbps more throughput per pin than the Titan or GTX 680. There’s no lack of memory bandwidth on the GTX 780. However, Nvidia told us that the GTX 780 would only have about a quarter of the double-precision floating point performance of Titan. In other words, the GTX 780 will be a great gaming card, but won’t come close to Titan for high-end GPU compute.
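For the curious, peak memory bandwidth falls out of simple arithmetic: the per-pin data rate times the memory bus width, divided by eight bits per byte. Here’s a quick sketch using the data rates from the table above; note that the bus widths (256-bit for the GTX 680, 384-bit for the GTX 780 and Titan) come from Nvidia’s published spec sheets, not from anything in this article.

```python
def memory_bandwidth_gbs(rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps)
    times the bus width in bits, divided by 8 bits per byte."""
    return rate_gbps * bus_width_bits / 8

# Data rates from the table above; bus widths from Nvidia's specs.
cards = [("GTX 680", 6, 256), ("GTX 780", 7, 384), ("GTX Titan", 6, 384)]
for name, rate, bus in cards:
    print(f"{name}: {memory_bandwidth_gbs(rate, bus):.0f} GB/s")
```

Run that and the GTX 780’s advantage is obvious: a wider bus and a faster data rate compound each other.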
Digging a little deeper into the features of the GTX 780 card itself, Nvidia’s made some interesting design decisions in the reference design. The cooling subsystem is tweaked from Titan to run even quieter. Nvidia accomplished this by managing fan speeds to run closer to a steady state, rather than ramping the fan speeds up and down rapidly.
The GTX 780 will cost substantially less than a Titan, at about $649 for reference grade cards, but that's nearly $200 more than a 2GB GTX 680. However, 4GB GTX 680s still cost nearly $600, so the price differential between a GTX 780 and a GTX 680 4GB card isn’t as large, while the new card offers quite a bit more performance. Still, $649 is a pretty steep price for a video card, and it’s partly a result of AMD’s inability to compete on single GPU performance. The lack of competition puts Nvidia in the enviable position of being able to set higher prices than they might have if competition had been stiffer. I included a GTX 680 4GB card for comparison, but it’s likely that performance differences with a 2GB card will be minor.
With this sobering thought in mind, let’s take a look at performance.
Norm test flies The Viper 2.0, a Battlestar Galactica-themed full-motion flight simulator built by teenage makers for Maker Faire 2013. This year's Viper project includes upgrades to the simulation software and added operator controls to give pilots a challenge when dogfighting Cylons. Seat belts required!
In 2012, Roy the Robot was one of the most eye-catching projects on exhibit in Maker Faire's expo hall. Half of Roy's draw came from his Terminator-like skeleton, with laser-cut wood standing in for shiny metal. He owed the rest of his appeal to a red Hawaiian shirt that Hunter S. Thompson and Bruce Campbell would've fought over. This year, the Hawaiian shirt hangs in the corner of Roy's booth, because he's no longer wearing it--he's got a brand new laser-cut chest to show off. 11 months after concluding a successful Kickstarter, maker Brian Roe is drawing a constant crowd to show off the new and improved Roy.
"At Maker Faire last year I had the arm and the hand and just the head, basically, the eyes and the jaw," says Roe, who's a mechanical engineer by day. "It was all mounted on a PVC frame kind of representing the shape of a human body, but nothing underneath the shirt. That's why he had a Hawaiian shirt on. I wanted to cover up all the PVC. This year I decided I really wanted to try to finish out the arms. So I got working on the arms, but then of course, if you're going to build the arms, they have to attach to something. So then you need the chest. Well, if I'm going to put the chest in there, I might as well do a cool neck because I've got the chest there to hook the neck to. So it got a little crazy. Now he sits with 48 servos, 16 servos in each arm. It's crazy. There's a ton of servos."
Roe started Roy as an animatronics project before Maker Faire 2012. The scale of the robot quickly spun out of control, but in a good way--Roe kept adding degrees of articulation, laser cutting parts in his home workshop, and suddenly his robot had a hand with individually servo-driven fingers. Roe launched a Kickstarter the first day of Maker Faire in 2012, offering Roy arm kits for backers to assemble, and eventually raised about $15,000--double his goal of $8,000.
There was enough money and interest in the project for Roy the Robot to grow even more complicated. But first, Roe had to deal with laser cutting some 10,000 parts for his backers.
Virtual reality goggles are getting a lot of attention these days, but there are also exciting innovations in augmented reality. We put on the Technical Illusions CastAR glasses at Maker Faire 2013 and chat with founders Jeri Ellsworth and Rick Johnson about their vision for AR in the home and for gaming.
The expo hall of Maker Faire is packed with hundreds of projects. Some Makers are there to sell things they've built. Others are just there to show off something fun. Craig Bonsignore, maker of the Open Clock, had a slightly different motivation for his project: He hated his alarm clock, so he built one of his own as a completely open source project. And every component, from the 6.4-inch resistive touchscreen to the 512 LED red/green display, is available online.
"It's the maker thing. Something bugs you, you just make a better one," says Bonsignore. "The design criteria were: Easy to use, easy to see, intuitive. I don't sleep with my glasses on, so with my glasses off, arm's length, I can read the digits without squinting."
The Open Clock looks a little like the time-telling equivalent of one of those cheap calculators with oversized buttons, and its numbers are big enough to read from across a room. But it's hardly a simple project. In his quest to make the perfect alarm clock--or, at least, an alarm clock that he won't hate--Bonsignore has given the Open Clock a fun array of features.
The display is touch-controlled, so a simple tap will switch from displaying the time to displaying the date. Another tap can open up the menu and adjust the time, and tapping at the top or bottom of a digit increases or decreases the number (if you've ever had one of those alarm clocks that makes you press a button 24 times to cycle through every AM/PM hour, you probably love this idea already).
"The clock is green during the day from 7 o'clock in the morning to 7 o'clock at night, when it turns red. So it's intuitive that right now it's day time, it's 1:52, it's green," says Bonsignore. He gestures to the three different models of the Open Clock he has on display at Maker Faire. A rough plastic frame houses the earliest model. "This is the first one--it's been sitting on my nightstand for about a year. I've sort of refined it over time. I think I started it with green at night, but decided, hey--red, submarines, there's kind of a night vision thing--red is better. You actually have more receptors on your retina for green. Green is an exciting color, and red is a subdued color, so that kind of made sense...The brightness adjusts automatically so it doesn't bug you at night. I had to go through some iteration on that."
The second Open Clock model has a smoother black shell. The third is made from transparent plastic, which shows off the Arduino board and speaker inside the clock. The LED face on the transparent model is also noticeably brighter than the other two, which he explains:
Keeping in mind the challenge of mixing food ingredients in micro-gravity, chef Traci Des Jardins concocts a recipe for spicing up astronaut Chris Hadfield's meals on board the International Space Station. Commander Hadfield also shares with Jamie and Adam the foods he misses most after spending six months in space.
I’m working on an Alien costume. I’ve got the suit. It was built for me, and it’s gorgeous. But I’m making the head myself, and it’s kicking my butt. The problem: I have too much time.
I’ve learned over decades of building that a deadline is a potent tool for problem-solving. This is counterintuitive, because complaining about deadlines is a near-universal pastime. When I worked with the amazing sculptor Ira Keeler on the space shuttle for Clint Eastwood’s Space Cowboys, Keeler was always proclaiming, “With a couple more weeks, this could be a nice model.” We’re conditioned to believe that the deadline is working against us. But I’m not so sure.
I’d like the head I’m building to be animatronic. The lips would curl back and the jaws would open and snap out, just like in the movie. I’d also like all of these to be controlled by the wearer’s facial movements. I know how each of these actions should work individually, but I keep getting stumped when it comes to choreographing them all to operate together. And when I’m stumped without a deadline, I tend to let things go. So the head has pretty much sat on my bench for seven months.
Any cursory perusal of a fan/maker forum on the web reveals two distinct kinds of projects: the long, meandering, inconsistently updated but impressively detailed effort and the hell-bent-for-leather, tearing-toward-a-deadline build. Solutions to problems of the first type are often methodical and obvious. Solutions for the second type are much more likely to be innovative, elegant, and shockingly simple.
Invariably, the second type of project is propelled by an upcoming event: Comic-Con, Halloween, or even just a visit to a children’s hospital with the 501st Legion (a loosely knit group of Star Wars costumers). Deadlines refine the mind. They remove variables like exotic materials and processes that take too long. The closer the deadline, the more likely you’ll start thinking waaay outside the box.
Meanwhile, my alien head sits there, taunting me, awaiting its resurrection.
Adam surprised the crowd at this year's Bay Area Maker Faire by riding in on a giant steampunk Nautilus vehicle. From the top of that machine, he gave his annual speech to makers, talking about the value of working hard and working smart, and giving advice about how to find a career that utilizes maker skills.
How do astronauts on board the International Space Station spend their downtime? Jamie and Adam learn about Chris Hadfield's clever "space darts" invention, and propose a new game for Hadfield to test while he's on orbit. This one involves creative use of duct tape!
Jamie and Adam chat with astronaut Chris Hadfield about the limitations of food preparation on board the International Space Station. While astronauts can't really cook their own meals, Jamie and Adam challenge celebrated chef David Chang with the task of devising a recipe that Commander Hadfield can test...in space!
How does the dining experience in space compare to that on Earth? We visited NASA's Space Food Systems Laboratory at the Johnson Space Center in Houston to learn about the history of space food and sample some of the same food that the astronauts on the International Space Station eat every day. Photos of the food here!
The heart of Google’s product line is search, and there can no longer be any doubt that Google Now is the future of the company's efforts. At the first day of Google I/O, the search giant comported itself like it was putting on a real developer conference. There were developer console updates, new tools, and APIs. Still, things came back to Google Now, and that’s no surprise.
The Search app on Android received an update, which was demoed on stage. Along with some new info cards, Google Now voice search gained a new capability -- it can schedule location and time-specific push reminders. Google Now understands natural language in ways that would have been impossible just a few years ago. Google’s data driven approach is desperately close to bringing the dream of a Star Trek computer to fruition.
Google and Apple took two divergent approaches to designing a digital assistant. Apple started with a system that understood common phrases and reached out to a limited number of services and databases to complete actions. This meant Siri could do some neat things out of the box, but it relied on third parties like Wolfram Alpha and Google to do it. It wasn't about search--it was a digital personal assistant first and foremost.
Google came at the problem of voice interaction from the opposite direction. For Google, it was about search from the start. Mountain View has been aggregating massive volumes of data in its Knowledge Graph, now the heart of Google search cards. Google simply knows a lot of things without going outside its own services. This is the foundation of Google Now.
Google started working on its voice input system years ago with Goog411, which was later shuttered after the company had the data it needed. That enabled raw voice input for searches. The next step was to recognize relevant queries in search history and return Knowledge Graph cards in advance. That's the magic of Google Now on the phone -- it anticipates your searches.
I will never forget how well Google Now seemed to know my schedule when I started using it less than a year ago. Because I had Google location reporting turned on, my device knew where I liked to go, what roads I take, and even guessed my home address accurately. The old line about Apple products is that they “just work.” Well, Google Now is the modern embodiment of that slogan.
The voice aspect of Google Now has continued to evolve, culminating with yesterday’s announcement of reminder support, and it’s incredibly robust with all that Google data backing it. Google Now became an assistant app just like Siri, but it took longer to reach that level of usefulness and it’s stronger for having made the journey.
Let’s take a look at these new Google Now additions and see how they work.
Although the late William Castle, the man who gave us such films as Macabre, The House on Haunted Hill, and The Tingler, had a reputation for making schlocky, low budget horror movies, he was recently called the first interactive filmmaker. And indeed, his gimmicks did make audiences an active part of the movie going experience, even if an inflatable skeleton floating over the audience, or seat buzzers zapping you with mild electrical current wasn’t as innovative as creating IMAX, Dolby Atmos sound, or even D-Box.
Castle was something of a low budget Hitchcock, and much like the master of suspense, he would appear in the coming attractions of his films, explaining what kind of low budget fun the audience had in store if they went to his movies. Like Hitchcock, Castle became a brand of his own, and a recognizable face to young horror fans growing up. (Castle would even appear at local movie theaters, talking to fans, chomping a big cigar, asking them what they thought of the picture.)
It all started with his 1958 horror film Macabre. Castle knew he couldn’t make a movie as scary as Hitchcock, but he hatched a fun plan to bring audiences to the theaters. As Castle recalled in his autobiography, he heard that Lloyds of London would insure anything, and he got them to put up a million dollar policy for anyone who died of fright watching the movie.
“Nobody’s going to drop dead,” Castle assured them. “It’s just a publicity stunt.” The movie began with a ticking clock, and an announcer warning the audience: “Ladies and Gentlemen, when the clock reaches sixty seconds, you will be insured by Lloyds of London for one thousand dollars against death by fright during Macabre. Lloyds of London sincerely hopes none of you will collect.”
Audiences ate it up, and Macabre was a big hit. For House on Haunted Hill, which starred Vincent Price, Castle came up with “Emergo,” in which an inflatable skeleton floated above the audience on a wire. One time the skeleton fell into the audience, who tossed it around like a beach ball, and at another screening, the kids in the audience threw trash at the inflatable for target practice.
Then came The Tingler, which also starred Price.
I like to build PCs. Not as much as our resident PC columnist, maybe, but I still get a real kick out of ordering a bunch of components and spending an afternoon putting a PC together. I think that earns me a little nerd cred. But you know what earns you a LOT of nerd cred? Building a fully functioning PC—in Minecraft.
My favorite thing about projects like these is that not only are they an incredible example of the maker spirit, they’re also a great teaching tool for something a lot of people don’t understand--how, at a deep level, the computer they use every day actually works. Today, we're going to look at some of the crazy things people build in Minecraft and other video games, and how they explain some of the most fundamental lessons of computer science.
If you spend very long hanging out in the sorts of seedy places where people gather to discuss building virtual computers, you’re going to hear the term “Turing Machine” thrown around. For instance, you might have seen that somebody built a Turing Machine in Dwarf Fortress, but I’ll be damned if you’re going to be able to figure out what that thing does just from looking at the diagram.
So let’s talk a bit about Turing Machines. It’s a complicated topic, but also a tenet of modern computer science—so if you can pick this up, consider your daily enrichment quota fulfilled.
A Turing Machine is a conceptual machine composed of four parts: an infinitely long tape divided into cells, each holding a single symbol; a head that can read the symbol under it, write a new one, and move one cell left or right; a state register that tracks the machine’s current state; and a table of rules that, for each combination of state and symbol, says what to write, which way to move, and which state comes next.
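Those parts are easier to grasp in running code. Here’s a minimal simulator in Python—my own illustrative sketch, not taken from any of the Minecraft or Dwarf Fortress builds—whose rule table increments a binary number:

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Run a Turing machine until it reaches the 'halt' state.

    rules maps (state, symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right). The tape is stored as a
    dict so it can grow in either direction; unwritten cells read
    as '_' (blank).
    """
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells).strip("_")

# Rule table: increment a binary number. The head scans right to the
# end of the number, then walks left carrying 1s until it's done.
increment = {
    ("start", "0"): ("0", +1, "start"),  # scan right over the digits
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),  # fell off the end; turn around
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),   # 0 + carry = 1, carry absorbed
    ("carry", "_"): ("1", -1, "done"),   # overflow: write a new leading 1
    ("done",  "0"): ("0", -1, "done"),   # walk home; nothing left to do
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_",  0, "halt"),
}

print(run_turing_machine(increment, "1011"))  # → 1100
```

That tiny rule table is the whole "program"—which is exactly why building one out of redstone or dwarven water pumps is possible, if maddening.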
The space program has long been one of America’s crown jewels, but critics often remark on how wasteful it seems. Well, throw this story right in their faces – NASA has been responsible for many inventions that have made all of our lives better (or at least more awesome). Let's explore ten NASA-derived inventions that might surprise you.
Late last week, Intel unveiled some features and performance data for the graphics cores in their upcoming Haswell CPU. Most of the hoopla revolved around Haswell’s graphics performance on laptops, but Intel also disclosed some interesting bits about desktop processors. Before diving into that, it’s worth considering how integrated graphics typically plays out on desktop PCs.
First Puzzle Piece: Performance CPUs Rarely Use IGPs
On the mobile side, most Intel-based laptops currently include their highest end HD 4000 GPU. Laptops are increasingly becoming closed systems, making user upgrades more difficult–and graphics upgrades impossible. So Intel has been fairly smart, integrating its best GPU into all Core class processors. Even Ultrabooks, with their tightly constrained chassis and limited airflow, utilize CPUs with Intel HD 4000 graphics.
People who build PCs tend to be pretty smart about how they’re going to use a system. Building a small, shared living room PC for web access and light office chores? Integrated graphics may be fine, but so is a lower end CPU. Someone who picks up a higher end CPU – a Core i7 3770K, for example – is unlikely to use the integrated GPU. Usually, that system will end up with at least a mid-range graphics card, like a GeForce GTX 660 or AMD Radeon HD 7870.
Intel knows this, and doesn’t really want to spend die space putting a higher end integrated GPU into a performance-oriented CPU where the integrated graphics will mostly go unused. A better integrated GPU requires more die space, which increases the overall cost of the processor. That makes sense when you realize that even a relatively low end graphics card, like Nvidia’s GTX 650 or AMD’s HD 7790, substantially outperforms the HD 4000.
Fourteen billion years ago, when one tiny, dense point became an unfathomable explosion creating all the matter in the universe, no one was around to witness the spectacle. We may not have firsthand accounts of just how hot the blast was or just how fast the matter traveled, but that doesn’t mean our knowledge of the universe’s early years is a blank page. There is a record of what happened, and from it, you can make music—the big bang’s original sound track, in fact.
In 2003, the mother of an 11-year old contacted John Cramer, a physicist at the University of Washington, with a question about the big bang. She was helping her son on a school project, and she wondered if anyone had been able to record what the explosion sounded like. The answer, of course, was no, but he kept returning to the question.
Cramer was a frequent contributor to the magazine Analog Science Fiction & Fact, and just two years earlier he had written enthusiastically about how recent research projects looking at the cosmic microwave background allowed scientists to hear “the sound of a Big Bang from a distance of 14 billion light years!” Cramer’s linguistic flourish actually meant that the data gathered could be used to understand what the big bang sounded like over a period of hundreds of thousands of years as the universe rapidly expanded. But scientists hadn’t actually heard the sound with their ears. Cramer had access to enough information. Why not recreate the sound?
Staging a revival of a very, very old explosion took Cramer just 16 lines of code and about an hour on a Saturday morning. He constructed the sound in Mathematica, which gives users the option to render mathematical functions as sound. For all his interest in the subject, Cramer explains now, ten years later, “I didn’t know what I was going to get.”
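You can get a feel for this kind of sonification with nothing but a standard library. The sketch below is purely illustrative—it synthesizes a simple downward-gliding hum in Python, rather than working from real CMB data the way Cramer did in Mathematica—but it shows the basic move of turning a mathematical function into audio samples:

```python
import math
import struct
import wave

def write_sliding_hum(path, f_start=400.0, f_end=40.0, seconds=6.0, rate=22050):
    """Write a mono 16-bit WAV of a tone whose pitch glides down
    exponentially over time -- a crude stand-in for the way the big
    bang's 'note' stretched as the universe expanded."""
    n = int(seconds * rate)
    phase = 0.0
    frames = bytearray()
    for i in range(n):
        t = i / n                                    # 0.0 .. 1.0
        freq = f_start * (f_end / f_start) ** t      # exponential glide
        phase += 2 * math.pi * freq / rate           # accumulate phase
        frames += struct.pack("<h", int(20000 * math.sin(phase)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_sliding_hum("big_bang_hum.wav")
```

Accumulating phase sample by sample (rather than computing `sin(2πft)` directly) keeps the glide click-free as the frequency changes.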
The sound (embedded below), compressed to cover the first 760,000 years of the universe’s life, shoots up and then drops into a chest-vibrating hum that sounds like an airplane landing mixed with the static of a television. What came out of the speakers shocked more than just the physicist. Cramer’s two Shetland Sheepdogs came running into the room to inspect what in the world was going on. It was something bigger.
When people write about the great directors of our modern era, they often inexplicably leave out people who direct horror films. Yet it often takes an incredibly skilled filmmaker to make a great scary movie. All of the elements, such as the cinematography, pacing, music, and editing, have to come together and work like a well-oiled machine in the best scary movies.
It may have seemed odd that a comedy writer, Carl Gottlieb, was picked to craft the screenplay for Jaws, but as Gottlieb explains, “Comedy, at its most rudimentary level is really a craft, there’s really a technique to it. A shock moment in a horror film is like the punch line of a joke. If it’s not set up properly, it doesn’t work well. If it’s handled clumsily or bobbled, it doesn’t work at all.”
In Danse Macabre, Stephen King’s love letter to horror, he wrote that a filmmaker takes a “great risk” when making a horror movie because if it’s not made with any skill, “it often fails into painful absurdity or squalid porno-violence.”
Indeed, a great horror film doesn’t happen by accident, so here are a few common denominators I’ve noticed in the best of them, some good building blocks to help create a good, scary tale if you will.
Most of us know the Alfred Hitchcock rule of suspense. A bomb is under the table; the audience knows it’s going to go off in ten minutes, but the people sitting there have no clue. Instead of having the bomb go off immediately and shocking the audience for a moment, the audience is now on the edge of their seats for what feels like an interminable length of time. As the master director himself once said, there’s no terror in a bang, only the anticipation of one.
I’ve always loved the first twenty minutes of When a Stranger Calls, which takes its time building fear, and it also built scares with simple ideas. Fred Walton, the writer/director of Stranger, advises, “Don’t be afraid to slow down and get into the details of what’s happening each moment. The clock is ticking, the wind is blowing outside, the ice cream bar is melting, all these little things flesh out the environment that the protagonist is struggling in. The things that scare me are the most realistic things, and for most people, the realistic things tend to be really small like the phone ringing, a knock at the door.”