    Show and Tell: The Curta Calculator

    Inventern champ Sean Charlesworth joins us in the Tested office this week to share one of his prized possessions: a Curta mechanical calculator. Designed in the 1940s, before electronic calculators, this hand-cranked device was considered the most precise pocket calculator available, and was used by rally car drivers and aviators.

    In Brief: Predicting Someone's Age By Their Name

    In the new Planet of the Apes movie, Keri Russell's character briefly talks about how she had a young daughter who died of the simian flu virus. As the character was telling the story, my friend--who had not seen the film--leaned over to me and said "I bet her daughter's name was Sarah." And indeed, just a second later, that's the name uttered on screen. This prediction led to a post-screening discussion about why Sarah was such a suitable (and predictable) name to evoke the image of a child never seen in the film. Why is Sarah evocative of a young child, and not a name like Bessie or Helen? Earlier this year, Nate Silver's FiveThirtyEight did a statistical analysis of the popularity of names, based on public data from the Social Security Administration. We've seen websites and apps that show how popular names are over time, but Silver's team went a step further and calculated the median age for every common and uncommon name, male and female. Of all living Sarahs, for example, the median age is 26. Meet a Helen in person, though, and she's more likely to be older: the median age for Helens still alive is 73. And the names with the youngest median age? For girls, it's Ava; for boys, it's Liam, with Jayden a close second. Thanks, Will and Jada.

    Norman
    In Brief: How to Win at Rock Paper Scissors

    MIT Technology Review has details about a recent study carried out at Zhejiang University in China on rock-paper-scissors strategy. Conventional thinking held that the best strategy for not losing in the long run was to choose your play at random--as defined by the Nash equilibrium mixed-strategy solution of game theory. But the research, done with 360 students at the university, indicated that play choices were conditional and patterns emerged. Specifically, winners of one round tend to stick with the same action, while losers switch to the next action in a sequence (in the order of rock, paper, scissors). The researchers are preparing future studies to determine whether this type of conditional response is a basic decision-making mechanism or a byproduct of more fundamental neural mechanisms. Gambit play was not taken into consideration.
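    The reported pattern is easy to play with in code. Here's a minimal sketch (my own, not the study's) of a win-stay/lose-shift player facing an opponent who knows the pattern; the exploiter's win rate shows why deviating from random play is costly:

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def winner(a, b):
    """1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def conditional_player(last_move, last_result):
    """The behavior reported in the study: winners repeat their move, losers
    cycle rock -> paper -> scissors. (Treating ties like losses is my own
    simplifying assumption.)"""
    if last_move is None:
        return random.choice(MOVES)
    if last_result > 0:      # won last round: stick
        return last_move
    return MOVES[(MOVES.index(last_move) + 1) % 3]  # otherwise: shift

def exploiter(opp_last, opp_result):
    """Predict a conditional player's next move, then counter it."""
    if opp_last is None:
        return random.choice(MOVES)
    predicted = opp_last if opp_result > 0 else MOVES[(MOVES.index(opp_last) + 1) % 3]
    return next(m for m in MOVES if BEATS[m] == predicted)

random.seed(0)
score = 0
opp_move, opp_result = None, 0
for _ in range(10000):
    my = exploiter(opp_move, opp_result)
    opp = conditional_player(opp_move, opp_result)
    r = winner(opp, my)          # result from the opponent's perspective
    opp_move, opp_result = opp, r
    score -= r                   # our wins minus our losses
print(score / 10000)             # nearly 1.0: the pattern is fully exploitable
```

    Against a truly random opponent, no strategy can do better than breaking even, which is exactly why the conditional pattern matters.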

    Visualizing The Infinite Hotel Paradox

    "The Infinite Hotel, a thought experiment created by German mathematician David Hilbert, is a hotel with an infinite number of rooms. Easy to comprehend, right? Wrong. What if it's completely booked but one person wants to check in? What about 40? Or an infinitely full bus of people?" A fun thought experiment to visualize the concept of infinity. Your brain starts to hurt at the two-and-a-half-minute mark. The full TED-Ed lesson is here.

    Celebrating Benoît Mandelbrot, The Father of Fractals

    A poignant interview with the late Benoît Mandelbrot, in which the legendary mathematician talks about the beauty of how a simple formula can create a universe of complexity. "IBM celebrates the life of Benoit B. Mandelbrot, IBM Fellow Emeritus and Fractal Pioneer. In this final interview, shot by filmmaker Errol Morris, Mandelbrot shares his love for mathematics and how it led him to his wondrous discovery of fractals. His work lives on today in many innovations in science, design, telecommunications, medicine, renewable energy, film special effects, video game graphics, and more." Learn more about Mandelbrot here, and watch the 2010 TED talk he gave on fractals here.
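    The "simple formula" in question really is simple: a point c belongs to the Mandelbrot set if iterating z → z² + c from z = 0 never escapes to infinity. A minimal escape-time sketch:

```python
def mandelbrot(c, max_iter=50):
    """Iterate z -> z*z + c; return the iteration count when |z| exceeds 2
    (escape), or max_iter if the point appears to stay bounded (in the set)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# Render a coarse ASCII view of the set: '#' marks points that stay bounded.
for y in range(11):
    row = ""
    for x in range(40):
        c = complex(-2.0 + x * 0.075, -1.1 + y * 0.22)
        row += "#" if mandelbrot(c) == 50 else " "
    print(row)
```

    That one line, z = z*z + c, is the entire generating rule for the famous infinitely detailed shape.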

    How 10 Gamblers Beat The Casinos

    In casinos, the odds always favor the house; it’s just math. But every once in a while, ambitious gamblers will try to skew those odds to break the system. We’ve hunted down ten stories of hustlers who managed to bring down the house in ways that would make Hollywood proud. Some used science, some used skill, and some straight-up cheated, but they all walked away with tons of cash in their pockets.

    Movies by the Numbers: Why the 1960s were the Most Creative Decade

    Ask any film critic or movie buff about the golden age of film, and you'll probably get an answer that includes the 1960s and 1970s. The 1940s and 1950s gave us incredible performances and scripts from the Hollywood studio system, but the later decades gave rise to an incredible array of filmmakers like Jean-Luc Godard, Stanley Kubrick, and Francis Ford Coppola, who pushed forward the cinematography and psychology of movies.

    Turns out there are actually numbers to back up the wisdom of movie fanatics. Wired picked up on a study that analyzed IMDB tags to determine how creative movies have been decade-by-decade. Granted, novelty--especially novelty as defined by tags on IMDB--isn't going to be the most universally accurate measurement of film quality. But the numbers show that more new concepts and creative films came from the industry in the 1960s.

    Image credit: Warner Bros. Pictures

    The study's authors used "crowdsourced keywords from the Internet Movie Database as a window into the contents of films, and prescribe[d] novelty scores for each film based on occurrence probabilities of individual keywords and keyword-pairs." The study focused on keywords used as tags on IMDB, which describe specific story elements and locations, genres, and other movie trends.

    So how does novelty, and by association creativity, come into play? Analyzing those keyword tags. "We devise[d] a method to assign a novelty score to each film on the basis of the keywords associated with it and the keywords appearing in all films that were released prior to it," the study explains. They collected data from the years 1929 through 1998, then ran everything through some equations to deduce novelty. Sure enough:
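    As a rough illustration of the approach (not the study's actual code or scoring function), you can score each film by how improbable its keywords were among earlier releases; the film list and keywords below are made up:

```python
import math
from collections import Counter

def novelty_scores(films):
    """Score each film by how surprising its keywords are given every film
    released before it. `films` is a list of (year, keyword_set), sorted by
    year. A hypothetical sketch of the study's idea, not its real method."""
    seen = Counter()   # keyword -> count across earlier films
    total = 0          # number of earlier films
    scores = []
    for year, keywords in films:
        if total == 0:
            scores.append(0.0)  # nothing to compare the first film against
        else:
            # Rarely seen keywords contribute more surprise (Laplace-smoothed).
            surprise = [-math.log((seen[k] + 1) / (total + 1)) for k in keywords]
            scores.append(sum(surprise) / len(surprise))
        seen.update(keywords)
        total += 1
    return scores

films = [
    (1960, {"heist", "paris"}),
    (1961, {"heist", "paris"}),          # reuses known keywords: low novelty
    (1962, {"time-travel", "dystopia"}), # brand-new keywords: high novelty
]
print(novelty_scores(films))
```

    The study also scored keyword *pairs*, which catches novel combinations of familiar elements; the sketch above only scores individual keywords.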

    Ultimate Tic-Tac-Toe is Mind Bending

    Ultimate tic-tac-toe is a game math nerd Ben Orlin recently discovered at a mathematician's picnic. The game is played on one giant tic-tac-toe grid, with each of the nine squares filled with another, smaller tic-tac-toe board. The rules are a little more sophisticated than regular tic-tac-toe, though. The object is to win the big board by winning the right combination of small boards, but each turn takes place in a different small board, determined by where your opponent last played. So players have to play a meta-game of strategically placing X's and O's in the small boards to direct where their opponent will get to play next. Orlin says that when he plays it, strategies surface where players make intentionally bad moves in the small boards to avoid sending the other player into good positions on the larger grid. Yo dawg, it's tic-tac-toe Inception. (h/t Boingboing, Kottke, Andy Baio)
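    The pivotal rule--your cell choice dictates your opponent's board--fits in a few lines. A sketch, using my own 0-8 reading-order indexing (any consistent indexing works):

```python
# Boards and the cells inside each board are both indexed 0-8, so a move in
# cell k of any small board sends the opponent to small board k.

def next_board(cell_played, finished_boards):
    """Return the board index the opponent must play in, or None if that
    board is already won/drawn (then they may choose any open board)."""
    return None if cell_played in finished_boards else cell_played

# X plays cell 4 (center) of some board -> O must play in the center board.
print(next_board(4, finished_boards=set()))   # 4
# If the center board were already decided, O could pick any open board.
print(next_board(4, finished_boards={4}))     # None
```

    That single constraint is what creates the meta-game: a tactically strong cell can be a strategic blunder if it hands your opponent a useful board.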

    POV: Solving Three Rubik's Cubes While Juggling

    Stanford student (and World Cube Association member) Ravi Fernando uploaded this video of his famous Rubik's Cube juggling feat from a first-person perspective. The cubes are solved at the 1:38, 4:15, and 5:55 marks. There's even a near drop!

    The Origin of the Plus and Minus Symbols

    If math is the only universal language, as the saying goes, then it's a language more or less like any other. And approaching it like a language makes us think about elements of mathematics that we normally take for granted. For example: When and how did the symbols for addition and subtraction originate? Astrophysicist Mario Livio was curious and decided to find out for himself, and the resulting blog post is an interesting mathematics history lesson.

    Though mathematics has been around for more than two thousand years--famous mathematician Pythagoras lived in the 6th century BC--Livio traced the + sign back to the 1300s.

    "There is little doubt that our + sign has its roots in one of the forms of the word 'et,' meaning 'and' in Latin," writes Livio. "The first person who may have used the + sign as an abbreviation for et was the astronomer Nicole d’Oresme (author of The Book of the Sky and the World) at the middle of the fourteenth century. A manuscript from 1417 also has the + symbol (although the downward stroke is not quite vertical) as a descendent of one of the forms of et."

    The - sign, meanwhile, hasn't been around as long--Livio writes that it first appeared in 1481 in a German algebra manuscript. Neither the + nor the - symbol appeared in English writing on math until 1551. And, like any other language, the writing of mathematics has evolved over the years. Livio notes a few examples of how the symbols have changed into the forms we now know:

    Mirror Master: Mathematician Cures the Driver-Side Blindspot

    Mirrors are old. Thousands of years old. They don't exactly seem like the trickiest bit of technology to invent--once you've spotted your reflection in a shiny piece of stone or metal, you're going to figure it out pretty quickly. And once glassmaking came around, well, the leap to glass mirrors seems only natural. The silvered-glass mirrors that we know and love today are relatively young, comparatively--they were invented in the early 1800s. Since then, inventors have discovered convex glass can provide a wider field of view of the world, and glass surfaces with both concave and convex segments (aka carnival mirrors) can create crazy distorted reflections of reality.

    The work of mathematics professor R. Andrew Hicks may represent the most significant evolution in mirror technology since...well, glass. Hicks has been using math for years to design mirrors that reflect light in just the right ways--they're essentially finely-tuned versions of the carnival mirrors that make everything look all wacky--and has come up with some impressive reflective surfaces.

    For example, he's developed a curved mirror that reflects the world without reversing its image. It's one smooth piece of glass, not a pair of mirrors connected at a 90-degree angle like a traditional non-reversing mirror. As you'd expect from a mathematics professor, algorithms made it all possible--Hicks worked out equations to represent the kind of reflection he wanted to create, then used those to develop the coordinates for thousands of tilted points on the mirror's surface.
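    Hicks's actual (patented) method is more involved than this, but the geometry underneath is just the law of reflection: given an incoming ray and the outgoing ray you want, solve for the surface normal that bisects them, then tilt the mirror surface accordingly at each point. A small sketch:

```python
import math

def reflect(ray, normal):
    """Reflect a ray direction about a unit surface normal: r' = r - 2(r.n)n."""
    d = sum(r * n for r, n in zip(ray, normal))
    return tuple(r - 2 * d * n for r, n in zip(ray, normal))

def unit(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def normal_for(incoming, outgoing):
    """The normal that reflects `incoming` into `outgoing` bisects the two
    directions (it points against the incoming ray)."""
    i, o = unit(incoming), unit(outgoing)
    return unit(tuple(a - b for a, b in zip(o, i)))

incoming = (1.0, -1.0, 0.0)   # ray heading down toward the mirror
outgoing = (1.0, 1.0, 0.0)    # the direction we want it sent
n = normal_for(incoming, outgoing)
print(reflect(incoming, n))   # recovers the desired outgoing direction
```

    Repeat that calculation for the desired reflection at thousands of surface points and you have the coordinate grid Hicks feeds to the grinding machine.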

    Hicks has invented and patented a driver-side mirror for cars that eliminates blind spots.

    When those coordinates are fed to a machine and ground away with a diamond, they can create all sorts of mirror variations--another example, which Hicks calls the vampire mirror, doesn't even create a true mirror image. If you look into the mirror and wave your left hand around, it'll look like you're actually moving your right.

    More importantly, Hicks has invented and patented a driver-side mirror for cars that eliminates blind spots. Here's how it works.

    Why Mathematicians Love Programming Elevator Actions

    You stand poised for action in the lobby, eyes darting back and forth between the doors in front of you. Any moment now one of them will open. But which one? You wait to hear that wonderful Ding! that means arrival, doors opening, salvation from the interminable boredom of a 30 second wait. When you hear it you'll spring into action, rushing into the elevator and jamming on the button for your floor. The doors close. And then you wait again.

    It's a ritual we all know by heart, but it's amazing how much math and planning go into the 18 billion elevator rides taken annually in the United States alone. When everything goes right, you won't even have to wait 30 seconds for the doors to open. According to Theresa Christy, a mathematician and researcher at Otis Elevator Company, 20 seconds is the magic number for an elevator wait. And that number hasn't changed in about 50 years.

    You'd think we'd have faster elevators five decades after 20 seconds became the target waiting time, but speed isn't really the issue. It's all about the number of stops elevators have to make and juggling the wait times for people on every floor of a building. A recent profile of Christy in the Wall Street Journal reveals just how much math goes into every imaginable elevator use scenario.

    When Christy programs elevators, she has to take into account the size and weight of elevators and how many people can fit in them. Building owners want to install as few elevators as possible, since they take up a great deal of space. Passengers in various countries prefer different amounts of personal space. So, for example, more Japanese riders will crowd into elevators than Americans, but they want to know in advance which elevator they'll be getting into, so they can line up in front of the right set of doors.

    The elevator code has to strike a balance between convenience for riders and convenience for waiters. If an elevator has already made three stops, should it make a fourth to pick up someone who's been waiting for 30 seconds, inconveniencing its current passengers? Christy runs simulations to analyze the decisions elevators make according to their programming, then tweaks that programming to better her score.
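    A toy version of that trade-off (the timings and penalties here are invented for illustration, not Otis's model): compare the total person-seconds of delay for stopping versus skipping:

```python
# Should an elevator already carrying passengers stop for a new hall call?
# Score both choices in person-seconds of delay and pick the smaller.

STOP_COST = 10       # assumed: seconds one extra stop adds for each rider
NEXT_CAR_DELAY = 60  # assumed: seconds until another car could arrive

def total_delay(onboard, hall_wait, pick_up):
    """Total person-seconds of delay for one dispatch decision.
    onboard: passengers currently riding; hall_wait: seconds the person in
    the hall has already waited; pick_up: whether the car stops for them."""
    if pick_up:
        return onboard * STOP_COST          # every rider loses STOP_COST seconds
    return hall_wait + NEXT_CAR_DELAY       # the waiter keeps waiting

# With 2 riders aboard, stopping costs 20 person-seconds vs. 90 for skipping.
print(total_delay(onboard=2, hall_wait=30, pick_up=True))   # 20
print(total_delay(onboard=2, hall_wait=30, pick_up=False))  # 90
```

    Run a scoring function like this over thousands of simulated trips and you have a crude version of the game Christy plays against her own programming.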

    She compares it to a video game; we hope she's never played Mass Effect. NPR's Marketplace calls her work an art. We think either label works--it's an underappreciated, endlessly challenging job that will never have a perfect solution. Christy's short Marketplace interview is an interesting look into a job that most of us would never think about, despite how much it affects our daily lives.

    The Uneven Odds of Flipping or Spinning a Coin

    What do we know about coins? They're legal tender, they usually depict dead men on one side, and they're the go-to tiebreakers for problems big and small. Thing is, coins aren't perfectly suited to that role. A study on coin tosses reveals that the "randomness" of a toss is actually weighted ever so slightly towards the side of the coin that's facing upwards when a flip begins. "For natural flips, the chance of coming up as started is about .51," the study concludes.

    The paper, written by statistics and math professors from Stanford and UC Santa Cruz, also points out that a perfect coin toss can reproduce the same result 100 percent of the time. Of course, the perfect flip was performed by a machine, not a person. And the results that lean ever-so-slightly in favor of flipped-side-up don't take into account flipping a coin after catching it or letting it bounce around on a floor or table. In practical usage, the .51 bias is so slight that you'll never notice.
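    You can see just how invisible a 0.51 bias is with a quick simulation (my own sketch built around the study's reported number):

```python
import random

random.seed(42)

def biased_flip(starts_heads, p_same=0.51):
    """One toss with the study's reported bias: the side facing up at the
    start of the flip comes up with probability ~0.51."""
    same = random.random() < p_same
    return "heads" if (starts_heads == same) else "tails"

# Flip a heads-up coin many times and estimate how often heads appears.
n = 100_000
heads = sum(biased_flip(True) == "heads" for _ in range(n))
print(heads / n)  # close to 0.51 -- far too small a bias to notice in practice
```

    Even over 100,000 flips, the edge is a few hundred tosses; over the handful of coin flips anyone actually performs, it's statistically undetectable.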

    If, like me, you'd always heard that coins tend to land tails-up because the heads side is heavier, there's some science available for you, too. Spinning, rather than flipping, an old penny will land it tails-up something like 80 percent of the time. Lincoln's head is heavier than the Lincoln Memorial on the reverse, which leaves tails facing up more often than not. Unless the penny has accrued enough dirt or oil to throw the weight off.

    And let's be honest--how often do you come across an old, clean penny?

    Algebraic Equations Could Relieve Congested Wireless Networks

    When it comes to high-speed data transfers, most of the technological breakthroughs we catch wind of use some specialized, experimental hardware. We'd all love to be able to send friends HD video files at 26 terabits per second, but who has the equipment lying around to encode data into 300 beams of light? Researchers at MIT (who else?) have worked out an alternative that apparently has real potential--companies have already licensed their technology, which increased Internet bandwidth from one to 16 megabits per second in a recent test.

    How's it work? With math, naturally. Specifically, algebra, which the researchers hope to use to eliminate or drastically reduce packet loss. When packets are dropped due to interference or clogged airwaves, devices have to re-request the missing information, and that information has to be sent again, contributing to the congestion problem.

    The researchers want to replace packets with equations. How does that help? Something like this, according to MIT professor Muriel Medard:

    The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible.
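    Here's a toy illustration of that idea (not the researchers' code, and using ordinary integers rather than the finite-field arithmetic real network coding uses): the sender transmits linear combinations instead of raw packets, so it doesn't matter which particular transmissions get dropped--any enough independent combinations recover the originals:

```python
# Two "packets" (think: blocks of payload bytes, reduced to numbers here).
p1, p2 = 42, 99

# The sender emits three combinations; suppose the first is lost in transit.
a = p1 + p2       # dropped somewhere on the network
b = p1 + 2 * p2   # received, along with its coefficients (1, 2)
c = 2 * p1 + p2   # received, along with its coefficients (2, 1)

# The receiver solves the 2x2 system { p1 + 2*p2 = b, 2*p1 + p2 = c }
# instead of asking the sender to retransmit the lost combination.
recovered_p2 = (2 * b - c) / 3
recovered_p1 = b - 2 * recovered_p2
print(recovered_p1, recovered_p2)  # 42.0 99.0
```

    Because the equations are linear, solving them is cheap--which is why Medard says the processing load on a phone or router is negligible.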

    Coded TCP, as it's called, has already increased bandwidth of sluggish 1 Mbps and 0.5 Mbps connections to 16 Mbps and 13.5 Mbps. Those were lab tests, so it's hard to judge how well Coded TCP would work in real-world situations. But it's currently operating with a proxy server stashed in Amazon's cloud, which makes the technology especially exciting. You may need to download an app for your phone to turn algebraic equations into bits of usable data, but you won't need new hardware to make use of Coded TCP.

    New hardware could help, as well: the researchers claim Coded TCP can seamlessly merge data from Wi-Fi and cellular connections without switching between the two.

    Want to read a whole lot more about a technology that could be driving faster network throughput in a few years? Dig into this Coded TCP white paper.

    The Physics of a Near-Lightspeed Baseball Throw

    Webcomic xkcd regularly revolves around jokes that require a degree in math or physics to appreciate--which makes sense, because author Randall Munroe has an undergraduate degree in physics. He recently started up a weekly blog called What If? that answers reader questions by putting his past career as a physicist to use. And the first one is awesome: "What would happen if you tried to hit a baseball pitched at 90% the speed of light?"

    As Munroe explains, things wouldn't go well for the batter. Or the pitcher. Or anyone within a square mile, really.

    At 90 percent the speed of light, or 604,000,000 miles per hour, the ball would be traveling so much faster than the air particles around it that it would collide with the particles in front of it. Those collisions would release gamma rays and tear apart air molecules, creating an expanding bubble of plasma that arrives at the plate before the ball itself.

    Image credit: XKCD.com via Creative Commons.

    Well, the ball doesn't even get there at all, really: in the 70 nanoseconds it takes to arrive, it's turned into a cloud of debris. That's about the time the batter is swept backwards into the backstop and disintegrates, even though he hasn't even seen the pitcher release the ball yet. Within a microsecond everything else disintegrates, too.
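    The headline numbers check out with back-of-the-envelope arithmetic:

```python
c = 299_792_458            # speed of light, m/s
v = 0.9 * c                # the pitch
mph = v * 3600 / 1609.344  # convert m/s to miles per hour
mound_to_plate = 18.44     # meters (60 feet 6 inches)
t = mound_to_plate / v     # flight time from release to the plate

print(round(mph / 1e6))    # ~604 million mph
print(round(t * 1e9))      # ~68 nanoseconds
```

    So "604,000,000 miles per hour" and "70 nanoseconds" are both just v = 0.9c and d/v, nothing exotic required--until the ball meets the air.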

    The whole thing's more fun with Munroe's illustrations, so check out the original post. Still, the lesson's pretty clear: steer clear of lightspeed baseballs.

    Neural Networks: What They Are and How They Know the Internet is Full of Cats

    Cats. The Internet is full of them. Researchers at Google X, the Goog's in-house skunkworks, have created a neural network out of a massive cluster of computers to detect patterns in YouTube videos. What patterns did the cluster detect? Cat faces. Somehow, I bet you aren't surprised.

    It's important to understand exactly what's happened here, because it's simultaneously very exciting and kind of mundane. Detecting patterns in images is something the human brain is so exceptionally good at that you don't realize how difficult a task it actually is. The fact that you can recognize that those shiny things in the carpark are, in fact, automobiles is amazing. The fact that you can pick your car out from a group of hundreds is damn near miraculous.

    If your brain is atop the leaderboard for pattern recognition, computers are near the bottom, somewhere below brine shrimp and some forms of protozoa. Anyone who has used facial recognition software in popular photo managers knows exactly how bad computers are at detecting faces. By contrast, studies have shown that parts of the human brain actually specialize in detecting faces. This is why the Thatcher Effect optical illusion works. You can thank the neural network in your brain for that.

    Computer-based neural networks have much greater success at recognizing patterns in data than traditional computational models. They do this by mimicking the massively connected nature of neurons. The simulated neurons are arranged in layers, with each neuron in a layer connected to all the neurons in the layers above and below it. This is a gross oversimplification, but data enters the input layer of the network, which triggers a series of signals. Those signals propagate through the network and eventually exit the output side of the network. That output contains the information that the neural network uncovered.
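    That layered structure is compact enough to sketch directly. Here's a toy fully-connected network with random, untrained weights--nothing like Google's scale, but the same data flow from input layer to output:

```python
import math
import random

random.seed(1)

def layer(inputs, n_out):
    """One dense layer: every output neuron connects to every input, sums its
    weighted inputs, and squashes the result through a sigmoid activation."""
    weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(n_out)]
    return [
        1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
        for row in weights
    ]

# Data enters the input layer, signals propagate layer by layer, and the
# output layer carries whatever the network "uncovered" about the input.
pixels = [0.0, 0.5, 1.0, 0.25]     # a toy 4-value "image"
hidden = layer(pixels, n_out=3)    # hidden layer of 3 neurons
output = layer(hidden, n_out=1)    # single output, e.g. a "cat-ness" score
print(output)                      # one value between 0 and 1
```

    Training--the part this sketch skips--is the process of adjusting all those weights until the output actually means something.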

    Typically, before you can use a neural network to detect a pattern in data, you need to seed it with examples of the pattern you want to find. This trains the network to look for specific patterns. If you want to find pictures of cats, you seed the network with bunches of pictures of cats.

    The Google X team did something a little different. They ran millions of images culled from YouTube videos through the neural network, but they didn't seed the network with patterns first. Instead they just let the algorithm find patterns in the data--all of the patterns. They reported that the network found human faces, human bodies, and cats. The top-performing networks were almost twice as accurate at detection as previous efforts.

    This is not a cat.

    The big news here isn't that the Internet is full of cats. We knew that. Proof of concept that neural networks can still work when they're scaled up to massive clusters of machines and equally massive data sets is a huge step forward. Google X processed 10,000,000 images using a 16,000-core cluster in about three days, and the images were significantly larger than normal.

    However, that doesn't mean that this type of processing will be coming to your Android phone anytime soon. And your computer still doesn't understand the philosophical implications of Magritte paintings--it doesn't understand the metaphysical difference between a picture of a cat and a cat. Even the fanciest neural network built is nothing more than a pattern recognizer.

    Testing the Limits of Engineering Designs

    Inside North Carolina State's Constructed Facilities Laboratory, engineers test their designs to see how they function in the real world. Maggie Koerth-Baker toured the lab earlier this year. Her videos are an interesting look into a side of engineering we don't often see.