The Mathematical Law that Predicts City Sizes

Math is weird. Take, for example, Zipf's law, which io9 wrote about on Monday. Linguist George Zipf discovered, back in the 1940s, that if he ranked words by their popular usage, a surprising pattern appeared. The most popular word was used twice as often as the second most popular, three times as often as the third, and so on: a word's frequency is inversely proportional to its rank. Zipf called this the rank vs. frequency rule, though it's now known as Zipf's law. And Zipf's law doesn't just apply to words. It applies to the sizes of cities, too.

Strangely, the population distribution among cities in many countries follows the pattern of Zipf's law. It doesn't work 100 percent of the time, but Zipf's law is surprisingly accurate when applied to cities with populations over 100,000.

"Just take a look at the top ranked cities in the United States by population," io9 writes. "In the 2010 census, the biggest city in the U.S., New York, had a population of 8,175,133. Los Angeles, ranked number 2, had a population of 3,792,621. And the cities in the next three ranks, Chicago, Houston and Philadelphia, clock in at 2,695,598, 2,100,263 and 1,526,006 respectively. You can see that obviously the numbers aren't exact, but looked at statistically, they are remarkably consistent with Zipf's predictions."
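The quoted figures line up neatly with the strict form of the law, under which the city of rank n should have roughly the rank-1 population divided by n. A quick sketch of that check, using only the census numbers quoted above:

```python
# 2010 census populations quoted above, by rank
cities = {
    1: ("New York", 8_175_133),
    2: ("Los Angeles", 3_792_621),
    3: ("Chicago", 2_695_598),
    4: ("Houston", 2_100_263),
    5: ("Philadelphia", 1_526_006),
}

top = cities[1][1]
for rank, (name, actual) in cities.items():
    predicted = top / rank      # strict Zipf: rank-n population = rank-1 population / n
    ratio = actual / predicted  # 1.0 would be a perfect fit
    print(f"{name:12s}  predicted {predicted:>9,.0f}  actual {actual:>9,.0f}  ratio {ratio:.2f}")
```

A ratio near 1.0 means the actual population matches the Zipf prediction; all five quoted cities land within about 10 percent of it.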

Photo credit: Flickr user c1ssou via Creative Commons.

Economist Xavier Gabaix wrote a paper titled "Zipf's Law for Cities: An Explanation" showing that cities graph very closely to a line representing Zipf's law. Gabaix writes that Zipf's law applies to countries like the U.S., China, and India, even though their backgrounds differ enormously. He concludes that "cities in the upper tail follow similar growth processes," referencing Gibrat's law.
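Gabaix's "line" is what a plot of log population against log rank produces; a slope near -1 is the Zipf signature. A minimal sketch of that fit on the five census figures quoted earlier (the numbers come from the census quote above, not from Gabaix's paper):

```python
import math

# 2010 census figures quoted earlier, ranks 1 through 5
pops = [8_175_133, 3_792_621, 2_695_598, 2_100_263, 1_526_006]

# Least-squares slope of log(population) against log(rank);
# Zipf's law predicts a slope close to -1
xs = [math.log(rank) for rank in range(1, len(pops) + 1)]
ys = [math.log(p) for p in pops]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"log-log slope: {slope:.2f}")
```

Even on just these five cities the fitted slope comes out very close to -1, which is the pattern Gabaix documents at much larger scale.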

Perhaps the most interesting thing about Zipf's law, though, is how it bears on what, exactly, constitutes a city. The urban sprawl surrounding a city may not technically be considered part of the city itself, but colloquially, it certainly would. New York City, for example, has a population of about 8.3 million. But the New York metropolitan area has a population of 23.3 million.

Turns out, Zipf's law still applies.

Glasses vs. Hearing Aids: The Gap Between Technology and Assistive Technology

What separates a technology from an assistive technology? We use computers and smartphones every day to interact with others, make ourselves more knowledgeable, and be more productive. The wheel is instrumental in getting us, and our things, from one place to another. But assistive technologies usually fit into a narrower category: crutches, wheelchairs, and hearing aids, devices that assist the disabled. The Atlantic ran an interview on Tuesday with Sara Hendren, who runs the website Abler, challenging that notion.

Hendren argues that all technology is, in fact, assistive technology, which is something some scholars have argued in disability studies. Hendren's goal is to bring that point of view into the tech world.

What kind of technology is Google Glass?

" 'Assistive technologies' have largely taken their points of departure from medical aids, primarily because in industrialized cultures, people with atypical bodies and minds have been thought of as medical 'cases,' not as people with an expanded set of both capacities and needs," she says. "So a lot of the design attention to things like crutches, wheelchairs, hearing aids, and the like have followed the material look and structure of hospital gear. And accordingly, designers and people working in tech have 'read' them as a branch of medical technologies and, usually, uninteresting."

Hendren points out there's a big acceptance/interest gap between technologies viewed as assistive, and technologies viewed as, more or less, normal. Glasses are a prime example. "Eyeglasses have moved culturally from being a medical aid to a fashion accessory," she says. "People who use them are getting 'assistance' in a very dependent way, but their cultural register has no stigma attached to it, the way that hearing aids still do."

The technology behind hearing aids may be advancing all the time, making them better and smaller, but they're still not viewed as normal in the same way that glasses are. There aren't fashion designers building thousands of varieties of attractive hearing aids, either.

Of course, there are plenty of explanations for glasses' mainstream acceptance. They've been around far longer than hearing aids, for example. But Hendren makes a good point about their design, and the segregated field of assistive technologies could likely benefit enormously from an influx of great technology designers.

That's what Hendren is hoping for. "What I’m interested in is seeing technologies that have thus far been labeled for 'special needs' get the kind of design attention that mainstream technologies do; I’m also interested in designers and technology developers seeing needs—interdependence—as a fundamentally human social state on a universal continuum."

Scientific Mysteries We’re No Closer To Solving

It seems like just about every day we read about a new breakthrough that shakes the world of science to its core. But there are some things that still have us totally stumped. Today, we'll spotlight ten scientific mysteries that people have worked on for generations without getting any closer to an answer. It's stuff like this that keeps us curious about the world.

Reminder: How to Properly Stop a Nosebleed

GI Joe had it right. The Wall Street Journal's Health and Wellness blog consulted experts in rhinology (study of the nose) to figure out once and for all the best way to stop a nosebleed. Two important rules: don't tilt your head backwards and don't stuff your bloody nostrils with cotton or tissue paper. But preventing nosebleeds is possible, too, by keeping your nostrils moist (saline spray is recommended) and not aggressively blowing your nose. Knowing is half the battle.

fMRI Reveals Surprising Similarities Between Human and Dog Brains

Are dogs people, too? That's a broad question, but a new study from neuroscientist Gregory Berns suggests that, at the very least, dog brains are more similar to human brains than we previously thought. To reach this conclusion, Berns did something with dog brains no one else has ever done: he put them through an fMRI machine.

Giving a dog an fMRI is no easy task, since the scan requires the subject to remain perfectly still while it's in operation. Worse, fMRI machines are loud, exceeding 95 decibels. How do you get a dog to stay perfectly still in a loud machine? Training. Also, earmuffs.

Photo credit: Emory University

"Berns recruited dogs from the local community...and gradually trained them to climb up a series of steps into a table, rest their head on a pad inside the fMRI’s inner tunnel and sit still for 30 seconds at a time as the machine does its work," writes Smithsonian Mag. "To deal with the device’s noise (which can surpass 95 decibels, equivalent to the sound of a jackhammer 50 feet away), they taped earmuffs to the dogs’ heads and piped in ambient noise over loudspeakers, so instead of the machine’s sound beginning abruptly, it gradually arrived over background noises."

The scan's findings turned out to be pretty revelatory. First was a fairly expected, unsurprising confirmation: When the dogs were exposed to a hand signal indicating they were going to get a treat, it triggered activity in the caudate nucleus, which houses dopamine receptors. Just like in humans, this part of the brain lit up at the prospect of pleasure.

The next test was more interesting. Dogs in the fMRI were exposed to the smells of humans--their owners and strangers--and to other dogs. Only the smells of the owners triggered increased activity in the caudate nucleus. When owners left the fMRI room and came back in, it triggered the same response.

Image credit: PLOS ONE

Berns says “Obviously, dog brains are much smaller, and they don’t have as much cortex as we do, but some of the core areas around the brainstem—the basal ganglia, which the caudate nucleus is part of—look very much like those in humans.” Essentially, dogs don't have brains evolved enough for complex thought, but basic emotions? They can do those. Berns thinks that, fundamentally, dogs experience emotions similar to humans. Love, for example.

Arguments against Berns' theory claim that the caudate nucleus activity doesn't actually indicate an emotion like love--that dogs don't love their owners but are simply conditioned to respond to them because owners provide food. Berns' next step is to fMRI a dog while it's being fed by an automated mechanism, and compare that to a scan of it being fed by a human. If there's a difference, he'll establish that there really is a relationship there.

Coffee is Least Effective Between 8 and 9 am

The best time for coffee is not, in fact, the moment you sluggishly drag yourself out of bed. At least, not according to science. Brainfacts writes that the study of chronopharmacology, aka the interaction between drugs and the body's biological rhythms, reveals when we should and shouldn't drink coffee. More specifically, it dictates the best and worst times to dose ourselves with a peppy hit of caffeine. If your first cup of morning joe comes between 8 am and 9 am, you're doing it wrong.

The body's circadian clock can affect how it responds to drugs, making them more or less effective, altering our tolerance, and so on. Brainfacts writes that light, more than any other environmental stimulus, affects our biological rhythm. That rhythm is controlled by the hypothalamus, which also controls our sleep/wake cycle, hormones, and sugar homeostasis.

Photo credit: Flickr user shereen84 via Creative Commons

The hypothalamus' control of the hormone cortisol is the key that ties together our biological rhythm and consumption of coffee. "Drug tolerance is an important subject, especially in the case of caffeine since most of us overuse this drug," writes Brainfacts. "Therefore, if we are drinking caffeine at a time when your cortisol concentration in the blood is at its peak, you probably should not be drinking it. This is because cortisol production is strongly related to your level of alertness and it just so happens that cortisol peaks for your 24 hour rhythm between 8 and 9 AM on average."


In other words, caffeine will naturally be least effective when cortisol is at its peak, which happens to be right around the time most people start chugging their morning pick-me-up. Brainfacts goes on to argue that using a drug when it's needed is a key pharmacological principle, and drinking caffeine when it's least effective means you're more likely to develop a tolerance and need to up your dosage.

Drinking a cup of coffee when your cortisol levels are low, on the other hand, will give it some more kick. Cortisol levels apparently swing up between noon and 1 pm, and between 5:30 and 6:40 pm. That leaves a couple windows of opportunity--most importantly, between 9:30 am and 11:30 am or so--where caffeine will really be able to do its job.
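Those windows are easy to encode. A minimal sketch that hardcodes the peak times reported by Brainfacts; the function name and the treatment of peaks as closed intervals are my own assumptions:

```python
from datetime import time

# Cortisol peaks reported by Brainfacts: roughly 8-9 am, noon-1 pm,
# and 5:30-6:40 pm (treated here as closed intervals)
CORTISOL_PEAKS = [
    (time(8, 0), time(9, 0)),
    (time(12, 0), time(13, 0)),
    (time(17, 30), time(18, 40)),
]

def good_coffee_time(t):
    """True when t falls outside every cortisol peak above."""
    return not any(start <= t <= end for start, end in CORTISOL_PEAKS)

print(good_coffee_time(time(8, 30)))   # during the morning peak
print(good_coffee_time(time(10, 15)))  # in the mid-morning window
```

By this rule, an 8:30 am coffee lands inside a cortisol peak, while a 10:15 am cup falls in the recommended mid-morning window.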

One other Brainfacts tip: Since light has a major effect on our biological rhythm and will help cortisol production in the morning, making the morning commute without sunglasses will get the cortisol pumping more quickly. It might not be as stimulating as a cup of coffee, but it's an au naturel way to wake up just a little bit faster.

Tiny Backpacks Tap into Dragonflies' Brains

From National Geographic: "Scientists are placing computer chip backpacks on dragonflies to record the insects' brain activity as they fly and capture prey. Researchers hope the data they collect will reveal more about how the brain controls body movement." Engineers designing these backpacks have to be careful about the weight of the payload, which, at 40 milligrams, is about as heavy as a few grains of rice. Read more about this research being done at the Howard Hughes Medical Institute here.

New Theory Suggests Life on Earth Began with Meteors

I like to imagine life on Earth began as it's depicted in Star Trek: The Next Generation--compounds in a puddle of goo colliding to form the first proteins, as Patrick Stewart stands around looking vaguely confused. A new theory about the beginnings of life on Earth, as reported by Phys.org, is actually pretty similar--minus Patrick Stewart.

"When the Earth formed some 4.5 billion years ago, it was a sterile planet inhospitable to living organisms," says paleontologist Sankar Chatterjee. "It was a seething cauldron of erupting volcanoes, raining meteors and hot, noxious gasses. One billion years later, it was a placid, watery planet teeming with microbial life – the ancestors to all living things."

Chatterjee thinks he's figured out the sequence of events that took Earth from sterile dead zone to oceanic paradise. Meteorites were the key. Chatterjee divides the history of life's beginnings into four stages: cosmic, geological, chemical and biological. In the cosmic stage, 3.8 to 4.1 billion years ago, meteorites pounded the Earth--we can still see the damage they inflicted in craters on the moon and other planets. When giant meteorites cracked through the planet's crust, they unleashed geothermal vents. They also left behind the building blocks of life, and in Greenland, Australia, and South Africa, environmental conditions were perfect for life to form.

Image credit: CBS Home Video

"Because of Earth's perfect proximity to the sun, the comets that crashed here melted into water and filled these basins with water and more ingredients," writes Phys.org. "This gave rise to the geological stage. As these basins filled, geothermal venting heated the water and created convection, causing the water to move constantly and create a thick primordial soup."

At this point, the scene was very much like the one depicted in The Next Generation's finale. Convective currents from geothermal vents incubated organic molecules. The first RNA and proteins formed in the craters left behind by meteor strikes. This was the chemical stage. Chatterjee also believes that an older hypothesis about the primordial soup, from professor David Deamer, was correct: fatty lipid materials delivered by the meteor strikes eventually encapsulated the RNA and proteins, binding them together.

After that, it still took millions of years for cells to begin to form and replicate. And that, says Chatterjee, is life. Pretty simple stuff, huh?

This is Soylent, and It's Not People

Soylent is not people, but it is now backed by $1.5 million in Silicon Valley venture capital. If you're unfamiliar with Soylent, it's the recent creation of software engineer Rob Rhinehart, who finds eating, except as a social exercise, a bit of a bother. There's all that buying and cleaning and cooking food and washing dishes and blah blah blah. Who has the time? It would be cheaper, and easier, he reckoned, to create a nutrition sludge containing all of the vitamins and minerals essential to life. So he spent months researching nutrition and came up with the Soylent mixture of carbs, protein, fat, and a whole lot of other essentials.

Image credit: Warner Bros. Home Video

Soylent's name cheekily draws on the famous Charlton Heston sci-fi film, though it's actually inspired by the soya-and-lentil concoction from the book that Soylent Green was based on. It's flesh-free. And the idea certainly isn't anything new, which Rhinehart has been happy to admit. Single-source food shakes like Ensure and SlimFast have been around for decades. For most people, a liquid diet doesn't really stick. But Rhinehart's been living off of Soylent for months, documenting the results, and he and other testers have been tweaking the formula to make it better.

Now here's the weird thing about Soylent: It's got some people really mad. Take one comment on the Wired story, for example, which calls Soylent "everything that's wrong with the tech industry, in one neat example. Dehumanization, ahistoricism, authoritarianism." That seems extreme. The worst-case scenario--assuming no one tries to live off of Soylent for years and somehow ends up dying as a result--is that the concoction turns out to be ineffective over the long term. Rhinehart and the people he's working with are hoping to design Soylent as a food that you could completely subsist on, or combine with a diet of solid foods. Current meal replacements don't offer the balance of nutrients necessary to live on forever, which is Soylent's goal.

If Soylent actually attains that goal, the benefits could be huge. Best case scenario, Rhinehart has created a food substitute that can feed the poor on $5 per day and give people too busy to prepare proper meals a much healthier diet. If it doesn't work as intended, the naysayers have something to gloat about.

Photo credit: Soylent

There are, of course, concerns about Soylent from real experts. io9 writes: "We reached out to a handful of nutritional scientists to get their opinions on the product, and they were generally surprised that anyone would want to replace their food with a single mixture. Their opinions of Soylent were overwhelmingly negative. Steve Collins, founder and chairman of Valid Nutrition, a company that manufactures Ready to Use Foods for the prevention and treatment of malnutrition, said, speaking through a colleague, that, except in exceptional circumstances, he felt that trying to replace a diverse diet with a single product was misguided. Susan Roberts, Professor at Tufts University's Friedman School of Nutrition Science and Policy, likened Soylent to already available nutritional shakes. While there might be some benefit to Soylent's low saturated fat content, she said, there are certain risks inherent in a non-food diet. '[T]here are so many unknown chemicals in fruits and vegetables that they will not be able to duplicate in a formula exactly,' she said in an email. She says that, if Soylent is formulated properly, a person could certainly live on it, but she doubts they would experience optimal health. She fears that in the long-term, a food-free diet could open a person up to chronic health issues."

What's a Chicken Nugget Made of? Not Much Protein

If a chicken nugget were only 19 percent protein, would you still call it chicken? That's a question you'll have to answer if you look into the research of Richard D. deShazo, MD, a professor at the University of Mississippi Medical Center. "I was floored. I was astounded," deShazo said to The Atlantic, describing his reaction after he looked at a chicken nugget under a microscope. The contents of the nugget did, technically, come from chicken. But if you consider chicken meat, well, the two nuggets deShazo checked out don't exactly qualify.

Photo credit: Flickr user sebleedelisle via Creative Commons

The Atlantic writes: "The nugget from the first restaurant (breading not included) was approximately 50 percent muscle. The other half was primarily fat, with some blood vessels and nerve, as well as "generous quantities of epithelium [from skin of visceral organs] and associated supportive tissue." That broke down overall to 56 percent fat, 25 percent carbohydrates, and 19 percent protein.

"The nugget from the second restaurant was 40 percent skeletal muscle, as well as "generous quantities of fat and other tissue, including connective tissue and bone." That was 58 percent fat, 24 percent carbs, and 18 percent protein."
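Those composition numbers appear to be by weight; converting them to calorie shares with standard energy densities (roughly 9 kcal per gram of fat, 4 for carbs and protein) makes the fat figure even starker. A back-of-the-envelope sketch; the by-weight reading and the standard densities are my assumptions, not The Atlantic's:

```python
def fat_calorie_share(fat_pct, carb_pct, protein_pct):
    """Fraction of calories from fat, given composition percentages by weight."""
    # Standard energy densities: fat ~9 kcal/g, carbs and protein ~4 kcal/g
    fat_kcal = fat_pct * 9
    other_kcal = (carb_pct + protein_pct) * 4
    return fat_kcal / (fat_kcal + other_kcal)

print(f"first nugget:  {fat_calorie_share(56, 25, 19):.0%} of calories from fat")
print(f"second nugget: {fat_calorie_share(58, 24, 18):.0%} of calories from fat")
```

Read that way, roughly three quarters of each nugget's calories would come from fat.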

To some, the chicken nugget is a relatively pure fast food item--it's white meat, solid protein, with a not-totally-terrible batter around it. And at some fast food places, especially ones like Chick-fil-A that focus on chicken, that's probably true. Though deShazo didn't reveal which fast food places he tested, one is likely McDonald's. He intended his research to be a reminder that "Chicken nuggets available at national fast food chains...remain a poor source of protein and are high in fat."

Image credit: University of Mississippi

The National Chicken Council (if only we had a National Chicken Nugget Council to turn to) argued that a test of two nuggets is hardly reflective of the millions (billions?) of chicken nuggets served at fast food restaurants every year. That's true. Just keep in mind there's a good chance you're chomping into some cartilage, intestinal tissue, bone fragments, and skeletal tissue when you take a bite out of a nug.

The Universal Law of Urination

Remember that awesome video released last year by researchers at Georgia Tech studying the efficiency of mammals shaking water off their bodies? The same researchers have just released a new study and video revealing a new discovery about mammals, and this time it's about peeing. While filming animals at a local zoo, the researchers noticed that animals of different sizes all took a similar time to urinate--about 21 seconds. As summarized by New Scientist, "their law of urination says that the time a mammal takes to empty a full bladder is proportional to the animal's mass raised to the power of a sixth, meaning even very large changes in mass have little effect on the time." Science!
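The scaling is easy to see in code. A minimal sketch of the t ∝ M^(1/6) law; the 21-second average comes from the study, but anchoring it to a 100 kg reference mass and the example animal masses are illustrative assumptions:

```python
def urination_time(mass_kg, t_ref=21.0, m_ref=100.0):
    """Bladder-emptying time under the scaling law t ~ M^(1/6).

    The 21-second figure is the study's reported average; pinning it
    to a 100 kg reference mass is an illustrative assumption.
    """
    return t_ref * (mass_kg / m_ref) ** (1 / 6)

# A 6,000-fold change in mass barely moves the needle
print(urination_time(0.5))    # roughly cat-sized
print(urination_time(3000))   # roughly elephant-sized
```

A 6,000-fold jump in body mass changes the emptying time by a factor of only about 4.3, which is why everything from a cat to an elephant lands in the same ballpark.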

It's Possible to Add Touch Sensation to Prosthetics

Prosthetic limbs offer the ability to hold objects, to walk or run, but they can't restore the sensation of touch lost with a human limb. They don't have skin or nerves like us, even though science has developed touch-sensitive artificial skin. The problem is, how do we relay the sensation of touch from a prosthetic to the brain? io9 writes that Sliman Bensmaia, head of the University of Chicago's somatosensory research lab, has made a major breakthrough in using electrical impulses to simulate touch sensations.

Bensmaia and colleagues worked with DARPA on the Revolutionizing Prosthetics project, which had two major goals: building a limb to restore arm-like motor function, and simultaneously restoring the sensation of touch. Bensmaia's research fell into the latter category. To simulate touch sensations, he would have to electrically stimulate parts of the somatosensory cortex. Doing that accurately would require a deep understanding of the brain.

Image credit: Paramount Pictures

Practically, he would also have to develop implants that could reliably stimulate the brain. They would have to be accurate and long-lasting. And they would have to be able to stimulate the correct location in the brain to correspond to the sense of touch in the arm rather than a foot or the stomach. Previous research into the somatosensory cortex thankfully isolated which areas of the brain pertain to which sense locations.

So, on to the breakthrough: the research of Bensmaia and half a dozen other scientists paid off. Using macaques, they were able to convey the sensation of touch using electrical stimulation. Here's an excerpt from the abstract of their published paper:

"We have developed approaches to intuitively convey sensory information that is critical for object manipulation—information about contact location, pressure, and timing—through intracortical microstimulation of primary somatosensory cortex. In experiments with nonhuman primates, we show that we can elicit percepts that are projected to a localized patch of skin and that track the pressure exerted on the skin. In a real-time application, we demonstrate that animals can perform a tactile discrimination task equally well whether mechanical stimuli are delivered to their native fingers or to a prosthetic one."

That last bit is especially important. In their study, the macaques would respond to direct electrical stimulation as if they had experienced the sensation of touch. When that electrical stimulation was created by poking sensors on a prosthetic, they had the same reaction.

It will likely still take years and years of testing to apply the research to humans, and maybe years more before these sorts of implants can be relatively non-invasive for amputees. There's also much more to touch than basic sensations--understanding shape, texture, temperature, and so on--that will require more nuanced research. But for a start, it looks like prosthetics with a real sense of touch are inevitable.

A Transhuman Conundrum: Mind Uploading

The scenario: Well, the singularity is here. Computers have surpassed humans in terms of processing power and level of intelligence. But the machines aren’t totally evil. They’re open to letting humankind upload their minds into the collective consciousness and live on as digital beings. You’ll have to give up your body, though. Still, it’s a small price to pay. Your knee has never been right since you tweaked it playing football in high school anyway. Plus: immortality! What do you do?

Image credit: Final Moments of Karl Brant

#### How Realistic is This?

Ok, this one is a bit of a leap. We’re nowhere near uploading our entire minds into a computer, depending on who you ask. But there are definitely some folks working on figuring out how to do it. Earlier this year, famous futurist (and director of engineering at Google) Ray Kurzweil said a conservative estimate would have us uploading our brains into a computer by 2045. And, hey, if Google says it will happen there’s no reason to think it’s not possible. Though, in the same speech he also said the singularity would be upon us by 2100. So, grain of salt. Others argue uploading our brains may actually never be possible at all.

#### The Ethical Conundrum

You’re going to have to decide how much you like your body and want to hang on to it. Once you upload your consciousness there’s very likely no going back. You also have no idea what to expect from living inside a computer, which means you’ll have to accept the fact that your very idea of consciousness might change once you’ve become fully digital. If your friends and family aren’t uploading themselves you’ll also have to decide if you’re willing to give up your current way of interacting with them. Or accept the fact that you may never see them again. But if the singularity has already happened, then you’ll get the added benefit of being smarter, faster, and better than a human.

Photo credit: CBS Home Video

#### What the Ethicists Say

There isn’t a whole lot of legitimate writing on the ethics of uploading the brain. But those considering it often point to the Ship of Theseus, or Theseus’s Paradox: if every plank of a ship is replaced, one by one, over many years, is the fully rebuilt vessel still the same ship?

A Transhuman Conundrum: Implantable Sensors

The scenario: The economy’s terrible and you just can’t land a job. Seems like everybody these days has a digital enhancement of some kind that gives them an edge. Why not get your own? Just get a few teeny tiny sensors implanted to give yourself near-prescient abilities. Choose from the ability to sense magnetic fields, electric fields, or devices that constantly monitor the ship-shapeness of your body. Let your boss wirelessly monitor your brain activity to make sure you’re concentrating on your job. And, if your gig is particularly taxing, get a pH sweat monitor to make sure you’re truly staying hydrated. There’s literally nothing these gizmos can’t sense! What do you do?

Photo credit: Flickr user pasukaru76 via Creative Commons

#### How Realistic is This?

There are already tons of implantable sensors on the market or in development. In fact, we’ve even rounded them up before. Right now they’re all built for medical purposes (the pacemaker has been around for decades, but there’s tech to watch tumor growth, track the health of implanted organs, and monitor blood sugar). It’s only a matter of time before these sensors branch out to a slew of different purposes and become small enough that you can have several in your body at once.

#### The Ethical Conundrum

Image credit: Orion Pictures

You’ll have to decide just how much insight into your personal life (and the inner workings of your very body) you want to have--and just how much of that you want to give up to your employer. You’ll also have to consider how many people will lose their jobs to you because of the extra-special abilities your fancy new sensors impart. Plus, are you going to use the tech just in your job? Or are you going to start watching your girlfriend’s heart rate for changes outside of work just because you can?

A Transhuman Conundrum: Brain-Machine Interfaces

The scenario: Everyone’s always told you to relax. You’re too high-strung. You just have so much anxiety about everything. So why not get yourself a shiny new exocortex? A little computer that you can wear behind your ear. It plugs into your brain and helps you have all those fantastic personality traits you’ve always wanted. Want to be funnier? Tap into a repository of jokes and anecdotes. Want a better memory? Storage capacity is not a problem. Instantly speak a foreign language? Sure! Why not even a constructed one?

Photo credit: Paramount Pictures

#### Is This Really Possible?

The exocortex--or even the ability to jack your brain into a computer to enhance it--is still a long way away. But it’s not completely impossible. We’re already experimenting with it on the small scale. You can already buy a whole slew of toys that claim to be operated by your brain (Mattel’s Mindflex games let you use your mind to direct a ball through a maze). UC Berkeley's Carmena Lab is developing techniques to use the brain to manipulate mechanical devices. And, of course, there’s Obama’s infamous brain map initiative--who knows what will come from that. Applications today work in one direction--from brain to device--but two-way connections, such as memory implants, are in the works.

#### The Ethical Conundrum

Your new exocortex is going to help you with high level thinking. It could make you a better student or a better all around person. But it could also significantly change (or even replace) your personality. You’re going to have to decide how far you want to go. Just how different are you going to end up being after all is said and done? You’re also going to have to take into account how your friends and family are going to react to these changes. At the same time, your boss might consider you to be the ultimate employee--truly dedicated to being the best at your job that you can be.

A Transhuman Conundrum: Elective Bionic Limb Replacement

The scenario: You have carpal tunnel from repetitive tasks and your legs don’t have much muscle left because you sit all day long anyway. Don’t fret! Advances in prosthetics mean cheap, easily attachable bionic parts are available to you. Why not replace all your limbs? Mechanical hands can type faster than your stubby human ones, mechanical legs don’t get shin splints or bum knees, and a new metal elbow will make playing catch with your dog WAY more fun (especially since your dog is a robot). Prosthetics are better than your real limbs, they’re super cheap now, and it’s a simple in-and-out procedure. What do you do?

Image credit: 20th Century Fox Home Entertainment

#### How Realistic is This?

In a lot of ways, prosthetic limbs are already starting to look better than the regular old boring human ones. All the way back in 2009, an arm prosthetic called the iLimb came equipped with its very own iPhone app that allowed its users to customize a variety of personalized grips. Today it’s able to gradually increase the strength of its grip to adjust to different activities (like tying a shoe versus picking up a glass). And that’s just arms. In 2012, Zac Vawter and his bionic leg climbed all 103 flights of Chicago’s Willis (aka Sears) Tower in just under an hour. His $8 million prosthesis, made by the Rehabilitation Institute of Chicago’s Center for Bionic Medicine, is connected directly to the nerves in his leg that would normally control his hamstring. Right now, the biggest hurdle preventing us all from replacing our limbs with bionic ones is the price tag.

#### The Ethical Conundrum

Image credit: Sony Pictures Home Entertainment

You’ll have to decide how you feel about cutting off your already working limbs. After all, they’ve served you well enough for this long. And you have no idea how you’ll actually feel about your bionic replacements. Remember, once your limbs are gone, there’s no going back (probably). And how prepared are you to come in for regular firmware and hardware upgrades? You’ll also have to consider how your friends and colleagues will feel about your modifications -- because once you’re part robot you’ll jump higher and run faster than any of them. Plus you’ll beat everyone in arm wrestling. But if you’re a reporter you’ll be able to type super fast, so maybe it’s worth it.

As with other elective surgeries, your doctor will have to decide how he feels about basically maiming you in order to enhance you.

Here's what ethicists have to say on the matter.

A Transhuman Conundrum: The Retinal Implant

This week we’re taking a look at the ethics of enhancing ourselves. We’ll present you with a series of ethical conundrums brought about by entirely possible future transhuman modifications and you can argue the ethics in the comments. We’ll have to face these questions eventually, might as well get started now. Are you pro or con superhumans?

The scenario: You are going blind. But not to worry, it’s the future, so there’s technology to fix that. While at the doctor’s office discussing your retinal implant options, the doctor mentions that he can grant you all sorts of visual abilities well beyond simply restoring your sight--all he has to do is add some extra features to the device. Want the ability to see x-rays, ultraviolet light, or infrared? How about a radar display? Even better, what about heat mapping? No problem! You may not actually need any of these extras, but you can have them anyway. All you have to do is ask. What do you do?

Image credit: Paramount Home Video

#### How Realistic is This?

A slew of implantable devices that replicate different functions of the human eye are currently in development. The most recent, the Argus II, pairs a retinal implant with a pair of camera-equipped glasses; the glasses transmit visual information to the implant, which passes it along the patient's optic nerve and allows them to see despite their damaged retinal cells. It’s entirely plausible that in the future retinal implants will evolve to allow for a variety of extras.

#### The Ethical Conundrum

If you choose to have the extended implant you will now have abilities that you don't need and, when compared to the rest of society, you'll be an "other"--a new version of human with super vision. In fact, if you simply choose the implant without any of the extras you'll already be a little bit superhuman. Either way, people around you won't necessarily know that you can see them in special ways. And they certainly won't know you're standing in front of their house using your brand new heat vision to tell if they're home.

Image credit: Dreamworks Home Entertainment

On your doctor's part, he will have to decide whether or not to offer you these special new abilities. If he decides not to, your doc will have to risk the possibility that you will later discover he had the option to give them to you--and now everybody with your implant can see in ultraviolet but you can't.

This Single Portrait Would Explain Humanity to Aliens

In 1977, NASA launched Voyager 1 and Voyager 2 into the vastness of space. Each carries with it a golden record containing images and sounds meant to depict Earth; if an alien civilization ever discovers either Voyager probe, the record could lead them back to Earth, or at least provide a small snippet of our civilization. Almost 20 years later, NASA considered sending a similar archive along with its Cassini probe, which is now in orbit around Saturn. Cassini's archive would be A Portrait of Humanity, a stereo pair of images meant to depict, in a single 3D snapshot, the breadth of human life.

Image credit: NASA/ESA

The Portrait was meant to travel to Saturn's moon Titan on NASA's lander. NASA's mission went ahead as planned--Cassini was launched in 1997, and the Huygens craft successfully landed on Titan in 2005. But the images were never completed or launched with the mission. Jon Lomberg, who designed the Voyager Record for NASA, was also behind the Portrait of Humanity. He wrote an interesting history of the project that highlights its purpose and, most importantly, how it differed from the records sent with Voyager and the CD-ROM Visions of Mars.

"Unlike the Voyager Record it was not intended to leave the solar system to be found by the crew of an advanced starship," writes Lomberg. "Unlike Visions it was not for humans in the next few centuries. Its fate would have been to remain on the surface of Saturn's moon Titan, waiting for eons of time against the slim chance that life might someday appear on that strange world, or that some other space traveler might visit Titan and find it. The image, inscribed on a diamond wafer about the size of a coin, was intended to show an intelligent viewer on Titan a little about our bodies, about our relationships with each other, and about our planet."

Photo credit: Simon M. Bell/NASA

The most complete part of the project was a photograph taken in Hawaii, which took nearly two years to envision and compose. Lomberg's writing about the photo reveals how much thought went into its composition. It represents the breadth of human ages and races, shows our bodies in various poses, and shows off Earth's oceans and blue skies in the background. Even the way people sit and stand in the photograph shows how our bodies work.

But what's really fascinating is the audience Lomberg hoped to deliver this message to.

How Klingons, Dothraki, and Na'vi Have Real Languages

A delightful new TED-Ed animated short. "What do Game of Thrones' Dothraki, Avatar's Na'vi, Star Trek's Klingon and LOTR's Elvish have in common? They are all fantasy constructed languages, or conlangs. Conlangs have all the delicious complexities of real languages: a high volume of words, grammar rules, and room for messiness and evolution." True story: I bought and studied the Star Trek: Conversational Klingon audio cassette and guide book in middle school.

Making Eye Exams Affordable with a Smartphone Attachment

Almost no one enjoys going to the doctor. While not quite as universally feared and reviled as a trip to the dentist, a trip to the doctor is still often unpleasant. There's waiting in a clinical check-up room, the cool of the stethoscope, the awkward admission that we may not have been exercising much lately. And then there's the cost. Cost, above all, may be what drives us to self-diagnose; it's also the one piece of the medical industry that outsiders may be able to exploit as a weakness.

A startup called EyeNetra is the perfect example. With a smartphone app and an add-on device that can be built for a few dollars, EyeNetra thinks it can read our eyeballs and give us an eyeglass prescription. The equivalent device that eye care professionals use costs \$5000, and Technology Review writes that the eye care market is currently a \$75 billion annual industry. Every year millions of people pay their optometrists hundreds of dollars to check their eyes and update their prescriptions. What if that only cost a few bucks?

"The device, called the Netra-G, is based on some clever optics and software [EyeNetra's Vitor] Pamplona came up with—a way to measure the refractive error of the eye using a smartphone screen and an inexpensive pair of plastic binoculars," writes Technology Review. "Pamplona invented the Netra while studying in an MIT lab specializing in computational photography...The prototype device he developed to measure how well your eye focuses light consists of a viewer that a user places against a smartphone screen. Spinning a dial yourself, you align green and red lines. From the difference between what you see and the actual location of the lines, an app calculates the focusing error of your eyes. It’s like a thermometer for vision."
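The underlying idea -- turning a dialed-in on-screen offset into a focusing error -- can be illustrated with a toy model. The sketch below is not EyeNetra's actual algorithm; it assumes a simplified two-pinhole parallax setup (rays entering the eye through two small apertures converge on the retina only when the on-screen patterns are offset to compensate for the eye's refractive error), with made-up example dimensions.

```python
def refractive_error_diopters(offset_mm, pinhole_sep_mm, screen_dist_mm):
    """Estimate spherical refractive error (in diopters) from the
    pattern offset a user dials in, under a simplified two-pinhole
    parallax model. All geometry here is illustrative, not EyeNetra's.

    offset_mm      -- on-screen separation of the two aligned patterns
    pinhole_sep_mm -- distance between the two viewing pinholes
    screen_dist_mm -- distance from pinholes to the phone screen
    """
    # Diopters are inverse meters, so convert everything to meters.
    offset = offset_mm / 1000.0
    sep = pinhole_sep_mm / 1000.0
    dist = screen_dist_mm / 1000.0
    # When the dialed offset equals the pinhole separation, the two
    # rays are parallel -> they focus correctly for a relaxed normal
    # eye (0 D). A smaller offset makes the rays diverge as if from a
    # nearer point, which a myopic (negative-power) eye brings into
    # focus; a larger offset corresponds to hyperopia.
    return (offset - sep) / (sep * dist)


# Example: pinholes 3 mm apart, screen 100 mm away. Dialing the
# patterns to a 2.4 mm offset would indicate about -2 D of myopia.
print(refractive_error_diopters(3.0, 3.0, 100.0))  # 0.0 (no error)
print(refractive_error_diopters(2.4, 3.0, 100.0))  # about -2.0
```

The appeal of the approach is that the "sensor" is just the user's own visual system: the phone only needs to display patterns and record the dial position, which is why the add-on hardware can be so cheap.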

Industry-changing inventions like EyeNetra are bound to stir up some controversy as they clash with established procedures. Silicon Valley investor Vinod Khosla, who has put \$2 million into EyeNetra, last year accused doctors of performing "witchcraft" and claimed most of their work could be done by machines. Others argue that doctors study a larger picture of health, and that EyeNetra wouldn't necessarily fill the same role as visiting an optometrist.

When it comes down to a simple examination, however, EyeNetra may get the job done, and that's going to create tension within the structure of the healthcare system. For now, EyeNetra is targeting India, but sooner or later self-diagnosis smartphone attachments are going to clash with regulations in the United States. With billions of dollars involved, you know that's going to be a nasty fight.