Last May, Dr. Peter Jansen made a big splash in the maker community with the launch of his Tricorder Project website. This wasn't just a statement of intent to build a working Science Tricorder akin to the one in Star Trek--Jansen had already built four models over the past five years. And these were no mere PDAs or smartphone apps--they're packed with sensors covering modalities ranging from spatial to atmospheric measurements. The working Tricorder, a handheld device used in the TV show for environmental analysis, has long been a dream of scientists who are also Star Trek fans (strong correlation there). Companies have attempted to build their own or approximate its functions in software. Qualcomm even has an open $10 million challenge for someone to build a medical Tricorder. And in 1996, one company made limited-edition Tricorders with simplistic functionality. Jansen's creations, though, are all self-designed, built, and programmed. In this detailed chronology of his work, Jansen shares the technical goals and challenges of bringing a piece of science fiction technology to the real world.
To start, give us some background about yourself and your areas of interest. What did you study in graduate school, and what fields are you most interested in researching and exploring?
You bet! I started off undergrad in Physics and Astronomy, because I wanted to learn more about the universe around us and devote my life to figuring out how to build spacecraft that would allow us to explore other planets and stars within a human lifetime. During my first and second years of undergrad I became very interested in the branches of artificial intelligence that study language, cognition, robotics, and something called "knowledge representation". That's the study of how computers can represent knowledge that's more like the knowledge in brains than in books--and how that knowledge acquires "meaning" and can be processed. My PhD is broadly in this area, and I study how people represent things like concepts and language in their brains, and in turn try to get computers to learn concepts and language like babies do, developmentally.
Most of this work takes the form of computer simulations of neural networks, which are simplified models of how we understand the brain to work at the level of the neuron (and connections of neurons). I'm what's referred to as an "interdisciplinary" researcher in academia, which means that my research crosses disciplinary boundaries, and so I have to know a good amount about computer science, neuroscience, language, and human cognition (which are normally taught separately) to do my research. If you want to be nerdy (since we are talking Tricorders here) and would like a Star Trek reference, my research is similar to that of Dr. Noonien Soong, the fictional researcher who created Data in Star Trek: The Next Generation.
I'm most interested in exploring "cognitive" (or human-inspired) methods of artificial intelligence to help teach computers to learn language like babies do. The theoretical camp that I subscribe to says that it's very important to have a body and explore the world when you're a baby to acquire concepts and language, and so I'm also very interested in cognitive robotics, perceptual models, and all the connecting bits in between to go from robot body to thinking machine. Teaching is also important to me (both in and out of academia), and so I try to maintain broad science education community outreach in the form of the open source Tricorder project.
Share with us the origins of the Tricorder Project, Mark I. How did you decide you wanted to build a Tricorder, and why did you think it was practical?
I started so young at this, that I think my reason for choosing to build a Tricorder was that a transporter and a warp drive seemed too much like science fiction, while a Tricorder seemed more realistic. I remember wanting to build one since I was very, very young. As a high school student I could see different sensors that you could pick up off-the-shelf, and others that you might be able to make, and that you should be able to fit so many of them into a box of such-and-such a size, so a Tricorder really seemed like it was a possibility.
What was your experience in engineering and fabrication at the start of the project? What have you learned since then?
I'm lucky enough to research in a field that allows a good deal of theoretical and empirical work. I've always found that, for me, I'm best able to learn something (whether it's very abstract and theoretical, or a particular applied skill) when I'm able to ground that knowledge or skill in a particular example, so I can see and understand how it works. This is a big part of how I've crafted my academic teaching style (I believe strongly in project-based learning, when you can incorporate it appropriately into a curriculum), how I believe the Tricorders can best serve in science education -- by visualizing and grounding abstract things that you can't see (for example, magnetism, or pressure, or heat flow) -- and also how I picked up the engineering skills to develop the Tricorders incrementally, along the process of actually developing them.
Growing up with a father who loves to make, build, design, solder, and experiment meant that coming into designing the Tricorders, I already had a good basic knowledge of electrical engineering, and I had a good deal of knowledge in computer science both formally and informally. But the Tricorders allowed me to develop and explore skills in microcontroller programming, schematic and printed circuit board design, surface mount assembly, low-level communications protocols, physical sensing, and many other things that only completing a good-sized project allows you to acquire. It's important to remember that incremental advancements are typically the way things work, and so there were a lot of "mini" Tricorders that I had built with found items (like Gameboys and simpler microcontrollers and only a few sensors) before attempting the big plunge, and designing the Mark 1. Even with the Mark 1, I prototyped a lot of it on a big circuit board with hundreds of little wires before designing the printed circuit board! :)
It looks like you have a passion for CNC devices and affordable and open source 3D printing. What's your interest and involvement in those fields?
I think rapid prototyping technologies (like 3D printers, laser cutters, and mills) entering into the maker/hobbyist space over the past few years is an incredibly exciting development -- it feels like something that has almost limitless possibilities. I became very interested in 3D printing after hearing about the early open-source RepRap and Fab@Home 3D printer projects about five years ago, but didn't have a great deal of mechanical know-how at the time. My father worked designing CNC machines in industry for about 30 years, and is simply the most talented electromechanical designer I've ever met -- so I figured it'd be a great project for us to do together, and a great way for him to teach me about mechanical design.
Shortly after we started following the RepRap project, Wade Bortz (who designed the popular Wade 3D printer extruder) announced that he had just printed out a complete set of RepRap 3D printer parts on his printer -- the very first set of RepRap parts to be fabricated "in the wild" (outside of a lab). This was terribly exciting. It turned out Wade happened to live about an hour away in Toronto, and so we went to visit him and see his printers. It was incredibly cool, and I think we were convinced shortly after walking in the door that we had to build one. We ended up cobbling together an extruder (this was before you could just buy one -- no one really knew how to make one yet), and it worked slowly and just long enough for us to print out a better extruder, which we ended up using for quite some time. I remember how we sat in awe the very first time we got the machine to work. I'd written some simple firmware that would just draw a square, then increment up (to make a long rectangular tube), and we sat almost silently with our faces about four inches from the extruder watching it build this thing in front of our eyes. We still have it!
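Jansen's actual firmware isn't published in this interview, but the "draw a square, then increment up" toolpath he describes is easy to picture. A minimal, hypothetical sketch (the dimensions and the waypoint representation are illustrative assumptions, not his code):

```python
def square_tube_moves(side_mm, layer_height_mm, layers):
    """Generate (x, y, z) waypoints that trace a square at each layer,
    then step up in z -- the 'long rectangular tube' toolpath."""
    # Four corners plus a repeat of the first to close the square.
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    moves = []
    for layer in range(layers):
        z = layer * layer_height_mm
        for x, y in corners:
            moves.append((x, y, z))
    return moves

# A 20 mm square tube, 0.4 mm layers, three layers worth of waypoints.
path = square_tube_moves(20, 0.4, 3)
```

A real printer firmware would then stream these waypoints to the stepper drivers, but the shape-generation logic is this simple.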
Since then, and the wonderful discovery of a laser cutter at the local hackerspace in graduate school, I've designed and open-sourced a few experimental projects in rapid prototyping. A lot of my interest is in non-traditional approaches to CNC design, particularly for entire machines (or machine components) that could be entirely laser cut, or entirely 3D printed. To that end, in graduate school I released several concept projects -- an experimental linear axis, a preliminary design for an almost entirely laser-cuttable selective-laser-sintering 3D printer, and 3D printable parts for building a laser cutter (although it turns out that as large as I built my machine, it's almost impossible to align the optics, and it frightens me because of the danger involved with high-powered lasers!). Without access to a laser cutter after moving for my postdoctoral research, most of these projects have been put on the back burner.
Walk us through the progression of the Tricorder Project. In each Mark/model, what were your goals and challenges to designing and building a prototype?
Several precursor projects led up to the Mark 1, which involved interfacing a handful of sensors with a microcontroller and placing these in small enclosures, and maybe interfacing those microcontrollers to something else (like a Game Boy Advance) for visualization. These were my very first Tricorder projects, and started as far back as when I was in my final years of high school! As much as I wanted to make a real Tricorder out of each one, looking back the purpose of these projects was centrally to teach a very young and very eager scientist the individual skills (like microcontroller programming, or circuit design) that he would need to eventually put together an entire device.
"I really wanted to squish as many different kinds of sensors into a small handheld device as possible."
With the Mark 1, I had several goals and challenges. First, where I had designed prototypes with two or three sensors before, here I really wanted to squish as many different kinds of sensors into a small handheld device as possible, with as much diversity and utility to the measurement modalities that I could manage. Some of these could be purchased off-the-shelf (like an ultrasonic distance sensor, or a light sensor, or a temperature sensor), but some of the sensors on my wish list had to be designed -- things like a miniature visible light spectrometer, and a high energy particle (radiation) detector. Others I could purchase off-the-shelf (like an oxygen gas sensor, for example), but there were issues -- they were just too large and power hungry to fit on the device. There came a point where I had a tinfoil-covered radiation sensor that was incredibly noisy, and a half-built spectrometer assembled inside of a Microchip cardboard sample box that needed a bunch of different voltages to run the linear CCD sensor, and a lot of hours put in to getting them to work.
For a student, any one of these would have been a good project, but having ten simultaneously was just too much, and I realized that if I took them all on I'd never get the Mark 1 finished. I made a decision to stick with sensors that I could order off-the-shelf or put together with a minimum amount of work (like the linear polarimeter, which is just four ambient light sensors with polarization filters), and that didn't have voltage, current, or space requirements that were radically different from each other. This ended up working really well, and I arrived at 11 different sensing modalities across atmospheric, electromagnetic, and spatial sensors.
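The linear polarimeter he mentions -- four light sensors behind polarization filters -- maps directly onto the standard Stokes-parameter calculation. A minimal sketch, assuming the filters sit at 0, 45, 90, and 135 degrees (the interview doesn't specify the angles):

```python
import math

def linear_polarization(i0, i45, i90, i135):
    """Estimate linear polarization from four intensity readings taken
    through polarizing filters at 0, 45, 90, and 135 degrees."""
    s0 = (i0 + i45 + i90 + i135) / 2.0      # total intensity (Stokes S0)
    s1 = i0 - i90                            # horizontal vs. vertical (S1)
    s2 = i45 - i135                          # diagonal components (S2)
    dolp = math.sqrt(s1 ** 2 + s2 ** 2) / s0 # degree of linear polarization
    angle = 0.5 * math.degrees(math.atan2(s2, s1))  # polarization angle
    return dolp, angle

# Fully horizontally polarized light (intensities follow Malus's law):
dolp, angle = linear_polarization(1.0, 0.5, 0.0, 0.5)  # dolp ~ 1, angle ~ 0
```

Four cheap ambient-light sensors are enough because only three Stokes parameters are needed for linear polarization; the fourth reading is redundancy that helps with sensor mismatch.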
Second, the Mark 1 needed a good LCD screen capable of visualizing the sensor readings, as well as some computational horsepower to talk to the display and all the sensors and peripherals. Today sourcing a screen is very easy, and there are a bunch of open source hardware companies for makers like SparkFun or Adafruit that have screens in stock that you can easily communicate with -- but they were still in their infancy five or six years ago, and it was hard to convince LCD manufacturers to send you a display if you were a student (and even harder to figure out how to communicate with it). I ended up finding a supply of surplus 2.8" LCDs that had no driver (you had to constantly feed them data), and after soldering on the microscopic connector to a breakout board and connecting it up to a PIC microcontroller, I was able to write some simple firmware to drive it and display a screen full of happy faces that changed color.
This was incredibly exciting for me, and probably the largest engineering turning point in the entire project. I had never soldered a surface mount component before (let alone one with pins so tiny you could barely see them), and while I'd used very simple character displays, connecting to a color graphical display like you'd find in a commercial product seemed the thing of dreams -- but here, with some research, work, and patience, I'd done what I thought was a crazy and almost insurmountable task only a few months before. Seeing that display light up using a circuit I'd entirely designed and programmed convinced me that maybe designing something a lot more complicated with sensors and graphics and user input on my own circuit board was something I could do -- from that moment, nothing was impossible.
I ended up finding a suitable display controller and the fastest PIC microcontroller with the most RAM and I/O available at the time (driving the LCD to display those first smiley faces had taken up nearly every clock cycle the much smaller PIC had to offer), and the project zoomed along. I ended up finding a simple and elegant solution to the problem of what the user should use as input (the kind folks at Cirque sent me a number of touchpads), and after going through some tutorials to learn Eagle CAD, the schematic and board layout came together quickly from all the wire-wrap prototyping I'd done to convince myself it'd actually work. A few weeks later I had my first printed circuit boards in hand, and all the time I'd spent learning how to write code for the GameBoys really helped out when the hardware was finished, and it was time to write the firmware entirely from scratch, with nothing but a pointer to video memory. It was all great fun, and the software came together quickly.
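Drawing graphics "with nothing but a pointer to video memory" comes down to computing a byte offset from an (x, y) coordinate and storing pixel data there. A minimal sketch of the idea, with the framebuffer simulated as a byte array (the 320x240 RGB565 layout is an assumption for illustration, not the documented Mark 1 format):

```python
WIDTH, HEIGHT = 320, 240                     # assumed QVGA panel resolution
framebuffer = bytearray(WIDTH * HEIGHT * 2)  # 16-bit RGB565: 2 bytes/pixel

def rgb565(r, g, b):
    """Pack 8-bit-per-channel color into a 16-bit RGB565 word."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def set_pixel(x, y, color):
    """Store one pixel at its byte offset in video memory (little-endian)."""
    offset = (y * WIDTH + x) * 2
    framebuffer[offset] = color & 0xFF
    framebuffer[offset + 1] = color >> 8

def fill_rect(x0, y0, w, h, color):
    """Fill a rectangle one pixel at a time -- everything builds on set_pixel."""
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            set_pixel(x, y, color)

fill_rect(10, 10, 4, 4, rgb565(255, 255, 0))  # a small yellow square
```

On a real driverless panel, the firmware additionally has to stream this buffer out to the display continuously, which is why it consumed nearly every clock cycle of the smaller PIC.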
The Mark 1 had really convinced me that I was able to make a complete Science Tricorder, and so I set off to work on the Mark 2. While the Mark 1 had graphical capabilities, with the Mark 2 I hoped to dramatically improve the visualization capabilities, computational capacity, and connectivity. The sensor boards were also designed to be easily interchangeable, and entirely self-contained. Each Mark 2 sensor board is actually a miniature Mark 1, with its own microcontroller, and communicates back to the Mark 2 host processor using a very simple interface that would allow the boards to be updated in the future.
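The actual Mark 2 board interface isn't spelled out in the interview, but the flavor of such a "very simple interface" between self-contained sensor boards and a host is a small framed packet. A hypothetical sketch (start byte, IDs, and checksum scheme are all illustrative assumptions):

```python
START = 0xA5  # hypothetical start-of-frame marker

def encode_packet(sensor_id, payload):
    """Frame a sensor reading: start byte, id, length, payload, checksum."""
    body = bytes([sensor_id, len(payload)]) + bytes(payload)
    checksum = sum(body) & 0xFF
    return bytes([START]) + body + bytes([checksum])

def decode_packet(frame):
    """Validate and unpack a frame; returns (sensor_id, payload) or raises."""
    if frame[0] != START:
        raise ValueError("bad start byte")
    sensor_id, length = frame[1], frame[2]
    payload = frame[3:3 + length]
    if (sum(frame[1:3 + length]) & 0xFF) != frame[3 + length]:
        raise ValueError("bad checksum")
    return sensor_id, bytes(payload)

# Round trip: sensor board 7 reports a three-byte reading.
frame = encode_packet(7, [1, 2, 3])
sensor_id, payload = decode_packet(frame)
```

Keeping the framing this dumb is what makes boards forward-compatible: a future board only has to speak the same few bytes, regardless of what sensor sits on it.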
The Mark 2 was an enormous undertaking. Rather than using a microcontroller to run the whole show, I wanted to migrate to a full processor that was capable of running an operating system, in this case Linux, and migrate the graphical user interface functions to that processor while keeping a separate microcontroller to communicate with the sensors -- in essence, using each processor where it was at its best. The full processor (an ARM9) also had the capability to drive an external display very quickly, which would open up some interesting visualization possibilities. I ended up finding two bright, beautiful organic LED displays with touch overlays, and used one for the main display, and a second auxiliary display at the bottom, where the touchpad had previously been, which I figured would allow you to adaptively configure an input interface for the application you were using (or, alternatively, have a second display for data -- but the interface for the lower display is slower, so it can only update about twice per second). The Mark 2 is very close to the limit of what can be done with a 2-layer printed circuit board, has somewhat complex power requirements, and required some kernel-hacking to bring up Linux and get the displays working -- so there were a lot of challenges and learning experiences with the project.
The Mark 3 was an experiment in low-cost design, aiming for an under-$200 Tricorder-like device that people could easily program in an Arduino-C-like environment, that had reasonable visualization capabilities (a 2.8" TFT LCD with a touch overlay), storage (microSD), and connectivity (Bluetooth), and a reasonably quick microcontroller (a PIC32MX) that could interface directly with the LCD -- so a sort of middle ground between the Mark 1 and Mark 2. The sensor package was quite different -- it had an onboard inertial measurement unit consisting of an accelerometer, gyro, and magnetic field sensor, and an onboard non-contact temperature sensor (which was very large), but otherwise sensor boards were intended to connect with it at two locations on the front. This dramatically changed the form factor (it had only one screen, and was flat), and the sensor philosophy. The idea was that the most expensive part of the device tends to be the sensors, so if you can shift them towards pluggable slots, then some of the cost of the device can be deferred.
As an experiment, the Mark 3 was a success. Just as the first two Science Tricorders taught me what I loved, this one taught me which design aspects I felt didn't work so well -- how much the form factor and sensor philosophy impact the overall feel and utility of the device. I learned that I love having lots of display real estate and visualization capability, and most importantly, that the device must incorporate a diverse array of multimodal sensors -- having only a few sensors, or sensors that must be swapped (so you'd have a pocket full of sensor boards to plug in) really gives the user an entirely different and much less rich experience than the first two devices.
The Mark 4 was another design experiment, this time coupling low-cost design with laptop/tablet/smartphone integration and, critically, sensor fusion for data visualization. Sensor fusion is a hot area of research right now, and something that I've experimented with since the Mark 1. The basic idea is that by coupling the readings from multiple sensors (whether of the same or different modalities), you're able to learn things or generate data sets that you wouldn't have been able to otherwise. An example of this in the context of the Tricorder project is that the Science Tricorders each contain sensors that can be used to determine the device's position and orientation in space -- the accelerometers measure acceleration in a given direction, the gyroscopes measure the device's rotation, the magnetometer helps detect the Earth's magnetic north to compensate for drift in the gyros, and so forth. This collection of sensors is often called an inertial measurement unit. By coupling that collection of sensors with other sensors, say a non-contact infrared sensor, then you're theoretically able to pair the Science Tricorder's orientation in space with the temperature of what it's pointing at, and (after waving it back and forth for a few seconds) construct something like a very low resolution thermal image for very low cost.
While that's very exciting, the technique is fairly general, and so you could conceivably fuse the readings from additional sensors to, for example, make a volumetric image of the magnetic field intensity and direction in a given space, which is something that to my knowledge isn't done with off-the-shelf instruments today. While the image quality isn't great -- by nature the image is undersampled and reconstructed, so it's very lossy -- I think it's an exciting visualization technique that might allow people to get gist visualizations of things that can't normally be seen -- and this might be perfect for some subset of applications, especially science education, but also to find the heat leaks in your house.
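The orientation-plus-temperature fusion described above reduces to a simple binning step: map each (yaw, pitch, temperature) sample from the sweep into a coarse grid cell and average the readings that land there. A minimal sketch (the grid size and field-of-view bounds are illustrative assumptions, not his firmware's values):

```python
def thermal_image(samples, grid_w=8, grid_h=8,
                  yaw_range=(-45.0, 45.0), pitch_range=(-30.0, 30.0)):
    """Reconstruct a low-res thermal image from (yaw, pitch, temp) samples
    gathered while sweeping the device: bin by orientation, average per cell."""
    sums = [[0.0] * grid_w for _ in range(grid_h)]
    counts = [[0] * grid_w for _ in range(grid_h)]
    for yaw, pitch, temp in samples:
        # Normalize orientation into [0, 1) grid coordinates; drop outliers.
        u = (yaw - yaw_range[0]) / (yaw_range[1] - yaw_range[0])
        v = (pitch - pitch_range[0]) / (pitch_range[1] - pitch_range[0])
        if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
            continue
        col, row = int(u * grid_w), int(v * grid_h)
        sums[row][col] += temp
        counts[row][col] += 1
    # Average each cell; None marks cells the sweep never covered
    # (the undersampling he mentions).
    return [[sums[r][c] / counts[r][c] if counts[r][c] else None
             for c in range(grid_w)] for r in range(grid_h)]
```

The same binning works for any fused quantity -- swap temperature for magnetic field components and you get the volumetric field map he describes.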
This is something I've been doing since the Mark 1, and spoke about recently at TEDxBrussels. My Mark 1 has experimental firmware that incorporates sensor fusion, to help build datasets for reconstructing thermal images. Funny story -- the non-contact temperature sensor in the Mark 1 has a large field of view (about 35 degrees), and so for my first experiments I wanted as large a temperature gradient as possible to make the best image. I had a fireplace in my apartment, which was great for something hot, but I wanted something cold in the image as well. Long story short, I ended up emptying the freezer and piling my girlfriend's ice cream in front of the fireplace. The image ended up working out great, but when she came home she wasn't so happy about the ice cream. "Hey, why's my ice cream all melted?" I melted your ice cream for SCIENCE!
In its current state, what is the Tricorder capable of detecting and displaying? How do you envision this information being used in either educational or field environments?
There are three general areas of sensors -- atmospheric sensors (for things like atmospheric temperature, pressure, and humidity), electromagnetic sensors (for measurements about light, color, or magnetic fields), and spatial measurements (for distance, location, or motion). There are hardware errata for each device, so depending on the model a couple of the sensors may have issues (for example, the pressure sensor is nearly impossible to solder, so if you build one you might want to select an alternate sensor).
On the software side of things, both the Mark 1 and Mark 2 devices have open prototype firmware available, with the Mark 1 being fairly complete. The Mark 2 software would need work if someone wanted to use the device as a finished instrument, but contains example visualizations and the graphics/sensor interface code for Linux programmers to use in developing their own software -- there are autoscaling widgets to graph the data, a 3D vector widget to display the directional readings from the magnetometer, and a keyboard widget, all placed in simple demo software to show some of the readings. A crafty Linux programmer might be able to bring up X Windows on the device.
We live in an era where people have their heads buried in smartphones or tablets. Why isn't that the future of the Tricorder?
I certainly think smartphones and tablets have potential, and they have the computational facilities to enable a great deal of visualization. We're already seeing them incorporate some basic sensing functionality, largely centered around location sensing, or acceleration/orientation sensing to change their screen orientations. The hitch is that while squeezing one or two chip-scale sensors into a phone or tablet isn't too bad, trying to place ten or twenty sensors with different modalities into such a small form factor isn't yet possible -- there just isn't enough space yet. And I think an intensely multimodal experience is where the magic is.
So what does someone need to build their own Tricorder based on your designs?
Depending on the model, a good set of interdisciplinary skills is required. For assembly, one should be very comfortable with surface mount soldering of very fine-pitched parts (especially for the Mark 2), and comfortable programming PIC microcontrollers and with the Microchip development environment. In addition to collecting all the hardware for the devices, one needs a workshop equipped with all the usual equipment for surface mount soldering, rework, and debugging, as well as an ICD3 PIC programmer, and an ARM debugger for the Mark 2. The Mark 2 also requires a good deal of comfort with embedded Linux, and bringing up a Linux install from scratch, then flashing it onto new hardware. Each model has errata, and so eager hardware designers familiar with Eagle can help knock items off the list and contribute to the project. These requirements are certainly a little steep for the average maker, but I'm hoping to lower the barrier to entry considerably in future versions!
What's next for the Tricorder project? Is it something you're still actively working on?
I've been distilling the lessons from each of the Science Tricorders into a Mark 5 model, that's sort of a squish between a Mark 1 and Mark 2, with updated sensing capabilities, easier-to-source parts, easier assembly, easier programming -- basically making it easier for everyone to get, make, share, and use. While I put a great deal of effort into developing documentation and helping folks get excited about the devices and science education more broadly, ultimately I designed the first two Science Tricorders for me, rather than for a broad community of folks -- from kids just starting to learn about science, to parents wanting to tinker and learn programming with their kids, to seasoned scientists and engineers looking to do incredible things.
I think one of the most rewarding things I can imagine is to wake up to an inbox full of incredible pictures and stories and discoveries people had made the day before, and to see an active community of folks sharing upgrades to their hardware and software that allow them to better visualize the world around them. The Mark 5 is being designed for the community, with the hope of using this simple, intuitive, and completely open and tinkerable hardware device to learn about the world around each of us, and about making, sharing, and curiosity.
Tricorder Project photos courtesy Peter Jansen, Data photo via Paramount Home Entertainment