By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.
Eli Pariser’s talk, Beware online “filter bubbles” recently hit the front page of the TED Talks website. I found it interesting because it discusses some of what I’ve seen happening online, and some of my fears for how we search and consume content. Watch the video if you haven’t to see just how much is already being filtered on the web for you.
Right after I saw the video, of course, Facebook announced that they’ll be rolling out a new version of the news feed. They describe it as a “personal newspaper” — a filter, essentially, for the things Facebook thinks you want to see. As the video above points out, they’ve already been filtering out the things they think you don’t want for a while.
No doubt there will be some backlash as people adjust to changes on Facebook, but very few will abandon it. And if you are the kind of person that would abandon Facebook over filtering, or privacy, or concerns about owning your own data, you don’t have a lot of choices of where to go.
Sure, you can specifically seek out services that aren’t going to give you a filtered world view — DuckDuckGo comes to mind. But that’s not always possible. The problem is bigger than just the search engine you use and whether or not you’re logged into Facebook. Just about every site is run by someone else, and they are analyzing you constantly. They are in a brutal battle to keep you as their consumer and keep you from going to other sites. But are the things that they think you want to see what you really want or need to see?
As a developer, I’m capable of taking matters into my own hands, to some degree. Most of the sites that I use on a daily basis have an API and allow me to export or scrape my personal data. But the cost of switching off of the nice service, the well-designed UI/UX of an official mobile app, and so on has kept me from taking the plunge to export my data and go set up shop on my own version of various web services.
Recently I revamped a very old, empty repo that I had on Github. The point of this repo was to code a way to export all of my data from Last.fm. But for whatever reason the repo sat empty for something like 4 years. So the other week, I sat down, read the Last.fm API docs, and wrote a very simple Ruby script to dump out all the JSON it could about my top tracks, artists, and albums over the past 7 years. That script is available as my birdsong repo on Github.
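Here is roughly what that kind of script boils down to. This is a sketch rather than the actual birdsong code: the API methods (user.gettoptracks and friends) come from Last.fm’s documented API, but the API key, username, and page count are placeholders.

```ruby
# Rough sketch of a Last.fm export loop. The API methods are real
# Last.fm API calls; "YOUR_API_KEY", the username, and the page count
# are placeholders.
require "json"
require "net/http"
require "uri"

API_ROOT = "https://ws.audioscrobbler.com/2.0/"

def lastfm_uri(method, user, api_key, page: 1, limit: 200)
  uri = URI(API_ROOT)
  uri.query = URI.encode_www_form(
    method: method, user: user, api_key: api_key,
    format: "json", page: page, limit: limit
  )
  uri
end

# Fetch each page of results and dump the raw JSON straight to disk.
def dump_method(method, user, api_key, pages: 5)
  (1..pages).each do |page|
    body = Net::HTTP.get(lastfm_uri(method, user, api_key, page: page))
    File.write("#{method}.page#{page}.json", body)
  end
end

# Usage (commented out; needs a real API key and network access):
# %w[user.gettoptracks user.gettopartists user.gettopalbums].each do |m|
#   dump_method(m, "your_username", "YOUR_API_KEY")
# end
```

The nice thing about dumping raw JSON instead of transforming it is that you never lose data to a bug in your own parsing; you can always re-read the files later.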
So far, I don’t have a real use for the data. I could try to use some visualization tools on it to make cool graphs or maps. Or I could try and get analytics out of it: genres listened to, how they’ve changed over the years. But for now, I am content to just have the JSON data.
All in all, it’s around 30 megabytes of JSON, which is plain, uncompressed text, so that is quite a lot of data. There’s more I can do, though. I plan to start taking control, to both aggregate my own data and filter it.
For a long time it has bothered me that Google Reader stopped receiving new features, and the features that existed only went so far. To be fair, Google Reader has been a rock solid web service for many, many years for me. It allows me to read my feeds quickly, reliably, and has parsed feeds well — all problems I’ve had with other feed readers. It’s always had some neat features like showing me analytics of what I read.
But as far as discovering new content and helping me eliminate content I don’t read or don’t want to read, Google Reader is not so great. Reader is terrible at suggesting new feeds to me; it has constantly suggested Dilbert and Lifehacker for the past 5 years or so, and I have no interest in either. It has never analyzed my reading patterns deeply enough to suggest something new that blew me away. Those are the suggestions I’m looking for: not just feeds that match what I already read, but things outside my normal bubble that would still interest me.
Luckily, Google Reader has an API, and rather than just exporting my data and building a new service, I can start to build off of it. I get to keep a lot of the features I enjoy while extending it with my own code.
In his talk, Pariser calls for encoding a sense of civic duty into filtering and recommendation algorithms. To some degree, this means having some journalistic integrity: such an algorithm needs to present both sides of the story. One problem with many feed readers and other online content is that they show only one view: the view of the article you’re on.
But imagine a feed reader that was more like the front page of Google News. It would not only show you the blog post you’re reading, but all recent blog posts from other authors about similar topics. Maybe, if they’re responding to a news event or writing about a known fact, the reader could do the work to track down the original source. To take it even further, without even really needing to be aware of motive, politics, and other factors, a dumb feed reader with good suggestions could probably present both sides of the story. Both sides of an argument. Both liberal and conservative takes on the same bill.
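A naive version of that suggestion engine needs nothing fancier than keyword overlap. Here is a sketch; the stopword list, threshold, and toy posts are all made up for illustration.

```ruby
require "set"

# Naive "related posts" scoring: turn each post into a keyword set and
# compare sets with Jaccard similarity (shared words / total words).
STOPWORDS = %w[the a an of and or to in on for is are with].freeze

def keywords(text)
  text.downcase.scan(/[a-z']+/).reject { |w| STOPWORDS.include?(w) }.to_set
end

def similarity(a, b)
  ka, kb = keywords(a), keywords(b)
  return 0.0 if ka.empty? || kb.empty?
  (ka & kb).size.to_f / (ka | kb).size
end

# Return every post that shares enough vocabulary with the current one.
def related(current, others, threshold: 0.2)
  others.select { |post| similarity(current, post) >= threshold }
end
```

A real reader would compare full article text, weight rare words more heavily, and pull candidates from outside your subscriptions, but the shape of the problem is the same.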
Dreaming up features like this can be a deep rabbit hole. Start considering the consequences of pulling up all past articles you’ve read about similar keywords or tags. Or performing searches for academic papers on the topic. Pulling in data from Wikipedia. Looking up books you’ve read or are planning to read on Goodreads that are related to the blog post you’re reading. Or any other number of ways to slice and dice content. And it doesn’t have to stop with Google Reader.
Opportunities to make better tools and more intelligently consume information are all around us. At the same time, consumer-consuming corporations on the web try to trap us into filter bubbles. They try to provide us with what they think we want, but can they ever really know? In the end, it’s control-your-own-filters or be filtered.1
1 The name of this article was lifted from Douglas Rushkoff’s Program or Be Programmed. The book was not what I hoped it would be: an argument for why we should learn to program so that we can control the complex technical systems around us rather than be controlled by them, or for how programming’s problem-solving skills can be applied elsewhere. Instead, I found it to be technology-fear-mongering and a bunch of diatribes about how things online or in computers are “less real.” Suffice it to say, I did not enjoy it. ↩
I’ve been teaching my downstairs neighbor basic electronics and how to solder. He’s a musician and has been working his way through several kits from Bleep Labs.
As I explained each component: the resistor, the capacitor, the diode, etc., I eventually got to the transistor. I told my neighbor how transistors are rarely used on their own now. The transistors in the kit were there mainly because they were easy to solder, but usually a circuit designer would opt not to use them.
On its own, a single transistor can’t do much, and it takes up an amount of space that is quite large relative to the circuits we build now. I scored bags of hundreds of transistors that were about to be thrown out at the Milwaukee Makerspace when we were putting together the electronics lab — not because they didn’t work, but because that many transistors simply wouldn’t get used. “Nowadays, you might as well throw an Arduino in a project,” was one of the reasons given. No one wants to work at the abstraction level of single transistors anymore.
It turned out that the drum machine kit we were putting together contained an Atmel AVR microcontroller - the main chip of the Arduino. In an effort to save space and complexity, the designers had used a microcontroller instead of discrete transistors. The chip in the Arduino and the Bleep Drum is the ATmega328, which has something like 600,000 transistors inside it. And I explained how the form factor of the components we were using was obsolete. Even the tightly packed pins of a typical integrated circuit (a computer chip, in common parlance), spaced at 0.1”, just aren’t dense enough for modern circuits. The parts we were using were all designed to be soldered to the circuit board by human hands. Slow, error-prone human hands. The vast majority of circuit boards produced today are surface mount: parts placed by robots and soldered all in one go by another machine.
“It’s really neat that you know all this stuff,” my neighbor remarked. But I shrugged it off. This knowledge, especially of how analog circuits work and how audio signals are modified by analog components, is mostly obsolete. Better to just use some analog-to-digital converters and put a microprocessor on it. Or better yet, something way more powerful than a simple microprocessor.
We keep putting more and more transistors on a single die, or chip, every day. This trend was first spotted by Intel co-founder Gordon E. Moore, and so we call it Moore’s law. So far, Moore’s prediction that the number of transistors on integrated circuits would double every 2 years has been quite accurate. It has led us from the simplest integrated circuit in 1958 to the Core i7 processor in my Macbook Air today.
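The doubling is easy to sanity-check with a little Ruby. The Intel 4004 figures below are quoted from memory (about 2,300 transistors in 1971), so treat the result as order-of-magnitude only.

```ruby
# Back-of-the-envelope Moore's law check: start from the Intel 4004
# (1971, roughly 2,300 transistors) and double every 2 years.
start_year  = 1971
start_count = 2300
doublings   = (2013 - start_year) / 2.0
predicted   = start_count * 2**doublings

puts "predicted for 2013: ~#{(predicted / 1e9).round(1)} billion transistors"
```

A few billion transistors is the right neighborhood for the biggest chips shipping today, which is why the prediction has held up as well as it has.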
The other day I tweeted about the crazy processing power that we get today under Moore’s Law.
Crazy future; my MBA doesn’t sweat when running Linux-in-VMware, Chrome, Emacs, LightTable, other apps, terminals & 4 REPLs all at once.— mathiasx (@mathiasx) March 2, 2013
And in truth, I was running more software than I could fit into that tweet, because I was listening to music, had a few PDFs in the background, and was managing my ebook collection at the same time. But those notorious CPU hogs listed in the tweet should be enough to illustrate just how much is going on without really pushing the processing cores of the Macbook Air.
Remember, my 13” Macbook Air weighs only 2.96 pounds. In 2005, I had a 12” Powerbook G4 that weighed 4.6 pounds. By my back-of-the-envelope calculations on Geekbench scores 1, my Macbook Air is almost 9 times as powerful as that Powerbook G4 was.
Where is this progress taking us? Well, my iPhone is already pretty powerful. It’s hard to get a raw number of FLOPS (floating-point operations per second) for the processor in an iPhone 4S, but we can get a Geekbench score for it, measured on the same scale as the Macbook Air. On that scale, my iPhone measures2 very close to the Powerbook G4 I had 8 years ago, but the iPhone has the advantage of two cores.
Some people incorrectly read Moore’s Law as “processor speed doubling every 2 years,” which it is not. We’ve found that there’s more to processing power than just gigahertz: the number of cores in a processor, and therefore the number of simultaneous things a computer can do, is increasing. While there are some issues as we grow into this new paradigm, where everything has multiple cores and we have to ensure consistency between them, it is overall a net win for those of us seeking more powerful computers.
To be honest, I don’t really even notice my computer as physical hardware anymore. It is just the stage for software to run on: quiet, fast, lightweight, and very infrequently does the hardware bog down to make me wait. Only a few short years ago, we would have to wait for the computer to do something – usually shown as the hourglass on Windows and the spinning beachball on Macs. Even things like copying a file between two directories could stop a system in its tracks, and the computer would simply stop accepting any input from the user. Now, processors, RAM, and SSDs are so fast that I rarely have to wait for them. And if I do have to wait, say, for some piece of software to be installed, the fact that the computer is multicore means that I can go and browse the internet in another window without noticing.
It’s likely that, just like the example of VMware running Linux on my Macbook Air in my tweet, we will eventually have so much computing power embedded around us that we will virtualize everything. It’s possible that we will move not just files or even processes between computing devices but the whole environment: a whole virtualized machine and operating system. As a thought experiment, imagine pausing a VMware virtual machine while it is performing some CPU-intensive task on your laptop. Now copy that paused VM over to another machine and start it back up. The software will continue to chug along on the CPU-intensive task like nothing happened3. Advances in processing power, storage speed, and wireless networking will continue until this process is seamless to the user.
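A toy version of that thought experiment fits in a few lines of Ruby. Here the “VM” is just a Hash of state that gets serialized mid-computation and resumed from the dump, standing in for the suspended machine image; a real VM snapshot is vastly more involved, of course.

```ruby
# Toy checkpoint/restore: suspend a computation by serializing its state,
# then resume it from the snapshot as if nothing happened.
def run(state, steps)
  steps.times do
    break if state[:i] >= state[:n]
    state[:sum] += state[:i]
    state[:i]   += 1
  end
  state
end

state = { i: 0, n: 100, sum: 0 }
run(state, 40)                    # do part of the work on "machine A"...
snapshot = Marshal.dump(state)    # ...then pause and snapshot it

# Pretend `snapshot` was copied over the network to "machine B":
resumed = Marshal.load(snapshot)
run(resumed, 100)                 # picks up exactly where A left off
puts resumed[:sum]                # same answer as an uninterrupted run
```

The hard parts a real hypervisor handles, like open network connections and device state, are exactly the caveats in footnote 3.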
In some regards, sending the whole VM across the wire and starting it back up on another processor is simpler than trying to marshal a raw process between two machines, and ensure that the process can still run in the other machine’s environment. It doesn’t solve the problem of running one process on many machines at once, but it could be used in cases where we can spin up one copy of a process that we want to parallelize, and then copy that VM to many different machines with chunks of the dataset to process. In fact, that’s basically how many large distributed processing projects like BOINC (the software that SETI@Home runs on) work.
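In miniature, that BOINC-style pattern is just “clone one worker, hand each clone a chunk.” In this sketch, threads stand in for the copied VMs, and the dataset and the work are toys.

```ruby
# One worker definition, copied to N "machines" (threads here), each
# processing its own slice of the dataset; partial results are summed.
worker = ->(chunk) { chunk.sum { |x| x * x } }

dataset = (1..1000).to_a
chunks  = dataset.each_slice(250).to_a   # 4 chunks of 250 numbers each

partials = chunks.map { |c| Thread.new { worker.call(c) } }.map(&:value)
total    = partials.sum

puts total   # matches a single-machine pass over the whole dataset
```

The appeal of shipping a whole VM is that the worker definition and its entire environment travel together, so each “machine” is guaranteed to run the identical computation.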
And so it’s likely that, for reasons of ease of use, sandboxing for security and safety, and simply because we have so much processing power, in the future our phones and our wearable computers will simply be running a virtual machine. Then we don’t need to worry about what software is running on our desktop PCs when we get home and have access to a larger display; we just move our processing over to whichever computer is more convenient. (Even more likely is that we just have a display that acts like an accessory that we can “throw” the video onto, rather than having a separate desktop computer.)
The things around us are getting progressively more powerful by way of cheap, small processors. Where before I was saying that electronics hobbyists would rather throw an Arduino into a project than deal with many transistors, industry would rather throw a small ARM processor into everything around us and write software than design a custom piece of hardware.
Case in point: over on the Panic blog, The Lightning Digital AV Adapter Surprise. After wondering why the new Lightning AV adapter for the iPad mini took a few moments to boot up and seemed to display a scaled version of the video, they cracked open the adapter to find: an ARM processor! As far as they can tell, the iPad mini sends a bit of software to the processor in the adapter every time it is connected, essentially booting the adapter up, and then sends an AirPlay stream down the cable to the other end, where the ARM processor decodes it, upscales it to HD, and sends it to the TV. This makes the Lightning AV adapter, in essence, the world’s smallest AppleTV. And we thought the new AppleTV was small when it debuted.
When I go to a concert now, the room is full of smartphones pointed at the stage, taking video or pictures. And while most people might think, “That’s a lot of pictures and video that will be uploaded to Facebook,” I think about the fact that the people in the room with me are holding more processing power in their hands than we had in most of the computer labs in my schools and in college. Those phones will just keep getting more powerful, and soon they’ll match the performance of the Macbook Air in my backpack. Even the mundane things around us will have plenty of processing power, because it will be cheaper and simpler to drop in a processor than to do anything else. And those cheap processors are getting more powerful every day, which is exciting.
So while I may have learned digital logic in college and can help you build a simple adder from NAND gates, and as a hobbyist I’ve learned to build guitar fuzz pedals from a few transistors, that knowledge is increasingly obsolete in the face of rapid progress. We will soon be packing powerful processors into everything around us, sometimes in surprising places, simply because it is easier and cheaper than designing a custom piece of hardware. But the future is exciting, and that knowledge isn’t completely useless, as long as I can share it with a few more people to show just how far we’ve come.
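For anyone curious, the NAND-gate exercise fits in a handful of lines. NAND is functionally complete, so NOT, AND, OR, and XOR all reduce to it, and a one-bit full adder falls out; Ruby is used here purely as notation.

```ruby
# Everything below is built from the single nand gate, the classic
# demonstration that NAND is functionally complete.
def nand(a, b)
  a == 1 && b == 1 ? 0 : 1
end

def not_(a)    ; nand(a, a)                  ; end
def and_(a, b) ; not_(nand(a, b))            ; end
def or_(a, b)  ; nand(not_(a), not_(b))      ; end
def xor_(a, b) ; and_(or_(a, b), nand(a, b)) ; end

# One-bit full adder: two input bits plus carry-in -> [sum, carry-out].
def full_adder(a, b, cin)
  s1 = xor_(a, b)
  [xor_(s1, cin), or_(and_(a, b), and_(s1, cin))]
end
```

Chain eight of these with the carry rippling through and you have a byte-wide adder, which is roughly where my college digital logic course stopped.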
“Geekbench scores are calibrated against a baseline score of 1,000 (which is the score of a single-processor Power Mac G5 @ 1.6GHz). Higher scores are better, with double the score indicating double the performance.”
3. Of course, there are some issues with this. Anything that was happening synchronously would probably fail, as would anything that depends on a network connection in progress. But for the purposes of the thought experiment, let’s ignore those problems. ↩
There’s been quite a bit of backlash surrounding the new Google Glass product. As someone who has wanted wearable computing for a while, I’d like to talk about it a bit. My only real qualifications on this subject come from the fact that I’ve read a lot of scifi1 and that I’m an open source developer who would like to build software for wearable computing that makes people’s lives better.
When I’m talking about wearable computing glasses in general rather than just the Google Glass product, I will call them smart glasses. There doesn’t seem to be a consistent name yet, and the term EyeTap refers only to a very specific type of smart glasses.
As Amber Case explains in this TED Talk, we are all cyborgs now. As soon as man started using tools, we were augmenting what evolution gave us to become something more. A cyborg. When you drive a car, you are a cyborg because your legs could not carry you that fast. Welcome to the future, human.
That progress marches on constantly, much faster than evolution could ever provide us with enhancements. Computers, the internet, and smartphones are some of the latest and greatest in the enhancements that we add to our bodies to be something more. We’re already using smartphones to constantly be connected to a wider world than just what we can see and hear in the room. We’re extending our brains by using things like Google Search and Wikipedia to find and remember far more than our meatbrains could do on their own. And we’re connected to others to a degree that no other form of communication has matched. When scifi authors write about this kind of brain enhanced by technology, they sometimes refer to it as the exocortex. I’d argue that we already have exocortexes: our digital selves, the software and websites we use, the tools like Twitter and Facebook that we communicate over, are all part of those exocortexes. Without them, we are less than the whole. And while not everyone feels it yet, I certainly feel a little limited when cut off from my exocortex.
The promise, then, of Google Glass is to constantly be connected to that information source and communications tools. Not to interrupt your daily life, but to simply be better integrated into it. Some people want to point out that Glass is going to be a distraction from “real life”, but your real life already includes these “distractions” to a degree that you’re probably not thinking about. How often do you check your email during your work day? Twitter? What apps do you use on your smartphone to find the next bar to go to or figure out a restaurant that everyone can eat at? Does using any of those constitute putting real life on hold while you spend some quality time with something fake? No, they’re just part of life.
I was initially quite excited about Google Glass, because it is the first real promise of wearable computing for the masses at a price point we will probably be able to afford. For nearly a year after the product was announced, no real details surfaced. With the announcement of the Google Glass Explorer contest, more information began leaking out of the Googleplex. After the YouTube video showing real Glass in use was posted, I quickly realized that Google Glass was not intended for me in the way I had hoped.
Simply put, Glass is going to do the things that consumers now want to do quickly with their smartphones: receive and send texts and email, take pictures, take videos, and post that content to things like Facebook, YouTube, and Google+. Another valid use is quickly searching Google for something, which could be useful for everything from trivia night to trying to remember what goes in your favorite korma.
But the things I’m interested in? Coding while walking around (twiddling fingers in the air, no doubt), augmenting my poor social skills by having my wearable remember faces and previous conversations, providing heads-up documentation while I am doing some task, reading full ebooks in a sitting with some sort of speed-reading app, etc. And some of these, like coding and reading, are quite focused activities that would benefit from having full-screen display of information rather than just a hovering box in the corner of one eye. Since the Glass only really shows notifications and thumbnails of photos and videos, it isn’t going to provide that focused experience. At least until Google or some other company builds binocular, full-vision smart glasses.
Then again, building a device that most consumers will want to use is exactly what Google should be doing. I’m a relatively tiny market, and it’d make no sense for Google to prioritize my features over the average consumer. I’m well aware of that. But all that said, I still plan on being an early-adopter, at least after the price drops a little.
The marketing bullshit
You may have heard that Sergey Brin said that smartphones are “emasculating.” I’m not going to linger on this topic too long, but whether he thought this up on the fly or a marketing team came up with it, it makes him sound like an idiot. And in case you forgot, Google makes smartphones. Is it “emasculating” to open a refrigerator? What about when you ride the bus rather than drive a car in to work? What does this tell female smartphone users when you say this? I believe the correct usage of the word “emasculating” is to “deprive (a man) of his male role or identity. Ex: he feels emasculated because he cannot control his sons’ behavior.” What does that have to do with a smartphone? It’s just a poor choice of words all around. Sorry Sergey, but I’m calling bullshit on your statement.
The wearable device Cambrian explosion
We can only hope that as soon as Google Glass comes out, two things happen: hackers figure out how to root (or jailbreak, or whatever we’re going to call it) Glass, and someone gets Linux running on it. This will ensure that open source software can be developed for it for all sorts of niche users (like me), not just the average consumer mentioned above. It will also help combat some of the security and privacy concerns outlined below. Maybe in the long run Linux isn’t the best OS for such a device, but that doesn’t matter initially. As soon as open source hackers figure it out, anything and everything can run on it. We’ll see a Cambrian explosion of wearable computing apps.
I think we’ll also see hardware manufacturers, especially the OEMs that typically make keyboards, mice, and cheap Android phones, start to produce glasses similar to the Google Glass device. They’ll likely run Android or some flavor of whatever Microsoft is calling its mobile OS, but will probably be a lot easier to hack than Google Glass. The best part about all of this is that by leading the way, Google is practically ensuring that lots of manufacturers will build these devices, and that they will get cheaper. In the past, demand for wearable displays was limited to military applications, some industry jobs, and academic research, so they tended toward the expensive and impractical. Eventually, smart glasses will probably be as common as smartphones, and around the same price point. Don’t be surprised if, in a few years, it makes more sense for someone to wear smart glasses than to carry a smartphone in their pocket.
The “dorky” backlash
I’ve seen a few blogs, perhaps in an attempt to stir up trouble and get pageviews, call out Google Glass for being dorky. You’ll see phrases like “But why would anyone want to wear something so dorky?” The authors of these pieces are trying to enforce the status quo the same way that kids in school make fun of the clothes that others wear. Why are they doing that? They may have valid fears about the technology’s privacy implications. But more likely, they’re responding to what, in literature, we’d call “the fear of the other.” The Google Glass device is new and unknown. People who would break from the tribe and wear it are weird, possibly nonhuman. They are the other.
There’s a fear mentioned in some blog posts about Glass that someone might pay more attention to their Glass display than to the other person in a social situation. I imagine that in reality common courtesy comes into play, and that you wouldn’t ignore someone in favor of your Glass any more than you’d walk away from someone you were talking to in order to look at Twitter on your phone. That said, some people have done that to me, and I expect it is really a fault of people rather than of the technology.
Some of this backlash, too, is the fear that someone else with easy access to information and search results (perhaps even more discreetly than the voice searching we’ve seen so far) will have an unfair advantage. Suddenly everyone will remember everything and be an expert on trivia, or last quarter’s financials, or whatever else smart glasses make easy. But again, as I discussed before, this is all part of the wearer’s exocortex. It is as much a part of them as a hammer or a calendar on the wall used to remember something.
By being a part of the cyborg human, the device is a prosthetic that the wearer uses to enable their exocortex. But rather than compensating for some handicap, it helps to enhance the wearer. Would you deny someone their hearing aids because they might be able to hear more than you? I think the feelings and popular opinion on these topics will change as more people start wearing smart glasses and other wearable technology. It’s already perfectly acceptable to use your smartphone to play Angry Birds while sitting in a waiting room, and I’m pretty sure that the activities that the Google Glass will lend itself to will soon become socially acceptable, too.
The biggest, and most valid in my mind, issue with Google Glass is what it will mean for everyone to suddenly have an always-on, always-available camera and microphone on their face. I’d suggest you go read Mark Hurst’s article on Creative Good, The Google Glass feature no one is talking about if you haven’t yet, to get up to speed on this debate.
While I believe in privacy and support organizations like the EFF, I think it is a little short-sighted and silly to react so strongly to the fact that the Google Glass device will have a camera on it. The two fears outlined in that article are:
- That anyone, at any time, could be taking photos or video of you without your consent. And you wouldn’t know.
- That Google will now be able to index and otherwise process any audio, video, and images you send to them, along with the other data the Glass will send along: date and time, Google user account, etc.
This seems to happen with every technology. And you may not realize it, but if you are advocating against the Google Glass for the above reasons, then you have far more reasons to be vocal about banning smartphones and even cheap digital cameras. I think it’s easy for someone to exclaim “But they could be taking videos of me in a public place, possibly something embarrassing!” and not realize that there already exists a whole bunch of terrible usage of existing technology out there to exploit women by taking pictures and video of them without their consent. No one seems up in arms, marching to ban the cheap digital camera on behalf of exploited women. The same technology is used by parents to take pictures and videos of their kids. Here, the technology itself is not so much to blame as the people who would use it to exploit others.
There have been attempts to try and regulate digital cameras and smart phones in the past, usually to force them to emit some kind of loud shutter noise when a picture is taken or a video is started. But just about everything comes with a camera now; are you really going to regulate all of that? Part of my point here is that you can’t regulate the future and try to lock down technology, because the companies that want to sell you the technology will just find a way around it. Let’s be honest here, governments are too slow to out-maneuver technology.
The fact of the matter is that in public, you’re probably being monitored by far more than just someone’s glasses, and that’s far more worrying than worrying about this new technology that Google is building. Security cameras abound. Your movements online are tracked by any number of ISPs, advertisement platforms, and governments. Have you been up in arms all this time about that and only now adding Google Glass to the list of technologies to worry about? I’m just pointing out the absurdity of singling out Google Glass here.
Now, what about the fear that, simply put, someone wearing Google Glass could be taking a video of you and you wouldn’t know it? Isn’t it true that they could already be doing this every time you’re standing near someone with a smartphone in their hand? The fact is that the majority of people are probably going to be polite and follow the same kinds of social expectations you’d have around a phone, just as you already do whenever you take out your smartphone. Do people take videos of strangers on the bus because they think they’re funny? Sure, but that’s not the only use for a camera on a smartphone.
Lastly, there’s the concern that all of the data from Glass will be going into Google, to be indexed and searched, and could be subpoenaed by the government. This problem does not really point at the technology as the source of that concern. If you’re really worried about what kinds of things a corporation or government could do with all that information, especially a corrupt government, then the problem lies with the governments and corporations. You’ve been contributing to the constant stream of information into Google and Facebook’s datacenters for years. Twitter will give up your information to the government if pressured; yet we communicate over Twitter all of the time. The government has been installing taps into data exchanges for years to monitor online communications. So think about it. It might make a lot more sense for you to focus on fixing those organizations, or weakening their growing Big Brother powers, rather than chasing after perceived rights lost when someone wears a camera on their face in public.
Open source, again, provides a way out here: it’s likely that an open source OS like Linux will put more control over what information the device leaks into the hands of users. At the same time, open source software probably won’t see widespread use among average consumers, so we should continue to question and call out this kind of abuse of information by corporations and governments. And we can do what Mozilla is doing with Firefox OS: encode some of our ideals about freedom and privacy right into the software, while making it attractive to manufacturers by making it free.
So don’t try to shout down a fledgling technology just because it could be used to limit your freedoms or privacy. Lots of technologies could. You can’t regulate the technology itself: cheap clones are on the way, and a lot of people are going to want them. The real menace is those who would use technology to limit your privacy; they are what you should fear when you feel uneasy about the future. I’d like to see a lot more discussion of that topic, but I suppose the latest gadget fad pays the bills (with advertising, at least) better. And I’d like to see more writing on the potential of new technologies to improve the human condition and make us better cyborgs, rather than just on whether something will be the killer Facebook app. But we can all dream, right?
1: In particular, Accelerando features a main character with smart glasses. It’s an interesting portrait of how a well-connected digital savant might use this wearable technology while still interacting with people and places. ↩
Recently, I was setting up my laptop for an existing Rails project with the help of a pair. My pair was the pivot on this project, which means he’d been on it longer and was bringing his experience and knowledge to the table, while I was seeing the project with fresh eyes.
“This is going to take forever to set up,” he grumbled. “The documentation’s out of date, and I remember there’s a bunch of gotchas to setting this project up. We’ll need to compile this, then do that, then you get an API key for…”
I went to GitHub and cloned the repo to my laptop.
“Just run rake,” I replied. “It will tell us what to do next.”
And indeed, it did tell us what to do. I’ve called this test-based configuration (among other funny names) in the past, but you can think of it simply as trying to get to a known good state: a state where all the tests run. Whatever is preventing the tests from running is the next thing you need to fix.
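To make the idea concrete, here is a minimal Ruby sketch of that pattern. None of this is the project’s actual setup code; the check names and fix commands are hypothetical. The point is simply that an ordered list of preconditions, where the first failing check names its own remedy, turns a failure into the next setup step:

```ruby
# Sketch of "the failure tells you the next step" (hypothetical checks).
# Each check knows how to detect its precondition and what command fixes it.
CHECKS = [
  { name: "database config",
    ok:   -> { File.exist?("config/database.yml") },
    fix:  "cp config/database.yml.example config/database.yml" },
  { name: "gem bundle",
    ok:   -> { system("bundle check", out: File::NULL, err: File::NULL) },
    fix:  "bundle install" },
]

# Walk the checks in order and report the first one that fails;
# if everything passes, we're at a known good state and can run the tests.
def next_setup_step(checks)
  failed = checks.find { |check| !check[:ok].call }
  failed ? "#{failed[:name]} missing; run: #{failed[:fix]}" : "ready: run the tests"
end

puts next_setup_step(CHECKS)
```

Run it, fix the one thing it complains about, and run it again until it stops complaining. That loop is exactly what repeatedly running rake gave us.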
I had to quiet my pair’s grumbling about this process, because at first it seems like you’ll spend a lot of time waiting on rake, and that it might be easier to just remember all the steps necessary to set up a project.
It turns out rake showed us every step we needed to get the project running. A full log of what this looks like when setting up a simple Rails app can be seen in this gist.
The point of software craftsmanship is to be pragmatic, not to seek perfection. I could have memorized the steps necessary to set up the average Rails project, but those steps wouldn’t have applied here. My pair could have memorized them, since he had been on the project, but those steps would go out the window as soon as he moved to another project. It is far more pragmatic to know the behavior of our tools (like knowing that rake will tell us each thing necessary to get to a state where the tests pass) and rely on that behavior than to perfect our knowledge of this one project.
Note: we could have used our experience with Rails to skip some of the steps you see me running in the gist. For example, you probably know that if the databases aren’t created, you can run rake db:create:all db:migrate db:test:prepare all at once, without running rake in between every single rake task. That’s far more pragmatic, since you save time and effort by knowing the toolset. But I wanted to demonstrate that running rake between every single step told us what to do next.
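That chaining works because rake runs a task’s prerequisites, in order, before the task itself. This toy example (hypothetical task names, standing in for the real Rails tasks) shows how invoking one task pulls in its whole dependency chain:

```ruby
require "rake"
include Rake::DSL

# Toy dependency chain, standing in for db:create -> db:migrate -> tests.
# Invoking :default runs each prerequisite exactly once, in order.
task(:db_create)               { puts "creating databases" }
task(:db_migrate => :db_create) { puts "running migrations" }
task(:spec => :db_migrate)      { puts "running specs" }
task(:default => :spec)

Rake::Task[:default].invoke
# Prints: creating databases, running migrations, running specs (in order)
```

So “just run rake” works best when the default task sits at the end of a chain like this: every setup step is a prerequisite, and the tests come last.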
Now, ask yourself: How can you “just run rake” with your projects?