When friends recently asked what I’ve been thinking about in grad school, my answer left some of them scratching their heads: “bots,” I said. I wrote this post to describe some reasons to investigate bots, examples of the roles they play, and the design features that make them unique and desirable.
What is a bot?
There’s no great definition out there, but the vagueness of the term should not overshadow their awesomeness. Bots are automated or semi-automated software agents that perform a set of actions. Often, these scripts read and write data on the web, running continuously to monitor changes or actions. Each bot can be created for a single purpose rather than a complex set of interactions, and can easily be replaced when a better bot comes along. Bots are not new or unique to the Web, but they are increasingly visible. They can be time-saving, spammy, unexpected, or even destructive.
Reasons to Investigate Bots
Bots can help automate mundane or time-intensive tasks.
During the API Workshop, I was impressed by George Oates’ presentation on the Open Library project and how they use bots to perform mundane tasks like batch-importing catalog records from the Library of Congress. Every morning, their ImportBot checks the LoC for new records, and adds them to the Open Library database. No interns or software developers are required. Tools like IFTTT are similarly beginning to make it easier for non-technical users to automate parts of how we interact with the web.
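I don’t know the internals of ImportBot, but the general shape of a polling import bot is easy to sketch in Python. Everything below is an illustrative assumption, not Open Library’s actual code: the record format, the feed-fetching callable, and the daily interval are all stand-ins.

```python
import time

def new_records(feed, seen):
    """Return feed entries whose IDs we haven't imported yet."""
    return [record for record in feed if record["id"] not in seen]

def import_records(feed, seen, store):
    """One polling pass: import unseen records and mark them as seen."""
    for record in new_records(feed, seen):
        store.append(record)          # a real bot would write to a database
        seen.add(record["id"])

def run_bot(fetch_feed, seen, store, poll_interval=24 * 60 * 60):
    """Daily loop: fetch the feed, import what's new, sleep, repeat."""
    while True:
        import_records(fetch_feed(), seen, store)
        time.sleep(poll_interval)
```

The interesting part is the dedup check in `new_records`: the bot can run forever and re-fetch the same feed without importing anything twice.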
They are increasingly easy to run in the cloud.
Deploying applications to cloud services like Heroku, Windows Azure, and Amazon EC2 has become significantly easier for developers, and I expect that similar services will become more accessible to non-technical users in the near future. This opens up new opportunities to continuously run bots and other small pieces of software in the cloud, whereas in the past you may have only fired up a server for something more persistent, like your personal website.
The ability to run software in the cloud becomes particularly profound when you can scale up the number of servers simultaneously running a task; this is something bots can take advantage of, and a method of solving problems that is unique to the cloud.
They can be fun to build and use.
One of the fun outcomes of THATCamp Prime last year was @horse_thatbooks, a Twitter bot created by @boone that has produced some funny tweets, including my favorite description of the unconference: “it’s a great time despite the humanities geek alert.” @horse_thatbooks reads a stream of tweets that include the #thatcamp hashtag, stores them in a cache, and periodically runs them through a Markov chain process to create new tweets. While not all bots are as playful as @horse_thatbooks, their design features create increased opportunities for experimentation, and yes – fun.
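The Markov chain step is the easiest part to illustrate. Here’s a minimal sketch of the general technique in Python (not @horse_thatbooks’ actual code): build a table mapping each word to the words that follow it across the cached tweets, then walk the table to generate new text.

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=20, rng=random):
    """Walk the chain from a start word, picking a random successor each step."""
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)
```

Because successors are sampled in proportion to how often they appeared, the output tends to be locally plausible but globally absurd, which is exactly the comedy @horse_thatbooks trades on.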
Networks of bots can perform functions that would otherwise require single/unified platforms.
In many of the examples I’ve looked at where bots have thrived, it’s possible that robust and centralized software could have been adopted to provide similar functionality. Bots can be powerful examples of how core functionality of a system can be limited in scope, and extended in an ecosystem through the creation of software by third parties. I think @substack summed up this idea with this tweet: “instead of writing a big app, compose together lots of tiny apps split out over separate processes that talk to each other over the network.”
Many related questions and concerns came up a few years ago during the planning process of Bamboo Corpora Space, where we grappled with the question of how distributed or centralized a system should be to establish interoperability among tools and collections for humanities researchers. On one end of the spectrum was a model that was mostly decentralized and run by software agents (similar to bots), and the other was a large service model that was controlled by a single entity. Many of the design features of bots (listed below) make a decentralized system desirable.
Monitoring, notification, and extending oneself through software.
The most far-out of my bot interests and ideas is the possibility that we’re entering a time in mobile computing where our own interactions with the world are continuously monitored, and our daily decision making is effectively augmented by software agents that are looking out for us. Scoble has referred to a similar trend (although he is more interested in social and sensor data) as the rise of contextual systems. Part of my Triggers project had similar goals in mind, best summed up by the Google Now tagline: “the right information at just the right time.”
I’d like to see platforms like Google Now play a role in our future, but with bots (or software that shares the design features of bots) leveraged instead of black boxes. Looking at the roles and design features of bots, I think there’s a lot there to find desirable.
Roles and Examples of Bots
Some great examples of bots in action are on Wikipedia, where there are currently 1,638 bot tasks approved for use, accounting for 9% of edits to the English site. Bots play an active role in updating the site and perform a variety of algorithmic functions, including importing new articles, fighting vandals and reverting changes, identifying copyright violations, enforcing bans, and recommending pages for users to contribute to. Other language editions have much higher rates of bot participation, such as Spanish with 21% and Afrikaans with 61%.
One of the earliest Wikipedia bots was RamBot, which imported public domain census data in October 2002 to create approximately 30,000 U.S. city articles. Unlike the Open Library ImportBot that I previously mentioned, RamBot was built around a finite corpus and didn’t need to run continuously. In seeding Wikipedia with content and stubs, RamBot played an important role. Another example is ClueBot, a vandal-fighting bot that examines edit and contribution history, and is capable of reverting edits in seconds. ClueBot will then notify the offending user about the content that was modified. These scripts are created and operated by members of the community, and interact with the Mediawiki software that powers Wikipedia. If I were trying to build a crowdsourced or community-powered site right now, I would want to be thinking about ways that bots could help out.
Sometimes bots interface with humans in the form of a command, notification, or message. For quite some time there have been IRC bots that respond to commands or greet users, and more recently we’ve seen the rapid growth of Twitter bots. A funny example of a Twitter bot that I stumbled upon is @BronyRT, which retweets any message that uses the term “brony.” [For those that are not already aware, a brony is a male viewer of the television show My Little Pony.] If you tweet the message “STATS” at @BronyRT, it will return your ranking in terms of brony-related tweets, based on the number of times your messages have been retweeted. It’s a very strange system of karma that I don’t entirely understand.
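Command-handling bots of this kind usually reduce to a dispatch table keyed on the command word. A toy sketch in Python, where the handlers and reply text are made up for illustration and have nothing to do with @BronyRT’s real behavior:

```python
def make_command_bot(handlers, fallback=None):
    """Build a responder that routes a message to a handler by its first word."""
    def respond(user, message):
        words = message.split()
        command = words[0].upper() if words else ""
        handler = handlers.get(command, fallback)
        # Return the handler's reply, or None if the message isn't a command
        return handler(user, message) if handler else None
    return respond
```

Adding a new command is just adding an entry to the dictionary, which is part of why these bots accumulate odd features like “STATS” so easily.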
There are a few other bots worth mentioning: @paperbot is a Twitter bot created by Ed Summers that tweets 100-year-old news from OCR’ed newspapers using the Chronicling America API. I love how this is an example of a bot that both creates new awareness of content and serves a very specific purpose. There’s also an interesting CHI article about GetLostBot, a bot that tries to facilitate serendipity by using Foursquare location data.
Design Features of Bots
Several design features of bots jump out at me:
- Bots are cheap. You only need a client with a web connection to make HTTP requests, whether on a server or a desktop. Beyond this basic hardware, there is nothing about the software that makes them expensive.
- They move agency to the edges of an ecosystem. By allowing outside developers to add functionality through the creation of bots, core software can be lighter and allow for greater participation. For example, I have a lot more control over how a Wikipedia bot works than I do over the Mediawiki software that powers the entire site. The ability to create a bot gives me more agency in that situation.
- They are modular. Bots are typically built around a set of actions and problems. When addressing problems of interoperability, bots can act as a lightweight solution to be the glue between different APIs.
- Replicability. As software, bots can be copied, which makes them easy to share. The ability to run many copies of the same bot also means they can work in parallel, increasing computing power and reducing processing time by spreading the work across multiple machines.
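The replicability point is worth a small demonstration. Because a bot is just a function applied to some unit of work, identical copies can be fanned out over a pool of workers. This sketch uses concurrent workers on one machine to show the pattern; the “task” is a stand-in for real bot work like checking a page for problems:

```python
from concurrent.futures import ThreadPoolExecutor

def bot_task(page_id):
    """Stand-in for one unit of bot work, e.g. checking one page."""
    return page_id, page_id % 2 == 0  # toy result: flag even-numbered pages

def run_in_parallel(page_ids, workers=4):
    """Run identical copies of the bot task concurrently over a batch of pages."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(bot_task, page_ids))
```

Scaling out across real machines means swapping the executor for a job queue, but the bot code itself doesn’t have to change, which is the point of the design feature.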
Let’s Build Bots
The best part about bots is that they don’t rely on technical specifications, and their implementation can take a number of forms. We can build bots today, and don’t need funding or to ask for permission. There are also some interesting design patterns among bots, and several frameworks exist today to create bots for various platforms.
I’d love feedback on the ideas mentioned here, and hope we can build some awesome bots together.