Note: This is a repost of a blog post about the Facebook emotional contagion experiment that I wrote on People Pattern’s blog.

This is the first in a series of posts responding to the controversial Facebook study on Emotional Contagion

The past two weeks have seen a great deal of discussion around the recent computational social science study of Kramer, Guillory and Hancock (2014), “Experimental evidence of massive-scale emotional contagion through social networks”. I encourage you to read the published paper before getting caught up in the maelstrom of commentary. The wider issues are critical to address, and I have summarized the often conflicting but thoughtful perspectives below. These issues strike close to home, given our company’s expertise in computational linguistics and reliance on social media.

In this post, I provide a brief description of the original paper itself along with a synopsis of the many perspectives that have been put forth in the past two weeks. This post sets the stage for two posts to follow tomorrow and Tuesday next week that provide our take on the study plus our own Facebook-external opt-in version of the experiment, which anyone currently using Facebook can participate in.

Summary of the study

Kramer, Guillory and Hancock’s paper provides evidence that emotional states as expressed in social media posts are contagious in that they affect whether readers of those posts reflect similar positive or negative emotional states in their own later posts. The evidence is based on an experiment involving about 700,000 Facebook users over a one week period in January 2012. These users were split into four groups: a group that had a reduction in positive messages in their Facebook feed, another that had a reduction in negative messages, a control group that had an overall 5% reduction in posts, and a second control group that had a 2% reduction. Positivity and negativity were determined by using the LIWC word lists. LIWC, which was created and maintained by my University of Texas at Austin colleague James Pennebaker, is a standard resource for psychological studies of emotional expression in language. Over the past two decades, it has been applied to language from varying sources, including speech, essays, and social media.

The study found a small but statistically significant difference in emotional expression between the positive suppression group and the control and the negative suppression group and the control. Basically, users who had positive posts suppressed produced slightly lower rates of positive word usage and slightly higher rates of negative word usage, and the mirror image of this was found for the negative suppression group (check out the plot for these). (This description of the study is short — see Nitin Madnani’s description for more detail and analysis.)

The study was published in PNAS, and then the shit hit the fan.

Objections to the study

Objections to the study and the infrastructure that made it possible have come from many sources. The two major complaints have to do with ethical considerations and research flaws.

The first major criticism is that the study was unethical. The key problem is that there was no informed consent: Facebook users had no idea that they were part of this study and had no opportunity to opt out of it. An important aspect of this is that the study conforms to the Facebook terms of service: Facebook has the right to experiment with feed filtering algorithms as part of improving its service. However, because Jeff Hancock is a Cornell University professor, many argue that the study should have gone through Cornell’s IRB process. Furthermore, many feel that Facebook should obtain consent from users when running such experiments, whether for eventual publication or for in-company studies to improve the service. The editors of PNAS themselves have issued an editorial expression of concern over the lack of informed consent and opt-out for subjects of the study. We agree this is an issue, so in our third post, we’ll introduce a way this can be achieved through an opt-in version of the study.

The second type of criticism is that the research is flawed or otherwise unconvincing. The most obvious issue is that the effect sizes are small. A subtler problem familiar to anyone who has done anything with sentiment analysis is that counting positive and negative words is a highly imperfect means for judging the positivity/negativity of a text (e.g. it does the wrong thing with negations and sarcasm — see Pang and Lee’s overview). Furthermore, the finding that reducing positive words seen leads to fewer positive words produced does not mean that the user’s actual mood was affected. We will return to this last point in tomorrow’s post.
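To make the word-counting objection concrete, here is a toy sketch in Scala of lexicon-based scoring (the word lists and example sentences are invented for illustration and are not LIWC): a simple negation leaves the counts, and hence the score, unchanged even though the meaning flips.

// Toy lexicon-based sentiment scorer: positive word count minus negative word count.
// The word lists below are made up for illustration; LIWC's lists are far larger.
val positiveWords = Set("happy", "great", "love")
val negativeWords = Set("sad", "awful", "hate")

def score(text: String): Int = {
  val tokens = text.toLowerCase.split("\\s+")
  tokens.count(positiveWords) - tokens.count(negativeWords)
}

score("I am so happy today")   // 1: counted as positive
score("I am not happy today")  // 1: still counted as positive, though the meaning is negated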

Support for the study

In response, several authors have joined the discussion to support the study and others similar to it, or to refute some aspects of the criticism leveled at it.

Several commentators have made unequivocal statements that the study would never have obtained IRB approval. This is in fact a misperception: Michelle Meyer provides a great overview of many aspects of IRB approval and concludes that this particular study could have legitimately passed the IRB process. A key point for her is that, had an IRB approved the study, that would probably have been the right decision. She concludes: “We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities freely to engage in these practices, we ought not unduly restrain academics trying to determine their effects.”

Another defense is that many concerns expressed about the study are misplaced. Tal Yarkoni argues in “In Defense of Facebook” that many critics have inappropriately framed the experimental procedure as injecting positive or negative content into feeds, when in fact it was removal of content. Secondly, he notes that Facebook already manipulates users’ feeds, and this study is essentially business-as-usual in this respect. Yarkoni also argues that it is a good thing that Facebook publishes such research: “by far the most likely outcome of the backlash Facebook is currently experiencing is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature.” They will do the work regardless, but the public will have less visibility into the kinds of questions Facebook can ask and the capabilities they can build based on the answers they find.

Duncan Watts takes this to another level, saying that companies like Facebook actually have a moral obligation to conduct such research. He writes in the Guardian that the existence of social networks like Facebook gives us an amazing new platform for social science research, akin to the advent of the microscope. He argues that companies like Facebook, as the gatekeepers of such networks, must perform and disseminate research into questions such as how users are affected by the content they see.

Finally, such collaborations between industry and academia should be encouraged. Kate Niederhoffer and James Pennebaker argue that both industry and academia are best served through such collaborations and that the discussion around this study provides an excellent case study. In particular, the backlash against the study highlights the need for more rigor, awareness and openness about research methods, and for more explicit informed consent from clients or customers.

Wider issues raised by the study and the backlash against it

The backlash and the responses above have also provided fertile ground for observations and arguments about subtler issues that the study and the reaction to it have revealed.

One of my favorites is the observation that IRBs do not perform ethical oversight. danah boyd argues that the IRB review process itself is mistakenly viewed by many as a mechanism for ensuring research is ethical. She makes an insightful, non-obvious argument: that the main function of an IRB is to ensure a university is not liable for the activities of a given research project, and that focusing on questions of IRB approval for the Facebook study is beside the point. Furthermore, for her the real source of the backlash is public misunderstanding of, and growing negative sentiment toward, the practice of collecting and analyzing data about people using the tools of big data.

Another point is that the ethical boundaries and considerations of industry and academia are difficult to reconcile. Ed Felten writes that though the study conforms to Facebook’s terms of service, it is clearly inconsistent with the research community’s ethical standards. On one hand, this gap could lead to fewer collaborations between companies and university researchers, while on the other hand it could enable some university researchers to side-step IRB requirements by working with companies. Note that opportunities for these sorts of collaborations arise naturally and reasonably frequently; for example, a professor’s student may graduate, join such a company, and continue working with their former advisor.

Zeynep Tufekci escalates the discussion to a much higher level—she argues that companies like Facebook are effectively engineering the public. According to Tufekci, this study isn’t the problem so much as it is symptomatic of the wider issue of how a corporate entity like Facebook has the power to target, model and manipulate users in very subtle ways. In a similar, though less polemical vein, Tarleton Gillespie notes the disconnect between Facebook’s promise to deliver a better experience to its users and how users perceive the role and ability of such algorithms. He notes that this leads to “a deeper discomfort about an information environment where the content is ours but the selection is theirs.”

In a follow-up post responding to criticism of his “In Defense of Facebook” post, Tal Yarkoni points out that the real problem is the lack of regulations and frameworks for what can be done with online data, especially data collected by private entities like Facebook. He suggests the best course is to reserve judgment on questions of ethics for this particular paper, but that the incident certainly highlights the need for “a new set of regulations that provide a unitary code for dealing with consumer data across the board–i.e., in both research and non-research contexts.”

Perhaps the most striking thing about the Kramer, Guillory and Hancock paper is how the ensuing discussion has highlighted many deep and important aspects of the ethics of research in computational social science from both industry and university perspectives, and the subtleties that lie therein.

Summing up

A standard blithe rejoinder to users of services like Facebook who express concern, or even horror, about studies like this is to say “Don’t you see that when you use a service you don’t pay for, you are not the customer, you are the product?” This is certainly true in many ways, and it merits repeating again and again. However, it of course doesn’t absolve corporations from the responsibility to treat their users with respect and regard for their well-being.

I don’t think either the researchers or Facebook itself has been grossly negligent with respect to this study, but the study nonetheless sits in an ethical gray zone. Our second post will touch on other activities, such as A/B testing in ad placement and content, that are arguably in that same gray zone, but which have not created a public outcry even after years of being practiced. It will also say more about how the linguistic framing of the study itself essentially primed the extreme backlash that was observed, and how the study is in many ways more innocuous than its own wording would suggest.

Our third post will introduce our own opt-in version of the study, which we think is a reasonable way to explore the questions posed in the study. We’d love to get plenty of folks to try it out, and we’ll even let participants guess whether they were in the positive or negative group. Stay tuned!

There seems to be a relatively frequent back-and-forth in American society involving one group asking the wider society to stop using a racially charged word in certain contexts, and members of the wider society reacting to this as political correctness or thinking it is just plain wrong. For example, @KaraRBrown posted “Stop calling shit ‘ghetto’”, in which she calls out the increasing use of the word ‘ghetto’ as an adjective for less desirable stuff and strongly recommends people stop using it in that context. To summarize: “… this [using ghetto in this way] is something that can make a seemingly OK person immediately sound like an ignorant, possibly racist asshole. Don’t be that person.”

One response on Twitter to this remarked on how she must mean it is racist toward Jews, and this led to a non-debate with no improvement in mutual understanding. Ms Brown took it as trolling, but it is also indicative of responses I’ve seen elsewhere in similar discussions. I’m fine with viewing it as trolling if you are tired of that kind of typical response, but it can also be viewed as an expression of general misunderstanding about don’t-use-the-word-X-in-certain-contexts requests.

An example that I deal with all the time is the use of the word ‘slave’ in distributed computing contexts, where there is commonly a “master” compute node that is in charge of many “slave” worker compute nodes (e.g. look at systems like Hadoop and Spark). This terminology comes from the general use of master and slave in technology. When I started working with Hadoop, I asked my wife (who is black) what she thought of that terminology, and she responded simply that she found it somewhat insensitive and offensive. Mainly, her response was just “Why? There are lots of other good descriptive words one could use instead.” I looked into it a bit, and it turns out there was a bit of a furor over master/slave terminology years ago when, in 2003, the County of Los Angeles requested that equipment suppliers avoid such terminology on equipment labels. The internet had a conniption about it, with many posters crying foul that this was political correctness gone crazy—even though it was just a polite email request. It is remarkable how vehemently offended some people got by the request and how they went to great lengths to defend the terms as the best ones possible. I’m personally with those who point out that there are many other perfectly good words to describe the relationship, e.g. primary/secondary or supervisor/worker, and that those have the added benefit of not being insensitive. My favorite response (which I unfortunately cannot find the link to now) was something like “we don’t call computer components rapist and victim: let’s not use master and slave either.”

One of the things that was often pointed out regarding master/slave is that the terminology goes back a long, long time and that it neither began nor ended with American slavery, so why should black Americans be bothered by it? And anyway, slavery ended with the Civil War, so why can’t black Americans just get over it? It’s the same thing with the point about ‘ghetto’ being associated with Jews rather than black Americans. These comments ignore context, and the strong associations such terms have for some segments of American society. Context is everything, and unfortunately, American slavery is not out of context — it is the genesis of the struggles for equality that black Americans have faced over the past 150 years. Most white Americans feel it is far, far in the past, but it isn’t such a long time. Oddly, many white Americans feel that we live in a post-racial society, but this is at odds with the experience of many black Americans, and you don’t need to look far to see ugly examples of it right in our faces on Twitter.

Here’s another example of where context matters: a Boston policeman was fired for calling a baseball player a “Monday”. But “Monday” is just a day of the week, so what’s the big deal, right? Well, you know, regular words can be racist slurs in context. Consider this as well: it is still unfortunately common for white people born before the fifties to refer to black men as “boys”. This is highly offensive, even though they may wish no offense and often harbor no explicitly racist views. It’s the echo of times past reverberating through language still used today, and it still has power.

So, now to circle back to the main point. Some people seem to get quite offended by statements like “word X is racist in such-and-such context, so don’t use it that way.” Why? My guess is that quite often the offended person thinks “I’m not racist, but I’ve used that word in that way, so now you are calling me a racist, and that’s just crazy.” They then go on to justify that use of the word or otherwise make the request seem unreasonable. What they seem to be missing is that the original request is not saying that you are racist because you say X, but that it is racially insensitive to do so (and you probably didn’t realize that, so here’s your public service announcement). These are usually reasonable requests (and not calls to ban words, etc), so just consider changing your use of such terms out of respect and good sense.

Topics: twitter, twitter4j, word clouds

Introduction

My previous post showed how to use Twitter4j in Scala to access Twitter streams. This post shows how to control a Twitter user’s actions using Twitter4j. The primary purpose of this functionality is perhaps to create interfaces for Twitter like TweetDeck, but it can also be used to create bots that take automated actions on Twitter (one bot I’m playing around with is @tshrdlu, using the code in this tutorial and the code in the tshrdlu repository).

This post will only cover a small portion of the things you can do, but they are some of the more common things and I include a couple of simple but interesting use cases. Once you have these things in place, it is straightforward to figure out how to use the Twitter4j API docs (and Stack Overflow) to do the rest.

Getting set up: code and authorization

Rather than having the reader build the code up while going through the tutorial, I’ve set up the code in the repository twitter4j-tutorial. The version needed for this tutorial is v0.2.0. You can download a tarball of that version, which may be easier to work with if there have been further developments to the repository since the writing of this tutorial. Check out or download that code now. The main file of interest is:

  • src/main/scala/TwitterUser.scala

This tutorial is mainly a walk through for that file in blog form, with some additional pointers and explanations here and there.

You also need to set up the authorization details. See “Setting up authorization” section of the previous post to do this if you haven’t already.

READ THE FOLLOWING

IMPORTANT: for this tutorial you must set the permissions for your application to be “Read and Write”. This does NOT mean to use ‘chmod’. It means going to the Twitter developers application site, signing in with your Twitter account, clicking on “Settings” and setting the permissions to read and write.

OKAY, THANKS FOR PAYING ATTENTION

In the previous tutorial, authorization details were put into code. This time, we’ll use a twitter4j.properties file. This is easy: just add a file with that name to the twitter4j-tutorial directory with the following contents, substituting your details as appropriate.

oauth.consumerKey=[your consumer key here]
oauth.consumerSecret=[your consumer secret here]
oauth.accessToken=[your access token here]
oauth.accessTokenSecret=[your access token secret here]

Rate limits and a note of caution

Unlike streaming access to Twitter, performing user actions via the API is subject to rate limits. Once you hit your limit, Twitter will throw an exception and refuse to comply with your requests until a period of time has passed (usually 15 minutes). Twitter does this to limit bad bots and also preserve their computational resources. For more information on rate limits, see Twitter’s page about rate limiting.

I’ll discuss how to manage rate limits later in the post, but I mention them up front in case you exceed them while messing around with things early on.

A word of caution is also in order: since you are going to be able to take actions automatically, like following users, posting a status, and retweeting, you could end up doing many of these actions in rapid succession. This will (a) use up your rate limit very quickly, (b) probably not be interesting behavior, and (c) could get your account suspended. Make sure to follow the rules, especially those on following users.

If you are going to mess around quite a bit with actual posting, you may also want to consider creating an account that is not your primary Twitter account so that you don’t annoy your actual followers. (Suggestion: see the paragraph on “Create account” in part one of project phase one of my Applied NLP course for tips on how to add multiple accounts with the same gmail address.)

Basic interactions: searching, timelines, posting

All of the examples below are implemented as objects with main methods that do something using a twitter4j.Twitter object. To make it so we don’t have to call the TwitterFactory repeatedly, we first define a trait that gets a Twitter instance set up and ready to use.

trait TwitterInstance {
  val twitter = new TwitterFactory().getInstance
}

By extending this trait, our objects can access the twitter object conveniently.

As a first simple example, we can search for tweets that match a query by using the search method. The following object takes a query string given on the command line, searches for tweets using that query, and prints them.

object QuerySearch extends TwitterInstance {

  def main(args: Array[String]) {
    val statuses = twitter.search(new Query(args(0))).getTweets
    statuses.foreach(status => println(status.getText + "\n"))
  }

}

Note that this uses a Query object, whereas when using a TwitterStream, a FilterQuery was needed. Also, for this to work, we must have the following import available:

import collection.JavaConversions._

This ensures that we can use the java.util.List returned by the getTweets method (of twitter4j.QueryResult) as if it were a Scala collection with the method foreach (and map, filter, etc). This is done via implicit conversions that make working with Java libraries far nicer than it would be otherwise.
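As an aside, this also means the other collection methods work as expected. Here is a small hypothetical variation (isRetweet and getUser are standard twitter4j Status accessors) that skips retweets and prefixes each tweet with its author:

object QuerySearchNoRetweets extends TwitterInstance {

  def main(args: Array[String]) {
    val statuses = twitter.search(new Query(args(0))).getTweets
    statuses
      .filter(!_.isRetweet)
      .map(status => status.getUser.getScreenName + ": " + status.getText)
      .foreach(println)
  }

}

The QuerySearch object above is the one used in what follows.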

To run this, go to the twitter4j-tutorial directory, and do the following (some example output shown):

$ ./build
> run-main bcomposes.twitter.QuerySearch scala
[info] Running bcomposes.twitter.QuerySearch scala
E' avvilente non sentirsi all'altezza di qualcosa o qualcuno, se non si possiede quella scala interiore sulla quale l'autostima pu? issarsi

Scala workshop will run with ECOOP, July 2nd in Montpellier, South of France. Call for papers is out. http://t.co/3WS6tHQyiF

#scala http://t.co/JwNrzXTwm8 Even two of them in #cologne #germany . #thumbsup

RT @MILLIB2DAL: @djcameo Birthday bash 30th march @ Scala nightclub 100 artists including myself make sur u reach its gonna be #Legendary

@kot_2010 I think it's the same case with Scala: with macros it will tend to "outsource" things to macro libs, keeping a small lang core.

RT @waxzce: #scala hiring or job ? go there : http://t.co/NeEjoqwqwT

@esten That's not only a front-end problem. Scala devs should use scalaz.Equal and === for type safe equality. /cc @sharonw

<...more...>

[success] Total time: 1 s, completed Feb 26, 2013 1:54:44 PM

You might see some extra communications from SBT, which will probably need to download dependencies and compile the code. For the rest of the examples below, you can run them in a similar manner, substituting the right object name and providing any necessary arguments.

There are various timelines available for each user, including the home timeline, mentions timeline, and user timeline. They are accessible as twitter4j.api.TimelineResources. For example, the following object shows the most recent statuses on the authenticating user’s home timeline (which are the tweets by people the user follows).

object GetHomeTimeline extends TwitterInstance {

  def main(args: Array[String]) {
    val num = if (args.length == 1) args(0).toInt else 10
    val statuses = twitter.getHomeTimeline.take(num)
    statuses.foreach(status => println(status.getText + "\n"))
  }

}

The number of tweets to show is given as the command-line argument.

You can also update the status of the authenticating user from the command line using the following object. Calling it will post to the authenticating user’s account (so only do it if you are comfortable with the command-line argument you give it going onto your timeline).

object UpdateStatus extends TwitterInstance {
  def main(args: Array[String]) {
    twitter.updateStatus(new StatusUpdate(args(0)))
  }
}

There are plenty of other useful methods that you can use to interact with Twitter, and if you have successfully run the above three, you should be able to look at the Twitter4j javadocs and start using them. Some examples doing more interesting things are given below.
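Before those, here is a quick hedged taste of two other calls from the javadocs, createFriendship (follow a user) and getUserTimeline (fetch a user’s recent tweets). Note that this will actually follow whatever account you name on the command line from the authenticating user, so use it with care.

object FollowAndPeek extends TwitterInstance {

  def main(args: Array[String]) {
    val screenName = args(0)
    twitter.createFriendship(screenName)  // follow the user
    val statuses = twitter.getUserTimeline(screenName).take(5)
    statuses.foreach(status => println(status.getText + "\n"))
  }

}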

Replying to tweets written to you

The following object goes through the most recent tweets that have mentioned the authenticating user, and replies “OK.” to them. It includes the author of the original tweet and any other entities that were mentioned in it.

object ReplyOK extends TwitterInstance {

  def main(args: Array[String]) {
    val num = if (args.length == 1) args(0).toInt else 10
    val userName = twitter.getScreenName
    val statuses = twitter.getMentionsTimeline.take(num)
    statuses.foreach { status => {
      val statusAuthor = status.getUser.getScreenName
      val mentionedEntities = status.getUserMentionEntities.map(_.getScreenName).toList
      val participants = (statusAuthor :: mentionedEntities).toSet - userName
      val text = participants.map(p=>"@"+p).mkString(" ") + " OK."
      val reply = new StatusUpdate(text).inReplyToStatusId(status.getId)
      println("Replying: " + text)
      twitter.updateStatus(reply)
    }}
  }

}

This should be mostly self-explanatory, but there are a couple of things to note. First, you can find all the users who have been mentioned (via @-mentions) in the tweet with the getUserMentionEntities method of the twitter4j.Status class. The code ensures that the author of the original tweet (who isn’t necessarily mentioned in it) is included as a participant for the response, and we also remove the authenticating user. So, if the message “@tshrdlu What do you think of @tshrdlc?” is sent from @jasonbaldridge, the response will be “@jasonbaldridge @tshrdlc OK.” Note how the screen names do not have the @ symbol, so that must be added in the tweet text of the reply.

Second, notice that StatusUpdate objects can be created by chaining methods that add more information to them, e.g. the inReplyToStatusId call used above and similar methods for the location and other fields, which incrementally build up the StatusUpdate object that actually gets posted. (This is a common Java strategy that basically helps get around the fact that parameters to classes can neither be specified by name in Java nor have defaults, the way they can in Scala.)
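For instance, a reply that also attaches a location could be built up like this. (A hedged sketch: inReplyToStatusId is the chaining call used above; I’m assuming a parallel location chaining method here, so check the StatusUpdate javadocs; status is assumed to be in scope as in ReplyOK, and the coordinates are just example values for Austin.)

// Build up a StatusUpdate by chaining, then post it.
val update = new StatusUpdate("@jasonbaldridge OK.")
  .inReplyToStatusId(status.getId)
  .location(new GeoLocation(30.27, -97.74))
twitter.updateStatus(update)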

Checking and managing rate limit information

None of the above code makes many requests of Twitter, so there was little danger of exceeding rate limits. These limits are a mixture of both time and number of requests: you get a certain number of requests per time window (usually 15 minutes, as noted earlier) per authenticating user, with different request types having their own limits. Because of these limits, you should consider accessing tweets, timelines, and such using the streaming methods when you can.

Every response you get from Twitter comes back as a sub-class of twitter4j.TwitterResponse, which not only gives you what you want (like a QueryResult) but also gives you information about your connection to Twitter. For rate limit information, you can use the getRateLimitStatus method, which can then inform you about the number of requests you can still make and the time until your limit resets.

The trait RateChecker below has a function checkAndWait that, when given a TwitterResponse object, checks whether the rate limit has been exceeded and waits if it has. When the rate is exceeded, it finds out how much time remains until the rate limit is reset and makes the thread sleep until that time (plus 10 seconds) has passed.

trait RateChecker {

  def checkAndWait(response: TwitterResponse, verbose: Boolean = false) {
    val rateLimitStatus = response.getRateLimitStatus
    if (verbose) println("RLS: " + rateLimitStatus)

    if (rateLimitStatus != null && rateLimitStatus.getRemaining == 0) {
      println("*** You hit your rate limit. ***")
      val waitTime = rateLimitStatus.getSecondsUntilReset + 10
      println("Waiting " + waitTime + " seconds ( " + waitTime/60.0 + " minutes) for rate limit reset.")
      Thread.sleep(waitTime*1000)
    }
  }

}

Managing rate limits is actually more complex than this. For example, this strategy ignores the fact that different request types have different limits, but it keeps things simple. This is surely not an optimal solution, but it does the trick for present purposes.

Note also that you can directly ask for rate limit information from the twitter4j.Twitter instance itself, using the getRateLimitStatus method. Unlike the results for the same method on a TwitterResponse, this gives a Map from various request types to the current rate limit statuses for each one. In a real application, you’d want to control each of these different limits at a more fine-grained level using this information.
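For example, here is a minimal sketch (assuming twitter4j 3.x, where this method returns a java.util.Map from resource names to RateLimitStatus objects) that prints the current status for each request type:

object ShowRateLimits extends TwitterInstance {

  def main(args: Array[String]) {
    // The JavaConversions import lets us iterate over the Java Map as Scala pairs.
    import collection.JavaConversions._
    for ((resource, status) <- twitter.getRateLimitStatus)
      println(resource + ": " + status.getRemaining + " of " + status.getLimit
        + " remaining, resets in " + status.getSecondsUntilReset + " seconds")
  }

}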

Not all of the methods of Twitter4j classes actually hit the Twitter API. To see whether a given method does, look at its Javadoc: if its description says “This method calls http://api.twitter.com/1.1/some/method.json”, then it does hit the API. Otherwise, it doesn’t and you don’t need to guard it.

Examples using the checkAndWait function are given below.

Creating a word cloud from followers’ descriptions

Here’s a more interesting task: given a Twitter user, compute the counts of the words in the descriptions given in the bios of their followers and build a word cloud from them. The following code does this, outputting the resulting counts in a file, the contents of which can be pasted into Wordle’s advanced word cloud input.

object DescribeFollowers extends TwitterInstance with RateChecker {

  def main(args: Array[String]) {
    val screenName = args(0)
    val maxUsers = if (args.length==2) args(1).toInt else 500
    val followerIds = twitter.getFollowersIDs(screenName,-1).getIDs

    val descriptions = followerIds.take(maxUsers).flatMap { id => {
      val user = twitter.showUser(id)
      checkAndWait(user)
      if (user.isProtected) None else Some(user.getDescription)
    }}

    val tword = """(?i)[a-z#@]+""".r.pattern
    val words = descriptions.flatMap(_.toLowerCase.split("\\s+"))
    val filtered = words.filter(_.length > 3).filter(tword.matcher(_).matches)
    val counts = filtered.groupBy(x=>x).mapValues(_.length)
    val rankedCounts = counts.toSeq.sortBy(- _._2)

    import java.io._
    val wordcountFile = "/tmp/follower_wordcount.txt"
    val writer = new BufferedWriter(new FileWriter(wordcountFile))
    for ((w,c) <- rankedCounts)
      writer.write(w+":"+c+"\n")
    writer.flush
    writer.close
  }

}

The thing to consider is that if you are pointing this at a person with several hundred followers, you will exceed the rate limit. The call to getFollowersIDs is a single hit, and then each call to showUser is a hit. Because the showUser calls come in rapid succession, we check the rate limit status after each one using checkAndWait (which is available because we mixed in the RateChecker trait) and it waits for the limit to reset as previously discussed, keeping us from exceeding the rate limit and getting an exception from Twitter.

The number of users returned by getFollowersIDs is at most 5000. If you run this on a user who has more followers, followers beyond 5000 won’t be considered. If you want to tackle such a user, you’ll need to use the cursor, which is the long value provided as the argument to getFollowersIDs (-1 means start from the beginning), and make multiple calls, advancing the cursor with each batch to get more.
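Here is a hedged sketch of how that might look, relying on the hasNext and getNextCursor methods of the IDs object returned by getFollowersIDs (as described in the twitter4j javadocs). It assumes it lives in an object extending TwitterInstance with RateChecker, as above.

  // Page through all follower IDs, starting from cursor -1 and advancing to the
  // next cursor until there are no more pages. Each call is one API hit, so we
  // check the rate limit after each one.
  def allFollowerIds(screenName: String): Vector[Long] = {
    def loop(cursor: Long, acc: Vector[Long]): Vector[Long] = {
      val ids = twitter.getFollowersIDs(screenName, cursor)
      checkAndWait(ids)
      val collected = acc ++ ids.getIDs
      if (ids.hasNext) loop(ids.getNextCursor, collected) else collected
    }
    loop(-1, Vector.empty)
  }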

Most of the rest of the code is just standard Scala stuff for getting the word counts and outputting them to a file. Note that a small effort is made to filter out words with non-alphabetic characters (while still allowing # and @) and to drop short words.

As an example of the output, when put into Wordle, here is the word cloud for my followers.

[Image: jasonbaldridge_wordcloud, a word cloud built from my followers’ bio descriptions]

This looks about right for me—completely expected in fact—but it is still cool that it comes out of my followers’ self descriptions. One could start thinking of some fun algorithms for exploiting this kind of representation of a user to look into how well different users align or don’t align with their followers, or to look for clusters of different types of followers, etc.

Retweeting automatically

Tired of actually reading those tweets in your timeline and retweeting some of them? The following code gets some of the accounts the authenticating user follows, grabs twenty of those users, filters them to get interesting ones, and then takes up to 10 of the remaining ones and retweets their most recent statuses (provided they aren’t replies to someone else).

object RetweetFriends extends TwitterInstance with RateChecker {

  def main(args: Array[String]) {
    val friendIds = twitter.getFriendsIDs(-1).getIDs
    val friends = friendIds.take(20).map { id => {
      val user = twitter.showUser(id)
      checkAndWait(user)
      user
    }}

    val filtered = friends.filter(admissable)
    val ranked = filtered.map(f => (f.getFollowersCount, f)).sortBy(- _._1).map(_._2)

    ranked.take(10).foreach { friend => {
      val status = friend.getStatus
      if (status!=null && status.getInReplyToStatusId == -1) {
        println("\nRetweeting " + friend.getName + ":\n" + status.getText)
        twitter.retweetStatus(status.getId)
        Thread.sleep(30000)
      }
    }}
  }

  def admissable(user: User) = {
    val ratio = user.getFollowersCount.toDouble/user.getFriendsCount
    user.getFriendsCount < 1000 && ratio > 0.5
  }

}

The getFriendsIDs method is used to get the users that the authenticating user is following (but who do not necessarily follow the authenticating user, despite the use of the word “friend”). We again take care with rate limiting while gathering the users. We then filter these users, keeping those who follow fewer than 1000 accounts and have a follower/friend ratio greater than 0.5, in a simple attempt to remove some less interesting (or spammy) accounts. The remaining users are ranked by their number of followers (most first). Finally, we take (up to) 10 of these (take returns just 3 items if you ask for 10 but only 3 are available), look at each one’s most recent status, and if it is not null and isn’t a reply to someone, we retweet it. Between each of these, we wait 30 seconds so that anyone following our account doesn’t get an avalanche of retweets.

Conclusion

This post and the related code should provide enough to get a decent feel for working with Twitter4j, including necessary setup and using some of the methods to start creating applications with it in Scala. See project phase three of my Applied NLP course to see exercises and code that takes this further to do interesting things for automated bots, including mixing streaming access and user access to get more complex behaviors.

Topics: twitter, twitter4j, sbt

Introduction

My previous post provided a walk-through for using the Twitter streaming API from the command line, but tweets can be more flexibly obtained and processed using an API for accessing Twitter from your programming language of choice. In this tutorial, I walk through basic setup and some simple uses of the twitter4j library with Scala. Much of what I show here should be useful for those using other JVM languages like Clojure and Java. If you haven’t gone through the previous tutorial, have a look now before going on, as this tutorial covers much of the same material but uses twitter4j rather than HTTP requests.

I’ll introduce code, bit by bit, for accessing the Twitter data in different ways. If you get lost with what should go where, all of the code necessary to run the commands is available in this github gist, so you can compare to that as you move through the tutorial.

Update: The tutorial is set up to take you from nothing to being able to obtain tweets in various ways, but you can also get all the relevant code by looking at the twitter4j-tutorial repository. For this tutorial, the tag is v0.1.0, and you can also download a tarball of that version.

Getting set up

An easy way to use the twitter4j library in the context of a tutorial like this is for the reader to set up a new SBT project, declare it as a dependency, and then compile and run code within SBT. (See my tutorial on using Jerkson for processing JSON with Scala for another example of this.) This sorts out the process of obtaining external libraries and setting up the classpath so that they are available. Follow the instructions in this section to do so.

$ mkdir ~/twitter4j-tutorial
$ cd ~/twitter4j-tutorial/
$ wget http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.12.2/sbt-launch.jar

Now, save the following as the file ~/twitter4j-tutorial/build.sbt. Be aware that it is important to keep the empty lines between each of the declarations.

name := "twitter4j-tutorial"

version := "0.1.0"

scalaVersion := "2.10.0"

libraryDependencies += "org.twitter4j" % "twitter4j-stream" % "3.0.3"

Then save the following as the file ~/twitter4j-tutorial/build.

java -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=384M -jar `dirname $0`/sbt-launch.jar "$@"

Make that file executable and run it, which will show SBT doing a bunch of work and then leave you with the SBT prompt. At the SBT prompt, invoke the update command.

$ cd ~/twitter4j-tutorial
$ chmod a+x build
$ ./build
[info] Set current project to twitter4j-tutorial (in build file:/Users/jbaldrid/twitter4j-tutorial/)
> update
[info] Updating {file:/Users/jbaldrid/twitter4j-tutorial/}default-570731...
[info] Resolving org.twitter4j#twitter4j-core;3.0.3 ...
[info] Done updating.
[success] Total time: 1 s, completed Feb 8, 2013 12:55:41 PM

To test whether you have access to twitter4j now, go to the SBT console and import the classes from the main twitter4j package.

> console
[info] Starting scala interpreter...
[info]
Welcome to Scala version 2.10.0 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_37).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import twitter4j._
import twitter4j._

If nothing further is output, then you are all set (exit the console using CTRL-D). If things are amiss (or if you are running in the default Scala REPL), you’ll instead see something like the following.

scala> import twitter4j._
<console>:7: error: not found: value twitter4j
import twitter4j._
^

If this is what you got, try to follow the instructions above again to make sure that your setup is exactly as above (check the versions, etc).

If you just want to see some examples of using twitter4j as an API and are happy adding its jars by hand to your classpath or are using an IDE like Eclipse, then it is unnecessary to do the SBT setup — just read on and adapt the examples as necessary.

Write, compile and run a simple main method

To set the stage for how we’ll run programs in this tutorial, let’s create a simple main method and ensure it can be run in SBT. Do the following:

$ mkdir -p ~/twitter4j-tutorial/src/main/scala/

Next, save the following code as ~/twitter4j-tutorial/src/main/scala/TwitterStream.scala.

package bcomposes.twitter

import twitter4j._

object StatusStreamer {
  def main(args: Array[String]) {
    println("hi")
  }
}

Next, at the SBT prompt for the twitter4j-tutorial project, use the run-main command as follows.

> run-main bcomposes.twitter.StatusStreamer
[info] Compiling 1 Scala source to /Users/jbaldrid/twitter4j-tutorial/target/scala-2.10/classes...
[info] Running bcomposes.twitter.StatusStreamer
hi
[success] Total time: 2 s, completed Feb 8, 2013 1:36:32 PM

SBT compiles the code, and then runs it. This is a generally handy way of running code with all the dependencies available without having to worry about explicitly handling the classpath.

In what comes below, we’ll flesh out that main method so that it does more interesting work.

Setting up authorization

When using the Twitter streaming API to access tweets via HTTP requests, you must supply your Twitter username and password. To use twitter4j, you also must provide authentication details; however, for this you need to set up OAuth authentication. This is straightforward:

  1. Go to https://dev.twitter.com/apps and click on the button that says “Create a new application” (of course, you’ll need to log in with your Twitter username and password in order to do this)
  2. Fill in the name, description and website fields. Don’t worry too much about this: put in whatever you like for the name and description (e.g. “My example application” and “Tutorial app for me”). For the website, give the URL of your Twitter account if you don’t have anything better to use.
  3. A new screen will come up for your application. Click on the button at the bottom that says “Create my access token”.
  4. Click on the “OAuth tool” tab and you’ll see four fields for authentication which you need in order to use twitter4j to access tweets and other information from Twitter: Consumer key, Consumer secret, Access token, and Access token secret.

Based on these authorization details, you now need to create a twitter4j.conf.Configuration object that will allow twitter4j to access the Twitter API on your behalf. This can be done in a number of different ways, including environment variables, properties files, and in code. To keep it as simple as possible for this tutorial, we’ll go with the latter option.

Add the following object after the definition of StatusStreamer, providing your details rather than the descriptions given below.

object Util {
  val config = new twitter4j.conf.ConfigurationBuilder()
    .setOAuthConsumerKey("[your consumer key here]")
    .setOAuthConsumerSecret("[your consumer secret here]")
    .setOAuthAccessToken("[your access token here]")
    .setOAuthAccessTokenSecret("[your access token secret here]")
    .build
}

You should of course be careful not to let your details be known to others, so make sure that this code stays on your machine. When you start developing for real, you’ll use other means to get the authorization information injected into your application.
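For example, one such alternative (and the one used in the follow-up tutorial on controlling user actions) is a twitter4j.properties file in the project directory, which twitter4j picks up automatically; it would look like the following, with your details substituted in, and then no explicit Configuration needs to be built in code.

oauth.consumerKey=[your consumer key here]
oauth.consumerSecret=[your consumer secret here]
oauth.accessToken=[your access token here]
oauth.accessTokenSecret=[your access token secret here]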

Pulling tweets from the sample stream

In the previous tutorial, the most basic sort of access was to get a random sample of tweets from https://stream.twitter.com/1/statuses/sample.json, so let’s use twitter4j to do the same.

To do this, we are going to create a TwitterStream instance that gives us an authorized connection to the Twitter API. To see all the methods associated with the TwitterStream class, see the API documentation for TwitterStream.  A TwitterStream instance is able to get tweets (and other information) and then provide them to any listeners that have registered with it. So, in order to do something useful with the tweets, you need to implement the StatusListener interface and connect it to the TwitterStream.

Before showing the code for creating and using the stream, let’s create a StatusListener that will perform a simple action based on tweets streaming in. Add the following code to the Util object created earlier.

def simpleStatusListener = new StatusListener() {
  def onStatus(status: Status) { println(status.getText) }
  def onDeletionNotice(statusDeletionNotice: StatusDeletionNotice) {}
  def onTrackLimitationNotice(numberOfLimitedStatuses: Int) {}
  def onException(ex: Exception) { ex.printStackTrace }
  def onScrubGeo(arg0: Long, arg1: Long) {}
  def onStallWarning(warning: StallWarning) {}
}

This method creates objects that implement StatusListener (though it only does something useful for the onStatus method and otherwise ignores all other events sent to it). Clearly, what it is going to do is take a Twitter status (which is all of the information associated with a tweet, including author, retweets, geographic coordinates, etc) and output the text of the status—i.e., what we usually think of as a “tweet”.

The following code puts it all together. We create a TwitterStream object by using the TwitterStreamFactory and the configuration, add a simpleStatusListener to the stream, and then call the sample method of TwitterStream to start receiving tweets. If that were the last line of the program, it would just keep receiving tweets until the process was killed. Here, I’ve added a 2 second sleep so that we can see some tweets, then clean up the connection and shut it down cleanly. (We could let it run indefinitely, but then to kill the process, we would need to use CTRL-C, which will kill not only that process, but also the process that is running SBT.)

object StatusStreamer {
  def main(args: Array[String]) {
    val twitterStream = new TwitterStreamFactory(Util.config).getInstance
    twitterStream.addListener(Util.simpleStatusListener)
    twitterStream.sample
    Thread.sleep(2000)
    twitterStream.cleanUp
    twitterStream.shutdown
  }
}

To run this code, simply put in the same run-main command in SBT as before.

> run-main bcomposes.twitter.StatusStreamer

You should see tweets stream by for a couple of seconds and then you’ll be returned to the SBT prompt.

Pulling tweets with specific properties

As with the HTTP streaming, it’s easy to use twitter4j to follow a particular set of users, particular search terms, or tweets produced within certain geographic regions. All that is required is creating appropriate FilterQuery objects and then using the filter method of TwitterStream rather than the sample method.

FilterQuery has several constructors, one of which allows an Array of Long values to be provided, each of which is the id of a Twitter user who is to be followed by the stream. (See the previous tutorial to see one easy way to get the id of a user based on their username.)

object FollowIdsStreamer {
  def main(args: Array[String]) {
    val twitterStream = new TwitterStreamFactory(Util.config).getInstance
    twitterStream.addListener(Util.simpleStatusListener)
    twitterStream.filter(new FilterQuery(Array(1344951,5988062,807095,3108351)))
    Thread.sleep(10000)
    twitterStream.cleanUp
    twitterStream.shutdown
  }
}

These are the IDs for Wired Magazine (@wired), The Economist (@theeconomist), the New York Times (@nytimes), and the Wall Street Journal (@wsj). Add the code to TwitterStream.scala and then run it in SBT. Note that I’ve made the program sleep for 10 seconds in order to give more time for tweets to arrive (since these are just four accounts and will have varying activity). If you are not seeing anything show up, increase the sleep time.

> run-main bcomposes.twitter.FollowIdsStreamer

To track tweets that contain particular terms, create a FilterQuery with the default constructor and then call the track method with an Array of strings that contains the query terms you are interested in. The object below does this, and uses the args Array as the container for the query terms.

object SearchStreamer {
  def main(args: Array[String]) {
    val twitterStream = new TwitterStreamFactory(Util.config).getInstance
    twitterStream.addListener(Util.simpleStatusListener)
    twitterStream.filter(new FilterQuery().track(args))
    Thread.sleep(10000)
    twitterStream.cleanUp
    twitterStream.shutdown
  }
}

With things set up this way, you can track arbitrary queries by specifying them on the command line.

> run-main bcomposes.twitter.SearchStreamer scala
> run-main bcomposes.twitter.SearchStreamer scala python java
> run-main bcomposes.twitter.SearchStreamer "sentiment analysis" "machine learning" "text analytics"

If the search terms are not particularly common, you’ll need to increase the sleep time.

To filter by location, again create a FilterQuery with the default constructor, but then use the locations method, with an Array[Array[Double]] argument — basically an Array of two-element Arrays, each of which contains the longitude and latitude of a corner of a bounding box. Here’s an example that creates a bounding box for Austin and uses it.

object AustinStreamer {
  def main(args: Array[String]) {
    val twitterStream = new TwitterStreamFactory(Util.config).getInstance
    twitterStream.addListener(Util.simpleStatusListener)
    val austinBox = Array(Array(-97.8,30.25),Array(-97.65,30.35))
    twitterStream.filter(new FilterQuery().locations(austinBox))
    Thread.sleep(10000)
    twitterStream.cleanUp
    twitterStream.shutdown
  }
}

To make things more flexible, we can take the bounding box information on the command line, convert the Strings into Doubles and pair them up.

object LocationStreamer {
  def main(args: Array[String]) {
    val boundingBoxes = args.map(_.toDouble).grouped(2).toArray
    val twitterStream = new TwitterStreamFactory(Util.config).getInstance
    twitterStream.addListener(Util.simpleStatusListener)
    twitterStream.filter(new FilterQuery().locations(boundingBoxes))
    Thread.sleep(10000)
    twitterStream.cleanUp
    twitterStream.shutdown
  }
}

We can call LocationStreamer with multiple bounding boxes, e.g. as follows for Austin, San Francisco, and New York City.

> run-main bcomposes.twitter.LocationStreamer -97.8 30.25 -97.65 30.35 -122.75 36.8 -121.75 37.8 -74 40 -73 41

Conclusion

This shows the start of how you can use twitter4j with Scala for streaming. Twitter4j also supports programmatic access to the actions that any Twitter user can take, including posting messages, retweeting, following, and more. I’ll cover that in a later tutorial. Also, some examples of using twitter4j will start showing up soon in the tshrdlu project.

Topics: Unix, spelling, tr, sort, uniq, find, awk

Introduction

We can of course write programs to do most anything we want, but often the Unix command line has everything we need to perform a series of useful operations without writing a line of code. In my Applied NLP class today, I show how one can get a high-confidence dictionary out of a body of raw text with a series of Unix pipes, and I’m posting that here so students can refer back to it later and see some pointers to other useful Unix resources.

Note: for help with any of the commands, just type “man <command>” at the Unix prompt.

Checking for spelling errors

We are working on automated spelling correction as an in-class exercise, with a particular emphasis on the following sentence:

This Facebook app shows that she is there favorite acress in tonw

So, this has a contextual spelling error (there), an error that could be a valid English word but isn’t (acress) and an error that violates English sound patterns (tonw).

One of the key ingredients for spelling correction is a dictionary of words known to be valid in the language. Let’s assume we are working with English here. On most Unix systems, you can pick up an English dictionary in /usr/share/dict/words, though the words you find may vary from one platform to another. If you can’t find anything there, there are many word lists available online, e.g. check out the Wordlist project for downloads and links.

We can easily use the dictionary and Unix to check for words in the above sentence that don’t occur in the dictionary. First, save the sentence to a file.

$ echo "This Facebook app shows that she is there favorite acress in tonw" > sentence.txt

Next, we need to get the unique word types (rather than tokens) in sorted lexicographic order. The following Unix pipeline accomplishes this.

$ cat sentence.txt | tr ' ' '\n' | sort | uniq > words.txt

To break it down:

  • The cat command spills the file to standard output.
  • The tr command “translates” all spaces to newlines. So, this gives us one word per line.
  • The sort command sorts the lines lexicographically.
  • The uniq command makes those lines unique by removing adjacent duplicates. (This doesn’t do anything for this particular sentence, but I’m putting it in there in case you try other sentences that have multiple tokens of the type “the”, for example.)

You can see these effects by doing each in turn, building up the pipeline incrementally.

$ cat sentence.txt
This Facebook app shows that she is there favorite acress in tonw
$ cat sentence.txt | tr ' ' '\n'
This
Facebook
app
shows
that
she
is
there
favorite
acress
in
tonw
$ cat sentence.txt | tr ' ' '\n' | sort
Facebook
This
acress
app
favorite
in
is
she
shows
that
there
tonw

Note: the use of cat above is a UUOC (unnecessary use of cat) that is dispreferred to sending the input directly into tr at the start. I do it this way in the tutorial so that everything flows left-to-right. However, if you want to avoid cat abuse, here’s how you’d do it.


$ tr ' ' '\n' < sentence.txt | sort | uniq

We can now use the comm command to compare the file words.txt and the dictionary. It produces three columns of output: the first gives the lines only in the first file, the second are lines only in the second file, and the third are those in common. So, the first column has what we need, because those are words in our sentence that are not found in the dictionary. Here’s the command to get that.

$ comm -23 words.txt /usr/share/dict/words
Facebook
This
acress
app
shows
tonw

The -23 option indicates that columns 2 and 3 should be suppressed, showing only column 1. If we just use -2, we get the words in the sentence with the non-dictionary words on the left and the dictionary words on the right (try it).

The problem of course is that any word list will have gaps. This dictionary doesn’t have more recent terms like Facebook and app. It also doesn’t have upper-case This. You can ignore case with comm using the -i option and this goes away. It doesn’t have shows, which is not in the dictionary since it is an inflected form of the verb stem show. We could fix this with some morphological analysis, but instead of that, let’s go the lazy route and just grab a larger list of words.
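As an aside on the case point just mentioned, the case-insensitive comparison (using the -i option of BSD/macOS comm; GNU coreutils comm does not have -i) would be:

$ comm -23 -i words.txt /usr/share/dict/words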

Extracting a high-confidence dictionary from a corpus

Raw text often contains spelling errors, but errors don’t tend to happen with very high frequency, so we can often get pretty good expanded word lists by computing frequencies of word types on lots of text and then applying reasonable cutoffs. (There are much more refined methods, but this will suffice for current purposes.)

First, let’s get some data. The Open American National Corpus has just released v3.0.0 of its Manually Annotated Sub-Corpus (MASC), which you can get from this link.

- http://www.anc.org/masc/MASC-3.0.0.tgz

Do the following to get it and set things up for further processing:

$ mkdir masc
$ cd masc
$ wget http://www.anc.org/masc/MASC-3.0.0.tgz
$ tar xzf MASC-3.0.0.tgz

(If you don’t have wget, you can just download the MASC file in your browser and then move it over.)

Next, we want all the text from the data/written directory. The find command is very handy for this.

$ find data/written -name "*.txt" -exec cat {} \; > all-written.txt

To see how much is there, use the wc command.

$ wc all-written.txt
   43061  400169 2557685 all-written.txt

So, there are 43k lines, and 400k tokens. That’s a bit small for what we are trying to do, but it will suffice for the example.

Again, I’ll build up a Unix pipeline to extract the high-confidence word types from this corpus. I’ll use the head command to show just part of the output at each stage.

Here are the raw contents.

$ cat all-written.txt | head

I can't believe I wrote all that last year.
Acephalous

Friday, 07 May 2010

Now, get one word per line.

$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | head

I
can
t
believe
I
wrote
all
that
last

The tr translator is used very crudely: basically, anything that is not an ASCII letter character is turned into a newline. The -cs options indicate to take the complement (opposite) of the ‘A-Za-z’ argument and to squeeze duplicates (e.g. “A42,” becomes “A” followed by a single newline rather than three).

Next, we sort and uniq, as above, except that we use the -c option to uniq so that it produces counts.

$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | head
   1
 737 A
  22 AA
   1 AAA
   1 AAF
   1 AAPs
  21 AB
   3 ABC
   1 ABDULWAHAB
   1 ABLE

Because the MASC corpus includes tweets and blogs and other unedited text, we don’t trust words that have low counts, e.g. four or fewer tokens of that type. We can use awk to filter those out.

$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | awk '{ if($1>4) print $2 }' | head
A
AA
AB
ACR
ADDRESS
ADP
ADPNP
AER
AIG
ALAN

Awk makes it easy to process lines of files, and gives you indexes into the first column ($1), second ($2), and so on. There’s much more you can do, but this shows how you can conditionally output some information from each line using awk.
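
For instance, if you also want to keep the counts and see the most frequent surviving words first, you can print both fields from awk and add a reverse numeric sort; this is just a variation on the pipeline above:

$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | awk '{ if($1>4) print $1, $2 }' | sort -rn | head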

You can of course change the threshold. You can also turn all words to lower-case by inserting another tr call into the pipe, e.g.:

$ cat all-written.txt | tr 'A-Z' 'a-z' | tr -cs 'a-z' '\n' | sort | uniq -c | awk '{ if($1>8) print $2 }' | head
a
aa
ab
abandoned
abbey
ability
able
abnormal
abnormalities
aboard

It all comes down to what you need out of the text.

Combining and using the dictionaries

Let’s do the check on the sentence above, but using both the standard dictionary and the one derived from MASC. Run the following command first.

$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | awk '{ if($1>4) print $2 }' > /tmp/masc_vocab.txt

Then in the directory where you saved words.txt, do the following.

$ cat /usr/share/dict/words /tmp/masc_vocab.txt | sort | uniq > big_vocab.txt
$ comm -23 words.txt big_vocab.txt
acress
tonw

Ta-da! The MASC corpus provided us with enough examples of other words that This, Facebook, app, and shows are no longer detected as errors. Of course, detecting a real-word error like there is much more difficult and requires language models and more.
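
Now that big_vocab.txt exists, checking any other file just means re-running the first half of the pipeline on it. A sketch, where some_other_text.txt is a stand-in for whatever file you want to check (here using tr -cs so punctuation doesn't get in the way):

$ tr -cs 'A-Za-z' '\n' < some_other_text.txt | sort | uniq > /tmp/check_words.txt
$ comm -23 /tmp/check_words.txt big_vocab.txt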

Conclusion

Learn to use the Unix command line! This post is just a start into many cool things you can do with Unix pipes. Here are some other resources:

Happy (Unix) hacking!

Topics: Twitter, streaming API

Introduction

Analyzing tweets is all the rage, and if you are new to the game you want to know how to get them programmatically. There are many ways to do this, but a great start is to use the Twitter streaming API, an HTTP service that delivers tweets to you in real time based on criteria you specify. For most people, this will mean having access to the spritzer, which provides only a very small percentage of all the tweets going through Twitter at any given moment. For access to more, you need to have a special relationship with Twitter or pay Twitter or an affiliate like Gnip.

This post provides a basic walk-through for using the Twitter streaming API. You can get all of this based on the documentation provided by Twitter, but this will be slightly easier going for those new to such services. (This post is mainly geared for the first phase of the course project for students in my Applied Natural Language Processing class this semester.)

You need to have a Twitter account to do this walk-through, so obtain one now if you don’t have one already.

Accessing a random sample of tweets

First, try pulling a random sample of tweets using your browser by going to the following link (the same sample endpoint we'll use from the command line below).

- https://stream.twitter.com/1/statuses/sample.json

You should see a growing, unwieldy list of raw tweets flowing by.

Here’s an example of a “raw” tweet (which comes in JSON, or JavaScript Object Notation):

{"text":"#LetsGoMavs til the end RT @dallasmavs: Are You ALL IN?","truncated":false,"retweeted":false,"geo":null,"retweet_count":0,"source":"web","in_reply_to_status_id_str":null,"created_at":"Wed Apr 25 15:47:39 +0000 2012","in_reply_to_user_id_str":null,"id_str":"195177260792299521","coordinates":null,"in_reply_to_user_id":null,"favorited":false,"entities":{"hashtags":[{"text":"LetsGoMavs","indices":[0,11]}],"urls":[],"user_mentions":[{"indices":[27,38],"screen_name":"dallasmavs","id_str":"22185437","name":"Dallas Mavericks","id":22185437}]},"contributors":null,"user":{"show_all_inline_media":true,"statuses_count":3101,"following":null,"profile_background_image_url_https":"https:\/\/si0.twimg.com\/profile_background_images\/285480449\/AAC_med500.jpg","profile_sidebar_border_color":"eeeeee","screen_name":"flyingcape","follow_request_sent":null,"verified":false,"listed_count":2,"profile_use_background_image":true,"time_zone":"Mountain Time (US &amp; Canada)","description":"HUGE ROCKETS &amp; MAVS fan. Lets take down the Lakers &amp; beat up on the East. Inaugural member of the FC Dallas – Fort Worth fan club.","profile_text_color":"333333","default_profile":false,"profile_background_image_url":"http:\/\/a0.twimg.com\/profile_background_images\/285480449\/AAC_med500.jpg","created_at":"Thu Oct 21 15:40:21 +0000 2010","is_translator":false,"profile_link_color":"1212cc","followers_count":35,"url":null,"profile_image_url_https":"https:\/\/si0.twimg.com\/profile_images\/1658982184\/204970_10100514487859080_7909803_68807593_5366704_o_normal.jpg","profile_image_url":"http:\/\/a0.twimg.com\/profile_images\/1658982184\/204970_10100514487859080_7909803_68807593_5366704_o_normal.jpg","id_str":"205774740","protected":false,"contributors_enabled":false,"geo_enabled":true,"notifications":null,"profile_background_color":"0a2afa","name":"Mandy","default_profile_image":false,"lang":"en","profile_background_tile":true,"friends_count":48,"location":"ATX \/ FDub. From Galveston !","id":205774740,"utc_offset":-25200,"favourites_count":231,"profile_sidebar_fill_color":"efefef"},"id":195177260792299521,"place":{"bounding_box":{"type":"Polygon","coordinates":[[[-97.938383,30.098659],[-97.56842,30.098659],[-97.56842,30.49685],[-97.938383,30.49685]]]},"country":"United States","url":"http:\/\/api.twitter.com\/1\/geo\/id\/c3f37afa9efcf94b.json","attributes":{},"full_name":"Austin, TX","country_code":"US","name":"Austin","place_type":"city","id":"c3f37afa9efcf94b"},"in_reply_to_screen_name":null,"in_reply_to_status_id":null}

There is a lot of information in there beyond the tweet text itself, which is simply "#LetsGoMavs til the end RT @dallasmavs: Are You ALL IN?" It is basically a map from attributes to values (and values may themselves be such a map, e.g. for the "user" attribute above). You can see how many times the tweet has been retweeted (which will be zero when the tweet is first published), what time it was created, the unique tweet id, the geo-coordinates (if available), and more. If an attribute does not have a value for the tweet, it is 'null'.

I will return to JSON processing of tweets in a later tutorial, but you can get a head start by seeing my tutorial on using Scala to process JSON in general.

Command line access to tweets

Assuming you were successful in being able to view tweets in the browser, we can now proceed to using the command line. For this, it will be convenient to first set environment variables for your Twitter username and password.

$ export TWUSER=foo
$ export TWPWD=bar

Obviously, you need to provide your Twitter account details instead of foo and bar…

Next, we’ll use the program curl to interact with the API. Try it out by downloading this blog post.

$ curl http://bcomposes.wordpress.com/2013/01/25/a-walk-through-for-the-twitter-streaming-api/ > bcomposes-twitter-api.html
$ less bcomposes-twitter-api.html

Given that you pulled tweets from the API using your web browser, and that curl can access web pages in this way, it is simple to use curl to get tweets and direct them straight to a file.

$ curl https://stream.twitter.com/1/statuses/sample.json -u$TWUSER:$TWPWD > tweets.json

That’s it: you now have an ever-growing file with randomly sampled tweets. Have a look and try not to lose your faith in humanity. ;)
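
To get a more readable look at what you are collecting, you can pretty-print a single tweet with Python's built-in json.tool module. This assumes the first line of the file is a complete tweet object (the stream occasionally emits blank keep-alive lines, so you may need to pick a different line):

$ head -1 tweets.json | python -m json.tool | less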

Pulling tweets with specific properties

You might want to get the tweets from specific users rather than a random sample. This requires user ids rather than the user names we usually see. The id for a user can be obtained from the Twitter API via the /users/show endpoint. For example, querying that endpoint for my account gives the following:


<user>
<id>119837224</id>
<name>Jason Baldridge</name>
<screen_name>jasonbaldridge</screen_name>
<location>Austin, Texas</location>
<description>
Assoc. Prof., Computational Linguistics, UT Austin. Senior Data Scientist, Converseon. OpenNLP developer. Scala, Java, R, and Python programmer.
</description>
...MORE...

So, to follow @jasonbaldridge via the Twitter API, you need user id 119837224. You can pull my tweets via the API using the “follow” query parameter.

$ curl -d follow=119837224 https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

There is a good chance I'm not tweeting right now, so you'll probably not see anything. Let's follow more users, which we can do by adding more ids separated by commas.

$ curl -d follow=1344951,5988062,807095,3108351 https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

This will follow Wired Magazine (@wired), The Economist (@theeconomist), the New York Times (@nytimes), and the Wall Street Journal (@wsj).

You can also write those ids to a file and read them from the file. For example:

$ echo "follow=1344951,5988062,807095,3108351" > following
$ curl -d @following https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

You can of course edit the file "following" rather than using echo to create it. Also, the file can be named whatever you like ("following" is not important here).

You can search for a particular term in tweets, such as “Scala”, using the “track” query parameter.

$ curl -d track=scala https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

And, no surprise, you can search for multiple items by using commas to separate them.

$ curl -d track=scala,python,java https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

However, this only requires that a tweet match at least one of these terms. If you want to ensure that multiple terms match, you’ll need to write them to a file and then refer to that file. For example, to get tweets that have both “sentiment” and “analysis” OR both “machine” and “learning” OR both “text” and “analytics”, you could do the following:

$ echo "track=sentiment analysis,machine learning,text analytics" > tracking
$ curl -d @tracking https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

You can pull tweets from a specific rectangular area (bounding box) on the Earth’s surface. For example, the following pulls geotagged tweets from Austin, Texas.

$ curl -d locations=-97.8,30.25,-97.65,30.35 https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

The bounding box is given as the longitude and latitude of the bottom-left (southwest) corner, followed by the longitude and latitude of the top-right (northeast) corner. You can add further bounding boxes to capture more locations. For example, the following captures tweets from Austin, San Francisco, and New York City.

$ curl -d locations=-97.8,30.25,-97.65,30.35,-122.75,36.8,-121.75,37.8,-74,40,-73,41 https://stream.twitter.com/1/statuses/filter.json -u$TWUSER:$TWPWD

Conclusion

It’s all pretty straightforward, and quite handy for many kinds of tweet-gathering needs. One of the problems is that Twitter will drop the connection at times, and you’ll end up missing tweets until you start a new process. If you need constant monitoring,  see UT Austin’s Twools (Twitter tools) for obtaining a steady stream of tweets that picks up whenever Twitter drops your connection.
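
If you just need something quick and dirty in the meantime, one option is to wrap curl in a loop that restarts whenever the connection drops, appending to the same file. This is only a sketch (the sleep is there so you don't hammer Twitter if something is failing repeatedly):

$ while true; do curl https://stream.twitter.com/1/statuses/sample.json -u$TWUSER:$TWPWD >> tweets.json; sleep 10; done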

In a later post, I’ll detail how to use an API like twitter4j to pull tweets and interact with Twitter at a more fundamental level.

Remembering Belle Scarlett Baldridge, Sep. 29, 2011.

On September 29, 2011, one year ago today, my family experienced the late-term stillbirth of our daughter Belle (see my post last year). It hurt like hell and it’s a loss we’ll always feel acutely. Despite this tragedy, we have emerged through the year stronger than before, in no small part thanks to the strength of our relationships and the amazing support of family, friends and community. And, just this past Sunday, on September 23, we celebrated the birth of a beautiful and healthy baby boy. This new addition was obviously very well monitored during the pregnancy, given the loss of Belle. We were quite confident that he would come out fine; nonetheless, his healthy arrival has been an immense relief for our family. He’s a very calm baby (so far), but I still do find myself rejoicing a little when he cries, even as I try to take care of whatever it is that he needs at the moment.

Our kids (my older daughter from my previous marriage and our three-year-old son) clearly had Belle's loss very present in their minds with this pregnancy, even if they rarely voiced their concerns. My daughter was very worried before the birth, and her relief was palpable after she knew that he was alive and well. Our three-year-old was less direct about it, but it was very much in his mind. The day after the birth, I announced, "Hey, he has been alive for a whole day!" not even thinking about Belle at that moment. Our three-year-old said "Belle wasn't alive for a long time!" After I affirmed that statement, he followed it with "But he (the baby) will be alive for a long time!", with a big smile on his face. It's always amazing to see how kids are sorting through very complex emotions, and I think it is often at levels that are far deeper than we tend to give them credit for.

In addition to receiving support from others, I also processed Belle's death through music. Like nearly everyone who grew up in the 1980s, I have a special fondness for the mixtape, and I've created plenty of playlists over the years. Last year, my younger brother Justin and I decided to each create (independently) a playlist of 100 songs that we connected with in 2011. In part, the idea was for the playlist to reflect the course of the year. When Belle died, my playlist naturally took on the tone of the music that got me through it, and I ended up with a 20-song segment that I have come to think of as "the Belle cycle" (see below). Perhaps the key song was one that I had long loved, but that Justin reminded me of – The Cure's "Where The Birds Always Sing". Its lyrics beautifully capture the powerlessness we experience when dealing with death and attempting to make some sense of it. And, every time I've been to Belle's grave, the birds have been singing.

The aftermath of Belle’s death taught us a lot, and there is much I discovered about myself. And, as we found out, it affected others greatly. We experienced an amazing outpouring of support from across the world — every little email and tweet of well-wishing helped. We heard from friends who, upon hearing about Belle, reflected on their own lives and made changes to enable them to lead happier, more fulfilled lives. Some even made pretty major changes in location and/or career. I heard from many others who had experienced stillbirths, and from yet others who were children who had come after stillbirths — and the tremendous, positive influence those siblings had had on their lives. It is still incredible to me that Belle has already had such an impact, even though she never drew her own breath.

So, one year on, all is well. We still feel the loss of Belle and remember her daily. We are now filled by the joy of having our new baby, and, of course, our other kids. It’s quite a mix of intense emotions, but the human heart has room for them all. Life goes on. It can be hard. It is rarely easy. But, it is good — very, very good.


Musical addendum: The 20 songs that formed what I’ve come to think of as “the Belle cycle” in my 2011 playlist run through the gamut of emotions I experienced just before and well after her death, and were drawn from songs that I was listening to a lot during that period. For some songs, it was the lyrics that spoke to me, but mostly it was the emotion conveyed by the music itself. They didn’t pop out that way, but as I organized the songs, they fell into a fairly clear narrative, for me. I’m sharing it here because maybe it, or some portion of it, or just the idea, can help someone else. It goes something like the following.

To begin, I’m on top of the world — I had just been promoted to associate professor and I’m in love with the little girl who I’d have soon. Heck, there was even a song with her name!

1. Kanye West – Touch The Sky
2. TV On The Radio – Will Do
3. Jack Johnson – Belle
4. Yo La Tengo – Our Way To Fall

The last song somehow transitioned for me: it has to do with falling in love, but with that love being transient, or hard to capture, just as Belle was to evade us. The music actually conveys this much better than the lyrics alone.

Zoe Keating’s cello on “Sun Will Set” captured the echoing, sawing emptiness and desperation of discovering that Belle was dead.

5. Zoe Keating – Sun Will Set

The next is a song about watching a loved one die that has always moved me, but which took on particular poignancy. The lyric “love is watching someone die” beautifully captures the ache of losing someone.

6. Death Cab for Cutie – What Sarah Said

We struggle to make sense of death, regardless of our personal belief system, and The Cure captures it perfectly (see lyrics below).

7. The Cure – Where The Birds Always Sing

The next songs go to a dark place and then a raging hole inside. “Bury The Evidence” and “Ruiner” have long been songs for me to vent rage, and they served me well last year.

8. Danger Mouse – Dark Night Of The Soul (Feat. David Lynch)
9. Tricky – Bury The Evidence
10. Nine Inch Nails – Ruiner

Trent Reznor can express rage, but he can express calm equally well, though still typically with an edge to it. As such, NIN and then Radiohead and Danger Mouse express the calm after the storm, but an uneasy one, one that is nursing its wounds and wants revenge.

11. Nine Inch Nails – A Warm Place
12. Radiohead – Codex
13. Danger Mouse – Revenge (Feat. The Flaming Lips)

But perhaps things can still look up, and DJ Shadow gets a hook in that starts to bring it across.

14. DJ Shadow – Redeemed

As a kid, I was fascinated by Stevie Wonder’s “Journey Through The Secret Life Of Plants”, and I found the song Power Flower haunting and mesmerizing. I had forgotten the name of the song and which Stevie Wonder album it was on, but I finally tracked it down last October — and it still haunts and mesmerizes me. And last year, it uplifted me.

15. Stevie Wonder – Power Flower

Still not out of the thick of it yet — feeling much better, but still hurting and recovering.

16. Danger Mouse – The Rose With The Broken Neck – feat. Jack White

But, hey, you gotta pick yourself up and move on.

17. Deerhunter – Don’t Cry

Finally unfolding out of the gloom, slowly but surely.

18. Rachel’s – Water from the Same Source

And back up, able to smile again and see joy in the world.

19. Jónsi – Around Us
20. The Cure – This. Here And Now. With You

I formed this “cycle” in October and November last year, and listened to it again and again, revisiting and processing my feelings on each listen, drawing more strength each time. The very act of organizing the portion of the playlist in this way helped me immensely — essentially it was a constructive way to channel my emotions, and it produced something that allowed me to continue to process and understand them.

Finally, because they are so spot on, here are the lyrics for The Cure’s “Where the Birds Always Sing”:

The world is neither fair nor unfair
The idea is just a way for us to understand
But the world is neither fair nor unfair
So one survives
The others die
And you always want a reason why

But the world is neither just nor unjust
It’s just us trying to feel that there’s some sense in it
No, the world is neither just nor unjust
And though going young
So much undone
Is a tragedy for everyone

It doesn’t speak a plan or any secret thing
No unseen sign or untold truth in anything…
But living on in others, in memories and dreams
Is not enough
You want everything
Another world where the sun always shines
And the birds always sing
Always sing…

The world is neither fair nor unfair
The idea is just a way for us to understand
No the world is neither fair nor unfair
So some survive
And others die
And you always want a reason why

But the world is neither just nor unjust
It’s just us trying to feel that there’s some sense in it
No, the world is neither just nor unjust
And though going young
So much undone
Is a tragedy for everyone

It doesn’t mean there has to be a way of things
No special sense that hidden hands are pulling strings
But living on in others, in memories and dreams
Is not enough
And it never is
You always want so much more than this…

An endless sense of soul and an eternity of love
A sweet mother down below and a just father above
For living on in others, in memories and dreams
Is not enough
You want everything
Another world
Where the birds always sing
Another world
Where the sun always shines
Another world
Where nothing ever dies…
